CN105575392A - System and method for user interaction - Google Patents

System and method for user interaction

Info

Publication number
CN105575392A
CN105575392A (application CN201510666508.XA)
Authority
CN
China
Prior art keywords
auditory input
user
human speech
time interval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201510666508.XA
Other languages
Chinese (zh)
Inventor
纳拉亚南·阿拉文德
斯里坎斯·瓦里耶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ford Global Technologies LLC
Original Assignee
Ford Global Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ford Global Technologies LLC
Publication of CN105575392A

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00: Speaker identification or verification
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60Q: ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q5/00: Arrangement or adaptation of acoustic signal devices
    • B60Q5/005: Arrangement or adaptation of acoustic signal devices automatically actuated
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78: Detection of presence or absence of voice signals

Abstract

The subject of the invention relates to a system and method for user interaction. In one example, auditory input generated in a closed environment is obtained. The auditory input is analyzed to determine the absence of human speech within a predetermined time interval. The auditory input can be analyzed based on a speech recognition technique or human-voice detection criteria. Upon determining the absence of human speech in the auditory input within the predetermined time interval, one or more interaction prompts can be provided to the user.

Description

System and method for user interaction
Background
With vehicles readily available and road infrastructure improving, users often prefer to commute by road rather than by other modes of transport such as rail or air. As with other modes of transport, the safety of a vehicle, including that of its occupants, depends among other things on the driver. For example, during a long journey the driver may become fatigued for a variety of reasons, such as insufficient sleep or rest, monotonous driving conditions, or health issues. In other instances, the driver's attention or alertness may decline at night or in the early morning, which may lead to accidents and casualties.
Because vehicles may be privately owned, individual drivers cannot be monitored by a central authority, as is done for rail or air transport. Some techniques have therefore been implemented to monitor a driver's alertness while driving. One such technique is disclosed in patent document US6236968 (the '968 patent), which describes an automated dialog system that keeps a driver awake during long-distance or night trips. Under the '968 patent, the driver may have to engage in a conversation with the automotive dialog system. To keep the driver awake, auditory input received from the user is processed and analyzed to determine whether the driver is alert, for example from the quality of the driver's responses and the time taken to react to the voice output provided by the system. Based on the analysis, the driver may be engaged in a dialog to keep him or her awake.
Although the '968 patent describes a technique for keeping a driver awake, it requires the user to actively engage with the automotive dialog system, which may interfere with the driving experience or may be cumbersome for the user. For example, while the driver is talking with fellow passengers, conversing with an automated system may be uncomfortable and may detract from the driving experience. Further, because of the detailed analysis of the input provided by the driver, such a system may be complex and therefore difficult to implement. Such techniques may thus fail to provide an effective mechanism for determining the driver's alertness and keeping him or her engaged.
Brief description of the drawings
The detailed description below refers to the accompanying drawings, in which:
Fig. 1 is a block diagram of a user interaction system for interacting with a user of a vehicle, according to an embodiment of the present subject matter; and
Fig. 2 is a flow diagram of a method for interacting with a user of a vehicle, according to an embodiment of the present subject matter.
Summary of the invention
This summary is provided to introduce concepts related to systems and methods for interacting with a user in a closed environment, such as a vehicle. The concepts are described further in the detailed description below. This summary is neither intended to identify essential features of the claimed subject matter nor intended to determine or limit the scope of the claimed subject matter.
In one embodiment, auditory input in a vehicle is analyzed. The auditory input can be analyzed to determine the absence of human speech within a predetermined time interval. The time interval may be defined either in terms of elapsed time or in terms of distance traveled. The analysis can be performed using, for example, a speech recognition technique, a human-voice detection criterion, or a combination thereof. The human-voice detection criterion can be based on audio attributes and/or audio patterns. Based on the analysis, the presence or absence of human speech in the vehicle can be determined. The absence of human speech can be taken as an indication that the driver's alertness level has declined. To alert the driver, or to ensure that the driver stays alert, an interaction prompt can be provided to the driver.
Detailed description
As mentioned previously, a user such as the driver of a vehicle may become fatigued or drowsy while driving, putting at risk the safety of the occupants of the vehicle and of other vehicles nearby. For example, although the driver may be awake, his or her alertness may decline over time owing to monotonous driving or featureless driving conditions, and in some cases the reduced alertness may cause an accident. Various techniques have been proposed to ensure the safety of vehicles and individuals. However, such techniques may require the driver's active participation to determine the driver's alertness level, which can distract the driver while driving. Some other techniques provide only reactive solutions, for example after the driver has already fallen asleep, and therefore cannot monitor the driver effectively. Still other techniques are based on analyzing physiological parameters such as eye, head, and neck movements; such analysis may be complex and may require additional components to be installed in the vehicle.
According to an embodiment of the present subject matter, systems and methods are described for interacting with a user, such as the driver of a vehicle. In one example, auditory input can be obtained from a closed environment, such as the aggregate auditory input inside a vehicle. The auditory input can be audio signals corresponding to, for example, conversation among the individuals in the vehicle, music playing in the vehicle, and noise produced by various vehicle components. The obtained auditory input can be analyzed to determine whether it corresponds to human speech. It will be understood that the auditory input can be analyzed periodically or continuously.
In one example, the analysis can be performed based on a speech recognition technique. The speech recognition technique can distinguish human speech from other audio signals, such as songs playing in the vehicle, based on speech recognition and synthesis attributes. Speech recognition and synthesis attributes facilitate the identification of human conversation; examples include grammar attributes, language attributes, transcript attributes, and confidence attributes.
In other examples, the analysis can be performed based on a human-voice detection criterion. The human-voice detection criterion can be based on pre-defined ranges of audio attributes and/or pre-defined human-voice patterns. Audio attributes, such as intensity and frequency, can be understood as measured values of the audio signal. Accordingly, the audio attributes and/or audio patterns of the received input can be determined and compared with threshold audio attribute ranges and pre-defined human-voice patterns, respectively.
Based on the analysis, the absence within the predetermined time interval of audio signals corresponding to a human voice in the vehicle can be detected. The time interval may be defined either in terms of elapsed time or in terms of distance traveled. For purposes of explanation, the absence of audio signals corresponding to a human voice (hereinafter referred to as human speech) can be understood as "silence" in the context of the closed environment of the present subject matter. In one example, the absence of human speech can indicate either that the driver is drowsy or that the individuals in the vehicle are bored. Therefore, to engage the driver and/or the other occupants, one or more interaction prompts can be provided. An interaction prompt can take the form of an audio prompt, a visual prompt, a physical stimulus, or a combination thereof. The interaction prompt can be provided to engage the driver in some mental activity or to lift the mood of the driver and the other occupants.
Because the present subject matter is based on detecting the absence of human speech within a predetermined time or distance, interaction prompts can be provided proactively. For example, a driver may be energetic and awake at first but begin to feel drowsy and tired as time passes. This may in turn increase the driver's response time, raising the probability of an accident when a quick decision has to be made. In such circumstances, periodic or regular prompting whenever no audio activity is detected can help the driver maintain alertness.
Further, because the present subject matter is based on the presence or absence of auditory input in the vehicle, active user participation in the interaction system may not be required. For example, while the driver is engaged in a conversation with fellow passengers, no interaction prompt is provided. Interaction prompts are thus provided to the driver only when silence is detected, without hindering the driving experience.
In one example, the present subject matter can be implemented by a computing device such as the user's smartphone, thereby obviating the need to deploy additional devices in the vehicle. Moreover, because the present subject matter is based on the presence or absence of human speech, it can be implemented easily and may not require additional computational resources, such as those needed for complex analyses like content analysis of the auditory input.
The embodiments introduced above are described further herein with reference to the accompanying drawings. It should be noted that the description and drawings relate to example embodiments and should not be construed as limiting the present subject matter. It is also understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present subject matter. Moreover, all statements herein reciting principles, aspects, and embodiments of the present subject matter, as well as specific examples thereof, are intended to encompass their equivalents.
Fig. 1 shows a user interaction system 100 according to an example of the present subject matter, implemented as a computing system for interacting with a user, such as the driver of a vehicle. Examples of such computing systems include smartphones, laptops, tablets, and portable computing systems of any form. In one example, the user interaction system 100 can be deployed on the driver's communication device. In another example, the user interaction system 100 can be implemented in an in-vehicle computing system integrated with the vehicle.
The user interaction system 100 can further include a processor 102, interfaces 104, and memory 106. The processor 102 can be a single processing unit or several units, any of which may comprise multiple computing units. The processor 102 may be implemented as one or more microprocessors, microcomputers, digital signal processors, central processing units, state machines, logic circuits, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 102 is adapted to fetch and execute computer-readable instructions stored in the memory.
The functions of the various elements shown in the figures, including any functional blocks labeled "processor", may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term "processor" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), read-only memory (ROM) for storing software, random-access memory (RAM), and non-volatile memory. Other hardware, conventional and/or custom, may also be included.
The interfaces 104 can include a variety of software and hardware interfaces, for example interfaces for peripheral devices and for audio devices such as a microphone and speakers. Further, the interfaces can include one or more ports for connecting the user interaction system 100 to other computing devices. The interfaces can facilitate multiple communications over a wide range of networks and protocols, including wired networks such as LAN (local area network) and cable, and wireless networks such as WLAN (wireless local area network), cellular, and satellite.
The memory 106 can be coupled to the processor 102 and can include any non-transitory computer-readable medium known in the art, including, for example, volatile memory such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory such as read-only memory (ROM), erasable programmable ROM, flash memory, hard disks, optical disks, and magnetic tape.
The user interaction system 100 can further include modules 108 and data 110. The modules 108 and data 110 can be coupled to the processor 102. The modules 108, among other things, include routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. The modules 108 may also be implemented as signal processors, state machines, logic circuits, and/or any other devices or components that manipulate signals based on operational instructions.
In alternative aspects of the present subject matter, the modules 108 can be computer-readable instructions which, when executed by a processor or processing unit, perform any of the described functions. The computer-readable instructions can be stored on an electronic memory device, hard disk, optical disk, or other machine-readable storage or non-transitory media. In one embodiment, the computer-readable instructions can also be downloaded to a storage medium via a network connection.
In one example, the modules 108 include an input analysis module 112, an interaction module 114, and other modules 116. The other modules 116 include programs that supplement applications or functions performed by the user interaction system 100. The data 110, in turn, include analysis data 118, interaction data 120, and other data 122. The other data 122 include data generated by the execution of one or more of the other modules 116.
In one example, the user interaction system 100 can also include an input receiving unit, such as a microphone 124, to receive auditory input generated in a closed environment, such as in the vehicle. In operation, the input analysis module 112 can periodically trigger the input receiving unit 124 to provide the auditory input. For example, after a predetermined period or after a predetermined distance, the input analysis module 112 can cause the audio input unit to provide the auditory input collected over the last time interval. Alternatively, the input analysis module 112 can receive the auditory input continuously from the input receiving unit 124.
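The time-or-distance triggering described above reduces to a simple predicate. The following sketch is illustrative only; the function name and the default threshold values are assumptions, not values from the patent.

```python
# Illustrative sketch (assumed names and defaults): trigger collection of the
# auditory input once either the elapsed-time or the distance threshold passes.
DEFAULT_PERIOD_S = 60.0   # assumed default sampling period, seconds
DEFAULT_PERIOD_KM = 5.0   # assumed default sampling distance, kilometers

def should_collect(elapsed_s: float, distance_km: float,
                   period_s: float = DEFAULT_PERIOD_S,
                   period_km: float = DEFAULT_PERIOD_KM) -> bool:
    """Return True when either threshold for the next sample has been reached."""
    return elapsed_s >= period_s or distance_km >= period_km
```

Either condition alone suffices, which matches the "period or distance" alternative in the text.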
Upon receiving the auditory input, the input analysis module 112 can analyze the obtained auditory input to determine the absence of human speech in the auditory input within a predetermined time interval. It will be understood that the predetermined period and predetermined distance can have default values but may be configurable by the user. The values of the predetermined period and predetermined distance can be stored in the analysis data 118.
In one example, the input analysis module 112 can analyze the obtained input continuously or periodically. For the analysis, in one example, the input analysis module 112 can include a filter to filter out noise, such as the noise produced by other vehicles and the noise produced by the engine and other components of the vehicle under consideration. The filtered input can then be analyzed further to determine whether it corresponds to human speech. It will be understood that the obtained auditory input can also be analyzed directly. To analyze the auditory input, the input analysis module 112 can implement various techniques and rules. Information pertaining to the analysis techniques and rules can be stored in the analysis data 118.
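A minimal noise-filtering step of the kind described, suppressing background noise before checking for speech, might look like the energy gate below. This is a sketch under stated assumptions: the noise-floor estimate and function names are invented for illustration, and a real filter would work in the frequency domain.

```python
def suppress_noise(samples: list[float], noise_floor: float) -> list[float]:
    """Zero out samples whose magnitude falls at or below the noise floor."""
    return [s if abs(s) > noise_floor else 0.0 for s in samples]

def has_activity(samples: list[float], noise_floor: float) -> bool:
    """True when anything survives the gate, i.e. the input is not pure noise."""
    return any(suppress_noise(samples, noise_floor))
```

The filtered signal, rather than the raw microphone signal, would then feed the speech checks described next.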
In one example, the input analysis module 112 (hereinafter, analysis module 112) can implement a speech recognition technique, such as the Google™ speech recognition technology. The speech recognition technique can serve to distinguish human conversation from other auditory inputs, such as songs, vehicle noise, and other ambient noise. The input analysis module 112 can analyze the auditory input based on speech recognition and synthesis attributes, which facilitate the identification of human conversation. Speech recognition and synthesis attributes include, for example, grammar attributes, weight attributes, language attributes, transcript attributes, confidence attributes, volume attributes, rate attributes, pitch attributes, and interpretation attributes.
The grammar attribute can store a speech grammar object; the weight attribute indicates the weight the analysis module 112 should associate with a recognition grammar; the language attribute indicates the language set for recognition; the transcript attribute helps identify the raw words spoken by the user; the confidence attribute indicates, on a numeric scale, how confident the system is in a recognition; the volume attribute indicates the range of volumes within which spoken human speech is recognized; the rate attribute indicates the speaking rate; the pitch attribute indicates the range of pitches within which human speech is recognized; and the interpretation attribute represents the semantic meaning of what the user said.
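The attribute set above can be modeled as a plain record consumed by the analysis. The field names, the confidence threshold, and the decision rule below are assumptions sketching how a recognizer's output attributes might be used; they are not an actual recognizer API.

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    """Illustrative subset of the recognition attributes named in the text."""
    transcript: str        # raw words the recognizer heard
    language: str          # language set for recognition, e.g. "en-US"
    confidence: float      # recognizer's confidence on an assumed 0..1 scale
    rate_wpm: float        # speaking rate, words per minute
    pitch_hz: float        # estimated pitch of the utterance

def is_human_conversation(result: RecognitionResult,
                          min_confidence: float = 0.6) -> bool:
    """Treat a non-empty transcript with sufficient confidence as speech."""
    return bool(result.transcript.strip()) and result.confidence >= min_confidence
```

A low-confidence or empty result would be treated as "no human speech" for the purposes of the silence check.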
It will be understood that analysis based on such a speech recognition technique provides for the detection of human conversation, that is, of human speech in the auditory input within the predetermined time interval.
In another example, the analysis module 112 can determine whether a human-voice detection criterion is met. The human-voice detection criterion can include a rule comparing audio attributes of the audio input, such as frequency and intensity, with pre-defined ranges for those attributes. Audio attributes can be understood as measured values of the audio signal; that is, an audio signal can be characterized by audio attributes such as frequency and intensity. In the present context, the pre-defined ranges can be a pre-defined frequency range and/or a pre-defined intensity range, where the pre-defined frequency range corresponds to the frequency range of human speech and the pre-defined intensity range corresponds to the intensity range of human speech. In one example, the pre-defined frequency range can be from about 20 Hz to 20 kHz, and the pre-defined intensity range can be from 20 dB to 100 dB. Accordingly, the audio attributes of the audio input in the given time interval can be determined and compared with the pre-defined audio attribute ranges.
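The range comparison just described reduces to a pair of interval checks. The sketch below uses the example ranges from the text (about 20 Hz to 20 kHz, 20 dB to 100 dB); the constant and function names are assumptions.

```python
SPEECH_FREQ_HZ = (20.0, 20_000.0)    # pre-defined frequency range from the text
SPEECH_INTENSITY_DB = (20.0, 100.0)  # pre-defined intensity range from the text

def meets_voice_criterion(freq_hz: float, intensity_db: float) -> bool:
    """True when both measured attributes fall inside the pre-defined ranges."""
    return (SPEECH_FREQ_HZ[0] <= freq_hz <= SPEECH_FREQ_HZ[1]
            and SPEECH_INTENSITY_DB[0] <= intensity_db <= SPEECH_INTENSITY_DB[1])
```

An input failing either check would count toward the "silence" determination for its interval.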
In addition to or instead of the audio attributes, the human-voice detection criterion can include checking whether the audio pattern of the audio input matches pre-defined human-voice patterns corresponding to human speech. The pre-defined human-voice patterns can be stored in the analysis data 118.
The analysis module 112 detects the absence of human speech within the predetermined time interval based on the comparison of the audio attributes and/or audio patterns with the pre-defined audio attribute ranges and the pre-defined human-voice patterns, respectively. The analysis module 112 can implement a pattern matching technique to compare the audio pattern of the auditory input with the pre-defined human-voice patterns. It will be understood that, in various embodiments, the analyses using the speech recognition technique, the audio attributes, and the audio patterns can be performed sequentially in any order, or simultaneously.
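One conventional pattern-matching technique that could serve here is normalized correlation between an input envelope and a stored human-voice pattern. The threshold, names, and the use of correlation itself are illustrative assumptions; the patent does not specify a particular matching algorithm.

```python
import math

def normalized_correlation(pattern: list[float], sample: list[float]) -> float:
    """Pearson correlation of two equal-length envelopes, in [-1, 1]."""
    n = len(pattern)
    mp, ms = sum(pattern) / n, sum(sample) / n
    num = sum((p - mp) * (s - ms) for p, s in zip(pattern, sample))
    dp = math.sqrt(sum((p - mp) ** 2 for p in pattern))
    ds = math.sqrt(sum((s - ms) ** 2 for s in sample))
    return num / (dp * ds) if dp and ds else 0.0

def matches_voice_pattern(pattern, sample, threshold: float = 0.8) -> bool:
    """True when the sample correlates strongly with the stored voice pattern."""
    return normalized_correlation(pattern, sample) >= threshold
```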
When, based on the above analysis, the analysis module 112 determines that auditory input corresponding to human speech is present, the driver can be considered alert and/or the individuals in the vehicle can be considered engaged in conversation. Accordingly, the driver may not be prompted or provided with prompts to interact with the user interaction system 100, allowing the driver to enjoy driving without compromising safety.
In one example, upon determining that human speech is present, the analysis module 112 can further determine whether the detected human speech includes audio signals corresponding to the driver's voice. Such an analysis may be performed to handle situations where the fellow passengers are engaged in a conversation among themselves in which the driver may not be participating; as a result, the driver may experience a decline in alertness. The analysis module 112 can implement a voice recognition technique that compares the auditory input with pre-defined user audio attributes, such as the audio attributes corresponding to the driver's voice. The pre-defined user audio attributes can be stored in the analysis data 118. For purposes of explanation, human speech corresponding to the voice of a particular user, such as the driver, may be referred to as user auditory input.
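A deliberately crude illustration of the driver-voice check, comparing one measured attribute against a stored user profile, is shown below. Real voice recognition compares far richer features; the single-attribute comparison and the tolerance value are assumptions made to keep the sketch small.

```python
def is_driver_voice(sample_pitch_hz: float,
                    profile_pitch_hz: float,
                    tolerance_hz: float = 25.0) -> bool:
    """True when the measured pitch lies within tolerance of the driver's profile."""
    return abs(sample_pitch_hz - profile_pitch_hz) <= tolerance_hz
```

Speech that fails this check would be attributed to passengers, so the driver could still be prompted despite conversation in the cabin.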
In one example, upon determining the presence of human speech and/or user auditory input in the auditory input for a predetermined number of consecutive time intervals, the length of the time interval can be increased. Likewise, upon determining the absence of human speech and/or user auditory input, the analysis module 112 can shorten the length of the time interval so as to check on the driver more frequently. Thus, based on the presence of human speech over a predetermined number of consecutive time intervals, the analysis module 112 can vary the length of the time interval.
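The interval adaptation described above can be sketched as a bounded multiplicative update. The bounds, the factors, and the streak length are all assumptions; the patent only specifies the direction of the adjustment.

```python
MIN_INTERVAL_S, MAX_INTERVAL_S = 15.0, 300.0  # assumed bounds on the interval
STREAK_NEEDED = 3  # assumed consecutive speech intervals before lengthening

def next_interval(interval_s: float, speech_streak: int,
                  speech_present: bool) -> float:
    """Lengthen after a sustained run of speech; shorten right after silence."""
    if not speech_present:
        return max(MIN_INTERVAL_S, interval_s * 0.5)
    if speech_streak >= STREAK_NEEDED:
        return min(MAX_INTERVAL_S, interval_s * 1.5)
    return interval_s
```

Clamping keeps the checks from becoming either too intrusive or too sparse.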
Further, upon determining the absence of human speech and/or user auditory input within the predetermined time interval, the interaction module 114 is triggered to interact with the driver to ensure that the driver is alert. With reference to the variation of the predetermined time interval described above, in one example an interaction prompt can be provided in each consecutive predetermined time interval in which the absence of human speech is detected.
The interaction module 114 can provide interaction prompts to the driver to engage the driver. The interaction prompts can be audio prompts, visual prompts, physical stimuli, or combinations thereof, and can be stored in the interaction data 120. For example, an audio prompt can be a notification by a buzzer, a piece of music, a joke, or a series of questions for interacting with the driver; a visual prompt can be a bright light shown on the display screen of the user interaction system 100 or on any other display screen provided in the vehicle; and a physical stimulus can be a vibrating alert or a spray of water using a sprayer associated with the user interaction system 100.
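One way to combine the three prompt modalities is to escalate them as silence persists. The escalation order, the prompt strings, and the one-modality-per-interval rule below are assumptions for illustration; the patent leaves the selection policy open.

```python
PROMPTS = {
    "audio": "buzzer notification",
    "visual": "bright display flash",
    "physical": "seat vibration alert",
}

def escalate_prompts(silent_intervals: int) -> list[str]:
    """Add one modality per consecutive silent interval, starting with audio."""
    order = ["audio", "visual", "physical"]
    return [PROMPTS[m] for m in order[:min(silent_intervals, len(order))]]
```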
Further, the interaction module 114 can also engage the driver in a conversation. To converse intelligently with the driver, the interaction module 114 can implement a speech recognition engine and a speech generation engine. An example of an interaction prompt for engaging the driver in conversation can be "Which song would you like to play?", and in response to the driver's input the interaction module 114 can reply with "There are other songs by the same artist; would you like to listen to them?" or "This song was released in 1980 and was a sensation in India." Further, the interaction module 114 can be configured to access the music storage of the user interaction system 100 to obtain the piece of music to be played. Alternatively, pieces of music can be stored in the interaction data 120.
In other examples, the interaction module 114 can respond with interaction prompts such as "Would you like to take a break?" or "A good restaurant is coming up; would you like to stop and rest?". The interaction module 114 can also provide a map showing the location of the nearest food outlet as a visual prompt. Other examples of visual prompts include messages or games displayed on a screen associated with the user interaction system 100.
The above examples of interaction prompts are provided for purposes of explanation and not as limitations. It will be understood that other examples of interaction prompts may also be implemented without departing from the scope of the present subject matter.
As can be gathered from the foregoing description, interaction with the driver is based on the detection of silence in the vehicle within a predetermined time interval. Accordingly, a prompt can be provided to the driver upon detecting silence and not otherwise, which minimizes the inconvenience caused to the driver while ensuring that safety is not compromised. Further, the driver may not need to participate actively in the user interaction system 100 for silence to be detected. In addition, in some examples, the present subject matter can also serve to change the mood in the vehicle by playing lively music or telling jokes. The present subject matter thus provides a simple-to-implement and effective mechanism for ensuring the safety of vehicles on the road and of their users.
Fig. 2 shows a method 200 for interacting with a user of a vehicle, according to an embodiment of the present subject matter. The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or an alternative method. Further, the method 200 can be implemented through any suitable hardware, non-transitory machine-readable instructions, or a combination thereof, by a processing resource or computing device.
It is also understood that the method 200 can be performed by a programmed computing device, such as the user interaction system 100 shown in Fig. 1. Furthermore, as will be readily understood, the method 200 can be executed based on instructions stored in a non-transitory computer-readable medium, which can include, for example, digital memories, magnetic storage media such as one or more magnetic disks and tapes, hard drives, or optically readable digital data storage media. Although the method 200 is described below with reference to the user interaction system 100 as described above, other suitable systems for executing the method can also be utilized. In addition, implementation of the method is not limited to such examples.
At block 202, auditory input generated in a closed environment, such as a vehicle, can be obtained. For example, the auditory input can be obtained from the input receiving unit 124 of the user interaction system 100.
At block 204, the auditory input can be analyzed to determine the presence of human speech in the auditory input. The time interval can be defined in terms of time or in terms of distance traveled. In one example, the analysis module 112 can analyze the auditory input based on a speech recognition technique, audio attributes, audio patterns, or a combination thereof.
At block 206, based on the analysis, it can be determined whether human speech is absent from the auditory input within the predetermined time interval. In one example, the analysis module 112 can determine the absence of human speech. At block 206, if human speech is detected in the auditory input, the method 200 can branch ("No" branch) to block 208.
At block 208, it is determined whether the human speech includes user auditory input, that is, audio signals corresponding to a particular user, such as the driver of the vehicle. The presence of user auditory input can be determined based on a voice recognition technique. At block 208, if it is determined that the human speech includes user auditory input, the method 200 can branch back to block 204, where the auditory input in the following time interval is analyzed.
Referring back to block 206, if it is determined that the auditory input does not include human speech, method 200 may proceed ('Yes' branch) to block 210. Similarly, at block 208, if it is determined that the human speech does not include user auditory input, method 200 may proceed ('No' branch) to block 210.
At block 210, one or more interaction prompts are provided to interact with the user, such as the driver of the vehicle. The interaction prompts may be audio prompts, visual prompts, physical stimuli, or a combination thereof. In one example, the interaction module 114 may provide the interaction prompts.
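The control flow of blocks 202 through 210 described above can be summarized, under the naming assumptions below, as a single pass over one predetermined time interval. The four callables are injected so the sketch stays independent of any particular audio stack; they stand in for the input receiving unit (124), the analysis module (112), and the interaction module (114).

```python
def run_method_200(get_auditory_input, contains_human_speech,
                   is_user_speech, prompt_user):
    """One pass of blocks 202-210 for a single predetermined time interval.

    Returns "prompted" if block 210 was reached, or "user_active" if the
    user's speech was detected and the next interval should be analyzed.
    """
    audio = get_auditory_input()              # block 202: obtain auditory input
    if not contains_human_speech(audio):      # blocks 204-206: analyze input
        prompt_user()                         # block 210 ('Yes' branch)
        return "prompted"
    if not is_user_speech(audio):             # block 208: specific-user check
        prompt_user()                         # block 210 ('No' branch)
        return "prompted"
    return "user_active"                      # back to block 204 next interval
```

In a deployment this pass would repeat every interval, with the interval length possibly adapted as recited in claims 6 and 11.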
Although examples of the present disclosure have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed and explained as examples of the present disclosure.

Claims (12)

1. A user interaction system (100), comprising:
a processor (102);
an input receiving unit (124) to obtain auditory input generated in an enclosed environment;
an input analysis module (112) coupled to the processor (102) to:
analyze the auditory input based on at least one of speech recognition techniques and human sound detection criteria; and
determine, based on the analysis, an absence of human speech in the auditory input within a predetermined time interval; and
an interaction module (114) coupled to the processor (102), wherein, upon determining the absence of human speech in the auditory input within the predetermined time interval, the interaction module (114) provides one or more interaction prompts.
2. The user interaction system (100) as claimed in claim 1, wherein the predetermined time interval is defined in terms of one of elapsed time and distance traveled.
3. The user interaction system (100) as claimed in claim 1, wherein the input analysis module (112) determines the presence of human speech based on the human sound detection criteria by:
determining at least one of an audio attribute and an audio pattern corresponding to the auditory input, the audio attribute being a measured value of a sound signal; and
comparing the audio attribute and the audio pattern with a predefined audio attribute and a predefined human voice pattern, respectively, to determine the presence of human speech.
4. The user interaction system (100) as claimed in claim 1, wherein, upon determining the presence of human speech, the input analysis module (112) determines whether the human speech includes user auditory input, the user auditory input corresponding to the voice of a specific user.
5. The user interaction system (100) as claimed in claim 4, wherein the interaction module (114) provides the one or more interaction prompts when the human speech does not include the user auditory input.
6. The user interaction system (100) as claimed in claim 1, wherein the input analysis module (112) varies the length of the predetermined time interval based on the absence of human speech within a predetermined number of consecutive time intervals.
7. A method for interacting with a user, comprising:
obtaining auditory input generated in an enclosed environment;
analyzing the auditory input based on at least one of speech recognition techniques and human sound detection criteria;
determining, based on the analysis, an absence of human speech in the auditory input within a predetermined time interval; and
providing one or more interaction prompts upon determining the absence of human speech in the auditory input within the predetermined time interval.
8. The method as claimed in claim 7, wherein the predetermined time interval is defined in terms of one of elapsed time and distance traveled.
9. The method as claimed in claim 7, wherein analyzing the auditory input based on the human sound detection criteria comprises:
determining at least one of an audio attribute and an audio pattern of an audio signal corresponding to the auditory input, the audio attribute being a measured value of the sound signal; and
comparing the audio attribute and the audio pattern with a predefined audio attribute and a predefined human voice pattern, respectively, to determine the presence of human speech.
10. The method as claimed in claim 7, wherein the analyzing further comprises determining whether the human speech includes user auditory input, the user auditory input corresponding to the voice of a specific user, and wherein the one or more interaction prompts are provided when the human speech does not include the user auditory input.
11. The method as claimed in claim 7, further comprising varying the length of the predetermined time interval based on the presence of human speech within a predetermined number of consecutive time intervals.
12. A non-transitory computer-readable medium having embodied thereon a computer program for executing a method for interacting with a user, the method comprising:
obtaining auditory input generated in an enclosed environment;
analyzing the auditory input based on at least one of speech recognition techniques and human sound detection criteria;
determining, based on the analysis, an absence of human speech in the auditory input within a predetermined time interval; and
providing one or more interaction prompts upon determining the absence of human speech in the auditory input within the predetermined time interval.
CN201510666508.XA 2014-10-28 2015-10-15 System and method for user interaction Withdrawn CN105575392A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN5372/CHE/2014 2014-10-28
IN5372CH2014 2014-10-28

Publications (1)

Publication Number Publication Date
CN105575392A true CN105575392A (en) 2016-05-11

Family

ID=55885446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510666508.XA Withdrawn CN105575392A (en) 2014-10-28 2015-10-15 System and method for user interaction

Country Status (1)

Country Link
CN (1) CN105575392A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6236968B1 (en) * 1998-05-14 2001-05-22 International Business Machines Corporation Sleep prevention dialog based car system
CN101976563A (en) * 2010-10-22 2011-02-16 深圳桑菲消费通信有限公司 Method for judging whether mobile terminal has call voices after call connection
US20120323577A1 (en) * 2011-06-16 2012-12-20 General Motors Llc Speech recognition for premature enunciation
CN103943105A (en) * 2014-04-18 2014-07-23 安徽科大讯飞信息科技股份有限公司 Voice interaction method and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
周小东 (Zhou Xiaodong): "《录音工程师手册》" (Recording Engineer's Handbook), 31 January 2006, 中国广播电视出版社 (China Radio and Television Press) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106653064A (en) * 2016-12-13 2017-05-10 北京云知声信息技术有限公司 Audio playing method and device
CN106653064B (en) * 2016-12-13 2019-05-07 北京云知声信息技术有限公司 Audio frequency playing method and device
CN110326041A (en) * 2017-02-14 2019-10-11 微软技术许可有限责任公司 Natural language interaction for intelligent assistant
CN110326041B (en) * 2017-02-14 2023-10-20 微软技术许可有限责任公司 Natural language interactions for intelligent assistants

Similar Documents

Publication Publication Date Title
CN106803423B (en) Man-machine interaction voice control method and device based on user emotion state and vehicle
CN111508474B (en) Voice interruption method, electronic equipment and storage device
CN106553653A (en) Vehicle control system of regaining consciousness
CN111381673A (en) Bidirectional vehicle-mounted virtual personal assistant
CN110060685A (en) Voice awakening method and device
CN111354371B (en) Method, device, terminal and storage medium for predicting running state of vehicle
JP2017073125A (en) Generation of dialog for action recommendation
CN110035358B (en) Vehicle-mounted audio output device, audio output control method, and recording medium
KR102474247B1 (en) Personal safety device and its operation method
CN110740901A (en) Multimedia information pushing method and device, storage medium and electronic equipment
CN111325386A (en) Method, device, terminal and storage medium for predicting running state of vehicle
CN110349579B (en) Voice wake-up processing method and device, electronic equipment and storage medium
CN112071309B (en) Network appointment vehicle safety monitoring device and system
CN114360527B (en) Vehicle-mounted voice interaction method, device, equipment and storage medium
JP2022095768A (en) Method, device, apparatus, and medium for dialogues for intelligent cabin
CN112669822B (en) Audio processing method and device, electronic equipment and storage medium
CN109059953A (en) It wakes up support system and wakes up support method
CN111292737A (en) Voice interaction and voice awakening detection method, device, equipment and storage medium
CN115316992A (en) Device and method for caring about emotion based on vehicle sound
JP2010149757A (en) Awakening continuance support system
CN105575392A (en) System and method for user interaction
CN111768759A (en) Method and apparatus for generating information
US20240034344A1 (en) Detecting and handling driving event sounds during a navigation session
CN110097775A (en) A kind of running information based reminding method, apparatus and system
US20210326659A1 (en) System and method for updating an input/output device decision-making model of a digital assistant based on routine information of a user

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20160511