CN108803879A - Preprocessing method, device, and storage medium for a human-computer interaction system - Google Patents


Info

Publication number
CN108803879A
Authority
CN
China
Prior art keywords: corpus, user, interaction scenarios, environment, man
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810632458.7A
Other languages
Chinese (zh)
Inventor
张印帅
周峰
史元春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Uisee Shanghai Automotive Technologies Ltd
Original Assignee
Uisee Shanghai Automotive Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uisee Shanghai Automotive Technologies Ltd
Priority application: CN201810632458.7A
Published as: CN108803879A
Legal status: Pending

Classifications

    • G06F3/017 — Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 — Eye-tracking input arrangements
    • G06F3/0484 — GUI interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04883 — GUI interaction using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G10L15/08 — Speech classification or search
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26 — Speech-to-text systems


Abstract

The disclosure relates to a preprocessing method, device, and storage medium for a human-computer interaction system. The method includes: obtaining perception information from the target environment; predicting an interaction scenario from the perception information; matching a corpus corresponding to the interaction scenario; and setting the priority of each matched corpus. With the method provided by the present application, while the human-computer interaction system is running and before any user input or interaction is received, perception information about the target environment can be detected in the background and an interaction scenario predicted from the detected information. A corresponding corpus can be matched directly to the predicted interaction scenario, and the priorities of the matched corpora are then configured. When the user's subsequent input operations are recognized, the matched corpora are used preferentially, which can improve the accuracy and flexibility of the human-computer interaction system.

Description

Preprocessing method, device, and storage medium for a human-computer interaction system
Technical field
The disclosure relates to the field of smart-device control technology, and in particular to a preprocessing method, device, and storage medium for a human-computer interaction system.
Background technology
Driven by the rapid development of the Internet and intelligent technologies, more and more terminal devices are becoming human-computer interaction interfaces. Different terminal devices interact with people in different ways according to their main interaction tasks, for example voice interaction, touch operation, and so on.
Today, therefore, almost everyone lives with some form of multimodal interaction. If design and technology can better fuse the "context awareness" of multimodal systems, the understanding of the "interaction scenario" in which the user finds themselves can be enhanced, providing a better interactive experience.
Among multimodal systems, the voice interaction interface has become particularly prominent in recent years: first-tier Internet companies such as Apple, Microsoft, and Google have all released voice interaction applications. Speech-to-text conversion technology is now relatively mature, with considerable accuracy, but semantic understanding remains barely satisfactory.
Summary of the invention
To solve the above technical problems, embodiments of the present invention provide a preprocessing method, device, and storage medium for a human-computer interaction system.
Embodiments of the invention disclose the following technical solutions:
In a first aspect, an embodiment of the present application provides a preprocessing method for a human-computer interaction system, including: obtaining perception information from the target environment; predicting an interaction scenario from the perception information; matching a corpus corresponding to the interaction scenario; and setting the priority of each matched corpus.
In a second aspect, an embodiment of the present application provides a preprocessing apparatus for a human-computer interaction system, including: a perception information acquisition unit for obtaining perception information from the target environment; an interaction scenario prediction unit for predicting an interaction scenario from the perception information; a corpus matching unit for matching a corpus corresponding to the interaction scenario; and a priority setting unit for setting the priority of each matched corpus.
In a third aspect, an embodiment of the present application provides a preprocessing device for a human-computer interaction system, including a processor, a memory, a network interface, and a user interface, all coupled together by a bus system. By calling programs or instructions stored in the memory, the processor executes the steps of the aforementioned preprocessing method.
In a fourth aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to execute the steps of the aforementioned preprocessing method.
With the method provided by the embodiments of the present application, while the human-computer interaction system is running and before any user input or interaction is received, perception information about the target environment can be detected in the background and an interaction scenario predicted from the detected information. A corresponding corpus can be matched directly to the predicted interaction scenario, and the priorities of the matched corpora are then configured. When the user's subsequent input operations are recognized, the matched corpora are used preferentially, improving the accuracy and flexibility of the human-computer interaction system.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the disclosure.
Description of the drawings
The drawings herein are incorporated into and form part of this specification; they show embodiments consistent with the present invention and, together with the specification, serve to explain its principles.
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that a person of ordinary skill in the art could obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of a smart-home scenario provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of an intelligent-driving scenario provided by an embodiment of the present application;
Fig. 3 is a structural schematic diagram of an electronic device provided by an embodiment of the present invention;
Fig. 4 is a flow diagram of a preprocessing method for a human-computer interaction system provided by an embodiment of the present application;
Fig. 5 is a flow diagram of another preprocessing method for a human-computer interaction system provided by an embodiment of the present application;
Fig. 6 is a flow diagram of another preprocessing method for a human-computer interaction system provided by an embodiment of the present application;
Fig. 7 is a flow diagram of another preprocessing method for a human-computer interaction system provided by an embodiment of the present application;
Fig. 8 is a flow diagram of another preprocessing method for a human-computer interaction system provided by an embodiment of the present application;
Fig. 9 is a structural schematic diagram of a preprocessing apparatus for a human-computer interaction system provided by an embodiment of the present application.
Detailed description
To help those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art without creative effort, based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
The preprocessing method for a human-computer interaction system provided by the embodiments of the present application can be used in human-computer interaction systems common on the market today, such as intelligent vehicles, smart homes, intelligent mobile terminals, and other devices with close-range monitoring and control functions. These systems are typically equipped with multiple different types of sensors to monitor the environment around the system and the people in it. They also typically include a processor that, according to the monitored data, executes operations available to the system and thereby completes intelligent control of the interaction device.
Fig. 1 is a schematic diagram of a smart-home scenario provided by an embodiment of the present application.
The scene shown in Fig. 1 is an indoor scene and includes: a router 100, a computer 101, a smartphone 102, a smart TV 103, an IP camera 104, and a smart air conditioner 105. The smart TV 103, IP camera 104, and smart air conditioner 105 are smart home devices. The smart home devices in Fig. 1 are only examples for this application; those skilled in the art will recognize that, beyond these, any indoor device that can be controlled by signals qualifies as a smart home device.
The computer 101 and the smartphone 102 can serve as control and data-processing devices. The computer 101 can be the smart-home data-processing center; it is usually immovable and installed at a fixed indoor position. The smartphone 102 can act as a mobile data-processing platform that controls the smart home devices.
The router 100 is the network hub for the indoor smart home devices; all of them can communicate with the router 100 over Wi-Fi.
Each device shown in Fig. 1 may be equipped with sensors, including but not limited to: temperature sensors, humidity sensors, brightness sensors, microphones, and image collectors. In addition, sensors can be installed at other indoor positions besides these devices. When the user is in the indoor scene, the interaction scenario can be predicted based on these sensors. An interaction scenario is the environment in which the user may currently interact, including but not limited to: an air-conditioner control scenario, a TV control scenario, a camera control scenario, an air-purifier control scenario, and a refrigerator control scenario.
After the devices shown in Fig. 1 collect data, the collected data can be sent through the router 100 to the computer 101 or the smartphone 102, which processes the data; the interaction scenario of the user can then be predicted from the processing result.
Fig. 2 is a schematic diagram of an intelligent-driving scenario provided by an embodiment of the present application.
Fig. 2 includes a car 200, a mobile terminal 201, and a wristband 202. A vehicle controller may be installed in the car 200. The vehicle controller includes, but is not limited to, a desktop computer, a server, or a microcontroller; other devices with data-processing capability, such as a tablet computer, can also serve as the vehicle controller.
In some embodiments, the data-collection port of the vehicle controller is also connected to other sensors arranged on the vehicle, including but not limited to: speed sensors, temperature sensors, and position sensors. Through the data-collection port, the vehicle controller can collect the vehicle's own operating parameters as well as the environmental parameters around the vehicle.
The signal output of the vehicle controller is connected to the vehicle's control systems, which include but are not limited to: the engine control system, steering control system, brake control system, powershift control system, and signal-light control system. By generating and sending different control signals, the vehicle controller can control the driving and operation of the vehicle.
The vehicle controller can communicate with the mobile terminal 201 and the wristband 202; in addition, sensors may be installed in the car 200, the mobile terminal 201, or the wristband 202. When the user is in the car 200, or in the scene where the car 200 is located, such as a parking lot or garage, the interaction scenario can be predicted based on these sensors. Interaction scenarios here refer to the environments in which the user may currently interact, including but not limited to: a navigation scenario, a lost scenario, a parking scenario, and a light-control scenario.
Fig. 3 is a structural schematic diagram of a preprocessing device for a human-computer interaction system provided by an embodiment of the present application. The preprocessing device can be applied in the computer 101 or smartphone 102 of Fig. 1, or in the vehicle controller of the car 200 or the mobile terminal 201 of Fig. 2.
The preprocessing device 300 shown in Fig. 3 includes: a display screen 306 (which in some embodiments may be a touch screen), at least one processor 301, at least one memory 302, at least one network interface 304, and other user interfaces 303. The components of the electronic device are coupled together by a bus system 305, which realizes the connections and communication between them. In addition to a data bus, the bus system 305 includes a power bus, a control bus, and a status-signal bus; for clarity of explanation, however, all buses are labeled as the bus system 305 in Fig. 3.
The user interface 303 may include a display, a keyboard, or a pointing device (for example a mouse, a trackball, or a touch-sensitive pad).
It is appreciated that the memory 302 in this embodiment can be volatile memory, non-volatile memory, or include both. Non-volatile memory can be read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can be random-access memory (RAM) used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synclink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 302 described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 302 stores the following elements — executable units or data structures, or a subset or superset of them: an operating system 3021 and application programs 3022.
The operating system 3021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing basic services and processing hardware-based tasks. The application programs 3022 contain various applications, such as a media player and a browser, for implementing application services. A program implementing the method of the embodiments of the present invention may be contained in the application programs 3022.
In the embodiments of the present invention, by calling a program or instructions stored in the memory 302 — specifically, a program or instructions stored in the application programs 3022 — the processor 301 executes the method steps provided by the embodiment of Fig. 6, for example: obtaining perception information from the target environment; predicting an interaction scenario from the perception information; matching a corpus corresponding to the interaction scenario; and setting the priority of each matched corpus.
The methods disclosed in the embodiments of the present invention can be applied in, or implemented by, the processor 301. The processor 301 may be an integrated-circuit chip with signal-processing capability. During implementation, each step of the above methods can be completed by integrated logic circuits of hardware in the processor 301 or by instructions in software form. The processor 301 can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logic diagrams disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments can be executed directly by a hardware decoding processor, or by a combination of hardware and software units in a decoding processor. The software unit can reside in a storage medium mature in the art, such as random-access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 302; the processor 301 reads the information in the memory 302 and completes the steps of the above methods in combination with its hardware.
It is understood that the embodiments described herein can be implemented with hardware, software, firmware, middleware, microcode, or a combination of them. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for executing the functions described herein, or a combination of them.
For a software implementation, the techniques described herein can be realized by units that execute the functions described herein. The software code can be stored in a memory and executed by a processor. The memory can be implemented inside the processor or outside the processor.
Fig. 4 is a flow diagram of a preprocessing method for a human-computer interaction system provided by an embodiment of the present application.
As shown in Fig. 4, the preprocessing method may include the following steps.
S110: obtain perception information from the target environment.
Multiple types of sensors may be installed in the scene where the user is; their functions, installation positions, and so on can be configured as required. They can be installed separately, or inside devices in the scene — for example, in the smart home devices of Fig. 1, or in the car 200 of Fig. 2.
Sensors collect data so that the human-computer interaction system can be controlled with environmental parameters as part of the input, improving the intelligence of the final control result. In the embodiments of the present application, any perceivable information about the people or objects in the environment can serve as perception information.
In some embodiments of the application, the perception information is information about the target environment itself, such as: position, altitude, temperature, humidity, light intensity, and volume.
In some embodiments of the application, the perception information is information about particular devices in the target environment, such as: the working state and driving information of a vehicle, the working state of a mobile phone, or the working state of a camera.
In some embodiments of the application, the perception information can be physiological information about the user in the target environment, such as: height, weight, respiratory rate, heart rate, and skin perspiration.
In some embodiments of the application, the perception information can be behaviors of the user in the target environment, for example: actions such as squatting or jumping, the user's direction and speed of motion, the movement path of the user's limbs, and the user's hand gestures.
Of course, those skilled in the art will understand that the foregoing description of perception information is based on specific scenes; as scenes differ and change, perception information can also grow or change, so the description should not be taken as limiting the application.
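The four categories of perception information above can be pictured as a simple record type. The patent does not prescribe any data model, so the class, field names, and sample readings below are purely illustrative assumptions — a minimal sketch of how such readings might be collected and grouped:

```python
from dataclasses import dataclass

# Hypothetical record type for perception information; the "kind" field
# mirrors the four embodiment categories listed in the text.
@dataclass
class PerceptionInfo:
    source: str    # e.g. "thermometer" or "wristband" (illustrative)
    kind: str      # "environment", "device", "physiology", or "behavior"
    name: str      # e.g. "temperature", "heart_rate", "gesture"
    value: object  # raw reading: number, string, vector, ...

readings = [
    PerceptionInfo("thermometer", "environment", "temperature", 29.5),
    PerceptionInfo("wristband", "physiology", "heart_rate", 72),
    PerceptionInfo("camera", "behavior", "gesture", "up"),
]

# Group readings by category before passing them to scenario prediction.
by_kind = {}
for r in readings:
    by_kind.setdefault(r.kind, []).append(r.name)

print(by_kind)
# {'environment': ['temperature'], 'physiology': ['heart_rate'], 'behavior': ['gesture']}
```

A real system would populate such records from the sensors of Fig. 1 or Fig. 2; here the values are hard-coded only to show the shape of the data.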
S120: predict an interaction scenario from the perception information.
Any single piece of perception information in the target environment can be used to define the interaction scenario of that environment. However, a scenario determined from a single type of perception information is prone to error and thus has low accuracy, so multiple different types of perception information need to be used to predict jointly.
Different types of perception information may corroborate one another, or may contradict one another. When clarifying the scene of the target environment, multiple different types of perception information can therefore be used to disambiguate the scene from several dimensions, so that the finally determined scene is more accurate.
The present application focuses on controlling devices through people's behavior; accordingly, the scenes mentioned in the embodiments mainly refer to interaction scenarios. An interaction scenario can be set or defined manually, or obtained by machine self-learning.
In some embodiments of the application, multiple different interaction scenarios can be predefined or configured according to the interactive capabilities of the human-computer interaction system. Each interaction scenario can be assigned a relatively independent operating mode. For example, the same operating gesture can express different operations in different interaction scenarios; likewise, the same voice input can express different semantics in different interaction scenarios.
When multiple different interaction scenarios are predefined or configured, each is constrained by one or more different kinds of perception information collected by sensors. When determining the interaction scenario, the obtained perception information can then be matched against the scenarios, and the scenario that matches is determined as the predicted interaction scenario.
In some embodiments of the application, where multiple interaction scenarios are preset, the current perception information (of one or more types) may fail to match any of them; alternatively, the human-computer interaction system may not have predefined multiple interaction scenarios at all. In that case, the perception information obtained this time, or repeatedly over history, can be learned together with prior knowledge, and a new interaction scenario can be obtained through learning.
The interaction scenarios obtained by learning, together with their corresponding perception information, are stored; accumulated over a long time, they form an interaction scenario database.
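The matching step of S120 can be sketched as a simple constraint-scoring loop: each predefined interaction scenario carries a "signature" of perception readings, and the scenario whose constraints best fit the current readings is the prediction. The scenario names, constraint keys, and threshold values below are invented for illustration — the patent leaves the concrete matching rule open:

```python
# Hypothetical scenario signatures; a real system would draw these from
# the interaction scenario database described above.
SCENARIOS = {
    "ac_control": {"room": "bedroom", "temperature_above": 28.0},
    "tv_control": {"room": "living_room", "tv_on": True},
}

def predict_scenario(readings):
    """Return the scenario whose constraints are best satisfied by the
    current perception readings, or None if nothing matches at all."""
    best, best_score = None, 0
    for name, sig in SCENARIOS.items():
        score = 0
        if sig.get("room") == readings.get("room"):
            score += 1
        if "temperature_above" in sig and readings.get("temperature", 0) > sig["temperature_above"]:
            score += 1
        if "tv_on" in sig and readings.get("tv_on") == sig["tv_on"]:
            score += 1
        if score > best_score:
            best, best_score = name, score
    # None signals "no match": the fallback is to learn a new scenario.
    return best

print(predict_scenario({"room": "bedroom", "temperature": 30.2}))  # ac_control
```

This illustrates why multiple perception types help: a bedroom reading alone is ambiguous, but combined with a high temperature it points to the air-conditioner control scenario.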
Step S130: match a corpus corresponding to the interaction scenario.
A corpus includes, but is not limited to: the operating modes of the human-computer interaction system, the control instruction sets in the system, and the semantic sets in the system. A semantic set is used to recognize the speech collected when the user provides voice input.
The operating modes of the human-computer interaction system mean that, while running, the system can switch between different operating modes according to the environment. For example, for a smart-home control device, the air-conditioner control mode in summer is usually cooling control, while in winter it is usually heating control. Different operating modes generally correspond to different scenes.
Refer to man-machine interactive system after receiving input operation for the control instruction collection in man-machine interactive system, it will Input operation is converted into being referred to when corresponding instruction, after detecting the input operation of user, can pass through control Instruction set processed goes to match corresponding instruction, such as:In the bedroom scene of summer evenings, determining control instruction collection can be sky The corresponding instruction set of refrigerating function of tune is matched to when user gesture is the gesture of " upward " from the control instruction lump Control instruction may be 1 degree of temperature rise;But in the scene that living-room TV plays, determining control instruction collection can be and electricity It is same when user gesture is the gesture of " upward " depending on operating corresponding instruction set, from the instruction set corresponding with TV operation In matched control instruction may be 1 lattice of volume UP.
It is to refer exclusively to carry out control situation for using voice input, user is not for the semanteme collection of man-machine interactively system When carrying out voice input in same scene, semanteme also can be different, such as:It, can be according to semanteme when " upward " is said in bedroom Analysis obtains being the temperature raising control instruction to air-conditioning, and when " upward " is said in parlor, it can be obtained according to semantic analysis To being that control instruction is turned up to the volume of TV.
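The scene dependence of the semantic set can be sketched as a lookup keyed by the active scenario. This is an illustrative assumption of structure, not the patent's implementation; scenario names and instruction strings are invented.

```python
# Sketch of scene-dependent semantic sets: the same utterance resolves to
# different control instructions depending on the matched interaction
# scenario. Scenario names and instruction strings are illustrative.

SEMANTIC_SETS = {
    "bedroom_summer_night": {"up": "ac_temperature_up_1_degree"},
    "living_room_tv":       {"up": "tv_volume_up_1_step"},
}

def interpret(utterance, scenario):
    # Look up the utterance only in the semantic set of the active scenario.
    return SEMANTIC_SETS[scenario].get(utterance)

print(interpret("up", "bedroom_summer_night"))  # ac_temperature_up_1_degree
print(interpret("up", "living_room_tv"))        # tv_volume_up_1_step
```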
Step S140: setting the priority of the corpus obtained by matching.

If only one corpus is obtained by matching, the priority of that corpus can correspondingly be set to the highest. If two or more corpora are obtained by matching, the question arises, when subsequently using the corpora to recognize the user's input operation, of which corpus to use first. For this purpose, after the multiple corresponding corpora have been accurately determined through the scenario, the priorities of the multiple corpora can be configured.

By setting the priorities of the multiple corpora, after the user's input operation is received, corpora can be selected in order of priority from high to low to process the input operation, so as to improve the accuracy of processing the user's input operation.
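The priority-ordered lookup just described can be sketched as follows: corpora are tried from highest to lowest priority, and the first one that recognizes the input operation wins. The data and names are illustrative assumptions, not from the patent.

```python
# Sketch of priority-ordered corpus lookup: try corpora from highest to
# lowest priority; the first corpus that recognizes the operation wins.
# Priorities, operations, and instruction names are illustrative.

def handle(operation, corpora):
    """corpora: list of (priority, mapping from operation to instruction)."""
    for _, corpus in sorted(corpora, key=lambda c: c[0], reverse=True):
        if operation in corpus:
            return corpus[operation]
    return None  # no corpus recognizes the operation

corpora = [
    (1, {"up": "tv_volume_up"}),       # low priority: TV instruction set
    (9, {"up": "ac_temperature_up"}),  # high priority: matched to the scenario
]
print(handle("up", corpora))  # ac_temperature_up
```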
With the method provided by the embodiments of the present application, during the working process of the man-machine interactive system and before the user's input operation or interaction is received, the perception information of the environment to be measured can be detected in the background, the interaction scenario can be predicted according to the detected perception information, the corresponding corpus can be matched directly for the predicted interaction scenario, and finally the priority of the matched corpus can be configured. Therefore, when the user's input operation is subsequently recognized and processed, the matched corpus can be used preferentially, thereby improving the processing speed, accuracy, and flexibility of the man-machine interactive system.

In some embodiments of the present application, after the priority of the corpus obtained by matching has been configured, in the subsequent control of the man-machine interactive system, the method may further include the following steps:

S150: obtaining the user's input operation information.

The user's input operation information refers to information, corresponding to an operation, that can be recognized by the man-machine interactive system. In the embodiments of the present application, the operation includes, but is not limited to: voice input, an operating gesture, a limb posture, a pressing or sliding operation on a touch screen, or a physical button operation. In addition, those skilled in the art will appreciate that as the human-computer interaction device differs, the corresponding operations can change accordingly; in this regard, the above examples of operations should not constitute a limitation on the present application.

S160: converting the user's input operation information into a control instruction based on the priority of the corpus.

In the embodiments of the present application, when the user's input operation information is converted, corpora are selected in order of priority from high to low.

In one embodiment of the present application, if the current user's operation is voice input, the voice can be recognized according to the semantic set. In another embodiment, if the current user's operation is a pressing or sliding operation on a touch screen, the operation can be converted according to the control instruction set into the control instruction matched to the press or slide. In a further embodiment, if the current user's operation is non-voice input, the operation information can also be converted into a corresponding operating mode according to the correspondence between operations and operating modes.
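The conversion step S160 can be sketched as a dispatch on the kind of input operation, each kind being resolved through the appropriate corpus. The corpus contents, operation kinds, and instruction names are assumptions for illustration.

```python
# Sketch of the conversion step: dispatch on the kind of input operation,
# then resolve it through the appropriate corpus. Contents and names are
# illustrative assumptions.

def convert(operation, kind, semantic_set, instruction_set, mode_map):
    if kind == "voice":
        return semantic_set.get(operation)     # recognize via semantic set
    if kind == "touch":
        return instruction_set.get(operation)  # press/slide -> instruction
    return mode_map.get(operation)             # other input -> operating mode

semantic_set = {"up": "ac_temperature_up"}
instruction_set = {"slide_up": "ac_temperature_up"}
mode_map = {"long_press": "cooling_mode"}

print(convert("up", "voice", semantic_set, instruction_set, mode_map))
print(convert("slide_up", "touch", semantic_set, instruction_set, mode_map))
print(convert("long_press", "button", semantic_set, instruction_set, mode_map))
```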
With the method provided by the present application, after the corpora have been configured according to the scenario, when the user uses the man-machine interactive system, the user's input operation information is detected in real time and converted into the corresponding control instruction based on the corpora, improving the accuracy and timeliness with which the man-machine interactive system responds to the user's operation.

Fig. 5 is a schematic flowchart of the preprocessing method of another man-machine interactive system provided by the embodiments of the present application. As shown in Fig. 5, the method may include the following steps:
S210: obtaining the perception information in the environment to be measured.

S220: predicting the interaction scenario according to the perception information.

S230: matching a corpus corresponding to the interaction scenario.

S240: judging whether the corpus matched with the interaction scenario is unique.
For some special scenarios, for example when the user is in a moving vehicle, the corresponding scenario can only be the driving scenario, and the corpus corresponding to the driving scenario is mostly just the control instruction set for the vehicle; hence it can be determined whether the corpus matched with the interaction scenario is unique.

S250: when the corpus matched with the interaction scenario is unique, promoting the priority of the unique corpus to the highest.

For a man-machine interactive system, multiple corpora are usually pre-set, and when subsequently using the corpora to recognize the user's input operation, the question arises of which corpus to use first. If the corpus matched by the interaction scenario predicted according to the perception information of the environment to be measured is unique, then in this step the priority of the unique corpus can be set directly to the highest.
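Steps S240/S250 amount to a short-circuit in the priority-setting logic, which can be sketched as follows. The priority scale and corpus names are invented for the example; the non-unique branch is a placeholder.

```python
# Sketch of S240/S250: when only one corpus matches the predicted scenario,
# its priority is set directly to the maximum, skipping any ranking step.
# The priority scale and names are illustrative assumptions.

MAX_PRIORITY = 100  # assumed scale

def set_priorities(matched):
    """matched: list of corpus names matched to the interaction scenario."""
    if len(matched) == 1:
        return {matched[0]: MAX_PRIORITY}
    # Non-unique case handled elsewhere (e.g. by degree of association).
    return {name: 0 for name in matched}

print(set_priorities(["vehicle_control"]))  # {'vehicle_control': 100}
```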
Using this method, when the user operates the man-machine interactive system, no selection among multiple corpora is needed; the unique corpus can be used preferentially for processing, improving the accuracy and timeliness of the man-machine interactive system's response.

Fig. 6 is a schematic flowchart of the preprocessing method of yet another man-machine interactive system provided by the embodiments of the present application. As shown in Fig. 6, the method may include the following steps:

S310: obtaining the perception information in the environment to be measured.

S320: predicting the interaction scenario according to the perception information.

S330: matching a corpus corresponding to the interaction scenario.

S340: judging whether the corpus matched with the interaction scenario is unique.
S350: when the corpus matched with the interaction scenario is not unique, determining the degree of association between each corpus found and the interaction scenario.

In some embodiments of the present application, multiple different scene tags can be pre-set for each corpus, and then in this step the degree of association between a corpus and the interaction scenario can be determined using the scene tags. Those of ordinary skill in the art will appreciate that the degree of association can be any number, or multiple different grades.

S360: setting the priority of each corpus in a manner in which priority is directly proportional to the degree of association.

Priority being directly proportional to the degree of association means that the higher the priority, the greater the degree of association; conversely, the lower the priority, the smaller the degree of association.

In the embodiments of the present application, the degree of association between each corpus and the interaction scenario can first be found, and the priority then set according to the degree of association; when the priority is set, the greater the degree of association, the higher the priority. With this priority-setting manner, when the user operates the man-machine interactive system, the corpus with the higher priority can be selected first, in order of priority from high to low; when the selected high-priority corpus conflicts with the scenario, a corpus of lower priority is selected instead, thereby improving the accuracy and timeliness of the man-machine interactive system's response.
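Steps S350/S360 can be sketched with a simple scene-tag overlap as the degree of association, ranking corpora in proportion to it. The choice of overlap count as the measure, and all tags and corpus names, are illustrative assumptions.

```python
# Sketch of S350/S360: estimate the degree of association between each
# corpus and the interaction scenario from overlapping scene tags, then
# rank corpora so that a larger degree of association gives a higher
# priority. The overlap-count measure and all names are illustrative.

def degree_of_association(corpus_tags, scenario_tags):
    # A simple choice: count of shared scene tags.
    return len(set(corpus_tags) & set(scenario_tags))

def rank(corpora, scenario_tags):
    """corpora: mapping corpus name -> scene tags. Returns names, best first."""
    return sorted(
        corpora,
        key=lambda name: degree_of_association(corpora[name], scenario_tags),
        reverse=True,
    )

corpora = {
    "tv_instructions": ["living_room", "evening"],
    "ac_instructions": ["living_room", "summer", "evening"],
}
print(rank(corpora, ["living_room", "summer", "evening"]))
# ac_instructions (3 shared tags) ranks above tv_instructions (2)
```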
Fig. 7 is a schematic flowchart of the preprocessing method of yet another man-machine interactive system provided by the embodiments of the present application. As shown in Fig. 7, the method may include the following steps:

S410: obtaining the perception information in the environment to be measured.

When perceiving the environment to be measured, the perception information to be acquired can be determined according to the types of sensors arranged in the environment to be measured.

In the embodiments of the present application, the perception information includes, but is not limited to: environment characteristic parameters of the environment to be measured, physiological characteristic parameters of the user in the environment to be measured, behavioral characteristic parameters of the user in the environment to be measured, and equipment characteristic parameters in the environment to be measured. Those skilled in the art will appreciate that, beyond the above, as the types of sensors in the environment to be measured increase, the perception information can also change accordingly; this should not constitute a limitation on the present application.
S420: obtaining multiple characteristic parameters of the same type, or multiple characteristic parameters of different types, and performing a comprehensive analysis.

Analyzing the user based on characteristic parameters of a single type is usually somewhat one-sided; that is, the user's behavior cannot be analyzed accurately and comprehensively. For this purpose, multiple characteristic parameters of various different types can also be analyzed in combination, to improve the accuracy of confirming the interaction scenario.

Taking the user's physiological characteristic parameters as an example: if the user's range of activity in the cab is large and the amount of perspiration is large, then based on the physiological characteristic parameters alone it may be judged that the temperature in the cab is high and the air conditioner needs to be turned on or the window opened wide; but the user's actual situation may be that being lost has made the user anxious and impatient and increased perspiration. For this reason, for the situation of the user in the cab, it is also necessary to perform a comprehensive analysis of circumstances such as whether there is a deviation between the current vehicle position and the navigation route, and whether the vehicle's travel route coincides with it; only then can the accuracy of the final result be guaranteed.
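The cab example above can be sketched as a two-signal decision: a physiological signal alone (heavy perspiration) is ambiguous, so it is combined with a navigation signal (deviation from the route) before a scenario is chosen. The scenario labels and the boolean inputs are invented for illustration.

```python
# Sketch of the comprehensive analysis in S420/S430: combine a physiological
# signal with a navigation signal before choosing a scenario, because the
# physiological signal alone is ambiguous. Labels are illustrative.

def classify_cab_scenario(perspiration_high, off_route):
    if perspiration_high and off_route:
        return "lost_and_anxious"  # anxiety, not heat, explains the sweating
    if perspiration_high:
        return "cab_too_warm"      # heat is the more plausible cause
    return "normal_driving"

print(classify_cab_scenario(True, True))    # lost_and_anxious
print(classify_cab_scenario(True, False))   # cab_too_warm
print(classify_cab_scenario(False, False))  # normal_driving
```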
S430: determining the interaction scenario according to the result of the comprehensive analysis.

S440: matching a corpus corresponding to the interaction scenario.

S450: setting the priority of the corpus obtained by matching.
Fig. 8 is a schematic flowchart of the preprocessing method of yet another man-machine interactive system provided by the embodiments of the present application. As shown in Fig. 8, the method may include the following steps:

S510: obtaining the perception information in the environment to be measured.

When perceiving the environment to be measured, the perception information to be acquired can be determined according to the types of sensors arranged in the environment to be measured.

In the embodiments of the present application, the perception information includes, but is not limited to: environment characteristic parameters of the environment to be measured, physiological characteristic parameters of the user in the environment to be measured, behavioral characteristic parameters of the user in the environment to be measured, and equipment characteristic parameters in the environment to be measured. Those skilled in the art will appreciate that, beyond the above, as the types of sensors in the environment to be measured increase, the perception information can also change accordingly; this should not constitute a limitation on the present application.

S520: determining the surrounding environment according to the environment characteristic parameters.

The environment characteristic parameters include, but are not limited to: position data, altitude data, and the like, wherein the position data can be GPS data, or names of roads, administrative regions, buildings, and so on. The surrounding environment includes, but is not limited to: an indoor environment, an outdoor road environment, an outdoor park environment, a vehicle driving environment, and a parking lot environment.
S530: determining the user intention according to at least one of the physiological characteristic parameters, the behavioral characteristic parameters, and the equipment characteristic parameters.

With the user in the environment, the user's intention can be identified in combination with the user's passive information, wherein the physiological characteristic parameters include, but are not limited to: heart rate, pulse, respiratory rate, and amount of perspiration. For example: in a cinema environment, if the heartbeat quickens, breathing is short, and the skin perspires, it can be determined that the user currently dislikes the film being watched, such as a horror film.

The behavioral characteristic parameters refer to certain behaviors of the user when not performing active control, including but not limited to: walking speed, manner of movement, and limb inclination. If the walking speed is fast, it can be determined that the user is hurrying; if the manner of movement is dance steps, it can be determined that the user is practicing dancing at that time, and so on.

The equipment characteristic parameters refer to data that can be detected by sensors carried by the user, or by sensors around the user that are associated with the user. For example: the movement of the user's eyeballs is captured by an eye tracker, and the trajectory of the user's viewpoint is determined; according to the trajectory of the viewpoint, the user's next browsing content can be predicted.

In the embodiments of the present application, the foregoing describes only the standalone case of each parameter. In other embodiments, considering the complexity and variability of scenarios, various kinds of parameters can also be combined, and the user's intention precisely predicted using two or more characteristic parameters.
S540: predicting the user's interaction scenario based on the surrounding environment and the user intention.

S550: matching a corpus corresponding to the interaction scenario.

S560: setting the priority of the corpus obtained by matching.

For other steps, reference can be made to the previous embodiments; details are not repeated here.
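The pipeline S520–S540 can be sketched as two derivations followed by a lookup: the surrounding environment from environment characteristic parameters, the user intention from the other parameter types, and the interaction scenario from the pair. The table, parameter names, and classification rules are assumptions for illustration.

```python
# Sketch of S520-S540: derive the surrounding environment and the user
# intention separately, then look up the interaction scenario from the
# pair. The table and parameter names are illustrative assumptions.

SCENARIO_TABLE = {
    ("vehicle_driving", "navigate"): "driving_navigation",
    ("indoor", "watch_tv"):          "living_room_tv",
}

def predict_scenario(env_params, user_params):
    environment = "vehicle_driving" if env_params.get("on_road") else "indoor"
    intention = "navigate" if user_params.get("gazing_at_map") else "watch_tv"
    return SCENARIO_TABLE.get((environment, intention))

print(predict_scenario({"on_road": True}, {"gazing_at_map": True}))
# driving_navigation
```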
The preprocessing method of the man-machine interactive system provided by the embodiments of the present application is elaborated below with reference to concrete scenarios.

In some scenarios of the present application, the man-machine interactive system monitors that the in-vehicle map interface is in the open state; if it further monitors that the user issues a voice instruction and the voice instruction carries a place, while an eye-movement sensing device captures the map location the user is currently gazing at, then, taken together, it can be determined that the interaction scenario at this time is a driving navigation scenario.

When matching the corpus in the driving navigation scenario, matching can be performed with one or both of the current location and the destination place at the same time. For example: in one case, the destination the user is gazing at is monitored, and a corpus of the surrounding environment corresponding to the destination is matched; in another case, the current location of the vehicle is detected by GPS positioning. The priority of the corpora of place names around the destination and the current location is then raised, so that the corpora of place names around the place where the user is currently located and around the destination site have relatively high priority.

This takes into account that, in the driving navigation scenario, the user is most likely to select places around the current location and the destination site. For example: in the aforementioned driving navigation scenario, if the user uses voice input, what is input may be "cinemas around the destination"; since the priority of the corpus corresponding to cinemas around the destination has been raised in advance, the accuracy of matching the operation can be improved.
In some scenarios of the present application, likewise during driving navigation, the sensors on the vehicle determine that the vehicle is in a pre-parking scenario. At this point, the priority of the parking-related corpus can be raised, so that when it is monitored that the user gazes at the destination in the navigation interface, the user only needs to say a "STOP" instruction to control the vehicle to park in a parking space at the target location.

In some scenarios of the present application, for the simplicity or multi-functionality of a product, multiple hidden menus or submenus are usually arranged in a touch screen.

Taking voice control as an example, when the user performs voice input, the voice usually needs to be matched against every possible menu, so that the accuracy and timeliness of speech recognition are poor.
In this voice control scenario, for hidden menus or submenus, whether the user gazes at a certain icon on the touch screen is monitored. When it is determined that the user gazes at icon A, the scenario is determined to be the icon-A scenario, and the priority of the corpus corresponding to the hidden menu or submenu related to icon A can be raised in the background. Then, when the user performs a voice input operation while gazing at icon A, the corpus corresponding to the hidden menu or submenu related to icon A can be used preferentially for matching. For example: multiple icons such as "air conditioner" and "stereo" are arranged on a vehicle touch screen; when the user gazes at the air conditioner icon and says "open", the air conditioner can be turned on immediately rather than the stereo, thus achieving accuracy and timeliness of speech recognition.
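The gaze-plus-voice example above can be sketched as follows: gazing at an icon moves that icon's hidden-menu corpus to the front of the search order, so a short utterance like "open" resolves against the gazed-at device first. The icon and instruction names are illustrative assumptions.

```python
# Sketch of the gaze-plus-voice example: gazing at an icon raises the
# priority of that icon's hidden-menu corpus, so a short utterance
# resolves against the gazed-at device first. Names are illustrative.

MENU_CORPORA = {
    "air_conditioner": {"open": "turn_on_air_conditioner"},
    "stereo":          {"open": "turn_on_stereo"},
}

def resolve(utterance, gazed_icon):
    # Try the corpus of the gazed-at icon first, then the others.
    order = [gazed_icon] + [k for k in MENU_CORPORA if k != gazed_icon]
    for icon in order:
        if utterance in MENU_CORPORA[icon]:
            return MENU_CORPORA[icon][utterance]
    return None

print(resolve("open", "air_conditioner"))  # turn_on_air_conditioner
print(resolve("open", "stereo"))           # turn_on_stereo
```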
Similarly, when a physical operation key or operating lever has been made multi-functional, certain perception information can likewise first be used to determine the matched operating mode; then, when the user actually operates the physical operation key or operating lever, it can work directly in the operating mode matched in advance.

In some scenarios of the present application, when it is detected that the user is browsing pictures, for example browsing a map or photos, the user's intention when browsing pictures can be predicted in combination with the voice instruction issued by the user and the movement range of the user's gaze point on the screen captured by an eye-movement capture instrument, and the corresponding corpus matched according to that intention. Then, when the user issues a voice instruction, matching can be performed directly in that corpus, improving the accuracy and timeliness of recognizing the user's voice.
In some scenarios of the present application, when it is detected that driving navigation has started, the state of the driver can be monitored, for example: the frequency of the driver's eye movements over the road surface, the number of viewpoints, the attention time of a single viewpoint, the driver's heart rate, and so on. Once it is detected that the frequency of the driver's eye movements over the ground increases, the number of viewpoints increases, the attention time of a single viewpoint shortens, and the driver's heart rate is on the rise, it can be determined that the interaction scenario at this time is a lost scenario. Then, when matching the corpus, the semantic set corresponding to road conditions is matched preferentially, because at this time what the user most likely wants to understand is road condition information; once the user performs voice input, the semantic set corresponding to road conditions is used preferentially to convert the voice, improving the accuracy of speech recognition.
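The lost-driver detection just described can be sketched as a conjunction of monitored signals, all of which must point the same way before the "lost" scenario is declared and the road-condition semantic set is promoted. The thresholds are invented for the example and are not from the patent.

```python
# Sketch of the lost-driver detection: several monitored signals must all
# point the same way before the "lost" scenario is declared and the
# road-condition semantic set is promoted. Thresholds are invented.

def is_lost(eye_move_freq, viewpoint_count, single_gaze_time_s, heart_rate):
    return (eye_move_freq > 2.0           # eye movements per second, rising
            and viewpoint_count > 10      # many distinct viewpoints
            and single_gaze_time_s < 0.5  # each gaze very short
            and heart_rate > 95)          # heart rate on the rise

def pick_semantic_set(lost):
    return "road_conditions" if lost else "general"

lost = is_lost(3.1, 14, 0.3, 102)
print(lost, pick_semantic_set(lost))  # True road_conditions
```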
The scenarios described above are detailed schemes of some implementations of the foregoing method embodiments. In other embodiments of the present application, those skilled in the art can apply the method provided by the embodiments in more scenarios, which likewise belongs to the protection scope of the present application.

Fig. 9 is a schematic structural diagram of a preprocessing apparatus of a man-machine interactive system provided by the embodiments of the present application. The man-machine interactive system preprocessing apparatus shown in Fig. 9 can be applied in the computer 101 or the smartphone 102 in Fig. 1, and can also be applied in the vehicle controller of the automobile 200 or the mobile terminal 201 in Fig. 2.

As shown in Fig. 9, the preprocessing apparatus may include: a perception information acquiring unit 61, for obtaining the perception information in the environment to be measured, wherein, in the embodiments of the present application, the perception information includes, but is not limited to: environment characteristic parameters of the environment to be measured, physiological characteristic parameters of the user in the environment to be measured, behavioral characteristic parameters of the user in the environment to be measured, and equipment characteristic parameters in the environment to be measured; an interaction scenario predicting unit 62, for predicting the interaction scenario according to the perception information; a corpus matching unit 63, for matching a corpus corresponding to the interaction scenario, wherein, in the embodiments of the present application, the corpus includes, but is not limited to: operating modes of the man-machine interactive system, control instruction sets in the man-machine interactive system, and semantic sets in the man-machine interactive system, the semantic sets being used to recognize the voice collected when the user performs voice input; and a priority setting unit 64, for setting the priority of the corpus obtained by matching.
On the basis of the embodiment shown in Fig. 9, in another embodiment of the present application, the priority setting unit includes: a judgment subunit, for judging whether the corpus matched with the interaction scenario is unique; and a priority promotion subunit, for promoting the priority of the unique corpus to the highest when the corpus matched with the interaction scenario is unique.

On the basis of the aforementioned illustrated embodiment, in another embodiment of the present application, the priority setting unit includes: a degree-of-association determination subunit, for determining, when the corpus matched with the interaction scenario is not unique, the degree of association between each corpus found and the interaction scenario; and a priority setting subunit, for setting the priority of each corpus in a manner in which priority is directly proportional to the degree of association.

On the basis of the aforementioned illustrated embodiments, in other embodiments of the present application, the interaction scenario predicting unit includes: an analysis subunit, for obtaining multiple characteristic parameters of the same type, or multiple characteristic parameters of different types, and performing a comprehensive analysis, the types including: environment characteristic parameters, physiological characteristic parameters, behavioral characteristic parameters, and equipment characteristic parameters; and an interaction scenario determination subunit, for determining the interaction scenario according to the result of the comprehensive analysis.

On the basis of the embodiment shown in Fig. 9, in other embodiments of the present application, the interaction scenario predicting unit includes: an environment determination subunit, for determining the surrounding environment according to the environment characteristic parameters; an intention determination subunit, for determining the user intention according to at least one of the physiological characteristic parameters, the behavioral characteristic parameters, and the equipment characteristic parameters; and an interaction scenario determination subunit, for predicting the user's interaction scenario based on the surrounding environment and the user intention.

On the basis of the embodiment shown in Fig. 9, in other embodiments of the present application, the apparatus further includes: an operation information acquiring unit, for obtaining the user's input operation information; and an instruction converting unit, for converting the user's input operation information into a control instruction based on the priority of the corpus.
The embodiments of the present invention also propose a non-transitory computer-readable storage medium, which stores computer instructions that cause a computer to execute the method steps provided by any of the embodiments of Figs. 4-7, for example including: obtaining the perception information in the environment to be measured; predicting the interaction scenario according to the perception information; matching a corpus corresponding to the interaction scenario; and setting the priority of the corpus obtained by matching.

The embodiments of the present application provide the preprocessing method of a man-machine interactive system, including:
A1. A preprocessing method of a man-machine interactive system, including: obtaining the perception information in an environment to be measured; predicting an interaction scenario according to the perception information; matching a corpus corresponding to the interaction scenario; and setting the priority of the corpus obtained by matching.

A2. The method according to A1, wherein the setting the priority of the corpus obtained by matching includes: judging whether the corpus matched with the interaction scenario is unique; and when the corpus matched with the interaction scenario is unique, promoting the priority of the unique corpus to the highest.

A3. The method according to A2, wherein the setting the priority of the corpus obtained by matching further includes: when the corpus matched with the interaction scenario is not unique, determining the degree of association between each corpus found and the interaction scenario; and setting the priority of each corpus in a manner in which priority is directly proportional to the degree of association.

A4. The method according to A2 or A3, wherein the corpus includes at least one of the following: an operating mode of the man-machine interactive system; a control instruction set in the man-machine interactive system; and a semantic set in the man-machine interactive system, wherein the semantic set is used to recognize the voice when the user performs voice input.

A5. The method according to A1, wherein the perception information includes at least one of the following: environment characteristic parameters of the environment to be measured; physiological characteristic parameters of the user in the environment to be measured; behavioral characteristic parameters of the user in the environment to be measured; and equipment characteristic parameters in the environment to be measured.

A6. The method according to A5, wherein the predicting an interaction scenario according to the perception information includes: obtaining multiple characteristic parameters of the same type, or multiple characteristic parameters of different types, and performing a comprehensive analysis, the types including: environment characteristic parameters, physiological characteristic parameters, behavioral characteristic parameters, and equipment characteristic parameters; and determining the interaction scenario according to the result of the comprehensive analysis.

A7. The method according to A5, wherein the predicting an interaction scenario according to the perception information includes: determining the surrounding environment according to the environment characteristic parameters; determining the user intention according to at least one of the physiological characteristic parameters, the behavioral characteristic parameters, and the equipment characteristic parameters; and predicting the user's interaction scenario based on the surrounding environment and the user intention.

A8. The method according to A1, further including: obtaining the user's input operation information; and converting the user's input operation information into a control instruction based on the priority of the corpus.
B1, a kind of pretreatment unit of man-machine interactive system, including:Perception information acquiring unit, for obtaining ring to be measured Perception information in border;Interaction scenarios predicting unit, for according to the perception information, predicting interaction scenarios;Language material storehouse matching Unit, for matching corpus corresponding with the interaction scenarios;Priority setting unit is obtained for the matching to be arranged Corpus priority.
B2, the device according to B1, the priority setting unit, including:Judgment sub-unit, for judge with it is described Whether the corpus that interaction scenarios match is unique;Priority promotes subelement, for what ought be matched with the interaction scenarios When corpus is unique, the priority of unique corpus is promoted to highest.
B3, the device according to B2, the priority setting unit, including:Degree of association determination subelement, for when with When the corpus that the interaction scenarios match is not unique, the determining each corpus found is associated with the interaction scenarios Degree;Subelement is arranged in priority, for according to the priority mode directly proportional to the degree of association, the excellent of each corpus to be arranged First grade.
B4, the device according to B2 or B3, the corpus include following at least one:The work of man-machine interactive system Pattern;Control instruction collection in man-machine interactive system;Semantic collection in man-machine interactive system, wherein the semantic collection is used for identification Voice when the voice input of family.
B5, the device according to B1, the perception information, including it is at least one of following:The environment of the environment to be measured Characteristic parameter;The physiological characteristic parameter of user in the environment to be measured;The cybernetics control number of user in the environment to be measured;? Equipment characteristic parameter in the environment to be measured.
B6, the device according to B5, the interaction scenarios predicting unit, including:Subelement is analyzed, it is same for obtaining Multiple characteristic parameters of type or different types of multiple characteristic parameters carry out comprehensive analysis, and the type includes:Environment is special Levy parameter, physiological characteristic parameter, cybernetics control number and equipment characteristic parameter;Interaction scenarios determination subelement, according to comprehensive point The result of analysis determines interaction scenarios.
B7. The device according to B5, wherein the interaction scenario prediction unit includes: an environment determination subunit for determining the ambient environment according to the environment characteristic parameters; an intent determination subunit for determining user intent according to at least one of the physiological characteristic parameters, the behavioral characteristic parameters, and the device characteristic parameters; and an interaction scenario determination subunit for predicting the interaction scenario of the user based on the ambient environment and the user intent.
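A minimal sketch of the B7 pipeline follows. The classification rules, thresholds, scenario labels, and parameter names are hypothetical; the patent does not specify them:

```python
def determine_environment(env_params):
    # Hypothetical rule: treat a loud ambient reading as being inside a vehicle.
    return "in_vehicle" if env_params.get("noise_db", 0) > 60 else "indoors"

def determine_intent(physio, behavior, device):
    # Hypothetical rules: infer intent from whichever parameter is available.
    if behavior.get("hands_on_wheel"):
        return "driving"
    if device.get("screen_active"):
        return "using_device"
    return "idle"

def predict_scenario(env_params, physio, behavior, device):
    """B7: combine the ambient environment and the user intent into a scenario."""
    return (determine_environment(env_params),
            determine_intent(physio, behavior, device))
```

With a 75 dB noise reading and hands detected on the wheel, this sketch predicts the scenario `("in_vehicle", "driving")`.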
B8. The device according to B1, further comprising: an operation information acquisition unit for obtaining user input operation information; and an instruction conversion unit for converting the user input operation information into a control instruction based on the priority of the corpus.
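The conversion step of B8 can be read as: try the matched corpora in priority order until one interprets the user's input. A sketch under that reading; the dictionary-based corpus format and the example commands are assumptions:

```python
def convert_input(user_input, corpora):
    """B8: convert user input operation information into a control
    instruction, preferring the corpus with the highest priority."""
    for corpus in sorted(corpora, key=lambda c: c["priority"], reverse=True):
        command = corpus["commands"].get(user_input)
        if command is not None:
            return command
    return None  # no corpus could interpret the input

# Hypothetical corpora for two scenarios, already prioritized for driving.
driving = {"priority": 0.9, "commands": {"open": "open_window"}}
home = {"priority": 0.1, "commands": {"open": "open_door"}}
```

With both corpora loaded, the ambiguous input "open" resolves to the driving corpus's command, because scenario prediction raised that corpus's priority.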
Those of ordinary skill in the art will recognize that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled practitioners may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium, including instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
It should be noted that, herein, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments rather than others, combinations of features of different embodiments are within the scope of the present invention and form different embodiments.

Claims (18)

1. A preprocessing method for a human-computer interaction system, comprising:
obtaining perception information in an environment to be measured;
predicting an interaction scenario according to the perception information;
matching a corpus corresponding to the interaction scenario;
setting a priority of the corpus obtained by the matching.
2. The method according to claim 1, wherein setting the priority of the corpus obtained by the matching comprises:
judging whether the corpus matching the interaction scenario is unique;
when the corpus matching the interaction scenario is unique, promoting the priority of the unique corpus to the highest level.
3. The method according to claim 2, wherein setting the priority of the corpus obtained by the matching further comprises:
when the corpus matching the interaction scenario is not unique, determining the degree of association of each found corpus with the interaction scenario;
setting the priority of each corpus in proportion to its degree of association.
4. The method according to claim 2 or 3, wherein the corpus comprises at least one of the following:
an operating mode of the human-computer interaction system;
a control instruction set of the human-computer interaction system;
a semantic set of the human-computer interaction system, wherein the semantic set is used to recognize speech when the user provides voice input.
5. The method according to claim 1, wherein the perception information comprises at least one of the following:
environment characteristic parameters of the environment to be measured;
physiological characteristic parameters of a user in the environment to be measured;
behavioral characteristic parameters of a user in the environment to be measured;
device characteristic parameters in the environment to be measured.
6. The method according to claim 5, wherein predicting the interaction scenario according to the perception information comprises:
obtaining multiple characteristic parameters of the same type, or multiple characteristic parameters of different types, and performing a comprehensive analysis, the types comprising: environment characteristic parameters, physiological characteristic parameters, behavioral characteristic parameters, and device characteristic parameters;
determining the interaction scenario according to the result of the comprehensive analysis.
7. The method according to claim 5, wherein predicting the interaction scenario according to the perception information comprises:
determining the ambient environment according to the environment characteristic parameters;
determining user intent according to at least one of the physiological characteristic parameters, the behavioral characteristic parameters, and the device characteristic parameters;
predicting the interaction scenario of the user based on the ambient environment and the user intent.
8. The method according to claim 1, further comprising:
obtaining user input operation information;
converting the user input operation information into a control instruction based on the priority of the corpus.
9. A preprocessing device for a human-computer interaction system, comprising:
a perception information acquisition unit for obtaining perception information in an environment to be measured;
an interaction scenario prediction unit for predicting an interaction scenario according to the perception information;
a corpus matching unit for matching a corpus corresponding to the interaction scenario;
a priority setting unit for setting a priority of the corpus obtained by the matching.
10. The device according to claim 9, wherein the priority setting unit comprises:
a judgment subunit for judging whether the corpus matching the interaction scenario is unique;
a priority promotion subunit for promoting the priority of the unique corpus to the highest level when the corpus matching the interaction scenario is unique.
11. The device according to claim 10, wherein the priority setting unit comprises:
a degree-of-association determination subunit for determining, when the corpus matching the interaction scenario is not unique, the degree of association of each found corpus with the interaction scenario;
a priority setting subunit for setting the priority of each corpus in proportion to its degree of association.
12. The device according to claim 10 or 11, wherein the corpus comprises at least one of the following:
an operating mode of the human-computer interaction system;
a control instruction set of the human-computer interaction system;
a semantic set of the human-computer interaction system, wherein the semantic set is used to recognize speech when the user provides voice input.
13. The device according to claim 9, wherein the perception information comprises at least one of the following:
environment characteristic parameters of the environment to be measured;
physiological characteristic parameters of a user in the environment to be measured;
behavioral characteristic parameters of a user in the environment to be measured;
device characteristic parameters in the environment to be measured.
14. The device according to claim 13, wherein the interaction scenario prediction unit comprises:
an analysis subunit for obtaining multiple characteristic parameters of the same type, or multiple characteristic parameters of different types, and performing a comprehensive analysis, the types comprising: environment characteristic parameters, physiological characteristic parameters, behavioral characteristic parameters, and device characteristic parameters;
an interaction scenario determination subunit for determining the interaction scenario according to the result of the comprehensive analysis.
15. The device according to claim 13, wherein the interaction scenario prediction unit comprises:
an environment determination subunit for determining the ambient environment according to the environment characteristic parameters;
an intent determination subunit for determining user intent according to at least one of the physiological characteristic parameters, the behavioral characteristic parameters, and the device characteristic parameters;
an interaction scenario determination subunit for predicting the interaction scenario of the user based on the ambient environment and the user intent.
16. The device according to claim 9, further comprising:
an operation information acquisition unit for obtaining user input operation information;
an instruction conversion unit for converting the user input operation information into a control instruction based on the priority of the corpus.
17. A preprocessing apparatus for a human-computer interaction system, comprising:
a processor, a memory, a network interface, and a user interface;
wherein the processor, memory, network interface, and user interface are coupled by a bus system;
and the processor, by calling programs or instructions stored in the memory, executes the steps of the preprocessing method for a human-computer interaction system according to any one of claims 1 to 8.
18. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions cause a computer to execute the steps of the preprocessing method for a human-computer interaction system according to any one of claims 1 to 8.
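Claims 1 through 4 together describe a pipeline: obtain perception information, predict a scenario, match corpora, and rank them. A toy end-to-end sketch of those steps; the speed-based scenario rule, the registry format, and the normalization are invented for illustration and are not claimed by the patent:

```python
def preprocess(perception, scenario_corpora):
    """Claims 1-4 end to end: predict a scenario from perception
    information, match its corpora, and set their priorities."""
    # Step 2: predict the interaction scenario (hypothetical rule).
    scenario = "driving" if perception.get("speed_kmh", 0) > 0 else "parked"
    # Step 3: match the corpora registered for that scenario.
    matched = scenario_corpora.get(scenario, [])
    # Step 4: a unique match gets the highest priority; otherwise rank
    # by degree of association, normalized so priorities sum to 1.
    if len(matched) == 1:
        return [(matched[0][0], 1.0)]
    total = sum(assoc for _, assoc in matched) or 1.0
    return sorted(((name, assoc / total) for name, assoc in matched),
                  key=lambda item: item[1], reverse=True)
```

For a moving vehicle with a navigation corpus (association 3.0) and a media corpus (association 1.0) registered for the "driving" scenario, this yields priorities 0.75 and 0.25 respectively.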
CN201810632458.7A 2018-06-19 2018-06-19 A kind of preprocess method of man-machine interactive system, equipment and storage medium Pending CN108803879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810632458.7A CN108803879A (en) 2018-06-19 2018-06-19 A kind of preprocess method of man-machine interactive system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810632458.7A CN108803879A (en) 2018-06-19 2018-06-19 A kind of preprocess method of man-machine interactive system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN108803879A true CN108803879A (en) 2018-11-13

Family

ID=64083564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810632458.7A Pending CN108803879A (en) 2018-06-19 2018-06-19 A kind of preprocess method of man-machine interactive system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108803879A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101373468A (en) * 2007-08-20 2009-02-25 北京搜狗科技发展有限公司 Method for loading word stock, method for inputting character and input method system
CN101777250A (en) * 2010-01-25 2010-07-14 中国科学技术大学 General remote control device and method for household appliances
CN103472990A (en) * 2013-08-27 2013-12-25 小米科技有限责任公司 Appliance, and method and device for controlling same
US20160179064A1 (en) * 2014-12-17 2016-06-23 General Electric Company Visualization of additive manufacturing process data
CN105912138A (en) * 2016-04-06 2016-08-31 百度在线网络技术(北京)有限公司 Phrase input method and device
CN106649409A (en) * 2015-11-04 2017-05-10 陈包容 Method and apparatus for displaying search result based on scene information
CN106713633A (en) * 2016-12-19 2017-05-24 中国科学院计算技术研究所 Deaf people prompt system and method, and smart phone
CN107610695A (en) * 2017-08-08 2018-01-19 问众智能信息科技(北京)有限公司 Driver's voice wakes up the dynamic adjusting method of instruction word weight
CN107785014A (en) * 2017-10-23 2018-03-09 上海百芝龙网络科技有限公司 A kind of home scenarios semantic understanding method

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111508482A (en) * 2019-01-11 2020-08-07 阿里巴巴集团控股有限公司 Semantic understanding and voice interaction method, device, equipment and storage medium
CN110012166A (en) * 2019-03-31 2019-07-12 联想(北京)有限公司 A kind of information processing method and device
CN110895673A (en) * 2019-05-30 2020-03-20 腾讯科技(深圳)有限公司 Method, apparatus and computer-readable storage medium for controlling internal environment
CN110211584A (en) * 2019-06-04 2019-09-06 广州小鹏汽车科技有限公司 Control method for vehicle, device, storage medium and controlling terminal
CN110569806A (en) * 2019-09-11 2019-12-13 上海软中信息系统咨询有限公司 Man-machine interaction system
CN110737337A (en) * 2019-10-18 2020-01-31 向勇 human-computer interaction system
CN110716706A (en) * 2019-10-30 2020-01-21 华北水利水电大学 Intelligent human-computer interaction instruction conversion method and system
CN110716706B (en) * 2019-10-30 2023-11-14 华北水利水电大学 Intelligent man-machine interaction instruction conversion method and system
CN110824940A (en) * 2019-11-07 2020-02-21 深圳市欧瑞博科技有限公司 Method and device for controlling intelligent household equipment, electronic equipment and storage medium
CN114936000A (en) * 2019-12-26 2022-08-23 上海擎感智能科技有限公司 Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework
CN114936000B (en) * 2019-12-26 2024-02-13 上海擎感智能科技有限公司 Vehicle-machine interaction method, system, medium and equipment based on picture framework
CN111045436A (en) * 2019-12-31 2020-04-21 广州享药户联优选科技有限公司 Height control method and device for intelligent medicine chest
CN112689826A (en) * 2020-04-09 2021-04-20 华为技术有限公司 Method and device for generating instruction unit group
CN113625599A (en) * 2020-05-08 2021-11-09 未来穿戴技术有限公司 Massage instrument control method, device, system, computer equipment and storage medium
CN113625599B (en) * 2020-05-08 2023-09-22 未来穿戴技术有限公司 Massage device control method, device, system, computer equipment and storage medium
CN111818172A (en) * 2020-07-21 2020-10-23 海信视像科技股份有限公司 Method and device for controlling intelligent equipment by management server of Internet of things
CN112787899B (en) * 2021-01-08 2022-10-28 青岛海尔特种电冰箱有限公司 Equipment voice interaction method, computer readable storage medium and refrigerator
CN112787899A (en) * 2021-01-08 2021-05-11 青岛海尔特种电冰箱有限公司 Equipment voice interaction method, computer readable storage medium and refrigerator
CN113325767A (en) * 2021-05-27 2021-08-31 深圳Tcl新技术有限公司 Scene recommendation method and device, storage medium and electronic equipment
CN113452853A (en) * 2021-07-06 2021-09-28 中国电信股份有限公司 Voice interaction method and device, electronic equipment and storage medium
CN113591659B (en) * 2021-07-23 2023-05-30 重庆长安汽车股份有限公司 Gesture control intention recognition method and system based on multi-mode input
CN113591659A (en) * 2021-07-23 2021-11-02 重庆长安汽车股份有限公司 Gesture control intention recognition method and system based on multi-modal input
CN113689853A (en) * 2021-08-11 2021-11-23 北京小米移动软件有限公司 Voice interaction method and device, electronic equipment and storage medium
CN114265505A (en) * 2021-12-27 2022-04-01 中国电信股份有限公司 Man-machine interaction processing method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN108803879A (en) A kind of preprocess method of man-machine interactive system, equipment and storage medium
US10977918B2 (en) Method and system for generating a smart time-lapse video clip
US20190349214A1 (en) Smart home automation systems and methods
US20190066473A1 (en) Methods and devices for presenting video information
CN104838335B (en) Use the interaction and management of the equipment of gaze detection
EP3611055B1 (en) Multimedia information push method and apparatus, storage medium, and electronic device
US20160105617A1 (en) Method and System for Performing Client-Side Zooming of a Remote Video Feed
US20160217348A1 (en) Image Processing Method and Electronic Device for Supporting the Same
CN110875940B (en) Application program calling method, device and equipment based on virtual robot
CN106406119A (en) Service robot based on voice interaction, cloud technology and integrated intelligent home monitoring
CN109409354A (en) UAV Intelligent follows target to determine method, unmanned plane and remote controler
CN111800331A (en) Notification message pushing method and device, storage medium and electronic equipment
CN106201448A (en) Information processing method and user terminal
CN113696849B (en) Gesture-based vehicle control method, device and storage medium
CN105637448A (en) Contextualizing sensor, service and device data with mobile devices
CN108287903A (en) Question searching method combined with projection and intelligent pen
CN113495487A (en) Terminal and method for adjusting operation parameters of target equipment
CN109857787A (en) A kind of methods of exhibiting and terminal
CN110517523A (en) Method for recording parking position, device and storage medium
CN114077227A (en) Page switching method and device, scene control panel, equipment and storage medium
CN107622300B (en) Cognitive decision method and system of multi-modal virtual robot
US11115615B1 (en) Augmented reality display of local information
KR20170097890A (en) Electronic apparatus and Method for providing service thereof
CN115086094A (en) Device selection method and related device
US11087798B2 (en) Selective curation of user recordings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20181113