CN105892661A - Machine intelligent decision-making method - Google Patents
- Publication number
- CN105892661A (application CN201610200880.6A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- user
- machine
- decision-making
- Prior art date
- 2016-03-31
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Machine Translation (AREA)
Abstract
The invention discloses a machine intelligent decision-making method. In this method, the user intention is expressed as a factor set by means of characteristic-item expression; user gesture semantics are designed according to the factor set and machine training is carried out; the gestures are then saved in a gesture knowledge base so that gestures captured by the machine can be understood; and the user intention is judged and a decision is made according to the mapping relationship between the factor set and the gesture set, assisted by analysis of the user's interaction preference information. Compared with methods based on context judgment and the like, the machine intelligent decision-making method has a higher response speed and higher accuracy, and solves the classical 'Midas Touch' problem of machine recognition.
Description
Technical field
The present invention relates to the field of human-computer interaction, and specifically to a method of machine intelligent decision-making.
Background art
Human-computer interaction is the study of the interactive relationship between a system and its users. The system may be a machine of any kind, or a computerized system and its software. The human-computer interface usually refers to the part of the system the user can see; the user communicates with and operates the system through this interface, which ranges from something as small as the play button of a music player to something as large as the instrument panel of an aircraft or the control room of a power plant. The design of a human-computer interface must incorporate the user's understanding of the system (i.e. the mental model), that is, the usability or user friendliness of the system.
As a branch of science, human-computer interaction studies the relevant theories and technologies and aims to build artificial intelligence systems that can obtain 'information' from images or multidimensional data, where that information may, for example, be user cognition and intention.
User intention is subjective and ambiguous, and is difficult to encode and measure. If the machine cannot understand the user's intention, it cannot make decisions, and human-computer interaction is obstructed. The obstacles to human-computer interaction are mainly reflected in two aspects. The first is whether the machine interface is easy for the user to understand and operate, from the early DOS interfaces, through today's window interfaces, to natural interfaces based on computer vision; since machines are created by humans, this aspect poses no great problem beyond differences in learning cost. The second, more difficult, aspect is how the machine understands the user's intention. In the age of the mouse and keyboard, the user conveyed a limited set of operating intentions through non-natural operations, and human-computer interaction could be said to be smooth; but with the development of technologies such as natural gesture interfaces and brain-computer communication, the user increasingly conveys intentions in more natural and multi-dimensional ways. So far, the effect of perceiving intention conveyed in this way has been fairly limited, and the fundamental reason is that the user's intention cannot be perceived accurately and, in particular, is difficult to express in computer language.
Summary of the invention
In view of the deficiencies of the prior art, an object of the present invention is to provide a method of machine intelligent decision-making, so as to improve the accuracy with which a machine perceives user intention and makes intelligent decisions.
To achieve this object, the present invention adopts the following technical scheme:
A method of machine intelligent decision-making, comprising the steps of:
in a human-computer interaction system, predetermining the user's interactive object, collecting the set of all possible user cognitions and user intentions for this interactive object to form a factor set, mapping the factors in the factor set to gestures, performing gesture semantic design and machine training on the mapped gestures, and forming a gesture knowledge base;
in the human-computer interaction environment, detecting the user's gestures to obtain a stream of user gesture operation behavior;
from the detected stream of user gesture operation behavior and according to the mapping relationship between gestures and factors, recognizing the user's intention and making a machine decision.
In the method of machine intelligent decision-making of the present invention, the user's interactive object is expressed as a factor set by means of characteristic-item expression; the factors in the factor set are then mapped to gestures, and the various gestures are trained to form a gesture knowledge base. The gestures captured by the computer are understood through the gesture knowledge base, and the user's intention is then judged according to the system logic formed by the mapping relationship between the factor set and the gesture set, assisted by the user's interaction preference information, and a decision is made. Compared with methods that rely on context judgment and the like, this method has a higher response speed and higher accuracy, and also solves the classical 'Midas Touch' problem of machine recognition.
Brief description of the drawings
Fig. 1 is a schematic flow chart of the method of machine intelligent decision-making of the present invention;
Fig. 2 is a schematic diagram of the model of the method of machine intelligent decision-making of the present invention.
Detailed description of the invention
The present invention is further illustrated below in conjunction with specific embodiments.
As shown in Fig. 1, the method of machine intelligent decision-making of the present invention comprises the following steps.
Step S101: in the human-computer interaction system, the interactive object of the user's gestures is determined, for example an A Di pot (a cooking appliance); the user wishes to carry the pot, cook with it and clean it by means of gestures, so in this case the A Di pot is the interactive object of the user's gestures.
Taking the A Di pot as the example, the set of all possible user cognitions and user intentions concerning the A Di pot is collected at an early stage; this set is defined as the factor set. According to cognitive science, user cognition has a mapping relationship with user behavior, so the factors in the factor set are mapped to user behaviors, i.e. user gesture operations; the user gesture semantics are set according to the factor set, and the gestures are then trained.
Step S102: the factor set of the interactive object is obtained by means of text mining.
Continuing with the A Di pot example, the so-called text mining performs retrieval with 'A Di pot' as the keyword and obtains the various features of the A Di pot; the set of these features constitutes the factor set, and each factor in it represents a certain concrete user cognition and intention.
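As an illustration, a minimal text-mining sketch is given below; the helper function, the candidate terms and the toy documents are assumptions made for this example rather than part of the disclosure, and a real system would retrieve documents about the interactive object from a corpus or a search engine.

```python
from collections import Counter
import re

def build_factor_set(object_name, documents, candidate_factors, top_k=5):
    """Count how often each candidate cognition/intention term co-occurs with
    the interactive object in retrieved text and keep the most frequent ones."""
    counts = Counter({factor: 0 for factor in candidate_factors})
    for doc in documents:
        if object_name.lower() not in doc.lower():
            continue  # only documents that actually mention the object
        words = re.findall(r"[a-z]+", doc.lower())
        for factor in candidate_factors:
            counts[factor] += words.count(factor)
    return [factor for factor, count in counts.most_common(top_k) if count > 0]

# Toy documents about the interactive object described above.
docs = [
    "Users carry the A Di pot to the stove before they cook rice.",
    "After you cook, clean the A Di pot and carry it back to the shelf.",
]
print(build_factor_set("A Di pot", docs, ["carry", "cook", "clean", "fly"]))
# -> ['carry', 'cook', 'clean']  (each factor stands for one concrete cognition/intention)
```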
Step S103: the factors in the factor set are mapped to gesture operations.
Cognitive science divides the mechanism of the human brain into a cognitive model and a behavior model; the factor set expresses the cognitive model of the person, the gesture operations express the behavior model of the person, and factors and gestures stand in a one-to-one mapping relation.
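A minimal sketch of this one-to-one mapping, assuming illustrative gesture names for the A Di pot example:

```python
# One-to-one mapping between factors (cognitive model) and gestures
# (behavior model); the gesture names are illustrative assumptions.
factor_to_gesture = {
    "carry": "two_hands_raise",
    "cook":  "clockwise_circle",
    "clean": "horizontal_wipe",
}
# Because the mapping is one-to-one, the inverse map recovers the factor
# (and hence the cognition/intention) from a recognized gesture.
gesture_to_factor = {g: f for f, g in factor_to_gesture.items()}
```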
Step S104: gesture semantic design and machine training are performed on the various gestures to form the gesture knowledge base.
The gesture knowledge base stores the various gestures relevant to the interactive object, and each gesture can be assigned an ID number.
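The disclosure only states that each gesture can be numbered with an ID; the following sketch assumes a simple in-memory store in which each trained gesture keeps its ID, its semantic label (the mapped factor) and a trained feature template.

```python
from dataclasses import dataclass, field

@dataclass
class GestureEntry:
    gesture_id: int
    name: str                                      # e.g. "clockwise_circle"
    factor: str                                    # mapped factor, e.g. "cook"
    templates: list = field(default_factory=list)  # trained feature vectors

class GestureKnowledgeBase:
    """Stores the gestures relevant to one interactive object under ID numbers."""
    def __init__(self):
        self._entries = {}
        self._next_id = 1

    def add(self, name, factor, templates):
        entry = GestureEntry(self._next_id, name, factor, templates)
        self._entries[entry.gesture_id] = entry
        self._next_id += 1
        return entry.gesture_id

    def all(self):
        return list(self._entries.values())

kb = GestureKnowledgeBase()
kb.add("two_hands_raise", "carry", [[0.1, 0.9]])
kb.add("clockwise_circle", "cook", [[0.8, 0.2]])
kb.add("horizontal_wipe", "clean", [[0.5, 0.5]])
```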
Step S105: the user's interaction preference information, such as the usage scene, the gesture amplitude and the gesture start and end points, is analyzed and used as an auxiliary method for the machine decision.
The same gesture can represent different intentions: a waving gesture, for example, can represent the user's intention of 'goodbye', can represent the meaning of 'no', and can even mean 'previous page' or 'next page'.
Step S106: in the human-computer interaction environment, the user's gestures are detected.
During the interaction, a machine equipped with a camera or another gesture sensing device recognizes and tracks the user's gestures.
Step S107: the detected gestures are compared with the gestures in the gesture knowledge base in order to understand the user's gestures.
The corresponding factor is understood from the detected gesture, and thereby the user's cognition and intention are understood. Of the gestures the computer detects, some are meaningful and some are meaningless; by comparing the detected gestures with the gestures in the gesture knowledge base, the user's gestures are understood and the user's intention is perceived.
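Continuing the knowledge-base sketch above, one possible way to realize this comparison is a nearest-template lookup; the Euclidean distance and the rejection threshold are assumptions, but rejecting gestures that match no stored template is what keeps meaningless movements from producing commands.

```python
import math

def understand(detected_features, kb, threshold=0.35):
    """Return the factor of the closest knowledge-base gesture, or None when no
    template is close enough (a meaningless gesture is simply ignored)."""
    best_entry, best_dist = None, float("inf")
    for entry in kb.all():
        for template in entry.templates:
            dist = math.dist(detected_features, template)
            if dist < best_dist:
                best_entry, best_dist = entry, dist
    if best_entry is None or best_dist > threshold:
        return None
    return best_entry.factor

print(understand([0.78, 0.22], kb))  # -> 'cook' with the example knowledge base
print(understand([0.05, 0.05], kb))  # -> None: no stored gesture is close enough
```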
Step S108: the user's intention is judged according to the system logic formed by the mapping relationship between the factor set and the gesture set, assisted by the user's interaction preference information, and a decision is made; this is the main logic of the machine decision. When a captured gesture matches the gesture logic, the system recognizes that the task the user currently wants to perform is the task corresponding to this logic, i.e. the user's intention is to carry out this task. A response command is issued according to the determined user intention, completing the feedback operation for the user's gesture.
To take a simple example, when the user's gesture slides to the right, the meaning of this gesture is 'next page'; when the hand comes back, however, this is usually just a natural withdrawal of the gesture, yet the withdrawal movement may trigger the 'previous page' command. Here the auxiliary judgment based on the user's interaction preference information is needed; under this auxiliary judgment, misjudgment of the user's intention is minimized.
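A sketch of this auxiliary judgment, continuing the PreferenceInfo sketch above; the amplitude cut-off and the use of the previous command as state are assumptions made to show how the return movement after a 'next page' swipe can be suppressed.

```python
def decide(factor, pref, last_factor=None):
    """Main logic: the mapped factor. Auxiliary logic: preference information
    suppresses a small leftward return motion right after a 'next page' swipe."""
    if factor == "previous_page" and last_factor == "next_page" and pref.amplitude < 0.4:
        return None          # treat it as the hand returning, not a command
    return factor

# The small return motion after swiping to the next page triggers nothing:
returning = PreferenceInfo("presentation", 0.2, (0.8, 0.5), (0.4, 0.5))
print(decide("previous_page", returning, last_factor="next_page"))  # -> None
# A deliberate, large leftward swipe still works:
deliberate = PreferenceInfo("presentation", 0.8, (0.9, 0.5), (0.1, 0.5))
print(decide("previous_page", deliberate, last_factor="next_page"))  # -> 'previous_page'
```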
On the basis of fuzzy mathematics, the above method proposes a fuzzy set to describe the hidden user intention. First, the user intention is represented by characteristic items, so that solving for the user intention is converted into solving for the set of characteristic items of that intention, i.e. the factor set; the complex object 'user intention' is referred to as the domain. Secondly, the factor set is found and processed by data mining technology; each factor obtained represents a certain concrete user cognition and intention; the corresponding gesture semantics are then designed according to the factor set and saved in the gesture knowledge base under ID numbers.
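The disclosure does not spell out the fuzzy-set machinery; under standard fuzzy-mathematics notation (the membership function below is an assumption of this sketch, not stated in the text), the representation reads:

```latex
% Domain (universe): the factor set obtained by data mining.
U = \{u_1, u_2, \dots, u_n\}
% A hidden user intention I is a fuzzy subset of U, given by a membership
% function that measures how strongly the observed behavior supports each
% characteristic item u_i:
\mu_I : U \to [0,1], \qquad I = \{(u_i, \mu_I(u_i)) \mid u_i \in U\}
```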
In a preferred embodiment, when two or more user intentions are determined for the same detected gesture, the final user intention is determined on the basis of the user's interaction preference information. This is because, in different usage scenes, the same gesture may mean different user intentions: a wave may mean 'goodbye' or may mean 'no'; in addition, the size of the gesture amplitude and the start and end points of the gesture may also cause ambiguity. Introducing the user's interaction preference information, such as the usage scene, helps to recognize and decide more accurately; preference-information decision-making is an important supplement to the gesture logic design. Furthermore, machine learning may also be used to improve the accuracy of user gesture capture.
Fig. 2 shows the 'detection - understanding - decision - output' model of the present invention for user intention represented by characteristic items.
1) 'User intention → factor set': the fuzzy set of the user intention is U = {u1, u2, ..., un}; each ui is solved by data mining technology.
2) 'Factor set → logic design → gesture knowledge base': system logic design is carried out for each ui, the elementary units, i.e. gestures, are extracted and trained, and the gesture knowledge base is formed.
3) 'User intention → preference information → multi-attribute matrix': the expression of user intention is non-logical; the user's intention towards an object may exist in the form of a function, an operation behavior stream, a usage scene, and so on. Its subjectivity and ambiguity are obvious, and ambiguous decisions can be resolved through preference analysis.
4) 'Gesture detection → understanding': the gesture knowledge base helps to understand the user's gestures.
5) 'Understanding → decision': the decision is made jointly by the system logic design and the preference analysis.
6) Item 4) embodies the behavior model of the user and item 2) embodies the cognitive model of the user; their one-to-one mapping relationship ensures that gesture understanding and decision-making have very high accuracy, while the preference analysis gives this decision model a larger fault tolerance (a consolidated sketch of this loop is given after the list).
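Putting the sketches above together gives a minimal version of the whole 'detection - understanding - decision - output' loop; the frame source is a stub, and a real system would obtain the feature vectors from a camera and a gesture recognizer.

```python
def run_interaction_loop(frames, kb, get_preference):
    """frames: iterable of detected gesture feature vectors (detection);
    kb: gesture knowledge base (understanding);
    get_preference: callable returning the current PreferenceInfo (decision)."""
    last_command = None
    for features in frames:                           # 4) gesture detection
        factor = understand(features, kb)             #    -> understanding
        if factor is None:
            continue                                  # meaningless gesture: no output
        pref = get_preference()                       # 3) preference information
        command = decide(factor, pref, last_command)  # 5) understanding -> decision
        if command is not None:
            print("execute:", command)                # 6) output / feedback to the user
            last_command = command

run_interaction_loop([[0.78, 0.22], [0.51, 0.49]], kb, lambda: pref)
# -> execute: cook
#    execute: clean
```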
Besides user intention, the above model can also be applied to other complex objects. One example of such a complex object is a military application: when it is detected that bombers or fighter aircraft have been deployed to a location, the targets these weapons are usually used to destroy and the purposes they are meant to achieve can be learned from the characteristics of the weapons and from their behavior in ordinary times, together with logic design; countering a certain weapon generally requires a corresponding counter-weapon, and this is the logic design. However, different battlefields and different weather conditions may have an influence, so preference analysis also needs to be carried out. The above model can certainly also be applied to business and other complex systems, which are not enumerated here.
The above detailed description is an illustration of possible embodiments of the present invention; these embodiments are not intended to limit the scope of the claims of the present invention, and all equivalent implementations or changes made without departing from the present invention shall be included in the scope of the claims of this case.
Claims (5)
1. the method for a machine intelligence decision-making, it is characterised in that include step:
In man-machine interactive system, predetermined user's interactive object, and collect this interactive object institute likely
The set of user cognition and user view, formative factor collection, the factor in set of factors is mapped to gesture,
The various gestures being mapped to are carried out gesture semanteme design and machine training, forms gesture knowledge base;
Under man-machine interaction environment, detect the gesture of user, obtain user's gesture operation behavior stream;
By the user's gesture operation behavior stream detected, according to the mapping relations of gesture Yu factor, identify and use
Family is intended to and carries out machine decision-making.
2. The method of machine intelligent decision-making according to claim 1, characterized in that:
for the same detected gesture, when two or more user intentions are determined, a multi-attribute matrix based on preference-information analysis is used to assist in judging the final user intention.
3. The method of machine intelligent decision-making according to claim 2, characterized in that:
the dimensions of said multi-attribute matrix based on preference-information analysis include the usage scene, the gesture amplitude, and the gesture start and end points.
4. The method of machine intelligent decision-making according to claim 3, characterized in that:
the mapping relationship between factors and gestures is used as the main basis for judging the user's intention, and preference-information analysis is used as the auxiliary method for the decision.
5. The method of machine intelligent decision-making according to claim 1, 2 or 3, characterized in that:
the gestures are trained by means of machine learning.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610200880.6A CN105892661B (en) | 2016-03-31 | 2016-03-31 | The method of machine intelligence decision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105892661A true CN105892661A (en) | 2016-08-24 |
CN105892661B CN105892661B (en) | 2019-07-12 |
Family
ID=57012839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610200880.6A Active CN105892661B (en) | 2016-03-31 | 2016-03-31 | The method of machine intelligence decision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105892661B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110173574A1 (en) * | 2010-01-08 | 2011-07-14 | Microsoft Corporation | In application gesture interpretation |
CN104615984A (en) * | 2015-01-28 | 2015-05-13 | 广东工业大学 | User task-based gesture identification method |
CN104932804A (en) * | 2015-06-19 | 2015-09-23 | 济南大学 | Intelligent virtual assembly action recognition method |
CN104992156A (en) * | 2015-07-07 | 2015-10-21 | 济南大学 | Gesture control method based on flexible mapping between gesture and multiple meanings |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109101105A (en) * | 2017-06-20 | 2018-12-28 | 联想(新加坡)私人有限公司 | Information processing method and information processing equipment |
CN109783733A (en) * | 2019-01-15 | 2019-05-21 | 三角兽(北京)科技有限公司 | User's portrait generating means and method, information processing unit and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN105892661B (en) | 2019-07-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||