CN110045904A - Man-machine interactive system, method and the vehicle including the system - Google Patents
- Publication number: CN110045904A
- Application number: CN201811355370.1A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Abstract
The present invention relates to the field of human-computer interaction, and in particular to a man-machine interactive system, a method, and a vehicle including the system. The invention aims to solve the problem that existing man-machine interactive systems cannot operate every function of a center-control screen through contactless manipulation. To this end, the man-machine interactive system of the invention includes an acquisition device, a processing device, and a display device. The acquisition device collects the operation data of an operator, including eye information and command information; the processing device determines the selected item among the operable items on the current display interface according to the operation data, and converts the command information into a corresponding operation instruction to realize the operator's operation intention; the display device then controls the display screen to show the corresponding content according to the selected item and the operation instruction. The invention enables all functions of the man-machine interactive system to be manipulated through contactless operation, so that human-computer interaction has higher flexibility and accuracy.
Description
Technical field
The present invention relates to the field of human-computer interaction technology, and in particular to a man-machine interactive system, a method, and a vehicle including the system.
Background art
In some scenarios, human-computer interaction can hardly rely on contact-based operation. For example, while driving, a driver cannot touch the control screen at will if driving safety is to be guaranteed, so human-computer interaction is restricted.

In view of this technical problem, the prior art analyzes the operator's gestures and converts them into corresponding operation instructions to realize human-computer interaction. However, current man-machine interactive systems can only complete simple operations such as volume adjustment and circuit on/off through gesture recognition; that is, the flexibility of contactless human-computer interaction is poor.

Accordingly, a new man-machine interactive system and method are needed in the art to solve the above problems.
Summary of the invention
In order to solve the above problem in the prior art, namely the poor flexibility of contactless human-computer interaction in existing man-machine interactive systems, the present invention provides a man-machine interactive system. The man-machine interactive system includes: an acquisition device configured to collect the operation data of an operator, the operation data including eye information and command information; a processing device configured to determine the selected item among the operable items on the current display interface according to the operation data, and to convert the command information into a corresponding operation instruction to realize the operator's operation intention; and a display device that includes a display screen and is configured to show the corresponding content on the display screen according to the selected item and the operation instruction.
In a preferred embodiment of the above man-machine interactive system, there are at least two acquisition devices, and the at least two acquisition devices respectively collect the operation data of different operators.

In a preferred embodiment of the above man-machine interactive system, the acquisition device includes an image collector; the eye information includes eye images of the operator collected by the image collector, and the command information includes action images of the operator collected by the image collector.

In a preferred embodiment of the above man-machine interactive system, the acquisition device includes an eyeball tracker and an action sensor; the eyeball tracker collects the eye information of the operator, and the command information includes the action information of the operator collected by the action sensor.

In a preferred embodiment of the above man-machine interactive system, the acquisition device further includes a voice recognition device, and the command information further includes the operator's voice collected by the voice recognition device.

In a preferred embodiment of the above man-machine interactive system, the processing device is configured to determine the sight direction and the pupil center according to the eye information, and to take the operable item located at the intersection of the display screen with the straight line that passes through the pupil center and is parallel to the sight direction as the selected item.
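The gaze–screen selection described above amounts to a ray–plane intersection: a ray from the pupil center along the sight direction is intersected with the plane of the display screen. The following is a minimal sketch, not taken from the patent; the function name, coordinate conventions, and flat-screen model are assumptions:

```python
def gaze_intersection(pupil_center, gaze_dir, screen_point, screen_normal):
    """Intersect the gaze ray (through the pupil center, along the sight
    direction) with the display-screen plane. Returns the 3-D intersection
    point, or None if the gaze does not hit the screen plane."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(screen_normal, gaze_dir)
    if abs(denom) < 1e-9:          # gaze parallel to the screen plane
        return None
    diff = [p - c for p, c in zip(screen_point, pupil_center)]
    t = dot(screen_normal, diff) / denom
    if t < 0:                      # screen plane is behind the operator
        return None
    return tuple(c + t * d for c, d in zip(pupil_center, gaze_dir))
```

The returned point would then be compared with the on-screen bounds of each operable item to decide which one is selected.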
In a preferred embodiment of the above man-machine interactive system, the command information includes a dynamically changing gesture; the processing device is configured to determine the pupil center of the operator according to the eye information, to select a specific part of the operator's hand, and to take the operable item located at the intersection of the display screen with the straight line passing through the pupil center and the specific part as the selected item.

In a preferred embodiment of the above man-machine interactive system, the processing device is configured to convert the command information into the corresponding operation instruction through a transformation model.

In a preferred embodiment of the above man-machine interactive system, the operation data further includes user information of the operator; the processing device is further configured to determine the specific operator by identifying the user information, and to train the transformation model on the historical operation data of each specific operator through a machine learning method, so as to update it into a personalized transformation model.
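The personalized transformation model described above can be illustrated with a deliberately simple per-user model. This toy sketch stores each specific operator's historical (gesture features, operation instruction) pairs and maps a new gesture to the instruction of the nearest stored example; the class name, feature representation, and instruction names are all assumptions, and a real system would use a properly trained model as the patent suggests:

```python
from collections import defaultdict
from math import dist

class PersonalizedTransformModel:
    """Toy per-user gesture-to-instruction model: each user's historical
    (feature vector, instruction) pairs are stored, and a new gesture is
    mapped to the instruction of the nearest stored example for that user."""

    def __init__(self):
        self.history = defaultdict(list)   # user_id -> [(features, instruction)]

    def train(self, user_id, features, instruction):
        # Accumulate one historical operation record for this specific operator.
        self.history[user_id].append((tuple(features), instruction))

    def convert(self, user_id, features):
        # Nearest-neighbor lookup within this user's own history only.
        examples = self.history[user_id]
        if not examples:
            return None
        return min(examples, key=lambda ex: dist(ex[0], features))[1]
```

Because each operator's model is trained only on that operator's own history, the same raw gesture can legitimately map to different instructions for different users.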
In a preferred embodiment of the above man-machine interactive system, the acquisition device is configured to collect an account entered by the operator as the user information of the operator.

In a preferred embodiment of the above man-machine interactive system, the acquisition device is configured to collect biological characteristic information of the operator as the user information of the operator.

In a preferred embodiment of the above man-machine interactive system, the processing device is configured to convert the operator's action into the corresponding operation instruction according to the setting data of the operator, the setting data including set actions and the operation instructions corresponding to the set actions.
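The setting data described above is essentially a lookup from set actions to operation instructions. A minimal sketch follows; the action and instruction names are illustrative assumptions, not taken from the patent:

```python
# Per-operator setting data: each set action maps to an operation instruction.
SETTING_DATA = {
    "swipe_left":  "previous_page",
    "swipe_right": "next_page",
    "pinch":       "zoom_out",
    "spread":      "zoom_in",
}

def action_to_instruction(action, setting_data=SETTING_DATA):
    """Look up the operation instruction for a recognized set action;
    unknown actions yield None so the system can simply ignore them."""
    return setting_data.get(action)
```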
In a preferred embodiment of the above man-machine interactive system, the operation data further includes identity information of the operator; the processing device is further configured to identify the identity of the operator according to the identity information and to assign the corresponding operation permission to the operator according to that identity.
In a preferred embodiment of the above man-machine interactive system, the identity information is the position of the operator or an identification badge of the operator.

In a preferred embodiment of the above man-machine interactive system, the man-machine interactive system includes at least two operation modes, and the processing device is further configured to assign the corresponding operation permission to the operator according to the operation mode.
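The permission assignment described above (by operator identity, optionally further restricted by the operation mode) might look like the following sketch for the in-vehicle case, where the operator's position serves as the identity information. The role names, modes, and permission sets are illustrative assumptions, not taken from the patent:

```python
# Permissions by identity: the driver gets a restricted set, passengers more.
PERMISSIONS = {
    "driver":    {"navigation", "volume", "climate"},
    "passenger": {"navigation", "volume", "climate", "media", "games"},
}

# Per-mode restrictions, e.g. a "driving" mode that suppresses distractions.
MODE_FILTER = {
    "driving": {"navigation", "volume", "climate"},
    "parked":  {"navigation", "volume", "climate", "media", "games"},
}

def assign_permissions(position, operation_mode):
    """Map the operator's seat position to a role, then intersect that
    role's permissions with what the current operation mode allows."""
    role = "driver" if position == "front_left" else "passenger"
    return PERMISSIONS[role] & MODE_FILTER[operation_mode]
```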
In a preferred embodiment of the above man-machine interactive system, the display screen includes a first display screen and a second display screen; the display device is configured to divide the display content into first display content and second display content, to control the first display screen to show the first display content, and to control the second display screen to show the second display content.

In a preferred embodiment of the above man-machine interactive system, the display device shows at least part of the first display content on the second display screen according to the corresponding operation instruction.

In a preferred embodiment of the above man-machine interactive system, the display device is further configured to switch the display screen from an energy-saving mode to an activated mode according to a wake-up instruction, the wake-up instruction being generated when the dwell time of the operator's sight on the display screen is greater than or equal to a set time.
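The gaze-dwell wake-up condition described above can be sketched as a small state machine: the screen switches to the activated mode once the operator's sight has rested on it for at least the set time. The class name, default set time, and mode labels are assumptions:

```python
class DwellWake:
    """Gaze-dwell wake-up sketch: the screen leaves energy-saving mode once
    the operator's sight has stayed on it for at least `set_time` seconds.
    Timestamps are supplied by the caller (e.g. from the eye tracker)."""

    def __init__(self, set_time=1.5):
        self.set_time = set_time
        self.gaze_start = None
        self.mode = "energy_saving"

    def update(self, gaze_on_screen, now):
        if not gaze_on_screen:
            self.gaze_start = None            # sight left the screen: reset
        else:
            if self.gaze_start is None:
                self.gaze_start = now         # sight just arrived on screen
            if now - self.gaze_start >= self.set_time:
                self.mode = "activated"       # wake-up instruction fires
        return self.mode
```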
In a preferred embodiment of the above man-machine interactive system, the display device is further configured to switch the display screen from the energy-saving mode to the activated mode according to a wake-up instruction, the wake-up instruction being converted from corresponding voice information of the operator.

In a preferred embodiment of the above man-machine interactive system, the man-machine interactive system is a vehicle-mounted man-machine interactive system.

In a preferred embodiment of the above man-machine interactive system, the man-machine interactive system is a virtual reality display system or an augmented reality display system.
In order to solve the above technical problem, the present invention also provides a man-machine interaction method, which includes the following steps: collecting the operation data of an operator, the operation data including eye information and command information; determining the selected item among the operable items on the current display interface according to the operation data; converting the command information into a corresponding operation instruction to realize the operator's operation intention; and showing the corresponding content on a display screen according to the selected item and the operation instruction.
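The four method steps above (collect, select, convert, display) can be sketched as a single pipeline. All component functions are illustrative stand-ins supplied by the caller, not part of the patent:

```python
def interact(acquire, select_item, convert, display):
    """Run one interaction cycle through the four method steps."""
    operation_data = acquire()                        # eye + command information
    item = select_item(operation_data["eye"])         # selected item on screen
    instruction = convert(operation_data["command"])  # operation instruction
    return display(item, instruction)                 # corresponding content
```

For example, with trivial stand-in functions, a gaze at a volume control plus an upward swipe would yield the content `"increase:volume"`.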
In a preferred embodiment of the above man-machine interaction method, the step of "collecting the operation data of an operator" includes: collecting the operation data of the operator through an acquisition device.

In a preferred embodiment of the above man-machine interaction method, the acquisition device includes an image collector, and the step of "collecting the operation data of an operator through an acquisition device" includes: collecting eye images of the operator and action images of the operator through the image collector, the eye information including the eye images and the command information including the action images.

In a preferred embodiment of the above man-machine interaction method, the acquisition device includes an eyeball tracker and an action sensor, and the step of "collecting the operation data of an operator through an acquisition device" includes: collecting the eye information of the operator through the eyeball tracker and collecting the action information of the operator through the action sensor, the command information including the action information.

In a preferred embodiment of the above man-machine interaction method, the acquisition device further includes a voice recognition device, and "collecting the operation data of an operator through an acquisition device" further includes: collecting the operator's voice through the voice recognition device, the command information further including the voice.

In a preferred embodiment of the above man-machine interaction method, the step of "determining the selected item among the operable items on the current display interface according to the operation data" includes: determining the sight direction and the pupil center of the operator according to the eye information; and taking the operable item located at the intersection of the display screen with the straight line that passes through the pupil center and is parallel to the sight direction as the selected item.
In a preferred embodiment of the above man-machine interaction method, the command information includes a dynamically changing gesture, and the step of "determining the selected item among the operable items on the current display interface according to the operation data" includes: determining the pupil center of the operator according to the eye information; selecting a specific part of the operator's hand; and taking the operable item located at the intersection of the display screen with the straight line passing through the pupil center and the specific part as the selected item.

In a preferred embodiment of the above man-machine interaction method, the step of "converting the command information into a corresponding operation instruction" further includes: converting the command information into the corresponding operation instruction through a transformation model.

In a preferred embodiment of the above man-machine interaction method, the operation data further includes user information of the operator, and the man-machine interaction method further includes: determining the specific operator by identifying the user information; and training on the historical operation data of the specific operator through a machine learning method to generate a personalized transformation model.

In a preferred embodiment of the above man-machine interaction method, the step of "identifying the user information" includes: collecting an account entered by the operator as the user information of the operator.

In a preferred embodiment of the above man-machine interaction method, the step of "identifying the user information" includes: collecting biological characteristic information of the operator as the user information of the operator.

In a preferred embodiment of the above man-machine interaction method, the step of "converting the command information into a corresponding operation instruction" further includes: converting the operator's action into the corresponding operation instruction according to the setting data of the operator, the setting data including set actions and the operation instructions corresponding to the set actions.
In a preferred embodiment of the above man-machine interaction method, the operation data further includes identity data of the operator, and the man-machine interaction method further includes: identifying the identity of the operator according to the identity data and assigning the corresponding operation permission to the operator according to that identity.

In a preferred embodiment of the above man-machine interaction method, the identity data is the position of the operator or an identification badge of the operator.

In a preferred embodiment of the above man-machine interaction method, the corresponding operation permission is assigned to the operator according to the operation mode, wherein there are at least two operation modes.

In a preferred embodiment of the above man-machine interaction method, the display screen includes a first display screen and a second display screen, and the step of "showing the corresponding content on a display screen according to the selected item and the operation instruction" includes: dividing the display content into first display content and second display content, showing the first display content on the first display screen, and showing the second display content on the second display screen.

In a preferred embodiment of the above man-machine interaction method, the step of "showing the corresponding content on a display screen according to the selected item and the operation instruction" further includes: showing at least part of the content of the first display screen on the second display screen according to the corresponding operation instruction.

In a preferred embodiment of the above man-machine interaction method, the step of "showing the corresponding content on a display screen according to the selected item and the operation instruction" further includes: switching the display screen from an energy-saving mode to an activated mode according to a wake-up instruction, the wake-up instruction being generated when the dwell time of the operator's sight on the display screen is greater than or equal to a set time.

In a preferred embodiment of the above man-machine interaction method, the step of "showing the corresponding content on a display screen according to the selected item and the operation instruction" further includes: switching the display screen from the energy-saving mode to the activated mode according to a wake-up instruction, the wake-up instruction being converted from corresponding voice information of the operator.
In a preferred embodiment of the above man-machine interaction method, the man-machine interaction method is used for a vehicle-mounted man-machine interactive system.

In a preferred embodiment of the above man-machine interaction method, the man-machine interaction method is used for a virtual reality display system or an augmented reality display system.
In order to solve the above technical problem, the present invention also provides a vehicle that includes any one of the above man-machine interactive systems.

In the man-machine interactive system and method of the invention, the operation data of the operator is collected, the selected item is determined according to the operation data, and the command information is converted into the corresponding operation instruction; that is, the system determines which item is to be operated and what the specific operation is. In this way, every function that the man-machine interactive system can realize can be manipulated through contactless operation, so that human-computer interaction has higher flexibility and accuracy.
In a preferred technical scheme, the acquisition equipment includes at least two sub-device groups, and the sub-device groups respectively collect the operation data of different operators. The acquisition equipment can thus fully collect the operation data of every operator, better meeting the need for multiple people to interact with the system at the same time.

In a preferred technical scheme, the operation data further includes the user information of the operator. The historical operation data of each specific operator is used as training data, and a machine learning method is used to train on these data to obtain a personalized transformation model; converting each specific operator's command information into operation instructions with the personalized transformation model improves the accuracy and speed of the conversion, and thereby the human-computer interaction experience.

In a preferred technical scheme, the operation data further includes the identity data of the operator. By assigning the corresponding operation permission according to the operator's identity, some operators can be guaranteed full interaction with the system while other operators keep their attention on other work.

In a preferred technical scheme, assigning the corresponding operation permission according to the operation mode ensures a more suitable interaction experience in different scenarios.

In a preferred technical scheme, the display screen includes a first display screen and a second display screen, and at least part of the first display content is shown on the second display screen according to the corresponding operation instruction, so that the display content can be presented to operators in a targeted manner.

In a preferred technical scheme, the display screen can be switched from energy-saving mode to active mode by a wake-up instruction, so the display equipment need not stay in active mode at all times, which saves electric energy and extends the service life of the display equipment.
Scheme 1. A man-machine interactive system, characterized in that the man-machine interactive system includes: an acquisition device configured to collect the operation data of an operator, the operation data including eye information and command information; a processing device configured to determine the selected item among the operable items on the current display interface according to the operation data, and to convert the command information into a corresponding operation instruction to realize the operator's operation intention; and a display device that includes a display screen and is configured to show the corresponding content on the display screen according to the selected item and the operation instruction.

Scheme 2. The man-machine interactive system according to scheme 1, characterized in that there are at least two acquisition devices, and the at least two acquisition devices respectively collect the operation data of different operators.

Scheme 3. The man-machine interactive system according to scheme 1, characterized in that the acquisition device includes an image collector; the eye information includes eye images of the operator collected by the image collector, and the command information includes action images of the operator collected by the image collector.

Scheme 4. The man-machine interactive system according to scheme 1, characterized in that the acquisition device includes an eyeball tracker and an action sensor; the eyeball tracker collects the eye information of the operator, and the command information includes the action information of the operator collected by the action sensor.

Scheme 5. The man-machine interactive system according to scheme 3 or 4, characterized in that the acquisition device further includes a voice recognition device, and the command information further includes the operator's voice collected by the voice recognition device.

Scheme 6. The man-machine interactive system according to scheme 1, characterized in that the processing device is configured to determine the sight direction and the pupil center according to the eye information, and to take the operable item located at the intersection of the display screen with the straight line that passes through the pupil center and is parallel to the sight direction as the selected item.

Scheme 7. The man-machine interactive system according to scheme 1, characterized in that the command information includes a dynamically changing gesture; the processing device is configured to determine the pupil center of the operator according to the eye information, to select a specific part of the operator's hand, and to take the operable item located at the intersection of the display screen with the straight line passing through the pupil center and the specific part as the selected item.
Scheme 8. The man-machine interactive system according to scheme 6 or 7, characterized in that the processing device is configured to convert the command information into the corresponding operation instruction through a transformation model.

Scheme 9. The man-machine interactive system according to scheme 8, characterized in that the operation data further includes user information of the operator; the processing device is further configured to determine the specific operator by identifying the user information, and to train the transformation model on the historical operation data of each specific operator through a machine learning method, so as to obtain a personalized transformation model.

Scheme 10. The man-machine interactive system according to scheme 9, characterized in that the acquisition device is configured to collect an account entered by the operator as the user information of the operator.

Scheme 11. The man-machine interactive system according to scheme 9, characterized in that the acquisition device is configured to collect biological characteristic information of the operator as the user information of the operator.

Scheme 12. The man-machine interactive system according to scheme 6 or 7, characterized in that the processing device is configured to convert the operator's action into the corresponding operation instruction according to the setting data of the operator, the setting data including set actions and the operation instructions corresponding to the set actions.

Scheme 13. The man-machine interactive system according to scheme 6 or 7, characterized in that the operation data further includes identity information of the operator; the processing device is further configured to identify the identity of the operator according to the identity information and to assign the corresponding operation permission to the operator according to that identity.

Scheme 14. The man-machine interactive system according to scheme 13, characterized in that the identity information is an identification badge of the operator or the position of the operator.

Scheme 15. The man-machine interactive system according to scheme 1, characterized in that the man-machine interactive system includes at least two operation modes, and the processing device is further configured to assign the corresponding operation permission to the operator according to the operation mode.
Scheme 16, man-machine interactive system according to scheme 1, the display screen include the first display screen and second display screen,
The display equipment is configured as the display content being divided into the first display content and the second display content, and controls described the
One display screen, which shows the first display content and controls the second display screen, shows the second display content.
Scheme 17, man-machine interactive system according to scheme 16, the display equipment will at least according to corresponding operation instruction
Part the first display content is shown on the second display screen.
Scheme 18. The man-machine interactive system according to scheme 1, characterized in that the display equipment is further configured to switch the display screen from an energy-saving mode to an active mode according to a wake-up instruction, the wake-up instruction being that the time for which the operator's line of sight rests on the display screen is greater than or equal to a set time.
Scheme 19. The man-machine interactive system according to scheme 5, characterized in that the display equipment is further configured to switch the display screen from an energy-saving mode to an active mode according to a wake-up instruction, the wake-up instruction being converted from the corresponding voice information of the operator.
Scheme 20. The man-machine interactive system according to scheme 1, characterized in that the man-machine interactive system is a vehicle-mounted man-machine interactive system.
Scheme 21. The man-machine interactive system according to scheme 1, characterized in that the man-machine interactive system is a virtual reality display system or an augmented reality display system.
Scheme 22. A man-machine interaction method, characterized in that the man-machine interaction method includes the following steps: collecting operation data of an operator, the operation data including eye information and instruction information; confirming a selected item among the operable items on the current display interface according to the operation data; converting the instruction information into a corresponding operation instruction to realize the operation intention of the operator; and displaying corresponding content on a display screen according to the selected item and the operation instruction.
Scheme 23. The man-machine interaction method according to scheme 22, characterized in that the step of "collecting operation data of an operator" includes: collecting the operation data of the operator by acquisition equipment.
Scheme 24. The man-machine interaction method according to scheme 23, characterized in that the acquisition equipment includes an image collector, and the step of "collecting the operation data of the operator by acquisition equipment" includes: collecting an eye image of the operator and an action image of the operator by the image collector, the eye information including the eye image and the instruction information including the action image.
Scheme 25. The man-machine interaction method according to scheme 23, characterized in that the acquisition equipment includes an eyeball tracker and an action sensor, and the step of "collecting the operation data of the operator by acquisition equipment" includes: collecting the eye information of the operator by the eyeball tracker, and collecting action information of the operator by the action sensor, the instruction information including the action information.
Scheme 26. The man-machine interaction method according to scheme 24 or 25, characterized in that the acquisition equipment further includes a voice information recognition device, and the step of "collecting the operation data of the operator by acquisition equipment" further includes: collecting the voice of the operator by the voice information recognition device, the instruction information further including the voice.
Scheme 27. The man-machine interaction method according to scheme 22, characterized in that the step of "confirming a selected item among the operable items on the current display interface according to the operation data" includes: determining the line-of-sight direction and the pupil center of the operator according to the eye information; and taking, as the selected item, the operable item at the intersection of the display screen with the straight line that passes through the pupil center and is parallel to the line-of-sight direction.
Scheme 28. The man-machine interaction method according to scheme 22, characterized in that the instruction information includes a dynamically changing gesture, and the step of "confirming a selected item among the operable items on the current display interface according to the operation data" includes: determining the pupil center of the operator according to the eye information; selecting a specific site of the operator's hand; and taking, as the selected item, the operable item at the intersection of the display screen with the straight line passing through the pupil center and the specific site.
Scheme 29. The man-machine interaction method according to scheme 27 or 28, characterized in that the step of "converting the instruction information into a corresponding operation instruction" further includes: converting the instruction information into the corresponding operation instruction by a transformation model.
Scheme 30. The man-machine interaction method according to scheme 29, characterized in that the operation data further includes user information of the operator, and the man-machine interaction method further includes: determining the specific operator by identifying the user information; and training on the historical operation data of the specific operator by a machine learning method to generate a personalized transformation model.
Scheme 31. The man-machine interaction method according to scheme 30, characterized in that the step of "identifying the user information" includes: collecting an account entered by the operator as the user information of the operator.
Scheme 32. The man-machine interaction method according to scheme 30, characterized in that the step of "identifying the user information" includes: collecting biological information of the operator as the user information of the operator.
Scheme 33. The man-machine interaction method according to scheme 27 or 28, characterized in that the step of "converting the instruction information into a corresponding operation instruction" further includes: converting an action of the operator into the corresponding operation instruction according to setting data of the operator, the setting data including a set action and the operation instruction corresponding to the set action.
Scheme 34. The man-machine interaction method according to scheme 27 or 28, characterized in that the operation data further includes identity data of the operator, and the man-machine interaction method further includes: identifying the identity of the operator according to the identity data, and assigning a corresponding operation permission to the operator according to the identity of the operator.
Scheme 35. The man-machine interaction method according to scheme 34, characterized in that the identity data is the location of the operator or an identification badge.
Scheme 36. The man-machine interaction method according to scheme 22, characterized in that the man-machine interaction method further includes: assigning a corresponding operation permission to the operator according to an operation mode, wherein there are at least two operation modes.
Scheme 37. The man-machine interaction method according to scheme 22, wherein the display screen includes a first display screen and a second display screen, and the step of "displaying corresponding content on a display screen according to the selected item and the operation instruction" includes: dividing the display content into first display content and second display content, displaying the first display content on the first display screen, and displaying the second display content on the second display screen.
Scheme 38. The man-machine interaction method according to scheme 37, wherein the step of "displaying corresponding content on a display screen according to the selected item and the operation instruction" further includes: displaying at least part of the content on the first display screen on the second display screen according to a corresponding operation instruction.
Scheme 39. The man-machine interaction method according to scheme 22, characterized in that the step of "displaying corresponding content on a display screen according to the selected item and the operation instruction" further includes: switching the display screen from an energy-saving mode to an active mode according to a wake-up instruction, the wake-up instruction being that the time for which the operator's line of sight rests on the display screen is greater than or equal to a set time.
Scheme 40. The man-machine interaction method according to scheme 26, characterized in that the step of "displaying corresponding content on a display screen according to the selected item and the operation instruction" further includes: switching the display screen from an energy-saving mode to an active mode according to a wake-up instruction, the wake-up instruction being converted from the corresponding voice information of the operator.
Scheme 41. The man-machine interaction method according to scheme 22, characterized in that the man-machine interaction method is used for a vehicle-mounted man-machine interactive system.
Scheme 42. The man-machine interaction method according to scheme 22, characterized in that the man-machine interaction method is used for a virtual reality display system or an augmented reality display system.
Scheme 43. A vehicle, characterized by including the man-machine interactive system according to any one of schemes 1-20.
Brief description of the drawings
The man-machine interactive system, the man-machine interaction method, and the vehicle including the system of the present invention are described below with reference to the accompanying drawings, in which:
Fig. 1 is a structural schematic diagram of a man-machine interactive system in an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of another man-machine interactive system in an embodiment of the present invention;
Fig. 3 is a schematic diagram of a first way of confirming the selected item in an embodiment of the present invention;
Fig. 4 is a schematic diagram of a second way of confirming the selected item in an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of display equipment in an embodiment of the present invention;
Fig. 6 is a flowchart of a man-machine interaction method in an embodiment of the present invention;
Fig. 7 is a flowchart of one implementation of step S2 of the man-machine interaction method in an embodiment of the present invention;
Fig. 8 is a flowchart of another implementation of step S2 of the man-machine interaction method in an embodiment of the present invention.
Detailed description of the embodiments
To address the poor flexibility of non-contact human-machine interaction in prior-art man-machine interactive systems, the present invention provides a man-machine interactive system, a method, and a vehicle including the system. By processing the collected operation data to obtain a selected item and a corresponding operation instruction — that is, by determining "which item to operate on" and "what operation to perform" — all functions of the man-machine interactive system can be operated, so that the man-machine interactive system can flexibly realize contactless operation and improve the user's interactive experience.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It will be apparent to those skilled in the art that these embodiments are used only to explain the technical principles of the present invention and are not intended to limit the scope of the invention.
The man-machine interactive system of the present invention is first described with reference to Fig. 1, which is a structural schematic diagram of a man-machine interactive system in an embodiment of the present invention. As shown in Fig. 1, the man-machine interactive system of the present invention mainly includes acquisition equipment 1, processing equipment 2, and display equipment 3. The acquisition equipment 1 is configured to collect operation data of an operator, the operation data including eye information and instruction information. The processing equipment 2 is configured to confirm a selected item among the operable items on the current display interface according to the operation data, and to convert the instruction information into a corresponding operation instruction to realize the operation intention of the operator. The display equipment 3 includes a display screen and is configured to display corresponding content on the display screen according to the selected item and the operation instruction. It should be noted that the "operation intention of the operator" refers to the corresponding function that the operator wishes the man-machine interactive system to realize through the interaction, for example playing a selected song, performing route guidance, or switching the display interface. In this embodiment, the operation data of the operator is collected, the selected item is confirmed according to the operation data, and the instruction information is converted into the corresponding operation instruction; that is, it is determined which item is to be operated on and what the specific operation instruction is. In this way, the various functions of the man-machine interactive system can be operated through contactless manipulation, so that the human-computer interaction has higher flexibility and accuracy.
It should be noted that the man-machine interactive system provided by the present invention may be a vehicle-mounted man-machine interactive system, a virtual reality display system (VR), an augmented reality display system (AR), and the like.
Fig. 2 is a structural schematic diagram of another man-machine interactive system in an embodiment of the present invention. As shown in Fig. 2, in the man-machine interactive system provided by this embodiment there are at least two pieces of acquisition equipment 1, which respectively collect the operation data of different operators. It should be noted that "different operators" refers to the same acquisition moment; that is, at the same time, one piece of acquisition equipment collects the operation data of one or some operators while another piece collects the operation data of another operator or other operators. One benefit of this embodiment is that the operation data of each operator can be fully collected; another benefit is that the collected operation data is more targeted and has smaller error. For example, when the operation data collected by the acquisition equipment is an image, arranging multiple pieces of acquisition equipment keeps the deformation of the obtained image caused by the acquisition angle small. In some specific embodiments, the man-machine interactive system is a vehicle-mounted man-machine interactive system, and two pieces of acquisition equipment 1 are arranged in the vehicle to respectively collect the operation data of the driver and of the passenger in the front passenger seat, so that the acquisition equipment can fully collect their operation data and better meet the need for several people to interact at the same time. Of course, one piece of acquisition equipment 1 may also be arranged for each seat in the vehicle, so that the operation data of every operator in the vehicle is fully collected.
Further, still referring to Fig. 1 or Fig. 2, in the man-machine interactive system of this embodiment the acquisition equipment 1 can take various forms, as long as it can collect the operation data of the operator. Several preferred embodiments of the acquisition equipment are described below. Optionally, the acquisition equipment 1 includes an image collector; the eye information includes the eye image of the operator collected by the image collector, and the instruction information includes the action image of the operator collected by the image collector. Optionally, the acquisition equipment 1 includes an eyeball tracker and an action sensor; the eyeball tracker collects the eye information of the operator, and the instruction information includes the action information of the operator collected by the action sensor. Optionally, the acquisition equipment 1 includes an eyeball tracker and a voice information recognition device; the eyeball tracker collects the eye information of the operator, and the instruction information includes the voice of the operator collected by the voice information recognition device. Optionally, the acquisition equipment 1 includes an image collector and a voice information recognition device; the image collector collects the eye information of the operator, and the instruction information includes the action image of the operator collected by the image collector and/or the voice of the operator collected by the voice information recognition device. Optionally, the acquisition equipment 1 includes an eyeball tracker, an action sensor, and a voice information recognition device; the eye information includes the eye image of the operator, and the instruction information includes the action information of the operator collected by the action sensor and/or the voice of the operator collected by the voice information recognition device. In some specific embodiments, a camera can be used as the image collector; the eyeball tracker may track according to the changing features of the eyeball and its surroundings, track according to the change of the iris angle, or actively project light beams such as infrared rays onto the iris and extract features; and the action sensor may be an optical action sensor (such as a camera), an acoustic action sensor, an electromagnetic action sensor, or the like. It should be noted that the form of the acquisition equipment can be selected according to the specific implementation conditions. In particular, when the acquisition equipment can collect both action information and voice information, the operator can interact either by actions or by voice, which makes the interaction more flexible.
The processing equipment can confirm the selected item in different ways; the principles of confirming the selected item are described with reference to Figs. 3 and 4, where Fig. 3 is a schematic diagram of a first way of confirming the selected item in an embodiment of the present invention, and Fig. 4 is a schematic diagram of a second way of confirming the selected item in an embodiment of the present invention.
As shown in Fig. 3, the processing equipment is configured to determine the line-of-sight direction and the pupil center according to the eye information, and to take as the selected item the operable item at the intersection of the display screen with the straight line that passes through the pupil center and is parallel to the line-of-sight direction. In this way of confirming the selected item, the selected item can be confirmed regardless of whether the instruction information is action information or voice information, so this way has strong adaptability.
As shown in Fig. 4, the instruction information includes a dynamically changing gesture, and the processing equipment is configured to determine the pupil center of the operator according to the eye information, to select a specific site of the operator's hand, and to take as the selected item the operable item at the intersection of the display screen with the straight line passing through the pupil center and the specific site. For example, the fingertip of the operator's index finger can be selected as the specific site. This is because, when interacting, people habitually make gesture changes against the position they are gazing at; in particular, the fingertip of some finger is usually held against the gazed-at position. Therefore, determining the selected item by combining the specific site with the line-of-sight direction can more accurately determine where the operator's line of sight lands on the display screen and thus more accurately determine the operable item, further improving the accuracy of the human-computer interaction.
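The geometry behind both ways of confirming the selected item can be sketched as a line-plane intersection. The following is a minimal illustration, not taken from the patent: it assumes the display screen lies in the plane z = 0 of a common coordinate system, and the names `Rect`, `intersect_screen`, and `select_item` are ours. The same routine covers scheme 27 (direction = line-of-sight direction) and scheme 28 (direction = vector from pupil center to fingertip).

```python
from dataclasses import dataclass

@dataclass
class Rect:
    # An operable item's bounding box in screen-plane coordinates.
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

def intersect_screen(origin, direction):
    """Intersect the ray from `origin` along `direction` with the screen
    plane z = 0; return the (x, y) hit point, or None if the ray is
    parallel to the screen or points away from it."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dz) < 1e-9:
        return None
    t = -oz / dz
    if t <= 0:
        return None
    return (ox + t * dx, oy + t * dy)

def select_item(pupil, toward, items):
    """`toward` is either the line-of-sight direction (first way) or the
    pupil-to-fingertip vector (second way); the operable item containing
    the hit point becomes the selected item."""
    hit = intersect_screen(pupil, toward)
    if hit is None:
        return None
    for item in items:
        if item.x0 <= hit[0] <= item.x1 and item.y0 <= hit[1] <= item.y1:
            return item.name
    return None
```

For the fingertip variant, `toward` would be computed as `fingertip - pupil` componentwise before calling `select_item`.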
Further, still referring to Fig. 1, the processing equipment 2 is configured to convert the instruction information into the corresponding operation instruction through a transformation model. The transformation model is established according to the correspondence between existing common actions and/or voices and operation instructions. Specifically, common correspondences between gestures and operation instructions include: moving the index finger and thumb away from each other means enlarging the selected content; moving the index finger and thumb toward each other means shrinking the selected content; sliding the hand to the left means moving the selected content to the left; sliding the hand to the right means moving the selected content to the right; and so on. Common correspondences between body actions and operation instructions include: nodding means confirmation, shaking the head means negation, and so on. Common voice messages usable for human-computer interaction include "play XX", "return to the main interface", "turn the sound up", and the like.
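In its simplest form, the transformation model described above is just a lookup table from recognized gestures or phrases to operation instructions. The sketch below assumes the gesture recognizer has already produced symbolic tokens; the token and instruction names are illustrative, not from the patent.

```python
# Baseline transformation model: common gestures/actions mapped to
# operation instructions, mirroring the correspondences in the text.
DEFAULT_MODEL = {
    "pinch_out": "zoom_in",      # index finger and thumb move apart
    "pinch_in": "zoom_out",      # index finger and thumb move together
    "swipe_left": "move_left",
    "swipe_right": "move_right",
    "nod": "confirm",
    "head_shake": "cancel",
}

def to_instruction(instruction_info, model=DEFAULT_MODEL):
    """Convert recognized instruction information into an operation
    instruction; unknown inputs yield None rather than a wrong action."""
    return model.get(instruction_info)
```

A real system would sit a gesture/voice recognizer in front of this table, but the table itself is what schemes 29 and 33 call the transformation model in its unpersonalized form.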
Further, still referring to Fig. 1, the operation data further includes the user information of the operator, and the processing equipment 2 is further configured to determine the specific operator by identifying the user information, and to train on the historical operation data of the specific operator by a machine learning method, updating the transformation model to obtain a personalized transformation model. In this embodiment, the historical operation data of each specific operator is used as training data, and a machine learning method is used to train on these data to obtain a personalized transformation model. Using the personalized transformation model to convert the instruction information of each specific operator into operation instructions can improve the accuracy and speed of the conversion, thereby improving the experience of the human-computer interaction. For example, user A habitually nods to express the operation intention of entering the selected item, while user B habitually double-taps with a finger to express the same intention; the personalized transformation model converts user A's nodding action into the operation instruction for entering the selected item, and converts user B's finger double-tap into the same instruction. It should be noted that the acquisition equipment may collect an account entered by the operator as the operator's user information, or may collect the biological information of the operator as the operator's user information, where biological information refers to information such as the user's facial recognition information, fingerprint, and voiceprint.
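The patent does not specify the machine-learning method, so as one hedged stand-in the sketch below personalizes the base transformation model by majority vote over each user's logged (action, confirmed instruction) pairs. All names are illustrative; a production system could substitute any classifier trained on the same history.

```python
from collections import Counter, defaultdict

def personalize(base_model, history):
    """Derive a per-user transformation model from historical operation
    data. `history` maps each user to a list of
    (observed_action, confirmed_instruction) pairs; for each action, the
    instruction the user most often confirmed overrides the base mapping."""
    models = {}
    for user, pairs in history.items():
        votes = defaultdict(Counter)
        for action, instruction in pairs:
            votes[action][instruction] += 1
        model = dict(base_model)  # start from the common mapping
        for action, counter in votes.items():
            model[action] = counter.most_common(1)[0][0]
        models[user] = model
    return models
```

This reproduces the user A / user B example: after training, A's "nod" and B's "double_tap" both map to entering the selected item, while unseen actions fall back to the shared defaults.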
Further, still referring to Fig. 1, the processing equipment 2 is further configured to convert action data into operation instructions according to the setting information of the operator, the setting information including a set action and the operation instruction corresponding to the set action. In other words, the operator can make personalized settings. One benefit this brings is that the human-computer interaction becomes more interesting; another is that unauthorized persons may be unable to complete the interaction using common actions, so the human-computer interaction has stronger security.
Further, still referring to Fig. 1, the operation data further includes the identity information of the operator, and the processing equipment 2 is further configured to identify the identity of the operator according to the identity information and to assign a corresponding operation permission to the operator according to that identity. The identity of an operator in the present invention refers to the role each operator plays in the group of operators when several people interact, for example teacher-student, driver-passenger, seller-buyer, and the like. By assigning each operator the operation permission corresponding to his or her identity, this embodiment allows each operator to perform the operations relevant to that identity; it can ensure that some operators interact fully while other operators keep their attention on their own work. Further, the identity information may be an identification badge of the operator or the location of the operator, where the identification badge may be specific clothing, an NFC card, or another marker. In some specific embodiments, in a vehicle-mounted man-machine interactive system, the identity of the operator is passenger or driver and the identity information is the operator's location: for example, the operator sitting in the driver's seat is identified as the driver and assigned the driver's operation permission, while the person sitting in the front passenger seat is identified as a passenger and assigned the passenger's operation permission. This ensures that the passenger can interact fully while the driver drives more safely, guaranteeing traffic safety.
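The in-vehicle case above amounts to a two-step lookup: seat location to role, role to permission set. The sketch below illustrates that shape; the seat names, role names, and permission sets are assumptions for illustration, not taken from the patent.

```python
# Seat location -> identity (role), per the driver/passenger example.
SEAT_ROLES = {"driver_seat": "driver", "front_passenger_seat": "passenger"}

# Role -> operations that role is permitted to perform (illustrative).
PERMISSIONS = {
    "driver": {"navigate", "adjust_climate"},
    "passenger": {"navigate", "adjust_climate", "play_video", "play_game"},
}

def permissions_for(seat):
    """Identify the operator's role from the detected seat location and
    return (role, permitted operations); unknown seats (e.g. rear seats)
    default to the passenger role in this sketch."""
    role = SEAT_ROLES.get(seat, "passenger")
    return role, PERMISSIONS[role]
```

An identification badge such as an NFC card would simply replace the seat key with a badge ID in the first lookup.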
Further, the man-machine interactive system includes at least two operation modes, and the processing equipment 2 is further configured to assign a corresponding operation permission to the operator according to the operation mode. By assigning permissions according to the operation mode, this embodiment can ensure a more suitable interactive experience in different scenarios. Taking a vehicle-mounted man-machine interactive system as an example, the modes can include a driving mode, a parking mode, an assisted driving mode, and the like. In the driving mode, the driver's operation permission should be limited without limiting the passenger's: for example, the driver can perform driving-related operations such as navigation, while entertainment-related operations such as playing videos or games are forbidden; the passenger, however, can perform all operations. In the parking mode, both the driver and the passenger can perform all operations. In the assisted driving mode, the driver's operation permission is limited according to the degree of driving assistance. It should be noted that when the vehicle-mounted man-machine interactive system is in the driving mode, the processing equipment also analyzes the collected operation data to determine whether the driver is driving while fatigued: for example, when the driver's line of sight deviates from the road ahead for a long time, the driver is judged to be fatigued, and the driver should be reminded to find a safe area to stop and rest as soon as possible.
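One way to realize mode-dependent permissions is to subtract a per-mode restriction set from the driver's base permissions while leaving passengers untouched, matching the driving/parking example above. The mode names and operation names here are illustrative assumptions, and the partial relaxation in assisted mode is our guess at "limited according to the degree of driving assistance".

```python
# Operations removed from the driver's permission set in each mode;
# passengers are never restricted in this sketch.
MODE_RESTRICTIONS = {
    "driving": {"play_video", "play_game"},
    "parking": set(),
    "assisted": {"play_video"},  # assumed partial relaxation
}

def allowed(role, mode, operation, base_permissions):
    """Decide whether `operation` is permitted for `role` in the current
    operation mode, given the role's base permission set."""
    ops = set(base_permissions[role])
    if role == "driver":
        ops -= MODE_RESTRICTIONS.get(mode, set())
    return operation in ops
```

The fatigue check described in the text is a separate concern: it would analyze gaze samples over time rather than gate individual operations.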
The display equipment in the present invention is described with reference to Fig. 5, which is a structural schematic diagram of display equipment in an embodiment of the present invention. As shown in Fig. 5, the display screen 30 includes a first display screen 301 and a second display screen 302; the display equipment 3 is configured to divide the display content into first display content and second display content, and to control the first display screen 301 to show the first display content and the second display screen 302 to show the second display content. In some specific embodiments, in a vehicle-mounted man-machine interactive system, the first display screen 301 is the main control screen and the second display screen 302 is the instrument display screen, the two screens showing different content. Of course, the number of display screens is not limited to two; for example, some vehicles are also equipped with a head-up display (HUD), in which case the front windshield of the vehicle can also serve as one of the display screens. Further, the display equipment 3 shows at least part of the first display content on the second display screen 302 according to a corresponding operation instruction. For example, in a vehicle-mounted man-machine interactive system, the passenger in the front passenger seat can, through a corresponding operation instruction, have the retrieved navigation information shown on the instrument display screen so that the driver can obtain the navigation information more conveniently.
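The "push part of the first screen's content to the second screen" behavior can be modeled as moving an item between per-screen content lists in response to an operation instruction. This is a toy sketch under our own naming ("main"/"instrument"), not the patent's implementation.

```python
class DisplayEquipment:
    """Minimal model of display equipment driving two screens."""

    def __init__(self):
        self.screens = {"main": [], "instrument": []}

    def show(self, screen, item):
        self.screens[screen].append(item)

    def push_to(self, item, src="main", dst="instrument"):
        """Mirror part of the first screen's content onto the second
        screen in response to an operation instruction; content not
        present on the source screen cannot be pushed."""
        if item in self.screens[src]:
            self.screens[dst].append(item)
            return True
        return False
```

In the navigation example, the passenger's gesture would resolve to `push_to("navigation")`, after which both screens show the route.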
Further, still referring to Fig. 5, when the display screen 30 has not been operated for more than a certain time, it can enter an energy-saving mode, so a specific operation is needed to wake it. Accordingly, the display equipment 3 is further configured to switch the display screen 30 from the energy-saving mode to the active mode according to a wake-up instruction, the wake-up instruction being that the time for which the operator's line of sight rests on the display screen is greater than or equal to a set time; when the acquisition equipment includes a voice information recognition device, the wake-up instruction can also be converted from the corresponding voice information of the operator. In this embodiment, the display screen can be switched from the energy-saving mode to the active mode by the wake-up instruction; that is, the display screen does not have to remain in the active mode constantly, which helps save electric energy and extend the service life of the display equipment.
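The gaze-dwell wake-up condition is a small piece of stateful logic: track how long the line of sight has rested on the screen and fire once it reaches the set time. The sketch below assumes timestamped on/off-screen gaze samples from the eyeball tracker; the 1.5-second default is an assumed value for the patent's unspecified "set time".

```python
def wake_controller(dwell_threshold_s=1.5):
    """Return a stateful update function: feed it (timestamp, on_screen)
    gaze samples, and it returns True once the gaze has rested on the
    screen continuously for at least the threshold."""
    state = {"since": None}

    def update(t, on_screen):
        if not on_screen:
            state["since"] = None  # gaze left the screen: reset the dwell
            return False
        if state["since"] is None:
            state["since"] = t     # gaze just landed on the screen
        return t - state["since"] >= dwell_threshold_s

    return update
```

A voice wake-up path would bypass this timer entirely and issue the wake-up instruction directly from a recognized phrase.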
The man-machine interaction method of the present invention is described with reference to Fig. 6, which is a flowchart of a man-machine interaction method in an embodiment of the present invention. The man-machine interaction method of the present invention can be used for a vehicle-mounted man-machine interactive system, a virtual reality display system (VR), an augmented reality display system (AR), and the like. As shown in Fig. 6, the man-machine interaction method provided by this embodiment includes the following steps:
Step S1: collect the operation data of the operator, the operation data including eye information and instruction information.
Step S2: confirm a selected item among the operable items on the current display interface according to the operation data.
Step S3: convert the instruction information into a corresponding operation instruction to realize the operation intention of the operator.
Step S4: display corresponding content on the display screen according to the selected item and the operation instruction.
In this embodiment, the operation data of the operator, including eye information and instruction information, is collected and converted into a selected item and an operation instruction; that is, it is determined which item is to be operated on and what the specific operation instruction is. In this way, the various functions of the man-machine interactive system can be operated through contactless manipulation, so that the human-computer interaction has higher flexibility and accuracy.
Further, step S1 includes: collecting the operation data of the operator by acquisition equipment. In some cases there is more than one operator; in order to fully collect the operation data, there are further at least two pieces of acquisition equipment, which respectively collect the operation data of different operators. It should be noted that "different operators" refers to the same acquisition moment; that is, at the same time, one piece of acquisition equipment collects the operation data of one or some operators while another piece collects the operation data of another operator or other operators. One benefit of this embodiment is that the operation data of each operator can be fully collected; another benefit is that the collected operation data is more targeted and has smaller error. For example, when the collected operation data is an image, arranging multiple pieces of acquisition equipment keeps the deformation of the image caused by the acquisition angle small. In some specific embodiments, the man-machine interaction method is used for a vehicle-mounted man-machine interactive system, and two pieces of acquisition equipment are arranged in the vehicle to respectively collect the operation data of the driver and of the passenger in the front passenger seat, so that the acquisition equipment can fully collect their operation data and better meet the need for several people to interact at the same time. Of course, one piece of acquisition equipment may also be arranged for each seat in the vehicle, so that the operation data of every operator in the vehicle is fully collected.
The acquisition device can take various forms, as long as it can acquire the operation data of the operator. Several preferred ways in which the acquisition device acquires the operation data of the operator are described below. Optionally, the acquisition device includes an image collector; in this case step S1 includes: acquiring an eye image of the operator as the eye information and acquiring motion images of the operator as the instruction information by means of the image collector. Optionally, the acquisition device includes an eyeball tracker and an action sensor; in this case step S1 includes: acquiring the eye information of the operator by means of the eyeball tracker, and acquiring the action information of the operator as the instruction information by means of the action sensor. Optionally, the acquisition device further includes a voice information recognition device; in this case step S1 further includes: acquiring the voice data of the operator by means of the voice information recognition device. On this basis, the man-machine interaction method further includes: converting the voice data into a corresponding operation instruction. It should be noted that when the acquisition device includes a voice information recognition device, the operator can carry out human-machine interaction either with actions or with voice, making the interaction more flexible.
Fig. 7 is a flow chart of step S2 of a man-machine interaction method in an embodiment of the present invention. Referring to Fig. 3 and Fig. 7, in some optional embodiments, step S2 includes: step S201: determining the gaze direction and the pupil center of the operator according to the eye information; step S202: taking the operable item located at the intersection of the display screen with the straight line passing through the pupil center and parallel to the gaze direction as the selected item. With this way of confirming the selected item, the selected item can be confirmed regardless of whether the instruction information is action information or voice information, giving the method strong adaptability.
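The geometry of steps S201 and S202 — intersecting the line through the pupil center, parallel to the gaze direction, with the display — can be sketched as follows, under the simplifying assumption of a flat screen lying in the plane z = 0 (coordinates and units are illustrative, not from the patent):

```python
def gaze_hit(pupil_center, gaze_dir, plane_z=0.0):
    """Intersect the line through the pupil center, parallel to the gaze
    direction, with a flat display modeled as the plane z = plane_z.
    Returns the (x, y) hit point on the screen, or None if the gaze is
    parallel to the screen plane."""
    dx, dy, dz = gaze_dir
    if abs(dz) < 1e-9:
        return None  # line never meets the screen plane
    t = (plane_z - pupil_center[2]) / dz
    return (pupil_center[0] + t * dx, pupil_center[1] + t * dy)

# Pupil 60 cm in front of the screen, gazing slightly to the left:
print(gaze_hit((10.0, 5.0, 60.0), (-0.1, 0.0, -1.0)))  # (4.0, 5.0)
```

The operable item whose on-screen region contains the returned point would then be taken as the selected item.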
Fig. 8 is a flow chart of step S2 of a man-machine interaction method in an embodiment of the present invention. Referring to Fig. 4 and Fig. 8, in some optional embodiments, in the case where the instruction information includes a dynamically changing gesture, step S2 may include: step S201': determining the pupil center of the operator according to the eye information; step S202': selecting a specific site on the hand of the operator; step S203': taking the operable item located at the intersection of the display screen with the straight line passing through the pupil center and the specific site as the selected item. For example, the finger pad of the operator's index finger is selected as the specific site. This is because, when carrying out human-machine interaction, people habitually make gesture changes at the position they are gazing at in order to perform an operation; in particular, the fingertip of a finger is usually held against the gazed-at position. Therefore, determining the selected line of sight by combining the specific site with the gaze direction can more accurately determine where the operator's sight falls on the display screen, and hence more accurately determine the operated item, further improving the accuracy of human-machine interaction.
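The two-point variant of steps S201'–S203' — a line through the pupil center and the fingertip ("specific site") rather than along the gaze direction — can be sketched the same way; the flat-screen assumption and the item rectangles below are illustrative:

```python
def pointed_item(pupil, fingertip, items, plane_z=0.0):
    """Intersect the line through the pupil center and the specific site
    (e.g. the index fingertip) with the screen plane z = plane_z, then
    return the operable item whose rectangle (x0, y0, x1, y1) contains
    the hit point, if any."""
    dx, dy, dz = (f - p for p, f in zip(pupil, fingertip))
    if abs(dz) < 1e-9:
        return None  # line parallel to the screen plane
    t = (plane_z - pupil[2]) / dz
    hx, hy = pupil[0] + t * dx, pupil[1] + t * dy
    for name, (x0, y0, x1, y1) in items.items():
        if x0 <= hx <= x1 and y0 <= hy <= y1:
            return name
    return None

items = {"navigation": (4, 4, 8, 8), "media": (10, 4, 14, 8)}
print(pointed_item((0, 0, 60), (1, 1, 50), items))  # navigation
```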
Further, in some alternative embodiments, step S3 includes: converting the instruction information into a corresponding operation instruction by means of a transformation model. The transformation model is established according to the correspondence between existing common actions and/or voice and operation instructions. Specifically, common correspondences between gestures and operation instructions include: moving the index finger and thumb away from each other means enlarging the selected content, moving the index finger and thumb toward each other means shrinking the selected content, sliding the hand to the left means moving the selected content to the left, sliding the hand to the right means moving the selected content to the right, and so on. Common correspondences between limb actions and operation instructions include: nodding means confirmation, shaking the head means negation, and so on. Common voice information usable for human-machine interaction includes: "play XX", "return to the main interface", "louder", and so on.
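In its simplest form, such a transformation model is a lookup table from recognized actions or phrases to operation instructions. The sketch below mirrors the correspondences listed above; the recognizer labels and instruction names are hypothetical:

```python
# Hypothetical recognizer labels mapped to the operation instructions
# described in the text.
TRANSFORMATION_MODEL = {
    "fingers_apart":    "enlarge_selected",   # index finger and thumb move apart
    "fingers_together": "shrink_selected",    # index finger and thumb move together
    "slide_left":       "move_selected_left",
    "slide_right":      "move_selected_right",
    "nod":              "confirm",
    "head_shake":       "negate",
    "voice:return to the main interface": "go_to_main_interface",
    "voice:louder":     "volume_up",
}

def convert(instruction_info):
    """Step S3: convert instruction information into an operation instruction."""
    return TRANSFORMATION_MODEL.get(instruction_info, "unrecognized")

print(convert("nod"))         # confirm
print(convert("slide_left"))  # move_selected_left
```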
Further, the operation data further includes user information of the operator, and the man-machine interaction method further includes: determining the specific operator by recognizing the user information; and training on the historical operation data of the specific operator by a machine learning method to generate a personalized transformation model. In this embodiment, the historical operation data of each specific operator is used as training data, and a machine learning method is applied to this training data to obtain a personalized transformation model. Converting the instruction information of each specific operator into operation instructions with the personalized transformation model can improve the accuracy and speed of the conversion, thereby improving the experience of human-machine interaction. For example, user A is accustomed to expressing the operation intention of entering the selected item by nodding, while user B is accustomed to expressing the operation intention of entering the selected item by double-tapping a finger; the personalized transformation model converts user A's nodding action into the operation instruction of entering the selected item, and converts user B's finger double-tap action into the same instruction. It should be noted that, in this embodiment, the step of "recognizing the user information" includes: acquiring an account entered by the operator as the user information of the operator, or acquiring biometric information of the operator as the user information of the operator, where the biometric information refers to information such as the operator's facial recognition information, fingerprint, and voiceprint.
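A minimal stand-in for this training step — learning each specific operator's habitual action-to-instruction mapping from historical operation data by majority vote, rather than a full machine-learning pipeline — might look like this (all names hypothetical):

```python
from collections import Counter, defaultdict

def train_personal_models(history):
    """history: iterable of (user, action, instruction) triples from past
    sessions.  Returns a per-user action->instruction mapping learned by
    majority vote -- a toy substitute for the machine-learning training
    the embodiment describes."""
    counts = defaultdict(Counter)
    for user, action, instruction in history:
        counts[(user, action)][instruction] += 1
    models = defaultdict(dict)
    for (user, action), c in counts.items():
        models[user][action] = c.most_common(1)[0][0]
    return models

history = [
    ("A", "nod", "enter_selected"), ("A", "nod", "enter_selected"),
    ("B", "double_tap", "enter_selected"), ("B", "nod", "confirm"),
]
models = train_personal_models(history)
print(models["A"]["nod"])         # enter_selected  (user A's habit)
print(models["B"]["double_tap"])  # enter_selected  (user B's habit)
```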
Further, in some alternative embodiments, step S3 includes: converting the action data into an operation instruction according to setting information of the operator, the setting information including a set action and the operation instruction corresponding to the set action. That is, the operator can make personalized settings. One benefit this brings is that human-machine interaction becomes more engaging; another is that unauthorized persons may be unable to complete the interaction using common actions, so that human-machine interaction has stronger security.
Further, the operation data further includes identity information of the operator, and the man-machine interaction method provided in this embodiment further includes: recognizing the identity of the operator according to the identity information, and assigning the corresponding operating permission to the operator according to that identity. The identity of an operator in the present invention refers to the role each operator plays, when several people interact, within the group formed by the operators, for example teacher and student, driver and passenger, seller and buyer, and so on. By assigning the corresponding operating permission according to the operator's identity, this embodiment ensures that some operators can fully carry out human-machine interaction while the other operators can concentrate on other work. Furthermore, the identity information is an identification mark of the operator or the location of the operator, where the identification mark may be specific clothing, an NFC card, or another marker. In some specific embodiments, during the interaction of an in-vehicle man-machine interactive system, the identity of the operator is passenger or driver, and the identity information is the operator's location: for example, the operator sitting in the driving seat is identified as the driver and assigned the driver's operating permission, and the person sitting in the front passenger seat is identified as a passenger and assigned the passenger's operating permission. This ensures that the passenger can fully carry out human-machine interaction while letting the driver drive more safely, guaranteeing traffic safety.
Further, the man-machine interaction method provided in this embodiment further includes: assigning the corresponding operating permission to the operator according to the operation mode. Taking the in-vehicle man-machine interactive system as an example, the modes may be divided into a driving mode, a parking mode, an assisted-driving mode, and so on. In the driving mode, the operating permission of the driver should be limited while the operating permission of the passenger need not be: for example, the driver may carry out driving-related operations such as navigation, while entertainment-related operations such as playing films or games are prohibited, whereas the passenger may carry out all operations. In the parking mode, both the driver and the passenger may carry out all operations. In the assisted-driving mode, the driver's operating permission is limited according to the degree of driving assistance: the higher the level of assistance, the more operating permissions the driver has. It should be noted that when the in-vehicle man-machine interactive system is in the driving mode, the processing device also analyzes the collected operation data to confirm whether the driver is driving while fatigued. For example, if the driver's gaze deviates from the front for a long time, the driver is judged to be driving while fatigued, and at this point the driver should be reminded to find a safe area to stop and rest as soon as possible.
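A mode-dependent permission check following the description above — driving mode restricts the driver's entertainment operations, parking mode allows everything, and assisted driving relaxes the limits as the assistance level rises — could be sketched as follows; the level threshold is an illustrative assumption:

```python
ENTERTAINMENT = {"movies", "games"}

def allowed(operation, role, mode, assist_level=0):
    """Return whether `role` may perform `operation` in the given mode."""
    if role != "driver" or mode == "parking":
        return True  # passengers, and everyone while parked, are unrestricted
    if mode == "driving":
        return operation not in ENTERTAINMENT
    if mode == "assisted":
        # Higher assistance levels grant the driver more permissions;
        # the threshold of 3 is an assumption for illustration.
        return operation not in ENTERTAINMENT or assist_level >= 3
    return False

print(allowed("navigation", "driver", "driving"))               # True
print(allowed("movies", "driver", "driving"))                   # False
print(allowed("movies", "passenger", "driving"))                # True
print(allowed("movies", "driver", "assisted", assist_level=3))  # True
```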
Further, the display screen includes a first display screen and a second display screen, and step S4 then includes: dividing the display content into first display content and second display content, displaying the first display content on the first display screen, and displaying the second display content on the second display screen. Further, step S4 also includes: the display device displaying at least part of the content on the first display screen on the second display screen according to the corresponding operation instruction. In the in-vehicle man-machine interactive system, the first display screen is the main control screen and the second display screen is the instrument display screen, the two screens being used to display different content. Of course, the number of display screens is not limited to two; for example, some vehicles are also equipped with a head-up display (HUD), in which case the front windshield of the vehicle can also serve as one of the display screens. For example, in the in-vehicle man-machine interactive system, the passenger in the front passenger seat can, through a corresponding operation instruction, have retrieved navigation information displayed on the instrument display screen, enabling the driver to obtain the navigation information more conveniently.
Further, when no operation has occurred for more than a certain time, the display screen may enter an energy-saving mode, and certain specific operations are therefore needed to wake the display screen. On this basis, step S4 further includes: switching the display device from the energy-saving mode to the active mode according to a wake instruction, where the wake instruction is that the dwell time of the operator's gaze on the display screen is greater than or equal to a set time; when the acquisition device includes a voice information recognition device, the wake instruction may also be converted from corresponding voice information of the operator. In this example, the display screen can be switched from the energy-saving mode to the active mode by a wake instruction; that is, the display screen does not have to remain in the active mode at all times, which helps save electric energy and prolongs the service life of the display screen.
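The gaze-dwell wake instruction can be sketched as a small detector that signals a switch out of energy-saving mode once the operator's gaze has rested on the screen for at least the set time; the threshold and the sampling timestamps are illustrative:

```python
class WakeDetector:
    """Emit a wake instruction when the gaze dwells on the screen for at
    least `setting_time` seconds; looking away resets the dwell timer."""
    def __init__(self, setting_time=2.0):
        self.setting_time = setting_time
        self.dwell_start = None

    def update(self, timestamp, gaze_on_screen):
        if not gaze_on_screen:
            self.dwell_start = None  # gaze left the screen: reset
            return False
        if self.dwell_start is None:
            self.dwell_start = timestamp
        return timestamp - self.dwell_start >= self.setting_time

detector = WakeDetector(setting_time=2.0)
print(detector.update(0.0, True))  # False: dwell just started
print(detector.update(1.0, True))  # False: only 1 s so far
print(detector.update(2.0, True))  # True: switch display to active mode
```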
In order to solve the above technical problem, this embodiment provides a vehicle that includes the man-machine interactive system of any one of the above embodiments and has the beneficial effects of the corresponding man-machine interactive system, which are not repeated here.
Thus far, the technical solution of the present invention has been described with reference to the preferred embodiments shown in the drawings. However, those skilled in the art will readily understand that the scope of protection of the present invention is clearly not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will fall within the scope of protection of the present invention.
Claims (10)
1. A man-machine interactive system, characterized in that the man-machine interactive system comprises:
an acquisition device, configured to acquire operation data of an operator, the operation data comprising eye information and instruction information;
a processing device, configured to confirm, according to the operation data, a selected item among the operable items in a current display interface, and to convert the instruction information into a corresponding operation instruction so as to realize the operation intention of the operator;
a display device, comprising a display screen and configured to display corresponding content on the display screen according to the selected item and the operation instruction.
2. The man-machine interactive system according to claim 1, characterized in that the number of acquisition devices is at least two, and the at least two acquisition devices respectively acquire the operation data of different operators.
3. The man-machine interactive system according to claim 1, characterized in that the acquisition device comprises an image collector, the eye information comprises an eye image of the operator acquired by the image collector, and the instruction information comprises motion images of the operator acquired by the image collector.
4. The man-machine interactive system according to claim 1, characterized in that the acquisition device comprises an eyeball tracker and an action sensor, the eyeball tracker acquires the eye information of the operator, and the instruction information comprises action information of the operator acquired by the action sensor.
5. The man-machine interactive system according to claim 3 or 4, characterized in that the acquisition device further comprises a voice information recognition device, and the instruction information further comprises the voice of the operator acquired by the voice information recognition device.
6. The man-machine interactive system according to claim 1, characterized in that the processing device is configured to determine a gaze direction and a pupil center according to the eye information, and to take the operable item located at the intersection of the display screen with the straight line passing through the pupil center and parallel to the gaze direction as the selected item.
7. The man-machine interactive system according to claim 1, characterized in that the instruction information comprises a dynamically changing gesture, and the processing device is configured to determine the pupil center of the operator according to the eye information, select a specific site on the hand of the operator, and take the operable item located at the intersection of the display screen with the straight line passing through the pupil center and the specific site as the selected item.
8. The man-machine interactive system according to claim 6 or 7, characterized in that the processing device is configured to convert the instruction information into the corresponding operation instruction by means of a transformation model.
9. The man-machine interactive system according to claim 8, characterized in that the operation data further comprises user information of the operator, and the processing device is further configured to determine a specific operator by recognizing the user information and to train on the historical operation data of each specific operator by a machine learning method so as to update the transformation model and obtain a personalized transformation model.
10. The man-machine interactive system according to claim 9, characterized in that the acquisition device is configured to acquire an account entered by the operator as the user information of the operator.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811355370.1A CN110045904A (en) | 2018-11-14 | 2018-11-14 | Man-machine interactive system, method and the vehicle including the system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110045904A (en) | 2019-07-23 |
Family
ID=67273594
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811355370.1A Pending CN110045904A (en) | 2018-11-14 | 2018-11-14 | Man-machine interactive system, method and the vehicle including the system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110045904A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111240471A (en) * | 2019-12-31 | 2020-06-05 | 维沃移动通信有限公司 | Information interaction method and wearable device |
CN111694424A (en) * | 2020-04-20 | 2020-09-22 | 上汽大众汽车有限公司 | System and method for awakening vehicle-mounted intelligent voice function |
CN112429007A (en) * | 2019-08-23 | 2021-03-02 | 比亚迪股份有限公司 | Vehicle and auxiliary control method and device thereof, electronic equipment and storage medium |
WO2023272635A1 (en) * | 2021-06-30 | 2023-01-05 | 华为技术有限公司 | Target position determining method, determining apparatus and determining system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN201965747U (en) * | 2010-12-21 | 2011-09-07 | 上海华勤通讯技术有限公司 | System for carrying out remote control to automobiles based on wireless communication |
US20150062003A1 (en) * | 2012-09-04 | 2015-03-05 | Aquifi, Inc. | Method and System Enabling Natural User Interface Gestures with User Wearable Glasses |
CN105913844A (en) * | 2016-04-22 | 2016-08-31 | 乐视控股(北京)有限公司 | Vehicle-mounted voice acquisition method and device |
CN106354259A (en) * | 2016-08-30 | 2017-01-25 | 同济大学 | Automobile HUD gesture-interaction-eye-movement-assisting system and device based on Soli and Tobii |
CN106407772A (en) * | 2016-08-25 | 2017-02-15 | 北京中科虹霸科技有限公司 | Human-computer interaction and identity authentication device and method suitable for virtual reality equipment |
CN107310476A (en) * | 2017-06-09 | 2017-11-03 | 武汉理工大学 | Eye dynamic auxiliary voice interactive method and system based on vehicle-mounted HUD |
CN107924239A (en) * | 2016-02-23 | 2018-04-17 | 索尼公司 | Remote control, remote control thereof, remote control system and program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110045904A (en) | Man-machine interactive system, method and the vehicle including the system | |
US10913463B2 (en) | Gesture based control of autonomous vehicles | |
Döring et al. | Gestural interaction on the steering wheel: reducing the visual demand | |
JP5910903B1 (en) | Driving support device, driving support system, driving support method, driving support program, and autonomous driving vehicle | |
CN110045825A (en) | Gesture recognition system for vehicle interaction control | |
CN104838335B (en) | Use the interaction and management of the equipment of gaze detection | |
US9128520B2 (en) | Service provision using personal audio/visual system | |
CN108099790A (en) | Driving assistance system based on augmented reality head-up display Yu multi-screen interactive voice | |
US20190092169A1 (en) | Gesture and Facial Expressions Control for a Vehicle | |
CN107199571A (en) | Robot control system | |
JP6621032B2 (en) | Driving support device, driving support system, driving support method, driving support program, and autonomous driving vehicle | |
CN106354259A (en) | Automobile HUD gesture-interaction-eye-movement-assisting system and device based on Soli and Tobii | |
CN105584368A (en) | System For Information Transmission In A Motor Vehicle | |
CN103732480A (en) | Method and device for assisting a driver in performing lateral guidance of a vehicle on a carriageway | |
WO2015173271A2 (en) | Method for displaying a virtual interaction on at least one screen and input device, system and method for a virtual application by means of a computing unit | |
CN108430819A (en) | Car-mounted device | |
CN109314768A (en) | For generating the method and apparatus of picture signal and the display system of vehicle | |
Pfeiffer et al. | A multi-touch enabled steering wheel: exploring the design space | |
CN107310476A (en) | Eye dynamic auxiliary voice interactive method and system based on vehicle-mounted HUD | |
CN112162688A (en) | Vehicle-mounted virtual screen interactive information system based on gesture recognition | |
CN106362402A (en) | VR driving game making and experiencing system based on online visual programming | |
Detjen et al. | User-defined voice and mid-air gesture commands for maneuver-based interventions in automated vehicles | |
CN108995590A (en) | A kind of people's vehicle interactive approach, system and device | |
CN110825216A (en) | Method and system for man-machine interaction of driver during driving | |
JP6090727B2 (en) | Driving support device, driving support system, driving support method, driving support program, and autonomous driving vehicle |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 20200904. Address after: Susong Road West and Shenzhen Road North, Hefei Economic and Technological Development Zone, Anhui Province. Applicant after: Weilai (Anhui) Holding Co.,Ltd. Address before: 30 Floor of Yihe Building, No. 1 Kangle Plaza, Central, Hong Kong, China. Applicant before: NIO NEXTEV Ltd. |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190723 |