CN110211586A - Voice interactive method, device, vehicle and machine readable media - Google Patents
- Publication number
- CN110211586A (application CN201910531997.6A)
- Authority
- CN
- China
- Prior art keywords
- user
- action message
- sight
- voice messaging
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
Embodiments of the invention provide a voice interaction method, an apparatus, a vehicle, and a machine-readable medium, applied to a vehicle that includes a display device. Gaze activity information of a user in the vehicle is obtained, and a visual focus region of the display device corresponding to the gaze activity information is determined. When at least one UI element lies within the visual focus region, voice information input by the user for the UI element is obtained; the UI element then responds to the voice information, and an operation corresponding to the voice information is executed. By combining visual interaction with voice interaction, the user can quickly locate a UI element by gaze and complete a complex input process with voice input, improving the convenience of in-vehicle voice interaction.
Description
Technical field
The present invention relates to the technical field of in-vehicle voice interaction, and more particularly to a voice interaction method, a voice interaction apparatus, a vehicle, and a machine-readable medium.
Background technique
With the rapid development of the automotive industry and the improvement of living standards, the number of household vehicles has grown quickly. In particular, as vehicles become more intelligent, vehicles equipped with voice assistants are increasingly popular.

Human-vehicle interaction is the core of the user experience. In a conventional cockpit, functional areas are fragmented and information overload hinders human-vehicle interaction, so the value of the vehicle itself as an interaction entry point is underestimated. As voice technology is applied more widely in vehicles, the modes of human-vehicle interaction are enriched and the riding experience improves. A vehicle interior may be equipped with multiple intelligent terminal display devices, such as a large central control screen in the front row and display devices mounted on seat backs, and these display devices also support voice interaction. However, the current voice interaction mode easily interrupts the ongoing interface interaction. For example, when a user needs to set the date and time at which a system upgrade is executed, the task can be completed in the following ways:

1. Through UI (User Interface) touch control on the central control screen: select the date and time in the UI provided by the in-vehicle terminal, then complete the setting.

2. Through voice interaction: the user conducts a multi-turn dialogue with the in-vehicle terminal; for example, the terminal first asks by voice for the date to be set, then asks for the time, and the setting is thereby completed.

Both interaction modes are inconvenient. The first, interface-based mode requires multiple interface operations before the target date and time are set; the second, voice-based mode generally requires jumping out of the user's current operation interface to complete the voice dialogue. A scheme is therefore needed that reduces the involvement of touch interaction while the user is driving and improves the convenience of voice interaction.
Summary of the invention
In view of the above problems, embodiments of the present invention are proposed to provide a voice interaction method and a corresponding voice interaction apparatus that overcome, or at least partly solve, the above problems.
To solve the above problems, one aspect provides a voice interaction method applied to a vehicle, the vehicle including a display device, the method comprising:

obtaining gaze activity information of a user in the vehicle, and determining a visual focus region of the display device corresponding to the gaze activity information;

when at least one UI element exists in the visual focus region, obtaining voice information input by the user for the UI element;

executing an operation corresponding to the voice information.
Optionally, the method further comprises:

obtaining a mapping relationship between preset UI elements and preset voice information;

using the mapping relationship to determine, from the at least one UI element, a target UI element matching the voice information input by the user.
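The optional matching step above might be sketched as follows; a minimal Python illustration, in which the element identifiers, the phrase sets, and the `match_target_element` helper are hypothetical stand-ins, not part of the patent:

```python
# Hypothetical sketch: match recognized voice input against a preset
# mapping between UI elements and the voice phrases they accept.

# Preset mapping: UI element id -> phrases it responds to (assumed data).
ELEMENT_VOICE_MAP = {
    "date_input_box": {"set date", "enter date"},
    "next_button": {"next", "switch song", "confirm"},
    "map_area": {"zoom in", "zoom out", "hide"},
}

def match_target_element(focused_elements, voice_text):
    """From the UI elements inside the visual focus region, pick the
    one whose preset phrases match the recognized voice text."""
    for element_id in focused_elements:
        phrases = ELEMENT_VOICE_MAP.get(element_id, set())
        if voice_text.lower() in phrases:
            return element_id
    return None  # no target element matched

# Usage: two elements lie in the focus region; the voice input picks one.
print(match_target_element(["next_button", "map_area"], "Next"))
```

In this sketch the focus region only narrows the candidate set; the voice information then disambiguates among the remaining elements, which is the role the mapping relationship plays in the claim.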
Optionally, the UI element includes an input unit, and executing the operation corresponding to the voice information comprises:

when the UI element is the input unit, highlighting the input unit;

performing semantic recognition on the voice information to generate text information;

entering the text information into the input unit.
Optionally, the UI element includes an execution control, and executing the operation corresponding to the voice information comprises:

when the UI element is the execution control, highlighting the execution control;

performing semantic recognition on the voice information to generate a voice instruction;

using the voice instruction to trigger the execution control to execute an operation corresponding to the voice instruction.
Optionally, the UI element includes a display area, and executing the operation corresponding to the voice information comprises:

when the UI element is the display area, highlighting the display area;

performing semantic recognition on the voice information to generate a voice instruction;

using the voice instruction to adjust the display area.
Optionally, the display device is an in-vehicle central control screen, the user includes a first user and a second user, and obtaining the gaze activity information of the user in the vehicle and determining the visual focus region of the display device corresponding to the gaze activity information comprises:

when the vehicle is in a high-speed driving state, obtaining first gaze activity information of the first user and second gaze activity information of the second user in the vehicle;

discarding the first gaze activity information, and determining the visual focus region of the central control screen corresponding to the second gaze activity information.
Optionally, obtaining the gaze activity information of the user in the vehicle and determining the visual focus region of the display device corresponding to the gaze activity information comprises:

obtaining gaze activity information of at least one user in the vehicle;

extracting, from the gaze activity information of the at least one user, the gaze activity information of one user as target gaze activity information;

determining the visual focus region of the display device corresponding to the target gaze activity information.
Optionally, the method further comprises:

determining a target sound zone corresponding to the target gaze activity information;

when at least one UI element exists in the visual focus region, obtaining the voice information for the UI element input by the user in the target sound zone.
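The target-sound-zone binding above might look like the following sketch: the user whose gaze drives the focus region also determines which microphone zone voice information is accepted from. The seat-to-zone layout and the audio buffers are invented for illustration:

```python
# Hypothetical sketch: accept voice input only from the sound zone of
# the user whose gaze produced the target gaze activity information.

# Assumed seat layout -> microphone sound-zone index (invented mapping).
SEAT_TO_ZONE = {"driver": 0, "front_passenger": 1,
                "rear_left": 2, "rear_right": 3}

def voice_from_target_zone(target_seat, zone_audio):
    """Keep only the audio captured in the target user's sound zone."""
    zone = SEAT_TO_ZONE[target_seat]
    return zone_audio.get(zone, [])

# Usage: several zones picked up speech, but only the gazing user's
# zone is forwarded for recognition.
audio = {0: ["navigate home"], 1: ["next"], 3: ["hide"]}
print(voice_from_target_zone("front_passenger", audio))
```

Binding the accepted microphone zone to the gazing user is what prevents another occupant's speech from triggering the focused UI element.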
Another aspect further provides a voice interaction apparatus applied to a vehicle, the vehicle including a display device, the apparatus comprising:

a focus region determining module, configured to obtain gaze activity information of a user in the vehicle and determine a visual focus region of the display device corresponding to the gaze activity information;

a voice information obtaining module, configured to obtain, when at least one UI element exists in the visual focus region, the voice information input by the user for the UI element;

an operation executing module, configured to execute an operation corresponding to the voice information.
Optionally, the apparatus further comprises:

a mapping relationship obtaining module, configured to obtain a mapping relationship between UI elements and preset voice information;

a UI element determining module, configured to use the mapping relationship to determine, from the at least one UI element, a target UI element matching the voice information input by the user.
Optionally, the UI element includes an input unit, and the operation executing module includes a first voice information executing submodule configured to:

when the UI element is the input unit, highlight the input unit;

perform semantic recognition on the voice information to generate text information;

enter the text information into the input unit.
Optionally, the UI element includes an execution control, and the operation executing module includes a second voice information executing submodule configured to:

when the UI element is the execution control, highlight the execution control;

perform semantic recognition on the voice information to generate a voice instruction;

use the voice instruction to trigger the execution control to execute an operation corresponding to the voice instruction.
Optionally, the UI element includes a display area, and the operation executing module includes a third voice information executing submodule configured to:

when the UI element is the display area, highlight the display area;

perform semantic recognition on the voice information to generate a voice instruction;

use the voice instruction to adjust the display area.
Optionally, the display device is an in-vehicle central control screen, the user includes a first user and a second user, and the focus region determining module includes:

a first gaze information obtaining submodule, configured to obtain, when the vehicle is in a high-speed driving state, the first gaze activity information of the first user and the second gaze activity information of the second user in the vehicle;

a first focus region determining submodule, configured to discard the first gaze activity information and determine the visual focus region of the central control screen corresponding to the second gaze activity information.
Optionally, the focus region determining module includes:

a second gaze information obtaining submodule, configured to obtain gaze activity information of at least one user in the vehicle;

a gaze information extracting submodule, configured to extract, from the gaze activity information of the at least one user, the gaze activity information of one user as target gaze activity information;

a second focus region determining submodule, configured to determine the visual focus region of the display device corresponding to the target gaze activity information.

Optionally, the apparatus further comprises:

a sound zone determining submodule, configured to determine a target sound zone corresponding to the target gaze activity information;

a voice information obtaining submodule, configured to obtain, when at least one UI element exists in the visual focus region, the voice information for the UI element input by the user in the target sound zone.
Another aspect further provides a vehicle, comprising:

one or more processors; and

one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform one or more of the methods described above.

Another aspect further provides one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause the processors to perform one or more of the methods described above.
Embodiments of the present invention include the following advantages:

In the embodiments of the present invention, applied to a vehicle including a display device, the gaze activity information of a user in the vehicle is obtained, the visual focus region of the display device corresponding to the gaze activity information is determined, and when at least one UI element exists in the visual focus region, the voice information input by the user for the UI element is obtained; the UI element then responds to the voice information, and an operation corresponding to the voice information is executed. By combining visual interaction with voice interaction, the user quickly locates a UI element by gaze and completes a complex input process with voice input, improving the convenience of in-vehicle voice interaction.
Detailed description of the invention
Fig. 1 is a flow chart of the steps of embodiment one of a voice interaction method of the present invention;

Fig. 2 is a flow chart of the steps of embodiment two of a voice interaction method of the present invention;

Fig. 3 is a structural block diagram of an embodiment of a voice interaction device of the present invention.
Specific embodiment
To make the above objectives, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1, a flow chart of the steps of an embodiment of a voice interaction method of the present invention is shown. The method is applied to a vehicle including a display device, and may specifically include the following steps:
Step 101: obtain gaze activity information of a user in the vehicle, and determine a visual focus region of the display device corresponding to the gaze activity information.
As an example, the method may be applied to the on-board system of a vehicle. The on-board system may include a voice assistant function through which the user interacts with the system.

In a concrete implementation, the on-board system may be communicatively coupled to different devices in the vehicle and execute corresponding operations through an electronic control unit. Specifically, the on-board system may obtain the sound-source signal input by a user in the vehicle through a microphone array, obtain the activity information of users in the vehicle through several cameras installed at different positions, and collect eyeball iris images of users through an infrared sensor.

The microphone array may consist of multiple microphones for receiving sound-source signals from different positions; it may be mounted on the ceiling of the vehicle compartment and arranged in a circle, a polygon, or another shape. Cameras are installed at different positions in the vehicle, for example at the front-row central control screen, at seat-back display devices, and on the compartment ceiling. The infrared sensor may be mounted on a display device and is used to collect eyeball iris images of users in the vehicle.

The display device may include the central control screen in the front row of the compartment, a head-up display directly in front of the driver's seat, display screens mounted on seat backs, and other display screens in the compartment. The on-board system may present its interface on the display devices in the vehicle. The UI (User Interface) of a display device can show different UI elements, that is, elements in the UI that can execute instructions, such as application icons, buttons in applications, operable pictures, input boxes, lists, and controls.
In the embodiments of the present invention, the gaze activity information of a user in the vehicle can be obtained, and the visual focus region of the display device corresponding to the gaze activity information can be determined, so that gaze tracking of the user is realized on the vehicle's display device. The visual focus region may be a region with a definite shape, such as a rectangular, circular, or elliptical region.

After the gaze activity information of the user is obtained, a visual focus point of the display device corresponding to the gaze activity information may also be determined, so that the visual focus point can be precisely positioned on a UI element in the UI, further improving the user's voice interaction experience.
In a concrete implementation, misoperation should be avoided: for example, a user may unconsciously turn the eyeballs in some direction without intending to interact with the on-board system, yet the system could still collect the gaze activity information and produce a misoperation. Therefore, a predetermined way of activating the visual interaction mode may be set in advance.

Specifically, an activation button or touch pad for activating the visual interaction mode of the on-board system may be installed on the vehicle's steering wheel, so that the driver's hands never leave the wheel; the visual interaction mode may also be activated by touching an activation control on the central control screen or a seat-back display device, or by inputting a voice instruction.
In a concrete implementation, the gaze activity information may include feature changes of the user's eyeball and its periphery, changes of iris angle, and iris features collected by actively projecting infrared light. After the gaze activity information of a user in the vehicle is obtained, it can be used to determine the visual focus region of the display device corresponding to it. Once the on-board system activates the visual interaction mode, the user can interact with it through eye movement, avoiding frequent touch operations on the vehicle's display device and improving the riding experience.
In an example of the embodiment of the present invention, the feature changes of the user's eyeball and its periphery, the iris angle changes, and the iris features collected by actively projecting infrared light may be used to establish a first plane adapted to the user's eyeball, while the UI of the display device is taken as a second plane, and a mapping relationship between the first plane and the second plane is established. Specifically, the focus of the user's eyeball may be determined in the first plane, the visual focus region corresponding to the eyeball focus may be computed in the second plane according to those same features, and the region may then be shown in the UI of the central control screen.
Specifically, the image information of a user in the vehicle can be obtained through a camera, and infrared light may further be projected onto the user's eyeballs to collect the gaze activity information, improving the precision of gaze tracking and the clarity of the eyeball images. If the vehicle is in a night-time or dark environment and the image data collected by the camera is unclear, the eye movement of the user may instead be tracked through the infrared sensor to improve the tracking precision. During collection, a camera aimed at the human eyes collects the user's eyeball iris image; when the infrared sensor projects infrared light onto the user's eyeballs, corresponding features are produced on the iris, and when the user looks in different directions the eye changes subtly, so these changing features can be captured, extracted, and processed.
In another example of the embodiment of the present invention, when the on-board system enables the visual interaction mode, the gaze activity information of the user in the vehicle can be tracked. When obtaining the gaze activity information, the eyeball movement after the infrared sensor projects light may be observed, and a coordinate system may be established with the center point of the collected eyeball iris image as the origin, so as to determine the direction of eyeball movement and thus obtain the gaze activity information. Specifically, a coordinate system may be established with the center point between two cameras as the origin to determine the eye movement direction. In a parking-lot scenario, the driver's getting in, getting out, and moving around easily changes a pre-calibrated position, so a coordinate system established with the center point between two cameras (which may be two cameras mounted on the display device) as the origin can be used to recognize the user's eye movement, realizing upward, downward, leftward, and rightward control.
It should be noted that the embodiments of the present invention include but are not limited to the above examples. Under the guidance of the embodiments of the present invention, those skilled in the art may obtain the gaze activity information of a user in the vehicle and determine the corresponding visual focus region in other ways, and the invention is not limited in this regard.
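The first-plane/second-plane mapping described above can be sketched as a simple linear calibration. The plane sizes, screen resolution, and region half-sizes below are invented values, not taken from the patent, and a real calibration would account for head pose and per-user fitting:

```python
# Hypothetical sketch of the two-plane gaze mapping: a point in the
# eye-feature "first plane" is scaled onto the UI "second plane", and
# a rectangular visual focus region is built around the mapped point.

def map_gaze_to_screen(gaze_xy, eye_plane_size, screen_size):
    """Scale a gaze point from the eye-feature plane to screen pixels."""
    gx, gy = gaze_xy
    ew, eh = eye_plane_size
    sw, sh = screen_size
    return (gx / ew * sw, gy / eh * sh)

def focus_region(center, half_w=80, half_h=50):
    """Rectangular visual focus region around the mapped focus point."""
    cx, cy = center
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

# Usage with an invented 1920x720 central control screen.
center = map_gaze_to_screen((0.5, 0.25), (1.0, 1.0), (1920, 720))
print(center)                # (960.0, 180.0)
print(focus_region(center))  # (880.0, 130.0, 1040.0, 230.0)
```

The rectangle here corresponds to the "region with a definite shape" mentioned earlier; a circular or elliptical region would only change `focus_region`.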
Step 102: when at least one UI element exists in the visual focus region, obtain the voice information input by the user for the UI element.

In the embodiments of the present invention, the user in the vehicle can control the visual focus region through gaze activity. When the visual focus region moves with the user's gaze and comes to cover at least one UI element, the on-board system may prompt the user to input voice information and obtain the voice information input for the UI element, so that the system can interact with the user according to the voice input.
In a concrete implementation, a UI element may be an input unit, an execution control, a display area, or the like in the UI. An input unit may be an input box or a list; an execution control may be an application icon, a button in an application used to execute an instruction, or an operable picture; a display area may be a picture, a map navigation interface, a desktop widget, a pop-up, and so on. When the user moves the visual focus region by gaze so that it covers a UI element, the on-board system may prompt the user to input voice information and obtain the voice information input for that UI element.

When a visual focus point is used instead, the focus point moves with the user's gaze, and when it moves onto a UI element the voice information input by the user for that element is obtained, so that a single point triggers the UI element to respond to the voice information. This improves the specificity of the voice information, makes it more convenient for the user to wake a UI element and execute the corresponding operation, and improves the convenience of voice interaction.
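Deciding which UI elements the visual focus region covers amounts to rectangle intersection against the elements' screen bounds; a minimal sketch with invented element coordinates:

```python
# Hypothetical sketch: hit-test the visual focus region against UI
# element bounds. Rectangles are (x1, y1, x2, y2) in screen pixels.

def rects_overlap(a, b):
    """True if rectangles a and b intersect."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def elements_in_focus(focus_rect, elements):
    """Return names of UI elements whose bounds overlap the focus region."""
    return [name for name, rect in elements.items()
            if rects_overlap(focus_rect, rect)]

# Usage: the focus region covers the date input box but not the button.
ui = {
    "date_input_box": (100, 100, 300, 140),
    "next_button": (400, 300, 480, 340),
}
print(elements_in_focus((90, 90, 310, 150), ui))  # ['date_input_box']
```

A single visual focus point is the degenerate case of this test, with the focus rectangle collapsed to a point.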
Step 103: execute an operation corresponding to the voice information.

In the embodiments of the present invention, after the voice information input by the user for a UI element is obtained, an operation corresponding to the voice information can be executed, such as entering text information, triggering the function of a button, or adjusting a display area according to an instruction.
In a concrete implementation, the microphones of the microphone array, oriented in different directions, can perform directional pickup for each sound zone while filtering out non-human-voice signals, thereby collecting the sound-source signal for the UI element input by the user. Specifically, the energy of the human voice is concentrated between 100 Hz and 800 Hz, so each microphone of the array can be given physical band-pass filtering: a BPF (Band-pass Filter) of 100 Hz to 2000 Hz performs frequency extraction on the collected signal to obtain the sound-source signal for the UI element input by the user. Signals outside the voice frequency range are thus filtered out by mechanical-physical filtering, improving the interference resistance of sound-source acquisition.
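The 100 Hz to 2000 Hz band-pass behavior described above can be approximated in software. The patent describes a physical filter per microphone; the first-order digital sketch below, with its sample rate and test tones, is purely an illustrative assumption:

```python
# Hypothetical sketch: first-order band-pass (low-pass at 2000 Hz
# cascaded with a high-pass at 100 Hz) applied to audio samples,
# rejecting out-of-band content such as low-frequency rumble.
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    return dt / (rc + dt)

def band_pass(samples, sample_rate, lo_hz=100.0, hi_hz=2000.0):
    a_lo = one_pole_coeff(hi_hz, sample_rate)
    a_hi = one_pole_coeff(lo_hz, sample_rate)
    lp = 0.0      # low-pass state (removes content above hi_hz)
    hp_lp = 0.0   # slow-drift tracker; subtracting it high-passes
    out = []
    for x in samples:
        lp += a_lo * (x - lp)
        hp_lp += a_hi * (lp - hp_lp)
        out.append(lp - hp_lp)
    return out

def energy(sig):
    return sum(s * s for s in sig)

# Usage: a 50 Hz tone (below the band) is attenuated far more than a
# 500 Hz tone (inside the band).
fs = 8000
hum = [math.sin(2 * math.pi * 50 * n / fs) for n in range(fs)]
voice = [math.sin(2 * math.pi * 500 * n / fs) for n in range(fs)]
print(energy(band_pass(hum, fs)) / energy(hum))      # well below 1
print(energy(band_pass(voice, fs)) / energy(voice))  # close to 1
```

A production implementation would use a steeper filter (e.g. a higher-order IIR design), but the pass/reject behavior is the same idea as the physical BPF the text describes.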
In a concrete implementation, the microphone array can collect the voice signal input by the user; the voice signal can then be fed into a preset speech model for match recognition, converting the voice signal into voice information and then into text information. The preset speech model may include dynamic time warping (DTW), hidden Markov models (HMM), artificial neural networks (ANN), and the like.
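Of the preset speech models named above, dynamic time warping is the simplest to illustrate: it scores how well a spoken feature sequence matches a stored template while tolerating differences in speaking rate. The 1-D feature values and word templates below are invented for illustration; real systems compare multidimensional acoustic features such as MFCCs:

```python
# Hypothetical sketch: classic DTW distance between two feature
# sequences, used to pick the closest command template.

def dtw_distance(a, b):
    """Dynamic time warping cost between scalar sequences a and b."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # step both
    return d[n][m]

# Usage: the spoken sequence is "open" said more slowly; DTW still
# matches it to the right template despite the different length.
templates = {"open": [1.0, 3.0, 2.0], "next": [0.0, 2.0, 4.0]}
spoken = [1.0, 1.0, 3.0, 3.0, 2.0]
best = min(templates, key=lambda w: dtw_distance(spoken, templates[w]))
print(best)  # open
```

HMM and ANN approaches replace this template comparison with probabilistic or learned models, but serve the same match-recognition role in the pipeline.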
In a concrete implementation, after the voice signal is converted into text information, natural semantic understanding can be performed on the text information, matching the command information of the voice signal against a corresponding database. Specifically, the voice information may be sent to a cloud server for semantic recognition, or semantic recognition may be performed locally, so as to generate a voice instruction corresponding to the voice signal; the voice instruction input by the user can then be determined, and the operation corresponding to the voice instruction is executed on the UI element.
In a concrete implementation, the user's gaze is tracked while the voice instruction input by voice is obtained, and the corresponding operation is then executed. Compared with pure interface interaction, a complex input process can be completed through the voice channel; compared with conventional voice interaction, the user's current interface flow is not interrupted, integrating the convenience of voice into the task the user is performing. Meanwhile, when multiple execution controls of the same type appear in the UI, gaze tracking can quickly locate the control the user wants to operate, further improving the convenience of the interaction mode.
In a preferred embodiment of the present invention, when the UI element is an input unit, the input unit is highlighted; semantic recognition is performed on the voice information to generate text information; and the text information is entered into the input unit. For example, when the user needs to set the date and time at which a system upgrade is executed and the on-board system detects that the visual focus region stays in the date input box, the box is highlighted and the user is prompted to input the date. The voice information for the date input box input by the user's voice is obtained, semantic recognition is performed on it to obtain the date information, and the date is filled into the input box according to the format rules of the current date input box, completing the input of the date and time for the system upgrade. Through visual interaction and voice interaction, the user quickly locates the UI element by gaze and completes a complex input process with voice input, improving the convenience of in-vehicle voice interaction.
In another preferred embodiment of the present invention, when the UI element is an execution control, the execution control is highlighted; semantic recognition is performed on the voice information to generate a voice instruction; and the voice instruction triggers the execution control to execute the corresponding operation. If the user needs to open a music application, when the on-board system detects that the visual focus region stays on the icon of the music application and the user says "open" by voice, the icon is triggered and the music application is opened. Then, when the user wants to switch songs, the visual focus region can be moved to the "next" button, and voice input such as "next", "switch song", or "confirm" triggers the "next" button to switch the currently playing song. Through visual interaction and voice interaction, the user quickly locates the UI element by gaze and completes a complex input process with voice input, improving the convenience of in-vehicle voice interaction.
In another preferred embodiment of the present invention, when the UI element is a display area, the display area is highlighted; semantic recognition is performed on the voice information to generate a voice instruction; and the voice instruction is used to adjust the display area. If the user needs to adjust the scale of the current map display, when the display device shows the current map navigation, the user says "zoom in" by voice and the scale of the current map display is enlarged. Or, when the status bar of the on-board system is shown in the current UI, the user can move the visual focus region to the status bar by gaze and say "hide", and the status bar of the on-board system is hidden. Through visual interaction and voice interaction, the user quickly locates the UI element by gaze and completes a complex input process with voice input, improving the convenience of in-vehicle voice interaction.
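The three preferred embodiments above share one shape: highlight the focused element, recognize the voice, then branch on the element type. A minimal Python sketch of that dispatch, in which the `UIElement` class, its fields, and the handler are invented for illustration and not part of the patent:

```python
# Hypothetical sketch of the three "execute operation" branches:
# input unit (enter text), execution control (trigger), and display
# area (adjust).
from dataclasses import dataclass, field

@dataclass
class UIElement:
    kind: str                 # "input", "control", or "display"
    highlighted: bool = False
    text: str = ""            # filled when kind == "input"
    log: list = field(default_factory=list)

def execute_voice_operation(element: UIElement, recognized: str):
    element.highlighted = True                    # highlight first
    if element.kind == "input":
        element.text = recognized                 # text enters the box
    elif element.kind == "control":
        element.log.append(f"trigger:{recognized}")  # trigger control
    elif element.kind == "display":
        element.log.append(f"adjust:{recognized}")   # adjust the area
    return element

# Usage: the recognized date string lands in a date input box.
box = execute_voice_operation(UIElement(kind="input"), "2019-06-19 03:00")
print(box.text)
```

The common highlight step gives the user visual confirmation of which element the gaze selected before the voice result is applied, matching the order described in each embodiment.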
In the embodiments of the present invention, applied to a vehicle including a display device, the gaze activity information of a user in the vehicle is obtained, the visual focus region of the display device corresponding to the gaze activity information is determined, and when at least one UI element exists in the visual focus region, the voice information input by the user for the UI element is obtained; the UI element then responds to the voice information, and an operation corresponding to the voice information is executed. By combining visual interaction with voice interaction, the user quickly locates a UI element by gaze and completes a complex input process with voice input, improving the convenience of in-vehicle voice interaction.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations. However, those skilled in the art should understand that the embodiments of the present invention are not limited by the described sequence of actions, because according to the embodiments of the present invention, some steps may be performed in other orders or simultaneously. In addition, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to Fig. 2, a flowchart of the steps of a second embodiment of a voice interaction method of the present invention is shown. The method is applied to a vehicle that includes a display device, and may specifically include the following steps:
Step 201: obtain the gaze activity information of an in-cabin user, and determine the visual focus region in the display device corresponding to the gaze activity information;
In a specific implementation, image information of the in-cabin user may be obtained through a camera, and infrared light may further be projected onto the user's eyeballs to collect the user's gaze activity information, improving the precision of gaze tracking and the clarity of the acquired eyeball images.
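Once a gaze point on the screen has been estimated, a focus region around it can be derived. The following is a minimal sketch under assumed geometry (a normalized gaze point and a square region clamped to the display bounds); it is not the patent's algorithm.

```python
# Assumed-geometry sketch: map a normalized gaze point (gx, gy in [0, 1])
# to a square visual focus region clamped to the display bounds.

def focus_region(gx: float, gy: float, width: int, height: int, radius: int = 60):
    """Return (left, top, right, bottom) of the focus region around the gaze point."""
    px, py = int(gx * width), int(gy * height)  # gaze point in pixels
    left = max(0, px - radius)
    top = max(0, py - radius)
    right = min(width, px + radius)
    bottom = min(height, py + radius)
    return left, top, right, bottom
```

Any UI element whose bounding box intersects this rectangle would count as "covered" by the visual focus region.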
In a preferred embodiment of the present invention, when the display device is an in-vehicle central control screen arranged in the front row of the cabin and the vehicle is in a high-speed travel state, the first gaze activity information of a first in-cabin user and the second gaze activity information of a second user are obtained; the first gaze activity information is discarded, and the visual focus region corresponding to the second gaze activity information is determined in the central control screen.
In a specific implementation, the first user may be the driver and the second user may be the front passenger. When the vehicle is in a high-speed travel state, in order to avoid distracting the driver's attention, once the visual interaction mode of the onboard system is activated and the driver's gaze moves to the central control screen, the onboard system may discard the driver's gaze activity information and instead detect the gaze activity information of the front passenger. When the front passenger's gaze activity information is obtained, the corresponding visual focus region is determined in the central control screen. Thus, in a high-speed travel state, frequent visual interaction with the driver is avoided, the driver's attention is not distracted, and driving safety is ensured.
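The filtering rule above can be sketched as follows. The role labels and the speed threshold are assumptions for illustration; the patent only speaks of a "high-speed travel state" without fixing a number.

```python
# Hedged sketch of the high-speed filtering rule: drop the driver's gaze
# events while the vehicle travels at high speed. The 80 km/h threshold
# and the role names are illustrative assumptions.

HIGH_SPEED_KMH = 80

def select_gaze_sources(gaze_events, speed_kmh: float):
    """Filter (role, gaze_info) tuples, discarding driver gaze at high speed."""
    if speed_kmh >= HIGH_SPEED_KMH:
        return [(role, g) for role, g in gaze_events if role != "driver"]
    return list(gaze_events)
```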
Step 202: when at least one UI element exists in the visual focus region, obtain the voice information input by the user for the UI element;
In a specific implementation, a UI element may be an input component, an execution control, a display area, or the like in the UI interface. An input component may be an input box, a list, etc.; an execution control may be an application icon, or a button or picture in an application that can be operated to execute an instruction, etc.; a display area may be a picture, a map navigation interface, a desktop widget, a pop-up window, etc. When the user moves the visual focus region by moving his or her gaze so that it covers a UI element, the onboard system may prompt the user to input voice information, and obtain the voice information input by the user for the UI element.
In a preferred embodiment of the present invention, the gaze activity information of at least one in-cabin user may be obtained; from the gaze activity information of the at least one user, the gaze activity information of one user is extracted as the target gaze activity information; the visual focus region corresponding to the target gaze activity information is determined in the display device; the target sound zone corresponding to the target gaze activity information is determined; and when at least one UI element exists in the visual focus region, the voice information input for the UI element by the user in the target sound zone is obtained.
In the embodiment of the present invention, the onboard system may receive sound source signals from different in-cabin sound zones through a microphone array arranged in the cabin. The distribution of in-cabin sound zones differs from vehicle to vehicle.
For example, for a two-seat automobile, the cabin may be divided into a driver sound zone and a front-passenger sound zone. For a four-seat automobile, the cabin may be divided into a driver sound zone, a front-passenger sound zone, a rear-left sound zone and a rear-right sound zone. For a seven-seat automobile, the cabin may be divided into a driver sound zone, a front-passenger sound zone, a first and a second middle-row sound zone, and a first, second and third third-row sound zone, etc.
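The seat-to-sound-zone layouts listed above can be represented as a simple lookup table. The zone names below are English renderings of the patent's examples; the data structure itself is an illustrative assumption.

```python
# Illustrative table of in-cabin sound zones per seat layout, following the
# two-, four- and seven-seat examples in the text. Zone names are assumed.

SOUND_ZONES = {
    2: ["driver", "front_passenger"],
    4: ["driver", "front_passenger", "rear_left", "rear_right"],
    7: ["driver", "front_passenger", "middle_first", "middle_second",
        "third_row_first", "third_row_second", "third_row_third"],
}

def zones_for_vehicle(seat_count: int):
    """Return the in-cabin sound zones for a given seat layout (front row as fallback)."""
    return SOUND_ZONES.get(seat_count, ["driver", "front_passenger"])
```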
In a specific implementation, after the onboard system has activated the visual interaction mode, it may simultaneously obtain the gaze activity information of several in-cabin users. The situation may then arise in which the gaze activity information of a first user is obtained while the voice information of a second user is responded to. Therefore, in order to avoid a mismatch between gaze activity information and voice information, after obtaining the gaze activity information of at least one in-cabin user, the gaze activity information of one user is extracted as the target gaze activity information, and the target sound zone corresponding to the target gaze activity information is determined. When a UI element exists in the visual focus region, the voice information input for the UI element by the user in the target sound zone can be acquired directionally through the microphone array, so that the gaze activity information and the voice information are matched. This improves the matching degree of visual interaction and voice interaction, and in turn the user's voice interaction experience.
In an example of the embodiment of the present invention, after the onboard system has activated the visual interaction mode, a display screen arranged at the rear seats of the cabin, upon receiving the gaze activity information of rear-row user A and rear-row user B, may select the gaze activity information of either user as the target gaze activity information, and take the first rear sound zone where user A is located, or the second rear sound zone where user B is located, as the target sound zone, thereby matching gaze activity information with voice information.
Step 203: obtain the mapping relations between preset UI elements and preset voice information;
In a specific implementation, corresponding voice information may be preset for the UI elements in the UI interface of the onboard system, so that when the user inputs voice information, the system can respond quickly and make the UI element perform the corresponding operation. For example, when the UI element is an application icon, corresponding voice information may be set according to the type of the application: a music application may be given voice information such as "open" and "play", and a map application may be given "open", "navigate", and so on. When the UI element is an input component, corresponding voice information may be set according to the type of the component; a date input box, for example, may accept a date, a time, etc.
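The preset mapping can be held in a simple dictionary keyed by UI element. A minimal sketch, using the "open"/"play"/"navigate" phrases from the examples above; the element identifiers are hypothetical.

```python
# Sketch of the preset mapping between UI elements and voice information.
# Phrases follow the text's examples; element identifiers are assumed.

VOICE_MAP = {
    "music_app_icon": {"open", "play"},
    "map_app_icon": {"open", "navigate"},
    "date_input_box": {"set date", "set time"},
}

def commands_for(element: str):
    """Return the preset voice phrases a UI element responds to."""
    return VOICE_MAP.get(element, set())
```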
Step 204: using the mapping relations, determine, from the at least one UI element, the target UI element that matches the voice information input by the user;
In a specific implementation, when the user moves the visual focus region across the UI interface by gaze, the focus region may cover more than one UI element. When multiple UI elements exist in the visual focus region, the mapping relations between UI elements and preset voice information may be used to determine the target UI element that matches the voice information input by the user.
In an example of the embodiment of the present invention, when the user moves the visual focus region across the UI interface by gaze so that it covers both a map application icon and a music application icon, semantic recognition may be performed on the user's voice information. If the recognized voice information concerns the map application, the map application icon is taken as the target UI element according to the mapping relations between UI elements and preset voice information.
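The disambiguation in Step 204 amounts to intersecting the recognized phrase with each covered element's preset phrases. A hedged sketch, with hypothetical element names and mapping (not the patent's API):

```python
# Sketch of Step 204: among the UI elements covered by the focus region,
# pick the one whose preset voice phrases contain the recognized phrase.
# Element names and the mapping are illustrative assumptions.

def match_target_element(candidates, recognized: str, voice_map):
    """Return the first covered element mapped to the recognized phrase, or None."""
    for element in candidates:
        if recognized in voice_map.get(element, set()):
            return element
    return None
```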
Step 205: execute the operation corresponding to the voice information.
In a specific implementation, after the voice information input by the user for the target UI element is obtained, the operation corresponding to the voice information can be executed, such as inputting text information, triggering the function of a button, or issuing a display-area adjustment instruction.
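The three operation kinds named in Step 205 (text input, control trigger, display adjustment) can be sketched as a small dispatcher. The handler strings are illustrative placeholders, not the patent's interfaces.

```python
# Sketch of Step 205's operation kinds. Return values stand in for the
# real side effects (typing text, firing a control, adjusting a view).

def execute_operation(element_kind: str, payload: str) -> str:
    if element_kind == "input":      # input component: type the recognized text
        return f"typed: {payload}"
    if element_kind == "control":    # execution control: trigger its function
        return f"triggered: {payload}"
    if element_kind == "display":    # display area: apply an adjustment
        return f"adjusted: {payload}"
    return "unsupported element kind"
```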
In the embodiment of the present invention, applied to a vehicle including a display device, the gaze activity information of an in-cabin user is obtained, and the visual focus region corresponding to the gaze activity information is determined in the display device. When at least one UI element exists in the visual focus region, the voice information input by the user for the UI element is obtained; the UI element then responds to the voice information, and the operation corresponding to the voice information is executed. In this way, through visual interaction combined with voice interaction, the user can quickly locate a UI element by gaze and complete a complex input process with voice input, which improves the convenience of in-vehicle voice interaction.
Referring to Fig. 3, a structural block diagram of an embodiment of a voice interaction device of the present invention is shown. The device is applied to a vehicle that includes a display device, and may specifically include the following modules:
a focus region determining module 301, configured to obtain the gaze activity information of an in-cabin user and determine the visual focus region in the display device corresponding to the gaze activity information;
a voice information obtaining module 302, configured to obtain, when at least one UI element exists in the visual focus region, the voice information input by the user for the UI element;
an operation executing module 303, configured to execute the operation corresponding to the voice information.
Optionally, the device further includes:
a mapping relation obtaining module, configured to obtain the mapping relations between UI elements and preset voice information;
a UI element determining module, configured to determine, using the mapping relations, the target UI element among the at least one UI element that matches the voice information input by the user.
Optionally, the UI element includes an input component, and the operation executing module 303 includes a first voice information executing submodule configured to:
highlight the input component when the UI element is the input component;
perform semantic recognition on the voice information to generate text information;
input the text information into the input component.
Optionally, the UI element includes an execution control, and the operation executing module 303 includes a second voice information executing submodule configured to:
highlight the execution control when the UI element is the execution control;
perform semantic recognition on the voice information to generate a voice instruction;
use the voice instruction to trigger the execution control to execute the operation corresponding to the voice instruction.
Optionally, the UI element includes a display area, and the operation executing module 303 includes a third voice information executing submodule configured to:
highlight the display area when the UI element is the display area;
perform semantic recognition on the voice information to generate a voice instruction;
use the voice instruction to adjust the display area.
Optionally, the display device is an in-vehicle central control screen, the user includes a first user and a second user, and the focus region determining module 301 includes:
a first gaze information obtaining submodule, configured to obtain, when the vehicle is in a high-speed travel state, the first gaze activity information of the first in-cabin user and the second gaze activity information of the second user;
a first focus region determining submodule, configured to discard the first gaze activity information and determine the visual focus region in the central control screen corresponding to the second gaze activity information.
Optionally, the focus region determining module 301 includes:
a second gaze information obtaining submodule, configured to obtain the gaze activity information of at least one in-cabin user;
a gaze information extraction submodule, configured to extract, from the gaze activity information of the at least one user, the gaze activity information of one user as the target gaze activity information;
a second focus region determining submodule, configured to determine the visual focus region in the display device corresponding to the target gaze activity information.
Optionally, the device further includes:
a sound zone determining submodule, configured to determine the target sound zone corresponding to the target gaze activity information;
a voice information obtaining submodule, configured to obtain, when at least one UI element exists in the visual focus region, the voice information input for the UI element by the user in the target sound zone.
Since the device embodiment is basically similar to the method embodiment, its description is relatively simple; for related details, reference is made to the description of the method embodiment.
An embodiment of the present invention also provides a vehicle, including:
one or more processors; and
one or more machine readable media storing instructions which, when executed by the one or more processors, cause the vehicle to execute the method described in the embodiments of the present invention.
An embodiment of the present invention also provides one or more machine readable media storing instructions which, when executed by one or more processors, cause the processors to execute the method described in the embodiments of the present invention.
The embodiments in this specification are described in a progressive manner. Each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments may be referred to mutually.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical memory, EEPROM, Flash, eMMC, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing terminal device produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing; the instructions executed on the computer or other programmable terminal device thereby provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device including that element.
A voice interaction method and a voice interaction device provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, changes may be made to the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation of the present invention.
Claims (18)
1. A voice interaction method, characterized in that it is applied to a vehicle including a display device, the method including:
obtaining the gaze activity information of an in-cabin user, and determining the visual focus region in the display device corresponding to the gaze activity information;
when at least one UI element exists in the visual focus region, obtaining the voice information input by the user for the UI element;
executing the operation corresponding to the voice information.
2. The method according to claim 1, characterized in that it further includes:
obtaining the mapping relations between preset UI elements and preset voice information;
using the mapping relations, determining, from the at least one UI element, the target UI element that matches the voice information input by the user.
3. The method according to claim 1, characterized in that the UI element includes an input component, and executing the operation corresponding to the voice information includes:
highlighting the input component when the UI element is the input component;
performing semantic recognition on the voice information to generate text information;
inputting the text information into the input component.
4. The method according to claim 1, characterized in that the UI element includes an execution control, and executing the operation corresponding to the voice information includes:
highlighting the execution control when the UI element is the execution control;
performing semantic recognition on the voice information to generate a voice instruction;
using the voice instruction to trigger the execution control to execute the operation corresponding to the voice instruction.
5. The method according to claim 1, characterized in that the UI element includes a display area, and executing the operation corresponding to the voice information includes:
highlighting the display area when the UI element is the display area;
performing semantic recognition on the voice information to generate a voice instruction;
using the voice instruction to adjust the display area.
6. The method according to any one of claims 1 to 5, characterized in that the display device is an in-vehicle central control screen, the user includes a first user and a second user, and obtaining the gaze activity information of an in-cabin user and determining the visual focus region in the display device corresponding to the gaze activity information includes:
when the vehicle is in a high-speed travel state, obtaining the first gaze activity information of the first in-cabin user and the second gaze activity information of the second user;
discarding the first gaze activity information, and determining the visual focus region in the central control screen corresponding to the second gaze activity information.
7. The method according to any one of claims 1 to 5, characterized in that obtaining the gaze activity information of an in-cabin user and determining the visual focus region in the display device corresponding to the gaze activity information includes:
obtaining the gaze activity information of at least one in-cabin user;
extracting, from the gaze activity information of the at least one user, the gaze activity information of one user as the target gaze activity information;
determining the visual focus region in the display device corresponding to the target gaze activity information.
8. The method according to claim 7, characterized in that it further includes:
determining the target sound zone corresponding to the target gaze activity information;
when at least one UI element exists in the visual focus region, obtaining the voice information input for the UI element by the user in the target sound zone.
9. A voice interaction device, characterized in that it is applied to a vehicle including a display device, the device including:
a focus region determining module, configured to obtain the gaze activity information of an in-cabin user and determine the visual focus region in the display device corresponding to the gaze activity information;
a voice information obtaining module, configured to obtain, when at least one UI element exists in the visual focus region, the voice information input by the user for the UI element;
an operation executing module, configured to execute the operation corresponding to the voice information.
10. The device according to claim 9, characterized in that it further includes:
a mapping relation obtaining module, configured to obtain the mapping relations between UI elements and preset voice information;
a UI element determining module, configured to determine, using the mapping relations, the target UI element among the at least one UI element that matches the voice information input by the user.
11. The device according to claim 9, characterized in that the UI element includes an input component, and the operation executing module includes a first voice information executing submodule configured to:
highlight the input component when the UI element is the input component;
perform semantic recognition on the voice information to generate text information;
input the text information into the input component.
12. The device according to claim 9, characterized in that the UI element includes an execution control, and the operation executing module includes a second voice information executing submodule configured to:
highlight the execution control when the UI element is the execution control;
perform semantic recognition on the voice information to generate a voice instruction;
use the voice instruction to trigger the execution control to execute the operation corresponding to the voice instruction.
13. The device according to claim 9, characterized in that the UI element includes a display area, and the operation executing module includes a third voice information executing submodule configured to:
highlight the display area when the UI element is the display area;
perform semantic recognition on the voice information to generate a voice instruction;
use the voice instruction to adjust the display area.
14. The device according to any one of claims 9 to 13, characterized in that the display device is an in-vehicle central control screen, the user includes a first user and a second user, and the focus region determining module includes:
a first gaze information obtaining submodule, configured to obtain, when the vehicle is in a high-speed travel state, the first gaze activity information of the first in-cabin user and the second gaze activity information of the second user;
a first focus region determining submodule, configured to discard the first gaze activity information and determine the visual focus region in the central control screen corresponding to the second gaze activity information.
15. The device according to any one of claims 9 to 13, characterized in that the focus region determining module includes:
a second gaze information obtaining submodule, configured to obtain the gaze activity information of at least one in-cabin user;
a gaze information extraction submodule, configured to extract, from the gaze activity information of the at least one user, the gaze activity information of one user as the target gaze activity information;
a second focus region determining submodule, configured to determine the visual focus region in the display device corresponding to the target gaze activity information.
16. The device according to claim 9, characterized in that it further includes:
a sound zone determining submodule, configured to determine the target sound zone corresponding to the target gaze activity information;
a voice information obtaining submodule, configured to obtain, when at least one UI element exists in the visual focus region, the voice information input for the UI element by the user in the target sound zone.
17. A vehicle, characterized by including:
one or more processors; and
one or more machine readable media storing instructions which, when executed by the one or more processors, cause the vehicle to execute the method according to any one of claims 1 to 8.
18. One or more machine readable media storing instructions which, when executed by one or more processors, cause the processors to execute the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910531997.6A CN110211586A (en) | 2019-06-19 | 2019-06-19 | Voice interactive method, device, vehicle and machine readable media |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910531997.6A CN110211586A (en) | 2019-06-19 | 2019-06-19 | Voice interactive method, device, vehicle and machine readable media |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110211586A true CN110211586A (en) | 2019-09-06 |
Family
ID=67793603
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910531997.6A Pending CN110211586A (en) | 2019-06-19 | 2019-06-19 | Voice interactive method, device, vehicle and machine readable media |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110211586A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110682921A (en) * | 2019-10-09 | 2020-01-14 | 广州小鹏汽车科技有限公司 | Vehicle interaction method and device, vehicle and machine readable medium |
CN111240477A (en) * | 2020-01-07 | 2020-06-05 | 北京汽车研究总院有限公司 | Vehicle-mounted human-computer interaction method and system and vehicle with system |
CN111324202A (en) * | 2020-02-19 | 2020-06-23 | 中国第一汽车股份有限公司 | Interaction method, device, equipment and storage medium |
CN111722826A (en) * | 2020-06-28 | 2020-09-29 | 广州小鹏车联网科技有限公司 | Construction method of voice interaction information, vehicle and storage medium |
CN111753039A (en) * | 2020-06-28 | 2020-10-09 | 广州小鹏车联网科技有限公司 | Adjustment method, information processing method, vehicle and server |
CN111767021A (en) * | 2020-06-28 | 2020-10-13 | 广州小鹏车联网科技有限公司 | Voice interaction method, vehicle, server, system and storage medium |
CN112073668A (en) * | 2020-08-25 | 2020-12-11 | 恒峰信息技术有限公司 | Remote classroom interaction method, system, device and storage medium |
WO2021222256A3 (en) * | 2020-04-27 | 2021-11-25 | Nvidia Corporation | Systems and methods for performing operations in a vehicle using gaze detection |
CN114201225A (en) * | 2021-12-14 | 2022-03-18 | 阿波罗智联(北京)科技有限公司 | Method and device for awakening function of vehicle machine |
CN114898749A (en) * | 2022-05-30 | 2022-08-12 | 中国第一汽车股份有限公司 | Automobile electronic manual interaction method and device and vehicle |
WO2023272629A1 (en) * | 2021-06-30 | 2023-01-05 | 华为技术有限公司 | Interface control method, device, and system |
WO2023045645A1 (en) * | 2021-09-24 | 2023-03-30 | 华为技术有限公司 | Speech interaction method, electronic device, and computer readable storage medium |
CN116185190A (en) * | 2023-02-09 | 2023-05-30 | 江苏泽景汽车电子股份有限公司 | Information display control method and device and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1643491A (en) * | 2002-02-15 | 2005-07-20 | Sap股份公司 | Voice-controlled user interfaces |
CN101291364A (en) * | 2008-05-30 | 2008-10-22 | 深圳华为通信技术有限公司 | Interaction method and device of mobile communication terminal, and mobile communication terminal thereof |
CN102221881A (en) * | 2011-05-20 | 2011-10-19 | 北京航空航天大学 | Man-machine interaction method based on analysis of interest regions by bionic agent and vision tracking |
CN104572997A (en) * | 2015-01-07 | 2015-04-29 | 北京智谷睿拓技术服务有限公司 | Content acquiring method and device and user device |
CN104838335A (en) * | 2012-05-18 | 2015-08-12 | 微软技术许可有限责任公司 | Interaction and management of devices using gaze detection |
US20150245133A1 (en) * | 2014-02-26 | 2015-08-27 | Qualcomm Incorporated | Listen to people you recognize |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110682921A (en) * | 2019-10-09 | 2020-01-14 | Guangzhou Xiaopeng Motors Technology Co., Ltd. | Vehicle interaction method and device, vehicle and machine readable medium |
CN111240477A (en) * | 2020-01-07 | 2020-06-05 | Beijing Automotive Research Institute Co., Ltd. | Vehicle-mounted human-computer interaction method and system and vehicle with system |
CN111324202A (en) * | 2020-02-19 | 2020-06-23 | China FAW Co., Ltd. | Interaction method, device, equipment and storage medium |
WO2021222256A3 (en) * | 2020-04-27 | 2021-11-25 | Nvidia Corporation | Systems and methods for performing operations in a vehicle using gaze detection |
US11790669B2 (en) | 2020-04-27 | 2023-10-17 | Nvidia Corporation | Systems and methods for performing operations in a vehicle using gaze detection |
WO2022000863A1 (en) * | 2020-06-28 | 2022-01-06 | Guangdong Xiaopeng Motors Technology Co., Ltd. | Speech interaction information construction method, vehicle, and storage medium |
CN111767021A (en) * | 2020-06-28 | 2020-10-13 | Guangzhou Xiaopeng Internet of Vehicles Technology Co., Ltd. | Voice interaction method, vehicle, server, system and storage medium |
CN111753039A (en) * | 2020-06-28 | 2020-10-09 | Guangzhou Xiaopeng Internet of Vehicles Technology Co., Ltd. | Adjustment method, information processing method, vehicle and server |
WO2022001013A1 (en) * | 2020-06-28 | 2022-01-06 | Guangzhou Chengxing Zhidong Automotive Technology Co., Ltd. | Voice interaction method, vehicle, server, system, and storage medium |
CN111722826B (en) * | 2020-06-28 | 2022-05-13 | Guangzhou Xiaopeng Motors Technology Co., Ltd. | Construction method of voice interaction information, vehicle and storage medium |
CN111722826A (en) * | 2020-06-28 | 2020-09-29 | Guangzhou Xiaopeng Internet of Vehicles Technology Co., Ltd. | Construction method of voice interaction information, vehicle and storage medium |
CN112073668A (en) * | 2020-08-25 | 2020-12-11 | Hengfeng Information Technology Co., Ltd. | Remote classroom interaction method, system, device and storage medium |
CN112073668B (en) * | 2020-08-25 | 2023-10-31 | Hengfeng Information Technology Co., Ltd. | Remote classroom interaction method, system, device and storage medium |
WO2023272629A1 (en) * | 2021-06-30 | 2023-01-05 | Huawei Technologies Co., Ltd. | Interface control method, device, and system |
WO2023045645A1 (en) * | 2021-09-24 | 2023-03-30 | Huawei Technologies Co., Ltd. | Speech interaction method, electronic device, and computer readable storage medium |
CN114201225A (en) * | 2021-12-14 | 2022-03-18 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method and device for awakening function of vehicle machine |
CN114898749A (en) * | 2022-05-30 | 2022-08-12 | China FAW Co., Ltd. | Automobile electronic manual interaction method and device and vehicle |
CN116185190A (en) * | 2023-02-09 | 2023-05-30 | Jiangsu Zejing Automotive Electronics Co., Ltd. | Information display control method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110211586A (en) | Voice interactive method, device, vehicle and machine readable media | |
US20170235361A1 (en) | Interaction based on capturing user intent via eye gaze | |
US10913463B2 (en) | Gesture based control of autonomous vehicles | |
US9656690B2 (en) | System and method for using gestures in autonomous parking | |
US10067563B2 (en) | Interaction and management of devices using gaze detection | |
US11554668B2 (en) | Control system and method using in-vehicle gesture input | |
EP1853465B1 (en) | Method and device for voice controlling a device or system in a motor vehicle | |
CN106030697B (en) | On-vehicle control apparatus and vehicle-mounted control method | |
EP3260331A1 (en) | Information processing device | |
CN110070868A (en) | Voice interactive method, device, automobile and the machine readable media of onboard system | |
KR20210112324A (en) | Multimodal user interfaces for vehicles | |
US10994612B2 (en) | Agent system, agent control method, and storage medium | |
US11514687B2 (en) | Control system using in-vehicle gesture input | |
JP2016001461A (en) | Method for controlling driver assistance system | |
US10065504B2 (en) | Intelligent tutorial for gestures | |
US20140168068A1 (en) | System and method for manipulating user interface using wrist angle in vehicle | |
Gaffar et al. | Minimalist design: An optimized solution for intelligent interactive infotainment systems | |
US20210072831A1 (en) | Systems and methods for gaze to confirm gesture commands in a vehicle | |
WO2022062491A1 (en) | Vehicle-mounted smart hardware control method based on smart cockpit, and smart cockpit | |
CN110682921A (en) | Vehicle interaction method and device, vehicle and machine readable medium | |
JP2018055614A (en) | Gesture operation system, and gesture operation method and program | |
JPWO2017212569A1 (en) | In-vehicle information processing apparatus, in-vehicle apparatus, and in-vehicle information processing method | |
CN116204253A (en) | Voice assistant display method and related device | |
CN113791841A (en) | Execution instruction determining method, device, equipment and storage medium | |
WO2020200557A1 (en) | Method and apparatus for interaction with an environment object in the surroundings of a vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-09-06 |