CN112799515A - Visual interaction method and system - Google Patents
Visual interaction method and system
- Publication number
- CN112799515A (application CN202110138560.3A)
- Authority
- CN
- China
- Prior art keywords
- information
- controller
- main processor
- adjustment
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Abstract
The embodiment of the invention provides a visual interaction method and system. A face camera sends collected face information to a computing module; the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information; the computing module queries the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers; the computing module sends the area identifier to a main processor; the main processor sends an adjustment instruction to a controller according to the area identifier and acquired user execution information; and the controller performs operation adjustment in response to the adjustment instruction. With this solution, the controller can perform operation adjustment in response to an adjustment instruction triggered through visual interaction, which improves the safety of driving the vehicle.
Description
[ technical field ]
The invention relates to the technical field of vehicles, in particular to a visual interaction method and a visual interaction system.
[ background of the invention ]
In the related art, there are three main interaction methods for a vehicle cabin: touch interaction, voice interaction, and key interaction. While driving, for safety reasons, users often cannot keep their sight on the display screen of the car machine (the in-vehicle infotainment unit) for long, so more users choose the voice interaction method.
The voice interaction method covers most usage scenarios while driving, but in a noisy environment, with a poor network connection, and the like, its voice wake-up rate or voice recognition rate drops sharply, and it is also inconvenient when other passengers in the vehicle are resting. When the vehicle runs at high speed, choosing the touch or key interaction method increases the user's driving risk and reduces the safety of driving the vehicle.
[ summary of the invention ]
In view of this, embodiments of the present invention provide a visual interaction method and system to improve the safety of driving a vehicle.
In one aspect, an embodiment of the present invention provides a visual interaction method, including:
the face camera sends the collected face information to the computing module;
the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information;
the computing module queries the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers;
the computing module sends the area identifier to a main processor;
the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information;
the controller performs operation adjustment in response to the adjustment instruction.
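The six steps above can be sketched end-to-end in a few lines. Everything concrete here (the region thresholds, the identifier strings, and the instruction format) is an assumption for illustration; the patent does not specify any of these values.

```python
# Illustrative sketch of the claimed flow; thresholds, identifiers,
# and the instruction format are hypothetical.

# Stored correspondence between sight line area information and area identifiers.
AREA_ID_BY_REGION = {
    "left_mirror_region": "LEFT_MIRROR_ID",
    "right_mirror_region": "RIGHT_MIRROR_ID",
    "head_unit_region": "HEAD_UNIT_ID",
}

def locate_gaze_region(gaze_angle_deg):
    """Stand-in for the eyeball positioning and tracking algorithm:
    map a horizontal eyeball gaze angle to a sight line region."""
    if gaze_angle_deg < -30:
        return "left_mirror_region"
    if gaze_angle_deg > 30:
        return "right_mirror_region"
    return "head_unit_region"

def build_adjustment_instruction(area_id, user_execution_info):
    """Main-processor step: combine the area identifier with the acquired
    user execution information into one adjustment instruction."""
    return {"target": area_id, "action": user_execution_info}

region = locate_gaze_region(-45.0)        # driver looks toward the left mirror
area_id = AREA_ID_BY_REGION[region]       # query the stored correspondence
instruction = build_adjustment_instruction(area_id, "move_up")
```

The controller would then receive `instruction` and carry out the operation adjustment it describes.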
Optionally, before the main processor sends the adjustment instruction to the controller according to the area identifier and the acquired user execution information, the method includes:
the steering wheel control button receives user execution information input by the user;
and the steering wheel control button sends the user execution information to the main processor.
Optionally, the controller includes an exterior rearview mirror controller, and the controller performing operation adjustment in response to the adjustment instruction includes:
the exterior rearview mirror controller adjusts the exterior rearview mirror in response to the adjustment instruction.
Optionally, after the main processor sends the adjustment instruction to the controller according to the area identifier and the acquired user execution information, the method includes:
the main processor generates exterior rearview mirror adjustment action information through a dynamic simulation algorithm according to the area identifier and the user execution information;
the main processor sends the exterior rearview mirror adjustment action information to a host display screen;
and the host display screen displays the exterior rearview mirror adjustment action information.
Optionally, the user execution information includes position adjustment information.
Optionally, before the main processor sends the adjustment instruction to the controller according to the area identifier and the acquired user execution information, the method includes:
the gesture camera sends the collected gesture pictures to the computing module;
the computing module processes the gesture pictures through an image recognition algorithm to generate a gesture recognition result;
the computing module queries the user execution information corresponding to the gesture recognition result from the stored correspondence between gesture recognition results and user execution information;
and the computing module sends the user execution information to the main processor.
Optionally, the controller includes a car machine, and the controller performing operation adjustment in response to the adjustment instruction includes:
the car machine adjusts its system controls in response to the adjustment instruction.
Optionally, the user execution information includes gesture identification information.
In another aspect, an embodiment of the present invention provides a visual interaction system, including: the system comprises a face camera, a computing module, a main processor and a controller;
the face camera is used for sending the collected face information to the computing module;
the computing module is used for processing the face information through an eyeball positioning and tracking algorithm to generate sight line area information; querying the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers; and sending the area identifier to the main processor;
the main processor is used for sending an adjusting instruction to the controller according to the area identifier and the acquired user execution information;
the controller is used for responding to the adjusting instruction to carry out operation adjustment.
In the technical solution of the visual interaction method provided by the embodiment of the invention, the face camera sends the collected face information to the computing module; the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information; the computing module queries the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers; the computing module sends the area identifier to the main processor; the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information; and the controller performs operation adjustment in response to the adjustment instruction. With this solution, the controller can perform operation adjustment in response to an adjustment instruction triggered through visual interaction, which improves the safety of driving the vehicle.
[ description of the drawings ]
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic structural diagram of a visual interaction system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a visual interaction method according to an embodiment of the present invention;
FIG. 3 is a flowchart of another visual interaction method according to an embodiment of the present invention;
FIG. 4 is a flowchart of another visual interaction method according to an embodiment of the present invention.
[ detailed description ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between related objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
In the related art, when a user needs to adjust an exterior rearview mirror while driving, the user often has to lean over to find the exterior mirror adjustment button, and the whole adjustment takes 1-2 s; if the vehicle speed is high, leaning over to adjust the exterior mirror by the button is very dangerous and reduces the safety of driving the vehicle. In addition, when the user needs to adjust the system volume of the car machine or switch songs while driving, the user has to watch the car machine display screen, which distracts the user's attention and also reduces the safety of driving the vehicle.
In order to solve the technical problems in the related art, the embodiment of the invention provides a visual interaction system. Fig. 1 is a schematic structural diagram of a visual interaction system according to an embodiment of the present invention, as shown in fig. 1, the system includes: the device comprises a face camera 1, a computing module 2, a main processor 3 and a controller 4. The face camera 1 is connected with the computing module 2, the computing module 2 is connected with the main processor 3, and the main processor 3 is connected with the controller 4.
The face camera 1 is used for sending the collected face information to the computing module 2.
The computing module 2 is used for processing the face information through an eyeball positioning and tracking algorithm to generate sight line area information, querying the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers, and sending the area identifier to the main processor 3.
The main processor 3 is configured to send an adjustment instruction to the controller 4 according to the area identifier and the acquired user execution information.
The controller 4 is used for responding to the adjusting instruction to carry out operation adjustment.
In the embodiment of the invention, the computing module 2 includes a Driver Monitoring System (DMS) computing module.
In the embodiment of the present invention, the face camera 1 and the calculation module 2 are connected by a Low-Voltage Differential Signaling (LVDS) harness.
In the embodiment of the present invention, the system further includes a steering wheel control button 5. The steering wheel control button 5 is connected with the main processor 3.
The steering wheel control button 5 is used for receiving the user execution information input by the user and sending the user execution information to the main processor 3.
In the embodiment of the present invention, the steering wheel control button 5 is connected to the main processor 3 through a Controller Area Network (CAN) bus.
In an embodiment of the invention, the controller 4 comprises an exterior mirror controller for adjusting the exterior mirror in response to the adjustment command.
In the embodiment of the present invention, the system further includes: a host display screen 6. The host display 6 is connected to the main processor 3.
The main processor 3 is also used for generating the adjusting action information of the outer rearview mirror according to the area identification and the user execution information through a dynamic simulation algorithm; and sending the information of the adjusting action of the outer rearview mirror to the host display screen 6.
The host display screen 6 is used for displaying the adjustment action information of the outer rearview mirror.
In the embodiment of the invention, the user execution information comprises position adjustment information.
In the embodiment of the present invention, the system further includes: gesture camera 7. The gesture camera 7 is connected with the computing module 2.
The gesture camera 7 is used for sending the collected multiple gesture pictures to the computing module 2.
The computing module 2 is also used for processing the gesture pictures through an image recognition algorithm to generate a gesture recognition result, querying the user execution information corresponding to the gesture recognition result from the stored correspondence between gesture recognition results and user execution information, and sending the user execution information to the main processor 3.
In the embodiment of the invention, the gesture camera 7 is connected with the computing module 2 through an LVDS wire harness.
In the embodiment of the present invention, the controller 4 includes a car machine. The car machine is used for adjusting its system controls in response to the adjustment instruction.
In the embodiment of the invention, the user execution information comprises gesture identification information.
In the technical solution provided by the embodiment of the invention, the face camera sends the collected face information to the computing module; the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information; the computing module queries the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers; the computing module sends the area identifier to the main processor; the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information; and the controller performs operation adjustment in response to the adjustment instruction. With this solution, the controller can perform operation adjustment in response to an adjustment instruction triggered through visual interaction, which improves the safety of driving the vehicle.
Based on the visual interaction system, the embodiment of the invention provides a visual interaction method. Fig. 2 is a flowchart of a visual interaction method according to an embodiment of the present invention, as shown in fig. 2, the method includes:
and 102, the face camera sends the collected face information to a computing module.
In the embodiment of the present invention, before step 102, the user focuses their sight on the position of the left or right exterior rearview mirror, and the face camera collects the user's face information.
In the embodiment of the present invention, the face information includes eyeball gaze angle information.
Step 104, the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information.
In this step, the computing module calculates the eyeball gaze angle through the eyeball positioning and tracking algorithm to generate the sight line area information. For example, the sight line area information corresponds to the position of the left exterior rearview mirror or of the right exterior rearview mirror.
Step 106, the computing module queries the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers.
In the embodiment of the present invention, the area identifier includes an identification number (ID).
In the embodiment of the invention, the computing module stores the correspondence between sight line area information and area identifiers, from which it can query the area identifier corresponding to given sight line area information. For example, the area identifier corresponding to the sight line area information representing the position of the left exterior rearview mirror is the identifier of the left exterior rearview mirror, and the area identifier corresponding to the sight line area information representing the position of the right exterior rearview mirror is the identifier of the right exterior rearview mirror.
Step 108, the computing module sends the area identifier to the main processor.
Step 110, the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information.
Step 112, the controller performs operation adjustment in response to the adjustment instruction.
In the technical solution provided by the embodiment of the invention, the face camera sends the collected face information to the computing module; the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information; the computing module queries the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers; the computing module sends the area identifier to the main processor; the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information; and the controller performs operation adjustment in response to the adjustment instruction. With this solution, the controller can perform operation adjustment in response to an adjustment instruction triggered through visual interaction, which improves the safety of driving the vehicle.
The embodiment of the invention provides another visual interaction method. Fig. 3 is a flowchart of another visual interaction method provided in an embodiment of the present invention, as shown in fig. 3, the method includes:
Step 202, the face camera sends the collected face information to the computing module.
In the embodiment of the present invention, please refer to step 102 for a detailed description of step 202.
Step 204, the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information.
In the embodiment of the present invention, please refer to step 104 for a detailed description of step 204.
Step 206, the computing module queries the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers.
In the embodiment of the present invention, please refer to step 106 for a detailed description of step 206.
Step 208, the computing module sends the area identifier to the main processor.
In the embodiment of the invention, the user execution information comprises position adjustment information.
In this step, the user inputs user execution information by pressing the steering wheel control button. For example, the user execution information includes upward-movement operation information: the user presses the up button on the steering wheel control, and pressing the up button adjusts the angle of the exterior rearview mirror so that it moves upward.
Step 214, the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information.
For example, the area identifier includes the identifier of the left exterior rearview mirror and the user execution information includes upward-movement operation information; the main processor then sends the controller an adjustment instruction that includes an instruction to move the left exterior rearview mirror upward.
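Since the embodiments connect the units over a CAN bus, one plausible way to serialize such an adjustment instruction is a small fixed-layout payload. The target codes, action codes, and byte layout below are invented for illustration; the patent specifies no frame format.

```python
import struct

# Invented numeric codes for illustration only.
TARGET_CODES = {"left_mirror": 0x01, "right_mirror": 0x02}
ACTION_CODES = {"move_up": 0x10, "move_down": 0x11}

def encode_adjustment(target, action):
    """Pack (target, action) into a 2-byte payload, as might travel
    from the main processor to the mirror controller over CAN."""
    return struct.pack("BB", TARGET_CODES[target], ACTION_CODES[action])

def decode_adjustment(payload):
    """Inverse mapping performed on the controller side."""
    t, a = struct.unpack("BB", payload)
    target = {v: k for k, v in TARGET_CODES.items()}[t]
    action = {v: k for k, v in ACTION_CODES.items()}[a]
    return target, action

payload = encode_adjustment("left_mirror", "move_up")
```

A real implementation would carry this payload in a CAN frame with an agreed arbitration ID; the two-byte encoding is only a sketch of the idea.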
As an alternative, step 214 is followed by:
and step S1, generating the adjusting action information of the outer rearview mirror by the main processor through a dynamic simulation algorithm according to the area identification and the user execution information.
For example, the user execution information includes the upward movement information, and the main processor generates the outside rear view mirror adjustment action information according to the identification of the left outside rear view mirror and the upward movement information through a dynamic simulation algorithm. The outside rear view mirror adjustment operation information includes outside rear view mirror adjustment operation information for performing an upward movement operation on the left outside rear view mirror.
In the embodiment of the present invention, the rearview mirror adjustment action information includes User Interface (UI) adjustment action information.
Step S2, the main processor sends the exterior rearview mirror adjustment action information to the host display screen.
Step S3, the host display screen displays the exterior rearview mirror adjustment action information.
For example, the host display screen displays the adjustment action information for moving the left exterior rearview mirror upward.
Step 216, the controller performs operation adjustment in response to the adjustment instruction.
In the embodiment of the invention, the controller includes an exterior rearview mirror controller.
Specifically, the exterior rearview mirror controller adjusts the exterior rearview mirror in response to the adjustment instruction.
For example, the adjustment instruction includes an instruction to move the left exterior rearview mirror upward, and the exterior rearview mirror controller responds by adjusting the left exterior rearview mirror so that it moves upward.
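The mirror controller's response can be sketched as a bounded angle update. The step size, angle range, and state layout are assumptions for illustration, not values from the patent.

```python
# Hypothetical exterior rearview mirror controller: applies a move step
# and clamps the mirror pitch to an assumed mechanical range.
MIRROR_STEP_DEG = 1.5
PITCH_RANGE_DEG = (-20.0, 20.0)

class MirrorController:
    def __init__(self):
        # Current pitch angle of each mirror, in degrees.
        self.pitch = {"left": 0.0, "right": 0.0}

    def handle(self, instruction):
        """Apply one adjustment instruction and return the new pitch."""
        side, action = instruction["target"], instruction["action"]
        if action == "move_up":
            self.pitch[side] = min(self.pitch[side] + MIRROR_STEP_DEG,
                                   PITCH_RANGE_DEG[1])
        elif action == "move_down":
            self.pitch[side] = max(self.pitch[side] - MIRROR_STEP_DEG,
                                   PITCH_RANGE_DEG[0])
        return self.pitch[side]

ctrl = MirrorController()
angle = ctrl.handle({"target": "left", "action": "move_up"})
```

Clamping to the mechanical range mirrors what a real actuator driver must do; repeated "move up" instructions would saturate at the upper limit rather than overdrive the motor.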
In the technical solution provided by the embodiment of the invention, the face camera sends the collected face information to the computing module; the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information; the computing module queries the area identifier corresponding to the sight line area information from the stored correspondence between sight line area information and area identifiers; the computing module sends the area identifier to the main processor; the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information; and the controller performs operation adjustment in response to the adjustment instruction. With this solution, the controller can perform operation adjustment in response to an adjustment instruction triggered through visual interaction, which improves the safety of driving the vehicle.
With the technical solution provided by the embodiment of the invention, the user can adjust the exterior rearview mirror through the steering wheel control button while driving at high speed, without searching for the mirror adjustment button. When the user's sight is not in the exterior rearview mirror area, the same steering wheel button adjusts the volume or skips to the previous/next song. Combining the button with the sight line area thus achieves intelligent switching of the steering wheel button, gives the user a good driving experience, and reduces the user's driving risk.
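The intelligent switching described above, one physical button whose meaning follows the driver's sight line area, can be sketched as a small routing function. The region names and action names are illustrative assumptions.

```python
# Sketch of sight-line-dependent button routing; all labels are invented.
def route_button(sight_area, button):
    """Return (target_controller, action) for a steering wheel button press,
    depending on where the driver is currently looking."""
    if sight_area in ("left_mirror", "right_mirror"):
        # Sight on a mirror: the button adjusts that mirror.
        return ("mirror", {"up": "mirror_up", "down": "mirror_down"}[button])
    # Sight elsewhere: the same button controls volume / track skipping.
    return ("head_unit", {"up": "volume_up", "down": "volume_down"}[button])
```

For example, `route_button("left_mirror", "up")` routes to the mirror controller, while the same press with the driver looking at the road routes to the head unit as a volume-up action.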
The embodiment of the invention provides another visual interaction method. Fig. 4 is a flowchart of another visual interaction method provided in an embodiment of the present invention, as shown in fig. 4, the method includes:
Step 302, the face camera sends the collected face information to the computing module.
In the embodiment of the invention, the user focuses their sight on the car machine display screen, and the face camera collects the user's face information.
In the embodiment of the present invention, the face information includes eyeball gaze angle information.
Step 304, the computing module processes the face information through an eyeball positioning and tracking algorithm to generate sight line area information.
In this step, the computing module calculates the eyeball gaze angle through the eyeball positioning and tracking algorithm to generate the sight line area information. For example, the sight line area information corresponds to the position of the car machine display screen.
In the embodiment of the present invention, the area identifier includes an identification number (ID).
In the embodiment of the invention, the computing module stores the correspondence between sight line area information and area identifiers, from which it can query the area identifier corresponding to given sight line area information. For example, the area identifier corresponding to the sight line area information representing the position of the car machine display screen is the identifier of the car machine display screen.
Step 310, the gesture camera sends the collected gesture pictures to the computing module.
In the embodiment of the present invention, in step 310 the user makes a gesture and the gesture camera collects pictures of it. For example, the gestures include an OK gesture, a hush gesture, a clockwise index-finger rotation gesture, or a counterclockwise index-finger rotation gesture, and the gesture pictures are pictures of the corresponding gesture.
In step 312, the calculation module processes the gesture pictures through an image recognition algorithm to generate a gesture recognition result.

For example, the calculation module processes multiple pictures of a clockwise index-finger rotation gesture through the image recognition algorithm and generates the gesture recognition result of a clockwise index-finger rotation gesture.
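One plausible way to turn multiple gesture pictures into a single recognition result is to classify each picture and take a majority vote over the per-frame labels. This is a sketch under that assumption; the patent does not specify the image recognition algorithm, and the per-frame classifier is out of scope here.

```python
from collections import Counter

# Sketch: aggregate per-frame gesture labels (one per collected picture) into
# one gesture recognition result by majority vote. The labels are assumed
# outputs of some per-frame image classifier, which is mocked away.

def recognize_gesture(frame_labels: list) -> str:
    counts = Counter(frame_labels)
    return counts.most_common(1)[0][0]  # most frequent label wins
```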
In step 314, the calculation module queries the user execution information corresponding to the gesture recognition result according to the stored correspondence between gesture recognition results and user execution information.

In the embodiment of the invention, the calculation module stores the correspondence between gesture recognition results and user execution information, and can query the user execution information corresponding to a given gesture recognition result accordingly.
For example, the user execution information corresponding to the OK gesture is confirm-popup information, the user execution information corresponding to the shush gesture is mute information, the user execution information corresponding to the clockwise index-finger rotation gesture is volume-increase information, and the user execution information corresponding to the counterclockwise index-finger rotation gesture is volume-decrease information.
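The gesture-to-execution-information correspondence listed above can be sketched as a dictionary lookup; the key and value strings are illustrative encodings, not identifiers from the patent.

```python
# Assumed encoding of the correspondence between gesture recognition results
# and user execution information, mirroring the examples in the text.
GESTURE_TO_EXECUTION_INFO = {
    "ok": "confirm_popup",       # OK gesture -> confirm pop-up
    "shush": "mute",             # shush gesture -> mute
    "index_cw": "volume_up",     # clockwise index-finger rotation
    "index_ccw": "volume_down",  # counterclockwise index-finger rotation
}

def execution_info_for(gesture: str) -> str:
    """Query the user execution information for a gesture recognition result."""
    return GESTURE_TO_EXECUTION_INFO[gesture]
```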
In the embodiment of the invention, the user execution information comprises gesture identification information.
For example, when the area identifier is the identifier of the car machine display screen and the user execution information is volume-increase information, the main processor sends an adjustment instruction to the controller, where the adjustment instruction includes a volume-increase operation instruction for the car machine.
In step 320, the controller performs an operation adjustment in response to the adjustment instruction.

In the embodiment of the invention, the controller includes the car machine.

Specifically, the car machine adjusts the system controls of the car machine in response to the adjustment instruction.

For example, the adjustment instruction includes an instruction to increase the volume of the car machine; the car machine performs the volume-increase operation adjustment in response, and the volume of the audio currently played by the car machine increases.
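Putting the last two steps together, here is a hedged sketch of the main processor forming an adjustment instruction from the area identifier plus the user execution information, and the car machine (as the controller) responding to it. The class and instruction names and the integer volume model are invented for illustration.

```python
# Sketch: the main processor combines the area identifier with the user
# execution information into an adjustment instruction, and the controller
# (here the car machine) performs the operation adjustment.

CAR_MACHINE_DISPLAY_ID = 1  # assumed area identifier for the display screen

def make_instruction(area_id: int, execution_info: str) -> str:
    """Main-processor side: build an adjustment instruction string."""
    if area_id == CAR_MACHINE_DISPLAY_ID and execution_info == "volume_up":
        return "car_machine_volume_up"
    if area_id == CAR_MACHINE_DISPLAY_ID and execution_info == "volume_down":
        return "car_machine_volume_down"
    raise ValueError("no adjustment instruction for this area/action pair")

class CarMachine:
    """Controller side: performs the operation adjustment."""

    def __init__(self) -> None:
        self.volume = 5  # arbitrary starting volume

    def apply(self, instruction: str) -> None:
        if instruction == "car_machine_volume_up":
            self.volume += 1
        elif instruction == "car_machine_volume_down":
            self.volume = max(0, self.volume - 1)
```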
In the technical scheme provided by the embodiment of the invention, the face camera sends the collected facial information to the calculation module; the calculation module processes the facial information through an eyeball positioning and tracking algorithm to generate sight line area information; the calculation module queries the area identifier corresponding to the sight line area information according to the stored correspondence between sight line area information and area identifiers; the calculation module sends the area identifier to the main processor; the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information; and the controller performs an operation adjustment in response to the adjustment instruction. Through this visual interaction method, the controller can perform operation adjustments in response to adjustment instructions, improving the safety of driving the vehicle.
The technical scheme provided by the embodiment of the invention overcomes the scene limitations of the voice interaction method. With safety in mind, a visual interaction method is introduced; through the linkage of a high-pixel camera with the controller, combined with multiple microphones, full-scene coverage of in-vehicle interaction modes can be achieved, remedying the safety and experience shortcomings faced by vehicle owners in various environments.
The technical scheme provided by the embodiment of the invention also addresses the problem that in-vehicle interaction depends on touch and physical keys, offering the user more interaction modes. By combining the two interaction modes of vision and gesture, the user can still interact seamlessly with the central control entertainment system in special scenes where voice cannot be used, which enhances the user experience and brings a sense of technology to the cabin.
According to the technical scheme provided by the embodiment of the invention, when the user lowers the seat fully to enjoy a cabin-cinema experience, the user can operate and adjust the car machine through gesture actions without unbuckling the seat belt and without getting up to touch the car machine display screen, which reduces the user's operation cost.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (9)
1. A visual interaction method, comprising:
the face camera sends the collected face information to the computing module;
the calculation module calculates the facial information through an eyeball positioning and tracking algorithm to generate sight line area information;
the calculation module inquires out an area identifier corresponding to the sight line area information according to the corresponding relation between the stored sight line area information and the area identifier;
the calculation module sends the area identifier to a main processor;
the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information; and
the controller performs an operation adjustment in response to the adjustment instruction.
2. The method of claim 1, wherein before the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information, the method comprises:
a steering wheel control button receives user execution information input by a user; and
the steering wheel control button sends the user execution information to the main processor.
3. The method of claim 1, wherein the controller comprises an exterior rearview mirror controller, and the controller performing an operation adjustment in response to the adjustment instruction comprises:
the exterior rearview mirror controller adjusts the exterior rearview mirror in response to the adjustment instruction.
4. The method of claim 1, wherein after the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information, the method comprises:
the main processor generates adjustment action information of the exterior rearview mirror according to the area identifier and the user execution information through a dynamic simulation algorithm;
the main processor sends the adjustment action information of the exterior rearview mirror to a host display screen; and
the host display screen displays the adjustment action information of the exterior rearview mirror.
5. The method of any of claims 1 to 4, wherein the user execution information comprises position adjustment information.
6. The method of claim 1, wherein before the main processor sends an adjustment instruction to the controller according to the area identifier and the acquired user execution information, the method comprises:
the gesture camera sends the collected gesture pictures to the calculation module;
the calculation module processes the gesture pictures through an image recognition algorithm to generate a gesture recognition result;
the calculation module queries the user execution information corresponding to the gesture recognition result according to the stored correspondence between gesture recognition results and user execution information; and
the calculation module sends the user execution information to the main processor.
7. The method of claim 1, wherein the controller comprises a car machine, and the controller performing an operation adjustment in response to the adjustment instruction comprises:
the car machine adjusts the system controls of the car machine in response to the adjustment instruction.
8. The method of claim 6 or 7, wherein the user execution information comprises gesture identification information.
9. A visual interaction system, comprising: a face camera, a calculation module, a main processor and a controller;
the face camera is used for sending the collected facial information to the calculation module;
the calculation module is used for processing the facial information through an eyeball positioning and tracking algorithm to generate sight line area information; querying the area identifier corresponding to the sight line area information according to the stored correspondence between sight line area information and area identifiers; and sending the area identifier to the main processor;
the main processor is used for sending an adjustment instruction to the controller according to the area identifier and the acquired user execution information;
the controller is used for performing an operation adjustment in response to the adjustment instruction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110138560.3A CN112799515A (en) | 2021-02-01 | 2021-02-01 | Visual interaction method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110138560.3A CN112799515A (en) | 2021-02-01 | 2021-02-01 | Visual interaction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112799515A true CN112799515A (en) | 2021-05-14 |
Family
ID=75813493
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110138560.3A Pending CN112799515A (en) | 2021-02-01 | 2021-02-01 | Visual interaction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112799515A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106155290A (en) * | 2014-11-06 | 2016-11-23 | 现代自动车株式会社 | Utilize the menu setecting equipment of Eye-controlling focus |
CN106354259A (en) * | 2016-08-30 | 2017-01-25 | 同济大学 | Automobile HUD gesture-interaction-eye-movement-assisting system and device based on Soli and Tobii |
CN106945607A (en) * | 2017-03-30 | 2017-07-14 | 京东方科技集团股份有限公司 | The control method and device of vehicle |
CN109145864A (en) * | 2018-09-07 | 2019-01-04 | 百度在线网络技术(北京)有限公司 | Determine method, apparatus, storage medium and the terminal device of visibility region |
CN109828655A (en) * | 2017-11-23 | 2019-05-31 | 英属开曼群岛商麦迪创科技股份有限公司 | The more screen control systems of vehicle and the more screen control methods of vehicle |
CN111638780A (en) * | 2020-04-30 | 2020-09-08 | 长城汽车股份有限公司 | Vehicle display control method and vehicle host |
- 2021-02-01: CN application CN202110138560.3A filed; published as CN112799515A; legal status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110114825A (en) | Speech recognition system | |
CN105163974B (en) | The vehicle information entertainment systems of display unit with separation | |
US9678573B2 (en) | Interaction with devices based on user state | |
CN108474950B (en) | HMD device and control method thereof | |
CN109542283B (en) | Gesture touch multi-screen operation method | |
CN113486760A (en) | Object speaking detection method and device, electronic equipment and storage medium | |
CN111252074B (en) | Multi-modal control method, device, computer-readable storage medium and vehicle | |
CN106740581A (en) | A kind of control method of mobile unit, AR devices and AR systems | |
WO2021136495A1 (en) | Method and apparatus for controlling page layout of on-board unit display interface | |
CN106997283A (en) | A kind of information processing method and electronic equipment | |
CN110858467A (en) | Display screen control system and vehicle | |
CN112083795A (en) | Object control method and device, storage medium and electronic equipment | |
CN113002461A (en) | Virtual image position adjusting method, device and storage medium of AR-HUD system | |
US20230333650A1 (en) | Gesture Tutorial for a Finger-Wearable Device | |
WO2022267354A1 (en) | Human-computer interaction method and apparatus, and electronic device and storage medium | |
CN113459975B (en) | Intelligent cabin system | |
CN112799515A (en) | Visual interaction method and system | |
WO2024001091A1 (en) | Method and apparatus for controlling vehicle assembly, and electronic device and readable storage medium | |
US20230123723A1 (en) | System for controlling vehicle display based on occupant's gaze departure | |
CN114253439B (en) | Multi-screen interaction method | |
EP4029716A1 (en) | Vehicle interactive system and method, storage medium, and vehicle | |
CN112572320B (en) | Vehicle control method, vehicle control device, computer equipment and storage medium | |
CN114416253A (en) | Screen theme switching method, device and equipment and storage medium | |
JP2023540568A (en) | Presentation of content on a separate display device in the vehicle instrument panel | |
WO2024113839A1 (en) | Control method for mechanical arm, and vehicle and electronic device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210514 |