CN114115529A - Sensing method and device, storage medium and vehicle - Google Patents

Sensing method and device, storage medium and vehicle

Info

Publication number
CN114115529A
CN114115529A (application CN202111306493.8A)
Authority
CN
China
Prior art keywords
signal
vehicle
sensing
unit
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111306493.8A
Other languages
Chinese (zh)
Inventor
李晓东 (Li Xiaodong)
沈旭东 (Shen Xudong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jidu Technology Co Ltd
Original Assignee
Jidu Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jidu Technology Co., Ltd.
Priority to CN202111306493.8A
Publication of CN114115529A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 - Interaction between the driver and the control system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a sensing method, a sensing device, a storage medium, and a vehicle. The sensing method is applied to a vehicle that comprises a skin material layer, a first sensing unit, and a second sensing unit. The skin material layer forms at least part of the surface of one or more components of the vehicle and can be touched by an occupant of the vehicle; the first sensing unit is electrically connected with the second sensing unit. The method comprises: receiving a first interaction signal of the occupant sensed by the first sensing unit; receiving a second interaction signal of the occupant sensed by the second sensing unit; and generating a third interaction signal in response to the first and second interaction signals and sending a control signal to the component corresponding to the third interaction signal, the control signal causing that component to execute the corresponding action. The method supports richer human-vehicle interaction modes and application scenarios, enables multi-modal human-vehicle interaction, and improves the intelligence of the vehicle.

Description

Sensing method and device, storage medium and vehicle
Technical Field
Embodiments of the present application relate to the field of intelligent vehicle interaction technologies, and in particular to a sensing method, a sensing device, a storage medium, and a vehicle.
Background
With the development of technology, smart-skin technology, represented by In-Mold Electronics (IME), is steadily maturing. In the prior art, smart-skin technology is applied to vehicle components to enable intelligent control of the vehicle. However, current human-vehicle interaction modes and scenarios are limited and fixed, so the vehicle's intelligence is low and the occupants' driving experience suffers.
Disclosure of Invention
The application provides a sensing method, a sensing device, a storage medium, and a vehicle that support richer human-vehicle interaction modes and application scenarios and improve the intelligence of the vehicle.
In a first aspect, embodiments of the present application provide a sensing method applied to a vehicle. The vehicle includes a skin material layer, a first sensing unit, and a second sensing unit. The skin material layer forms at least a portion of the surface of one or more components of the vehicle and is configured to be touched by an occupant of the vehicle; the first and second sensing units are arranged on the side of the skin material layer facing the interior of those components, and the first sensing unit is electrically connected with the second sensing unit. The method includes: receiving a first interaction signal of the occupant sensed by the first sensing unit; receiving a second interaction signal of the occupant sensed by the second sensing unit; and, in response to the first and second interaction signals, generating a third interaction signal and sending a control signal to the component corresponding to the third interaction signal, the control signal causing that component to execute the corresponding action. Because the control unit responds to the first and second interaction signals simultaneously, it can sense multiple kinds of interaction signals, which supports richer human-vehicle interaction modes and application scenarios and improves the intelligence of the vehicle. In addition, when forming the control signal, the control unit can match one interaction signal to the actions of several components, thereby realizing multi-modal human-vehicle interaction.
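As a rough illustration of this flow, the following Python sketch fuses two interaction signals into a third and looks up the component to control. All names (InteractionSignal, COMMAND_TABLE, the signal and component labels) are hypothetical; the patent does not specify any implementation.

```python
# Hypothetical sketch of the claimed flow; not the patent's implementation.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class InteractionSignal:
    source: str                                  # e.g. "touch_film" or "vision_sensor"
    action: str                                  # e.g. "double_tap", "swipe_up"
    position: Optional[Tuple[int, int]] = None   # location on the skin, if sensed

# Assumed mapping from a fused (third) interaction signal to a component action.
COMMAND_TABLE = {
    ("double_tap", "swipe_up"): ("window_actuator", "open"),
    ("long_press", "ok_gesture"): ("seat_actuator", "recline"),
}

def handle(first: InteractionSignal, second: InteractionSignal):
    """Generate the third interaction signal and the resulting control signal."""
    third = (first.action, second.action)
    match = COMMAND_TABLE.get(third)
    if match is None:
        return None                              # no pre-stored signal matched
    component, action = match
    # The control signal goes to the component corresponding to the third signal.
    return {"component": component, "control_signal": action}

print(handle(InteractionSignal("touch_film", "double_tap", (10, 20)),
             InteractionSignal("vision_sensor", "swipe_up")))
```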
Optionally, the at least one sensing component included in the first sensing unit and the at least one sensing component included in the second sensing unit are sensing components of different types, and the first interaction signal or the second interaction signal is either of the following:
an action of the occupant performed on the surface of the skin material layer or in the space near it;
the position at which that action is sensed.
When the first and second sensing units contain different sensing components, they can sense different types of interaction signals. Because the control unit responds to the first and second interaction signals simultaneously, the variety of interaction signals it can respond to increases, which further supports richer human-vehicle interaction modes and application scenarios and improves the intelligence of the vehicle.
Optionally, the first sensing unit includes a touch film layer, and the first interaction signal is an interaction signal sensed by that touch film layer; and/or the second sensing unit includes a touch film layer, and the second interaction signal is an interaction signal sensed by that touch film layer. By responding to the first and second interaction signals together, the control unit can improve the accuracy of the touch signal and thus the reliability and accuracy of human-vehicle interaction.
Optionally, the skin material layer is made of a transparent material, at least one sensing component of the first sensing unit or the second sensing unit includes a vision sensor, and the first or second interaction signal is generated by the vision sensor in response to a posture action of the occupant.
Optionally, the skin material layer is made of a transparent material, and the method further comprises at least one of:
sending a first display instruction to a display module, the first display instruction being used to display information corresponding to the first interaction signal;
sending a second display instruction to the display module, the second display instruction being used to display information corresponding to the second interaction signal;
sending a third display instruction to the display module, the third display instruction being used to display information corresponding to the third interaction signal.
Optionally, the information corresponding to the first, second, or third interaction signal includes at least one of the following:
feedback information, vehicle speed information, navigation information, power information, driver status information, forward road condition information, in-vehicle temperature information, in-vehicle humidity information, tire pressure information, whole-vehicle information, out-of-vehicle temperature information, or out-of-vehicle humidity information, where the feedback information is information fed back to the occupant by the sensing device according to the first, second, or third interaction signal.
Optionally, the method further comprises at least one of:
sending a first action instruction to a feedback unit, the first action instruction instructing the feedback unit to execute the action that provides feedback for the first interaction signal;
sending a second action instruction to the feedback unit, the second action instruction instructing the feedback unit to execute the action that provides feedback for the second interaction signal;
sending a third action instruction to the feedback unit, the third action instruction instructing the feedback unit to execute the action that provides feedback for the third interaction signal.
Optionally, the skin material layer comprises a tactile adjustment unit, and the method further comprises:
sending a tactile adjustment instruction to the tactile adjustment unit, the instruction directing the unit to adjust the tactile feel of the component surface formed by the skin material layer; the tactile adjustment instruction is generated according to the control signal.
Optionally, the skin material layer comprises a form adjustment unit, and the method further comprises:
sending a form adjustment instruction to the form adjustment unit, the instruction directing the unit to adjust the shape of the component surface formed by the skin material layer; the form adjustment instruction is generated according to the control signal. A sketch of how such instructions might be derived follows.
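The following Python fragment is a hedged illustration of deriving both kinds of adjustment instruction from a control signal; the mapping table and all names are invented for this sketch, since the patent only states that the instructions are generated from the control signal.

```python
# Invented mapping from control signals to tactile/form adjustments.
ADJUSTMENTS = {
    "window_open": {"tactile": "smooth", "form": "flush"},
    "seat_massage": {"tactile": "textured", "form": "raised_ridges"},
}

def adjustment_instructions(control_signal: str):
    """Return one instruction for each adjustment unit in the skin layer."""
    adj = ADJUSTMENTS.get(control_signal, {})
    return (
        {"unit": "tactile_adjustment_unit", "set": adj.get("tactile")},
        {"unit": "form_adjustment_unit", "set": adj.get("form")},
    )

print(adjustment_instructions("seat_massage"))
```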
Optionally, the component corresponding to the third interaction signal includes at least one of:
a vehicle opening/closing member, an in-vehicle comfort adjusting member, and an interior/exterior display member.
By sending the control signal to at least one of the vehicle opening/closing member, the in-vehicle comfort adjusting member, and the interior/exterior display member, human-vehicle interaction with each of these components can be realized, improving the intelligence of the vehicle.
Optionally, the vehicle further comprises a power supply unit and a switch unit, and the method further comprises:
sending a control instruction to the switch unit, the control instruction controlling whether the power supply unit supplies power to the first and second sensing units. When the occupant switches the switch unit off, human-vehicle interaction through the first and second sensing units is disabled, giving the occupant a wider range of choices.
In a second aspect, an embodiment of the present application further provides a sensing apparatus, including:
a processor and an interface circuit;
the processor is coupled to a memory through the interface circuit, and the processor is configured to execute program code in the memory to implement the sensing method provided in the first aspect.
In a third aspect, an embodiment of the present application further provides a computer-readable storage medium storing instructions for executing the sensing method provided in the first aspect.
In a fourth aspect, an embodiment of the present application further provides a vehicle including the sensing apparatus provided in the second aspect.
According to the technical solutions of the embodiments of the present application, the control unit generates a third interaction signal in response to the first and second interaction signals and sends a control signal to the component corresponding to the third interaction signal, and the control signal causes that component to execute the corresponding action. Because the control unit responds to the first and second interaction signals simultaneously, it can sense multiple kinds of interaction signals, supporting richer human-vehicle interaction modes and application scenarios. Moreover, a single interaction signal can be matched to several components, enabling multi-modal human-vehicle interaction and improving the intelligence of the vehicle.
Drawings
Fig. 1 is a schematic structural diagram of a skin material layer provided in the present application;
Fig. 2 is a schematic structural diagram of a sensing device according to an embodiment of the present application;
Fig. 3 is a flowchart of a sensing method provided in the present application;
Fig. 4 is a schematic structural diagram of another sensing device according to an embodiment of the present application;
Fig. 5 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of another sensing device according to an embodiment of the present application;
Fig. 7 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of another sensing device according to an embodiment of the present application;
Fig. 9 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 10 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 11a is a schematic structural diagram of another sensing device according to an embodiment of the present application;
Fig. 11b is a schematic view of another sensing device according to an embodiment of the present application;
Fig. 12 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 13 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 14 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 15a is a schematic structural diagram of a sensing device according to an embodiment of the present application;
Figs. 15b-15c are schematic views of another sensing device according to an embodiment of the present application;
Fig. 16 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 17 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 18 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 19 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 20 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 21 is a schematic structural diagram of another sensing device according to an embodiment of the present application;
Fig. 22 is a flowchart of another sensing method according to an embodiment of the present application;
Fig. 23 is a schematic structural diagram of a vehicle according to an embodiment of the present application;
Fig. 24 is a schematic structural diagram of another vehicle according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples.
The sensing device provided by the embodiments of the present application can be arranged in a terminal. For example, the terminal may be a motor vehicle, a drone, a rail car, or a bicycle; a motor vehicle may in particular be an intelligent connected vehicle. When the sensing device is arranged in the terminal, interaction between the occupant and the terminal can be realized, making the terminal more intelligent. The following description takes a terminal that is a vehicle as an example.
The sensing device in the embodiments of the present application may be provided in a vehicle that has a sensing function, or in other in-vehicle devices that sense the vehicle, including but not limited to vehicle-mounted terminals, vehicle-mounted controllers, vehicle-mounted modules, vehicle-mounted components, and sensors such as vehicle-mounted radars or vehicle-mounted cameras. Besides vehicles, the sensing method may also be deployed in other intelligent terminals with a sensing function, or in components of such terminals; the intelligent terminal may be intelligent transportation equipment, smart-home equipment, a robot, and the like.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of the present application are used to distinguish different objects, not to impose a specific order. The following embodiments may be implemented individually or in combination with each other; the embodiments of the present application are not specifically limited in this respect.
The sensing device of the vehicle in the embodiments of the present application includes a skin material layer. The skin material layer constitutes at least part of the surface of one or more components of the vehicle and is intended to be touched by an occupant of the vehicle. Specifically, the skin material layer 110 may be a skin material layer of the vehicle interior, for example of interior trim such as seats, windows, displays, the steering wheel, and lights. The material of the skin material layer 110 may be glass, plastic, wood, leather, fabric, metal, and the like. Fig. 1 is a schematic structural diagram of a skin material layer provided in the present application; it shows, by way of example, that the skin material layer 110 may be the skin of a steering wheel, of an instrument display screen, of the front interior trim, of the center console, and so on.
Fig. 2 is a schematic structural diagram of an induction device according to an embodiment of the present application. As shown in fig. 2, the sensing device includes a skin material layer 110, a first sensing unit 120 and a second sensing unit 130, the first sensing unit 120 and the second sensing unit 130 being on a side of the skin material layer 110 facing an interior of one or more components of the vehicle; the first sensing unit 120 and the second sensing unit 130 are electrically connected.
The first sensing unit 120 may include a sensing part for sensing a first interaction signal of an occupant, and the second sensing unit 130 may also include a sensing part for sensing a second interaction signal of an occupant. The first sensing unit 120 and the second sensing unit 130 are electrically connected, so that signal transmission can be realized between the first sensing unit 120 and the second sensing unit 130. For example, after the first sensing unit 120 senses the first interaction signal, the first interaction signal may be transmitted to the second sensing unit 130; alternatively, the second sensing unit 130 senses a second interaction signal, and may transmit the second interaction signal to the first sensing unit 120.
Meanwhile, because the first sensing unit 120 and the second sensing unit 130 are electrically connected, during in-vehicle interaction the occupant issues an interaction signal as needed and the first sensing unit 120 can activate the second sensing unit 130 according to that signal.
For example, when the occupant issues a first interaction signal, the first sensing unit 120 senses it and the second sensing unit 130 forms a second interaction signal from it.
As another example, when the occupant issues a second interaction signal, the second sensing unit 130 senses it and the first sensing unit 120 forms a first interaction signal from it.
The sensing method provided by the embodiments of the present application is applied to the vehicle and suits scenarios in which an occupant interacts with the vehicle. The sensing method may be performed by the control unit 140 in the vehicle. In some embodiments, the control unit 140 may be a domain control module of the vehicle (for example, of the smart cockpit domain or another domain), a central control unit (CDC), or a control module dedicated to controlling the first and second sensing units; this is not limited here.
Fig. 3 is a flowchart of a sensing method provided in the present application. With reference to fig. 1 to 3, the method includes:
s301, receiving a first interaction signal of the passenger sensed by the first sensing unit;
Specifically, as shown in fig. 2, the first sensing unit 120 is connected to the control unit 140; the first sensing unit 120 transmits the sensed first interaction signal of the occupant, and the control unit 140 receives it. For example, the first interaction signal may be a touch-type or gesture-type interaction signal.
In another possible implementation, the first sensing unit 120 is connected in series with the second sensing unit 130 and the second sensing unit 130 is connected to the control unit 140; in this case the control unit 140 receives the first interaction signal sensed by the first sensing unit 120 through the second sensing unit 130.
S302, receiving a second interaction signal of the passenger sensed by the second sensing unit;
Specifically, as shown in fig. 2, the second sensing unit 130 is connected to the control unit 140; the second sensing unit 130 transmits the sensed second interaction signal of the occupant, and the control unit 140 receives it. For example, the second interaction signal may be a touch-type or gesture-type interaction signal, and the first and second interaction signals may be of different types.
In another possible implementation, the first sensing unit 120 is connected in series with the second sensing unit 130 and the first sensing unit 120 is connected to the control unit 140; in this case the control unit 140 receives the second interaction signal sensed by the second sensing unit 130 through the first sensing unit 120.
S303, generating a third interaction signal in response to the first and second interaction signals, and sending a control signal to the component corresponding to the third interaction signal; the control signal causes the component to execute the corresponding action.
Specifically, as shown in fig. 3, the control unit 140 generates a third interaction signal in response to the first and second interaction signals and sends a control signal to the component 150 corresponding to the third interaction signal, and the control signal causes the component 150 to execute the corresponding action. Because the control unit 140 responds to the first and second interaction signals simultaneously, it can sense multiple kinds of interaction signals, supporting richer human-vehicle interaction modes and application scenarios and improving the intelligence of the vehicle.
The component 150 may be any component of the vehicle, and there may be one or more such components. For example, component 150 may be a mechanism that adjusts a seat, a window, a display screen, or a light. The component 150 may also correspond to the skin material layer 110.
Illustratively, when the skin material layer 110 is a skin material layer of a seat, the component 150 may be an actuator that adjusts the seat. When the layer of skin material 110 is that of a vehicle window, the component 150 may be an actuator for adjusting the vehicle window.
In other embodiments, when the control unit 140 forms the control signal, it can match one interaction signal to the actions of several components 150, thereby realizing multi-modal human-vehicle interaction and improving the multi-modality and intelligence of the vehicle.
For example, when the interaction signal controls an opening/closing member or a hardware function of the vehicle, the interaction includes not only operating that member or function but also human-vehicle interaction such as voice prompts and interior/exterior displays, thereby realizing multi-modal human-vehicle interaction, as the sketch below illustrates.
For example, the opening/closing members of the vehicle may include doors, windows, the sunroof, the tailgate, and the like, and the controllable hardware functions may include seat sliding, seat lifting, air-conditioning direction and temperature, sound volume, lights, and the like.
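A hedged Python sketch of this fan-out, with invented component names; the patent does not prescribe any particular mapping:

```python
# One interaction signal fanning out to several components (multi-modal
# interaction). All component names and actions here are illustrative only.
MULTIMODAL_ACTIONS = {
    "open_tailgate": [
        ("tailgate_actuator", "open"),
        ("voice_module", "announce tailgate opening"),
        ("exterior_display", "show tailgate-opening animation"),
    ],
}

def dispatch(interaction_signal: str) -> None:
    # Each target component receives its own control signal.
    for component, action in MULTIMODAL_ACTIONS.get(interaction_signal, []):
        print(f"control signal -> {component}: {action}")

dispatch("open_tailgate")
```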
It should be noted that fig. 2 shows, by way of example, the skin material layer 110 in contact with the first sensing unit 120. In other embodiments the two need not be in contact; it suffices that the first sensing unit 120 is on the side of the skin material layer 110 facing the interior of one or more components of the vehicle.
In one possible implementation, the at least one sensing component included in the first sensing unit and the at least one sensing component included in the second sensing unit are sensing components of different types, and the first or second interaction signal is either of the following:
an action of the occupant performed on the surface of the skin material layer or in the space near it;
the position at which that action is sensed.
Specifically, when the first sensing unit 120 and the second sensing unit 130 have different sensing parts therein, the first sensing unit 120 and the second sensing unit 130 may be caused to sense different types of interaction signals. When the control unit 140 responds to the first interaction signal and the second interaction signal simultaneously, the types of the interaction signals responded by the control unit 140 can be increased, richer human-vehicle interaction modes and application scenes can be further supported, and the intelligence of the vehicle is improved.
For example, when the first interaction signal is a touch-type interaction signal, the sensing part in the first sensing unit 120 may sense the touch-type interaction signal. When the second interactive signal is a gesture type interactive signal, the sensing part in the second sensing unit 130 may sense the gesture type interactive signal.
The interactive signal may include a plurality of types, and the first interactive signal and the second interactive signal are different types of interactive signals. Illustratively, the interaction signal may be an interaction signal obtained by an action of an occupant formed on the surface of the skin material layer, a pressure change of the skin material layer 110 may be caused by touching the skin material layer 110, and the first sensing unit 120 or the second sensing unit 130 senses the pressure change of the skin material layer 110, so that the first interaction signal or the second interaction signal may be formed.
The actions of the occupant on the surface of the skin material layer may include touching, tapping, sliding, and the like. A touch may form an interaction signal through a limb or through another object touching the layer. For tapping, the continuity and number of taps can be set to form different interaction signals. For sliding, the direction of the slide (for example, up-down or other directions) can be set to form different interaction signals.
The interaction signal may also be an interaction signal obtained from the action of the occupant forming a space near the skin material layer. For example, the occupant performs a preset action in the space near the skin material layer, so that the electric field variation is formed in the space near the skin material layer, and the first sensing unit 120 or the second sensing unit 130 may sense the electric field variation in the space near the skin material layer, so that the first interaction signal or the second interaction signal may be formed. The action of the passenger formed in the space near the skin material layer can comprise gesture action, and different gesture actions can form different interaction signals. For example, the gesture motion may be a slide in a different direction, or may form a fixed gesture such as "OK".
The interaction signal may also be obtained by sensing the position of an action. For example, when the first sensing unit 120 senses the action the occupant performs in the space near the skin material layer to obtain the first interaction signal, the second sensing unit 130 may sense the position of that action to form the second interaction signal. Conversely, when the second sensing unit 130 senses the action to obtain the second interaction signal, the first sensing unit 120 may sense the position of the action to form the first interaction signal.
The first and second interaction signals can thus be acquired in various forms. When the first sensing unit 120 senses the first interaction signal and the second sensing unit 130 senses the second interaction signal, the control unit 140 responds to both simultaneously, further supporting richer human-vehicle interaction modes and application scenarios and improving the intelligence of the vehicle.
On the basis of the above technical solution, the first sensing unit and/or the second sensing unit may include an electric field sensor, and the first and/or second interaction signal is an interaction signal sensed by the electric field sensor; the interaction signals include touch interaction signals and non-touch interaction signals.
Specifically, the electric field sensor senses changes in the electric field of the surrounding area and generates an electric signal accordingly. A touch interaction signal is generated by contact with the skin material layer; non-touch interaction signals include various types, such as motion interaction signals. Different interaction instructions from the occupant affect the electric field around the sensor differently, so the sensor outputs different electric signals; the instruction can therefore be identified from the output and used to form the first and/or second interaction signal. The control unit then generates a third interaction signal in response to the first and second interaction signals and sends a control signal to the corresponding component, which executes the corresponding action, realizing human-vehicle interaction.
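A minimal sketch of such a classification step, assuming an invented threshold and feature names; real electric-field sensing would be considerably more involved:

```python
# Classify an electric-field reading as touch / non-touch / none. The
# threshold and the contact flag are assumptions made for this illustration.
def classify_field_change(delta_field: float, contact_detected: bool) -> str:
    if contact_detected:
        return "touch_interaction"      # generated by contact with the skin layer
    if abs(delta_field) > 0.05:         # arbitrary threshold for a nearby gesture
        return "non_touch_interaction"  # e.g. a hand moving in the nearby space
    return "no_interaction"

print(classify_field_change(0.12, contact_detected=False))  # non_touch_interaction
```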
Fig. 4 is a schematic structural diagram of another sensing device according to an embodiment of the present disclosure. As shown in fig. 4, the first sensing unit 120 includes a touch film layer 121; the first interaction signal is an interaction signal sensed by the touch film layer 121.
Specifically, the touch film layer 121 and the skin material layer 110 may be in contact and/or fixedly connected. When the occupant touches the surface of the skin material layer 110, the force is transmitted through the skin material layer 110 to the touch film layer 121, which forms a touch signal from it. The touch signal may include the content and the position of the occupant's action and serves as the first interaction signal.
Fig. 5 is a flowchart of another sensing method according to an embodiment of the present disclosure. With reference to fig. 4 to 5, the method includes:
S501, receiving a first interaction signal of the occupant sensed by the first sensing unit;
Specifically, the touch film layer 121 senses the occupant's touch action and forms a touch signal, which serves as the first interaction signal; the touch film layer 121 sends the touch signal to the control unit 140, which receives it.
S502, receiving a second interaction signal of the occupant sensed by the second sensing unit;
Specifically, the touch signal may also be transmitted to the second sensing unit 130, which forms a second interaction signal from it; this signal may include the position of the action the occupant performed on the surface of the skin material layer 110. By responding to the second interaction signal as well, the control unit 140 improves the accuracy of the touch signal and thus the reliability and accuracy of human-vehicle interaction.
S503, generating a third interaction signal in response to the first and second interaction signals, and sending a control signal to the component corresponding to the third interaction signal; the control signal causes the component to execute the corresponding action.
Specifically, the control unit 140 matches the touch signal against the pre-stored interaction signals; once the match succeeds, it generates a third interaction signal and sends a control signal to the component 150 corresponding to the third interaction signal, and the control signal causes the component 150 to execute the corresponding action.
Illustratively, the touch film layer may include a touch layer and in-film electronic circuitry. For example, the touch layer is a capacitive film layer comprising a plurality of capacitors arranged in an array.
When the occupant touches the surface of the skin material layer 110, the touch acts on the capacitive film layer and changes the capacitance of the capacitors at the touch position. The capacitance-change information is transmitted to the in-film electronic circuit, which determines the content and position of the touch instruction from which capacitors changed and by how much, forming a touch signal that serves as the first interaction signal.
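The following Python sketch mimics that decoding step on a toy capacitance grid; the grid size, threshold, and tap/press heuristic are all assumptions, not taken from the patent:

```python
import numpy as np

THRESHOLD = 0.3  # assumed minimum capacitance change that counts as a touch

def touch_signal(delta_cap: np.ndarray):
    """delta_cap: 2-D array of per-capacitor capacitance changes."""
    touched = np.argwhere(delta_cap > THRESHOLD)
    if touched.size == 0:
        return None
    row, col = touched.mean(axis=0)                    # centroid of changed cells
    content = "tap" if len(touched) <= 4 else "press"  # crude content guess
    return {"position": (float(row), float(col)), "content": content}

grid = np.zeros((8, 8))
grid[2, 3] = 0.9           # simulated touch near cell (2, 3)
print(touch_signal(grid))  # {'position': (2.0, 3.0), 'content': 'tap'}
```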
Fig. 6 is a schematic structural diagram of another sensing device according to an embodiment of the present disclosure. As shown in fig. 6, in other embodiments, the second sensing unit 130 may include a touch film layer 121; the second interaction signal is an interaction signal sensed by the touch film layer 121.
Specifically, the touch film layer 121 is disposed in the second sensing unit 130 and may be in contact with and/or fixedly connected to the skin material layer 110. When the occupant touches the surface of the skin material layer 110, the force is transmitted through the skin material layer 110 to the touch film layer 121, which forms a touch signal containing the content and position of the action; this signal serves as the second interaction signal.
Fig. 7 is a flowchart of another sensing method according to an embodiment of the present disclosure. With reference to fig. 6 to 7, the method includes:
S701, receiving a first interaction signal of the occupant sensed by the first sensing unit;
Specifically, the touch signal sensed by the second sensing unit 130 may also be transmitted to the first sensing unit 120, which forms a first interaction signal from it; this signal may include the position of the action the occupant performed on the surface of the skin material layer 110. By responding to the first interaction signal as well, the control unit 140 improves the accuracy of the touch signal and thus the reliability and accuracy of human-vehicle interaction.
S702, receiving a second interaction signal of the occupant sensed by the second sensing unit;
Specifically, the touch film layer 121 senses the occupant's touch action and forms a touch signal, which serves as the second interaction signal; the touch film layer 121 sends the touch signal to the control unit 140, which receives it.
S703, generating a third interaction signal in response to the first and second interaction signals, and sending a control signal to the component corresponding to the third interaction signal; the control signal causes the component to execute the corresponding action.
Specifically, when the touch signal serves as the second interaction signal, the control unit 140 matches it against the pre-stored interaction signals; once the match succeeds, it generates a third interaction signal and sends a control signal to the component 150 corresponding to the third interaction signal, and the control signal causes the component 150 to execute the corresponding action.
In other embodiments, both the first sensing unit and the second sensing unit may include a touch film layer, so that the two units simultaneously form touch signals from the occupant's action on the surface of the skin material layer 110. The control unit 140 forms a third interaction signal in response to both touch signals and sends a control signal to the corresponding component 150, which executes the corresponding action; cross-checking the two layers improves the reliability and accuracy of human-vehicle interaction, as sketched below.
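A hedged sketch of such a cross-check, reusing the touch-signal dictionaries from the previous sketch; the distance tolerance is an assumption:

```python
import math

def fuse_touch(first: dict, second: dict, tol: float = 1.5):
    """Accept a touch only when both film layers report nearby positions."""
    if first is None or second is None:
        return None
    if math.dist(first["position"], second["position"]) > tol:
        return None  # the layers disagree: treat as noise and reject
    x = (first["position"][0] + second["position"][0]) / 2
    y = (first["position"][1] + second["position"][1]) / 2
    return {"position": (x, y), "content": first["content"]}  # third signal

print(fuse_touch({"position": (2.0, 3.0), "content": "tap"},
                 {"position": (2.2, 3.1), "content": "tap"}))
```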
Fig. 8 is a schematic structural diagram of another sensing device according to an embodiment of the present application. As shown in fig. 8, the skin material layer 110 is made of a transparent material, at least one sensing component of the first sensing unit 120 or the second sensing unit 130 includes a vision sensor 122, and the first or second interaction signal is generated by the vision sensor 122 in response to a posture action of the occupant.
Specifically, fig. 8 shows, by way of example, the first sensing unit 120 including the vision sensor 122. The skin material layer 110 may be made of a transparent material such as glass. When the layer is transparent, the vision sensor 122 has a direct view of the occupant's posture action: it can capture the content of the action through the skin material layer 110, so an interaction signal can be formed from that action.
Fig. 9 is a flowchart of another sensing method provided in the embodiment of the present application when the sensing component in the first sensing unit 120 includes the visual sensor 122. With reference to fig. 8 to 9, the method includes:
S901, receiving a first interaction signal of the occupant sensed by the first sensing unit;
Specifically, the vision sensor 122 in the first sensing unit 120 senses the interaction signal generated by the occupant's posture action; this serves as the first interaction signal, which the vision sensor 122 sends to the control unit 140.
Illustratively, the posture action may be a hand gesture: the vision sensor 122 captures a gesture image, determines the content of the gesture from the image, recognizes the gesture, and forms the first interaction signal from it (a toy sketch of this pipeline appears after the flows below).
S902, receiving a second interaction signal of the passenger sensed by the second sensing unit;
s903, responding to the first interactive signal and the second interactive signal, generating a third interactive signal, and sending a control signal to a component corresponding to the third interactive signal; the control signal control part executes the action corresponding to the control signal.
Fig. 10 is a flowchart of another sensing method provided in the embodiment of the present application when the sensing component in the second sensing unit 130 includes the visual sensor 122. As shown in fig. 10, the method includes:
S1001, receiving a first interaction signal of the occupant sensed by the first sensing unit;
S1002, receiving a second interaction signal of the occupant sensed by the second sensing unit;
Specifically, the vision sensor 122 in the second sensing unit 130 senses the interaction signal generated by the occupant's posture action; this serves as the second interaction signal, which the vision sensor 122 sends to the control unit 140.
Illustratively, the posture action may be a hand gesture: the vision sensor 122 captures a gesture image, determines the content of the gesture from the image, recognizes the gesture, and forms the second interaction signal from it.
S1003, generating a third interaction signal in response to the first and second interaction signals, and sending a control signal to the component corresponding to the third interaction signal; the control signal causes the component to execute the corresponding action.
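As a toy illustration of the gesture pipeline above, the following sketch classifies a binary hand mask by area; a real system would run a trained hand-pose model, and every name and threshold here is invented:

```python
import numpy as np

def recognize_gesture(mask: np.ndarray):
    """Map a binary hand silhouette to a gesture label (toy heuristic)."""
    area = int(mask.sum())
    if area == 0:
        return None
    return "open_palm" if area > mask.size * 0.2 else "pointing_finger"

def interaction_signal(mask: np.ndarray):
    gesture = recognize_gesture(mask)
    return None if gesture is None else {"source": "vision_sensor", "action": gesture}

mask = np.zeros((64, 64))
mask[10:40, 10:40] = 1          # simulated hand silhouette seen through the skin
print(interaction_signal(mask))  # {'source': 'vision_sensor', 'action': 'open_palm'}
```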
It should be noted that, in other embodiments, the first and/or second sensing unit may include both an electric field sensor and a vision sensor. This not only allows multiple kinds of interaction signals to be sensed but also improves the sensing accuracy of the unit, so the control unit receives more accurate first and/or second interaction signals and human-vehicle interaction becomes more reliable.
The first and second sensing units may further include other sensors, for example pressure sensors and photosensitive sensors, to sense still more kinds of interaction signals. The control unit can then receive more kinds of first and/or second interaction signals, further supporting richer interaction modes and scenarios and improving the intelligence of the vehicle.
Fig. 11a is a schematic structural diagram of another sensing device according to an embodiment of the present disclosure. As shown in fig. 11a, the sensing device further includes a display module 160, and the display module 160 may be disposed between the skin material layer 110 and the first sensing unit 120, or the display module 160 may be disposed between the skin material layer 110 and the second sensing unit 130. In fig. 11a, it is exemplarily shown that the display module 160 is disposed between the skin material layer 110 and the first sensing unit 120. The display module 160 has a display function. When the material of the skin material layer 110 is transparent, the information displayed by the display module 160 can be transmitted to the passenger through the skin material layer 110, so as to remind the passenger of the current interaction state.
For example, if the control unit 140 determines from the third interaction signal that the display module 160 should display mail information, then, as shown in fig. 11b, the control unit 140 sends a corresponding control signal to the display module 160, which displays the mail, an icon of the mail, or the like.
When the sensing device includes the display module 160 and the material of the skin material layer 110 is a transparent material, fig. 12 is a flowchart of another sensing method according to an embodiment of the present disclosure. As shown in fig. 12, the method includes:
S1201, receiving a first interaction signal of the occupant sensed by the first sensing unit;
S1202, receiving a second interaction signal of the occupant sensed by the second sensing unit;
S1203, generating a third interaction signal in response to the first and second interaction signals, and sending a control signal to the component corresponding to the third interaction signal; the control signal causes the component to execute the corresponding action.
S1204, sending a first display instruction to the display module, wherein the first display instruction is used for displaying information corresponding to the first interactive signal.
Specifically, the display module 160 may be electrically connected to the first sensing unit 120. After the first sensing unit 120 senses the occupant's first interaction signal, it may send a first display instruction to the display module 160, which displays the information corresponding to the first interaction signal to inform the occupant of the current interaction state.
Fig. 13 is a flowchart of another sensing method according to an embodiment of the present disclosure. As shown in fig. 13, the method includes:
S1301, receiving a first interaction signal of the occupant sensed by the first sensing unit;
S1302, receiving a second interaction signal of the occupant sensed by the second sensing unit;
S1303, generating a third interaction signal in response to the first and second interaction signals, and sending a control signal to the component corresponding to the third interaction signal; the control signal causes the component to execute the corresponding action.
And S1304, sending a second display instruction to the display module, wherein the second display instruction is used for displaying information corresponding to the second interactive signal.
Specifically, the display module 160 may be electrically connected to the second sensing unit 130. After the second sensing unit 130 senses the occupant's second interaction signal, it may send a second display instruction to the display module 160, which displays the information corresponding to the second interaction signal to inform the occupant of the current interaction state.
Fig. 14 is a flowchart of another sensing method according to an embodiment of the present disclosure. As shown in fig. 14, the method includes:
S1401, receiving a first interaction signal of the occupant sensed by the first sensing unit;
S1402, receiving a second interaction signal of the occupant sensed by the second sensing unit;
S1403, generating a third interaction signal in response to the first and second interaction signals, and sending a control signal to the component corresponding to the third interaction signal; the control signal causes the component to execute the corresponding action.
And S1404, sending a third display instruction to the display module, wherein the third display instruction is used for displaying information corresponding to the third interactive signal.
Specifically, the display module 160 may be electrically connected to the control unit 140. After the control unit 140 generates the third interaction signal from the first and second interaction signals, it may send a third display instruction to the display module 160, which displays the information corresponding to the third interaction signal to inform the occupant of the current interaction state.
In another possible implementation manner, the display module 160 is electrically connected to at least two of the first sensing unit 120, the second sensing unit 130 and the control unit 140, at least two of the first sensing unit 120, the second sensing unit 130 and the control unit 140 send a display instruction to the display module 160, and the display module 160 displays information corresponding to at least two of the first interaction signal, the second interaction signal and the third interaction signal according to the at least two display instructions.
In another possible implementation manner, when the display module 160 is electrically connected to the control unit 140, the control unit 140 may further send a fourth display instruction to the display module 160, where the fourth display instruction is used to display information corresponding to the control signal, so that the display module 160 displays the information corresponding to the control signal according to the fourth display instruction to remind a passenger of the current interaction state of the control signal.
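A compact sketch of this instruction dispatch, with the rendering faked by print() and the instruction names taken from the text above; how the module actually draws through the transparent skin layer is not specified by the patent:

```python
# Toy display module: each unit sends its display instruction and the module
# renders the corresponding information (print() stands in for rendering).
class DisplayModule:
    def show(self, instruction: str, info: str) -> None:
        print(f"[{instruction}] {info}")

display = DisplayModule()
display.show("first display instruction", "touch sensed on door armrest")
display.show("second display instruction", "gesture sensed: swipe up")
display.show("third display instruction", "window opening")
display.show("fourth display instruction", "control signal sent to window actuator")
```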
In one possible implementation, the information corresponding to the first interaction signal, the second interaction signal or the third interaction signal includes at least one of the following:
feedback information, vehicle speed information, navigation information, power information, driver status information, forward road condition information, in-vehicle temperature information, in-vehicle humidity information, tire pressure information, full vehicle information, out-vehicle temperature information, or humidity information; the feedback information is information fed back to the passenger by the sensing device according to the first interaction signal, the second interaction signal or the third interaction signal.
Specifically, the information corresponding to an interaction signal may include feedback information, i.e., preset information used to remind the passenger of the current interaction state; here the interaction signal may be the first, the second or the third interaction signal.
In some embodiments, when the display module is used for displaying the feedback information, it can show the information fed back to the passenger in response to the interaction signal. Illustratively, the feedback information may be preset interaction-success or interaction-failure display content, which may present the interaction signal and the sensing result so as to inform the passenger of the outcome of the interaction.
When the interaction signal is used for checking the state of the vehicle, the information corresponding to the interaction signal may also include vehicle state information, and the display module may display the vehicle state accordingly. For example, provided that authorization has been obtained for communication with the in-vehicle components and for networking outside the vehicle, the display module may display at least one of vehicle speed information, navigation information, power information, driver state information, front road condition information, in-vehicle temperature information, in-vehicle humidity information, tire pressure information, full vehicle information, out-of-vehicle temperature information and humidity information.
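As a rough illustration of how an interaction signal might select which of the listed status fields to display (assuming the required authorization), the mapping below is entirely hypothetical; the field names, topics and helper function are invented:

```python
# Hypothetical mapping from a status-query interaction to vehicle information.
STATUS_FIELDS = {
    "speed": ("vehicle_speed",),
    "power": ("battery_level",),
    "tires": ("tire_pressure",),
    "cabin": ("in_vehicle_temperature", "in_vehicle_humidity"),
    "route": ("navigation_route", "front_road_condition"),
}

def status_info_for(interaction_signal: dict, vehicle_state: dict) -> dict:
    """Return only the fields the occupant asked about."""
    fields = STATUS_FIELDS.get(interaction_signal.get("topic", ""), ())
    return {field: vehicle_state.get(field) for field in fields}

# Example: a "cabin" query returns temperature and humidity only.
print(status_info_for({"topic": "cabin"},
                      {"in_vehicle_temperature": 22.5, "in_vehicle_humidity": 40}))
```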
In addition, the display module can be arranged on any trim piece in the vehicle, so that any interior trim piece can display the vehicle state. This enlarges the range of human-vehicle interaction scenarios and makes in-vehicle information display more intelligent.
In some embodiments, the display module is a touch display module that includes a touch panel. The touch panel can sense an action of the passenger on the surface of the skin material layer and form an interaction signal, so the touch display module can double as the first sensing unit or the second sensing unit, and the control unit can respond to the first or second interaction signal sensed by it. This adds human-vehicle interaction modes and application scenarios and improves the intelligence of the vehicle.
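The dual role of such a touch display module can be sketched as one object exposing both a sensing interface and a display interface; the class and protocol names below are assumptions, not the disclosed design:

```python
# Sketch: a touch display module reused as a sensing unit. Names are invented.
from typing import Optional, Protocol

class SensingUnit(Protocol):
    def sense(self) -> Optional[dict]: ...

class TouchDisplayModule:
    """Acts as a display and, via its touch panel, as a sensing unit."""

    def sense(self) -> Optional[dict]:
        # The touch panel senses an occupant action on the skin material
        # surface and turns it into an interaction signal.
        return {"action": "tap", "position": (120, 48)}

    def show(self, info: dict) -> None:
        print(f"[touch display] {info}")

unit: SensingUnit = TouchDisplayModule()  # usable wherever a sensing unit is expected
```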
Fig. 15a is a schematic structural diagram of a sensing device according to an embodiment of the present application. As shown in fig. 15a, the sensing device further includes a feedback unit 170, which may be disposed between the skin material layer 110 and the first sensing unit 120, or between the skin material layer 110 and the second sensing unit 130. The feedback unit 170 may respond to the interaction signal and perform a feedback action that alerts the occupant to the state of the interaction.
It should be noted that the feedback unit 170 may be sized as needed; its size may be the same as or different from that of the skin material layer 110. In addition, the feedback unit 170 may lie between the skin material layer 110 and the first sensing unit 120 while occupying only one side of the skin material layer 110 in the horizontal plane.
Fig. 16 is a flowchart of another sensing method according to an embodiment of the present disclosure. As shown in fig. 16, the method includes:
S1601, receiving a first interaction signal of the passenger sensed by the first sensing unit;
S1602, receiving a second interaction signal of the passenger sensed by the second sensing unit;
S1603, responding to the first interaction signal and the second interaction signal, generating a third interaction signal, and sending a control signal to a component corresponding to the third interaction signal; the control signal controls the component to execute the action corresponding to the control signal.
S1604, sending a first action instruction to the feedback unit, wherein the first action instruction is used for instructing the feedback unit to execute the feedback action corresponding to the first interaction signal.
Specifically, the feedback unit 170 may be connected to the first sensing unit 120 (as shown in fig. 15a). After the first sensing unit 120 senses the first interaction signal of the occupant, it may send a first action instruction to the feedback unit 170, and the feedback unit 170 executes the feedback action corresponding to the first interaction signal according to that instruction, reminding the occupant of the interaction state of the first interaction signal.
Fig. 17 is a flowchart of another sensing method according to an embodiment of the present disclosure. As shown in fig. 17, the method includes:
S1701, receiving a first interaction signal of the passenger sensed by the first sensing unit;
S1702, receiving a second interaction signal of the passenger sensed by the second sensing unit;
S1703, responding to the first interaction signal and the second interaction signal, generating a third interaction signal, and sending a control signal to a component corresponding to the third interaction signal; the control signal controls the component to execute the action corresponding to the control signal.
S1704, sending a second action instruction to the feedback unit, wherein the second action instruction is used for instructing the feedback unit to execute the feedback action corresponding to the second interaction signal.
Specifically, the feedback unit 170 may be connected to the second sensing unit 130. After the second sensing unit 130 senses the second interaction signal of the passenger, it may send a second action instruction to the feedback unit 170, and the feedback unit 170 executes the feedback action corresponding to the second interaction signal according to that instruction, reminding the passenger of the interaction state of the second interaction signal.
Fig. 18 is a flowchart of another sensing method according to an embodiment of the present disclosure. As shown in fig. 18, the method includes:
S1801, receiving a first interaction signal of the passenger sensed by the first sensing unit;
S1802, receiving a second interaction signal of the passenger sensed by the second sensing unit;
S1803, responding to the first interaction signal and the second interaction signal, generating a third interaction signal, and sending a control signal to a component corresponding to the third interaction signal; the control signal controls the component to execute the action corresponding to the control signal.
S1804, sending a third action instruction to the feedback unit, wherein the third action instruction is used for instructing the feedback unit to execute the feedback action corresponding to the third interaction signal.
Specifically, the feedback unit 170 may be connected to the control unit 140. After the control unit 140 generates the third interaction signal from the first and second interaction signals, it may send a third action instruction to the feedback unit 170, and the feedback unit 170 executes the feedback action corresponding to the third interaction signal according to that instruction, reminding the passenger of the interaction state of the third interaction signal.
In another possible implementation, the feedback unit 170 is electrically connected to at least two of the first sensing unit 120, the second sensing unit 130 and the control unit 140. Each connected unit sends its own action instruction to the feedback unit 170, and the feedback unit 170 executes, according to those instructions, the feedback actions corresponding to at least two of the first, second and third interaction signals.
In another possible implementation, when the feedback unit 170 is electrically connected to the control unit 140, the control unit 140 may further send a fourth action instruction instructing the feedback unit 170 to execute the feedback action corresponding to the control signal, thereby reminding the passenger of the interaction state of the control signal.
In one possible implementation, as shown in fig. 15b, the feedback unit 170 includes a vibration feedback component 171 electrically connected to at least one of the first sensing unit 120, the second sensing unit 130 or the control unit 140. The vibration feedback component 171 vibrates according to at least one of the first, second or third interaction signals; in this case the feedback action is vibration.
In a possible implementation, as shown in fig. 15c, the feedback unit 170 may further include a voice module 172. After the feedback unit 170 acquires an interaction signal, the voice module 172 can give voice feedback based on it. In this arrangement the feedback unit 170 may be located on one side of the skin material layer 110; it only needs to be able to obtain the interaction signal.
For example, after at least one of the first, second or third interaction signals is transmitted to the voice module 172, the voice module 172 may announce that the interaction signal was sensed successfully. When sensing fails, the voice module 172 may instead alert the passenger that the interaction signal failed to be sensed.
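A compact sketch of a feedback unit combining the vibration feedback component and the voice module might look as follows; the class names and behaviour are illustrative assumptions only:

```python
# Illustrative feedback unit: vibrates on success, announces success/failure.
from typing import Optional

class VibrationFeedback:
    def vibrate(self) -> None:
        print("[haptics] short vibration pulse through the skin layer")

class VoiceModule:
    def announce(self, sensed_ok: bool) -> None:
        print("[voice] interaction sensed" if sensed_ok
              else "[voice] sensing failed, please try again")

class FeedbackUnit:
    def __init__(self) -> None:
        self.vibration = VibrationFeedback()
        self.voice = VoiceModule()

    def on_action_instruction(self, interaction_signal: Optional[dict]) -> None:
        sensed_ok = interaction_signal is not None
        if sensed_ok:
            self.vibration.vibrate()
        self.voice.announce(sensed_ok)

FeedbackUnit().on_action_instruction({"action": "swipe"})  # vibrate + voice
```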
In another embodiment, the skin material layer includes a tactile sensation adjusting unit. Specifically, the tactile sensation adjusting unit may be provided as a vehicle component for adjusting the tactile feel of the skin material layer. For example, it may adjust at least one of the temperature of the skin material layer, its hardness, and whether its surface is frosted or smooth.
Fig. 19 is a flowchart of another sensing method according to an embodiment of the present disclosure. As shown in fig. 19, the method includes:
S1901, receiving a first interaction signal of the passenger sensed by the first sensing unit;
S1902, receiving a second interaction signal of the passenger sensed by the second sensing unit;
S1903, responding to the first interaction signal and the second interaction signal, and generating a third interaction signal;
S1904, sending a tactile adjustment instruction to the tactile sensation adjusting unit, wherein the tactile adjustment instruction is used for instructing the tactile sensation adjusting unit to adjust the tactile feel of the component surface formed by the skin material layer; the tactile adjustment instruction is generated according to the control signal.
Specifically, the tactile sensation adjusting unit may be connected with the control unit. When the interaction information of the first or second interaction signal calls for adjusting the tactile feel of the skin material layer, the control unit responds to the two signals and forms a third interaction signal corresponding to the tactile sensation adjusting unit. The control unit then sends a tactile adjustment instruction to that unit, which adjusts the tactile feel of the skin material layer. This makes human-vehicle interaction more intelligent and improves the occupant experience.
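Illustratively, this tactile-adjustment path could be sketched as below; TactileUnit, the intent field and the instruction payload are all invented for this example:

```python
# Sketch of the tactile-adjustment flow described above; names are invented.
class TactileUnit:
    def apply(self, instruction: dict) -> None:
        # e.g. adjust temperature, hardness, or frosted/smooth finish.
        print(f"[tactile] adjusting skin surface: {instruction}")

def handle_tactile_interaction(first_signal: dict, second_signal: dict,
                               tactile_unit: TactileUnit) -> None:
    # Fuse the two signals into a third that targets the tactile adjusting unit.
    third_signal = {**first_signal, **second_signal}
    if third_signal.get("intent") == "adjust_touch":
        tactile_unit.apply({"temperature_c": 28, "finish": "frosted"})

handle_tactile_interaction({"intent": "adjust_touch"},
                           {"position": "armrest"}, TactileUnit())
```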
In another embodiment, the skin material layer may further include a form adjusting unit. Specifically, the form adjusting unit may likewise be a vehicle component for adjusting the form of the skin material layer; for example, it may adjust the bending state or the color of the skin material layer.
Fig. 20 is a flowchart of another sensing method according to an embodiment of the present disclosure. As shown in fig. 20, the method includes:
S2001, receiving a first interaction signal of the passenger sensed by the first sensing unit;
S2002, receiving a second interaction signal of the passenger sensed by the second sensing unit;
S2003, responding to the first interaction signal and the second interaction signal, and generating a third interaction signal;
S2004, sending a form adjustment instruction to the form adjusting unit, wherein the form adjustment instruction is used for instructing the form adjusting unit to adjust the form of the component surface formed by the skin material layer; the form adjustment instruction is generated according to the control signal.
Specifically, the form adjusting unit may be connected to the control unit. When the interaction information of the first or second interaction signal calls for adjusting the form of the skin material layer, the control unit responds to the two signals and forms a third interaction signal corresponding to the form adjusting unit. The control unit then sends a form adjustment instruction to that unit, which adjusts the form of the skin material layer. This makes human-vehicle interaction more intelligent and further improves the occupant experience.
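The form-adjustment path mirrors the tactile one; in the same hypothetical style:

```python
# Sketch of the form-adjustment flow; FormUnit and its payload are invented.
class FormUnit:
    def apply(self, instruction: dict) -> None:
        # e.g. adjust the bending state or the colour of the skin layer.
        print(f"[form] adjusting skin shape/colour: {instruction}")

def handle_form_interaction(third_signal: dict, form_unit: FormUnit) -> None:
    if third_signal.get("intent") == "adjust_form":
        form_unit.apply({"bend_deg": 10, "colour": "warm_white"})

handle_form_interaction({"intent": "adjust_form"}, FormUnit())
```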
In one possible implementation, the component corresponding to the third interaction signal includes at least one of the following:
a vehicle opening and closing member, an in-vehicle comfort adjusting member, and an in-vehicle and out-of-vehicle display member;
the control signal is sent to at least one of the vehicle opening and closing member, the in-vehicle comfort adjusting member, and the in-vehicle and out-of-vehicle display member.
Specifically, the component corresponding to the third interaction signal is the in-vehicle component that carries out the interaction intent expressed by the first and second interaction signals.
The vehicle opening and closing member may include a door, a window, a sunroof, a tailgate, and the like. Because the component corresponding to the third interaction signal includes the vehicle opening and closing member, the control unit can form the third interaction signal from the first and second interaction signals, send a control signal to the opening and closing member, and make it execute the corresponding action. Human-vehicle interaction with the opening and closing members is thus realized, making the vehicle's functions more intelligent.
The in-vehicle comfort adjusting member may include a seat slide member, a seat lift member, an air-conditioning wind-direction adjusting member, a temperature adjusting member, an audio volume adjusting member, a light adjusting member, and the like. Because the component corresponding to the third interaction signal includes the in-vehicle comfort adjusting member, the control unit can form the third interaction signal from the first and second interaction signals, send a control signal to the comfort adjusting member, and make it execute the corresponding action. Human-vehicle interaction with the comfort adjusting members is thus realized, making the vehicle's functions more intelligent.
The in-vehicle and out-of-vehicle display member may include an in-vehicle screen display member, an out-of-vehicle information display member, and the like. Because the component corresponding to the third interaction signal includes this display member, the control unit can form the third interaction signal from the first and second interaction signals, send a control signal to the display member, and make it execute the corresponding action. Human-vehicle interaction with the display members is thus realized, making the vehicle's display capabilities more intelligent.
In other embodiments, the components corresponding to the third interaction signal include at least two of the vehicle opening and closing member, the in-vehicle comfort adjusting member and the in-vehicle and out-of-vehicle display member, which further improves the functional and display intelligence of the vehicle and the occupant experience.
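One way to picture the routing from a third interaction signal to one of these component categories is a simple registry; the entries below are invented examples, not a disclosed or exhaustive list:

```python
# Hypothetical registry mapping concrete components to the three categories.
COMPONENT_REGISTRY = {
    "door": "vehicle opening/closing member",
    "window": "vehicle opening/closing member",
    "seat_slide": "in-vehicle comfort adjusting member",
    "ac_direction": "in-vehicle comfort adjusting member",
    "cabin_screen": "in/out-of-vehicle display member",
}

def route_control_signal(third_signal: dict) -> tuple:
    component = third_signal["component"]            # e.g. "window"
    category = COMPONENT_REGISTRY[component]
    command = third_signal.get("command", "noop")
    # The control signal makes the component execute the matching action.
    return component, category, command

print(route_control_signal({"component": "window", "command": "open"}))
```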
Fig. 21 is a schematic structural diagram of another sensing device according to an embodiment of the present application. As shown in fig. 21, the sensing device further includes a power supply unit 180 and a switch unit 190, and the switch unit 190 can control a state in which the power supply unit 180 supplies a power supply signal to the first sensing unit 120 and the second sensing unit 130.
When the sensing device includes a power supply unit and a switch unit, fig. 22 is a flowchart of another sensing method provided in the embodiment of the present application. As shown in fig. 22, before receiving a first interaction signal from an occupant sensed by the first sensing unit and receiving a second interaction signal from an occupant sensed by the second sensing unit, the method further comprises:
S2201, sending a control instruction to the switch unit, wherein the control instruction is used for controlling the state in which the power supply unit provides the power supply signal to the first sensing unit and the second sensing unit.
Specifically, the switch unit 190 may be a mechanical switch, in which case the occupant controls its state by operating it directly. The switch unit 190 may also be a soft switch, such as a transistor, in which case a control instruction is delivered as a signal that sets the state of the switch unit 190.
The control instruction may be an instruction for the power supply unit 180 to supply power to the first sensing unit 120 and the second sensing unit 130. In that case the instruction turns the switch unit 190 on, so the power signal from the power supply unit 180 reaches the first sensing unit 120 and the second sensing unit 130 through the switch unit 190, and both units can operate normally on the interaction signals. When the control instruction is such a power-on instruction, as shown in fig. 22, the method further includes:
S2202, receiving a first interaction signal of the passenger sensed by the first sensing unit;
S2203, receiving a second interaction signal of the passenger sensed by the second sensing unit;
S2204, responding to the first interaction signal and the second interaction signal, generating a third interaction signal, and sending a control signal to a component corresponding to the third interaction signal; the control signal controls the component to execute the action corresponding to the control signal.
In addition, the control instruction may also be an instruction for the power supply unit 180 to stop supplying power to the first sensing unit 120 and the second sensing unit 130. In that case the instruction turns the switch unit 190 off, so the power signal cannot reach the two sensing units, they cannot be powered on, and the occupant cannot interact with the vehicle through them. This gives the occupant an additional choice.
In one possible implementation, at least one of the first sensing unit 120 and the second sensing unit 130 is connected to the power supply unit 180 through a switching device in the switch unit 190. For example, the first sensing unit 120 and the second sensing unit 130 may be connected in series, with one of them connected to the switch unit 190; either unit is then connected to the power supply unit 180 directly or indirectly through the switch unit 190. Sending a control instruction that turns the switch unit 190 on or off therefore controls whether the power supply unit 180 supplies the power signal, directly or indirectly, to both sensing units.
In another possible implementation, the first sensing unit 120 and the second sensing unit 130 may be connected in parallel, with both connected to the switch unit 190, so that each is connected to the power supply unit 180 directly through the switch unit 190. Sending a control instruction that turns the switch unit 190 on or off then controls whether the power supply unit 180 supplies the power signal directly to both sensing units.
It should be noted that, when the first sensing unit 120 and the second sensing unit 130 are connected in parallel, the switch unit 190 can separately control the conduction state between the power supply unit 180 and each sensing unit as required; in that case the switch unit 190 includes at least two switching devices.
In other embodiments, the switch unit 190 controls the conduction states between the power supply unit 180 and both sensing units simultaneously; it may then include only one switching device, so that the two conduction states are always the same.
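A toy model of the two gating alternatives (a single shared switching device versus one device per sensing unit) might look as follows; SwitchUnit and its interface are assumptions made for illustration:

```python
# Sketch of the switch unit gating power to the two sensing units.
from typing import Optional

class SwitchUnit:
    def __init__(self, independent: bool) -> None:
        # independent=True models parallel wiring with one device per unit;
        # independent=False models a single device controlling both at once.
        self.independent = independent
        self.states = {"first": False, "second": False}

    def set(self, on: bool, unit: Optional[str] = None) -> None:
        if self.independent and unit is not None:
            self.states[unit] = on
        else:
            self.states = {name: on for name in self.states}

    def powered(self, unit: str) -> bool:
        return self.states[unit]

switch = SwitchUnit(independent=True)
switch.set(True, "first")   # power only the first sensing unit
assert switch.powered("first") and not switch.powered("second")
```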
In an alternative embodiment, the present application provides a sensing device. The sensing device includes:
a processor and an interface circuit;
the processor is coupled to the memory through the interface circuit, and the processor is configured to execute the program code in the memory to implement the sensing method provided in any embodiment of the present application. Since the processor can implement the sensing method provided in any embodiment of the present application, the beneficial effects of the sensing method provided in any embodiment of the present application are achieved, and are not described herein again.
In an alternative embodiment, the present application further provides a computer-readable storage medium. The computer-readable storage medium stores instructions for executing the sensing method provided by any embodiment of the present application.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In an optional implementation manner, the embodiment of the application further provides a vehicle. Fig. 23 is a schematic structural diagram of a vehicle according to an embodiment of the present application. As shown in fig. 23, the vehicle 60 includes the sensing device 70 provided in any embodiment of the present application, and has the beneficial effects of the sensing device provided in any embodiment of the present application, which are not described herein again.
For example, fig. 24 is a schematic structural diagram of another vehicle provided in an embodiment of the present application. On the basis of the above embodiment, referring to fig. 24, this embodiment provides a readable storage medium 62 storing a software program; when the instructions in the readable storage medium 62 are executed by a processor 61 of the vehicle 60, the vehicle 60 can execute the sensing method of any of the above embodiments. The method includes: receiving a first interaction signal of the occupant sensed by the first sensing unit; receiving a second interaction signal of the occupant sensed by the second sensing unit; responding to the first interaction signal and the second interaction signal, generating a third interaction signal, and sending a control signal to a component corresponding to the third interaction signal; the control signal controls the component to execute the action corresponding to the control signal.
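Putting the pieces together, the claimed method reduces to the short end-to-end sketch below, again under invented names (StubSensingUnit stands in for real sensing hardware):

```python
# End-to-end sketch: sense two signals, fuse into a third, emit a control signal.
from typing import Optional

class StubSensingUnit:
    def __init__(self, signal: Optional[dict]) -> None:
        self.signal = signal

    def sense(self) -> Optional[dict]:
        return self.signal

def sensing_method(first_unit: StubSensingUnit,
                   second_unit: StubSensingUnit) -> Optional[dict]:
    first_signal = first_unit.sense()      # first interaction signal
    second_signal = second_unit.sense()    # second interaction signal
    if not (first_signal and second_signal):
        return None
    third_signal = {**first_signal, **second_signal}
    # The control signal goes to the component matching the third signal.
    return {"component": "window", "command": "open", "source": third_signal}

print(sensing_method(StubSensingUnit({"action": "press"}),
                     StubSensingUnit({"position": "armrest"})))
```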
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (14)

1. A sensing method, applied to a vehicle comprising a skin material layer intended to constitute at least part of a surface of one or more components of the vehicle and to be touched by an occupant of the vehicle, a first sensing unit and a second sensing unit; the first sensing unit and the second sensing unit are on a side of the skin material layer facing an interior of the one or more components of the vehicle; the first sensing unit and the second sensing unit are electrically connected, the method comprising:
receiving a first interaction signal from the occupant sensed by the first sensing unit;
receiving a second interaction signal from the occupant sensed by the second sensing unit;
responding to the first interaction signal and the second interaction signal, generating a third interaction signal, and sending a control signal to a component corresponding to the third interaction signal; the control signal controls the component to execute the action corresponding to the control signal.
2. The sensing method according to claim 1, wherein the at least one sensing component included in the first sensing unit and the at least one sensing component included in the second sensing unit are different types of sensing components; the first interaction signal or the second interaction signal is any one of:
sensing the action of the passenger formed on the surface of the skin material layer or the nearby space;
sensing the position of the motion.
3. The sensing method according to claim 1 or 2, wherein the first sensing unit comprises a touch film layer, and the first interaction signal is an interaction signal sensed by the touch film layer;
and/or the second sensing unit comprises a touch control film layer, and the second interaction signal is an interaction signal sensed by the touch control film layer.
4. A sensing method according to any one of claims 1-3, wherein the material of the skin material layer is a transparent material, at least one sensing component of the first sensing unit and the second sensing unit comprises a visual sensor, and the first interaction signal or the second interaction signal is generated by the visual sensor in response to a postural movement of the occupant.
5. A sensing method according to any one of claims 1-4, characterized in that the material of the skin material layer is a transparent material, the method further comprising at least one of:
sending a first display instruction to a display module, wherein the first display instruction is used for displaying information corresponding to the first interaction signal;
sending a second display instruction to a display module, wherein the second display instruction is used for displaying information corresponding to the second interaction signal;
and sending a third display instruction to a display module, wherein the third display instruction is used for displaying information corresponding to the third interaction signal.
6. The sensing method of claim 5, wherein the information corresponding to the first interaction signal, the information corresponding to the second interaction signal, or the information corresponding to the third interaction signal comprises at least one of:
feedback information, vehicle speed information, navigation information, power information, driver status information, forward road condition information, in-vehicle temperature information, in-vehicle humidity information, tire pressure information, full vehicle information, out-vehicle temperature information, or humidity information; wherein the feedback information is information fed back to the occupant by the sensing device according to the first interaction signal, the second interaction signal or the third interaction signal.
7. The sensing method according to any one of claims 1-6, characterized in that the method further comprises at least one of the following:
sending a first action instruction to a feedback unit, wherein the first action instruction is used for instructing the feedback unit to execute an action corresponding to the feedback of the first interaction signal;
sending a second action instruction to a feedback unit, wherein the second action instruction is used for instructing the feedback unit to execute an action corresponding to the feedback of the second interaction signal;
and sending a third action instruction to a feedback unit, wherein the third action instruction is used for instructing the feedback unit to execute an action corresponding to the feedback of the third interaction signal.
8. The sensing method according to any one of claims 1-7, wherein the skin material layer comprises a tactile sensation adjustment unit; the method further comprises:
sending a tactile sensation adjustment instruction to the tactile sensation adjustment unit, the tactile sensation adjustment instruction being used for instructing the tactile sensation adjustment unit to adjust the tactile sensation of the surface of the part formed by the skin material layer; the tactile sensation adjustment instruction is generated according to the control signal.
9. The sensing method according to any one of claims 1-8, wherein the skin material layer comprises a form adjustment unit; the method further comprises:
sending a form adjustment instruction to the form adjustment unit, the form adjustment instruction being used for instructing the form adjustment unit to adjust the form of the surface of the part formed by the skin material layer; the form adjustment instruction is generated according to the control signal.
10. The sensing method according to any one of claims 1-9, wherein the component corresponding to the third interaction signal comprises at least one of:
a vehicle opening and closing member, an in-vehicle comfort adjusting member, and an in-vehicle and out-of-vehicle display member;
wherein the control signal is sent to at least one of the vehicle opening and closing member, the in-vehicle comfort adjusting member, and the in-vehicle and out-of-vehicle display member.
11. The sensing method according to any one of claims 1-10, wherein the vehicle further comprises a power supply unit and a switch unit, the method further comprising:
sending a control instruction to the switch unit, wherein the control instruction is used for controlling the power supply unit to provide power supply signals for the first sensing unit and the second sensing unit.
12. A sensing device, comprising:
a processor and an interface circuit;
wherein the processor is coupled to a memory through the interface circuit, the processor being configured to execute program code in the memory to implement the method of any of claims 1-11.
13. A computer-readable storage medium storing instructions for performing the method of any one of claims 1-11.
14. A vehicle, characterized in that it comprises the sensing device according to claim 12.