CN216210968U - Induction device and vehicle - Google Patents

Induction device and vehicle

Info

Publication number
CN216210968U
Authority
CN
China
Prior art keywords
unit
signal
sensing
vehicle
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202122706337.2U
Other languages
Chinese (zh)
Inventor
李晓东
沈旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jidu Technology Co., Ltd.
Original Assignee
Jidu Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jidu Technology Co., Ltd.
Priority to CN202122706337.2U
Application granted
Publication of CN216210968U

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a sensing device and a vehicle. The sensing device is used for interaction in a vehicle and comprises a skin material layer, a first sensing unit, a second sensing unit and a control unit. The first sensing unit comprises at least one sensing component and the second sensing unit comprises at least one sensing component, and the two are different types of sensing components. The first sensing unit is used for sensing a first interaction signal from an occupant; the second sensing unit is used for sensing a second interaction signal from the occupant; the control unit is used for generating a third interaction signal in response to the first interaction signal and the second interaction signal, and for sending a control signal to the component corresponding to the third interaction signal; the control signal is used for controlling that component to execute the action corresponding to the control signal. The device can therefore support richer human-vehicle interaction modes and application scenarios. Moreover, a single interaction signal can be matched with the actions of multiple components, so that multi-modal human-vehicle interaction can be realized and the intelligence of the vehicle is improved.

Description

Induction device and vehicle
Technical Field
Embodiments of the present application relate to the technical field of intelligent-vehicle interaction, and in particular to a sensing device and a vehicle.
Background
With the development of technology, smart-skin technology, represented by In-Mold Electronics (IME), is becoming increasingly mature. In the prior art, smart-skin technology is applied to vehicle components to realize intelligent control of the vehicle. However, current human-vehicle interaction modes and scenarios are single and fixed, so the intelligence of the vehicle is limited and the occupant's driving experience is reduced.
Summary of the Utility Model
The present application provides a sensing device and a vehicle, which can support richer human-vehicle interaction modes and application scenarios and improve the intelligence of the vehicle.
In a first aspect, an embodiment of the present application provides a sensing device for interaction in a vehicle. The sensing device includes a skin material layer, a first sensing unit, a second sensing unit and a control unit. The first sensing unit and the second sensing unit are electrically connected, at least one of the first sensing unit and the second sensing unit is connected with the control unit, and the skin material layer is used for forming at least one part of the surface of one or more components of the vehicle and can be touched by an occupant of the vehicle. The first sensing unit comprises at least one sensing component and the second sensing unit comprises at least one sensing component, and the two are different types of sensing components. The first sensing unit and the second sensing unit are arranged on the side of the skin material layer facing the interior of the one or more components of the vehicle. The first sensing unit is used for sensing a first interaction signal from an occupant; the second sensing unit is used for sensing a second interaction signal from the occupant; the control unit is used for generating a third interaction signal in response to the first interaction signal and the second interaction signal, and for sending a control signal to the component corresponding to the third interaction signal; the control signal is used for controlling that component to execute the action corresponding to the control signal. Because the first sensing unit and the second sensing unit can sense different interaction signals at the same time, the sensing device can respond to multiple kinds of interaction signals, and can therefore support richer human-vehicle interaction modes and application scenarios, improving the intelligence of the vehicle. In addition, when forming the control signal, the control unit can match one interaction signal with the actions of multiple components, thereby realizing multi-modal human-vehicle interaction and improving the multi-modality and intelligence of the vehicle.
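As a rough illustration of this signal flow only (the utility model does not prescribe any software design, and all class, method and rule names below are invented for the sketch), a control unit that fuses two interaction signals into a third signal and dispatches a control signal to the matching component could be modelled as follows:

```python
# Illustrative sketch only; InteractionSignal, ControlUnit and the rule keys
# are hypothetical names, not part of the patent.
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Tuple


@dataclass(frozen=True)
class InteractionSignal:
    kind: str      # e.g. "touch", "gesture", "position"
    payload: str   # e.g. "double_tap", "swipe_up", "armrest_left"


class ControlUnit:
    """Fuses a first and second interaction signal into a third signal and
    sends a control signal to the component mapped to that third signal."""

    def __init__(self) -> None:
        # (first.payload, second.payload) -> (component name, action)
        self.rules: Dict[Tuple[str, str], Tuple[str, str]] = {}
        self.components: Dict[str, Callable[[str], None]] = {}

    def register_component(self, name: str, actuator: Callable[[str], None]) -> None:
        self.components[name] = actuator

    def add_rule(self, first: str, second: str, component: str, action: str) -> None:
        self.rules[(first, second)] = (component, action)

    def handle(self, first: InteractionSignal, second: InteractionSignal) -> Optional[str]:
        third = self.rules.get((first.payload, second.payload))
        if third is None:
            return None                      # no pre-stored interaction matched
        component, action = third
        self.components[component](action)   # the "control signal" to the component
        return f"{component}:{action}"


if __name__ == "__main__":
    cu = ControlUnit()
    cu.register_component("window", lambda a: print(f"window actuator -> {a}"))
    cu.add_rule("swipe_up", "armrest_left", "window", "close")
    cu.handle(InteractionSignal("gesture", "swipe_up"),
              InteractionSignal("position", "armrest_left"))
```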
Optionally, the first interaction signal or the second interaction signal is any one of:
an action of the occupant performed on the surface of the skin material layer or in the space near it;
the position of the sensed action.
Because the first interaction signal and the second interaction signal can be interaction signals acquired in different forms, when the first sensing unit senses the first interaction signal and the second sensing unit senses the second interaction signal, the sensing device can sense multiple kinds of interaction signals, and can therefore support richer human-vehicle interaction modes and application scenarios, improving the intelligence of the vehicle.
Optionally, the first sensing unit includes a touch film layer; the touch film layer is fixedly connected with the skin material layer and is used for sensing the first interaction signal from the skin material layer; and/or the second sensing unit includes a touch film layer; the touch film layer is fixedly connected with the skin material layer and is used for sensing the second interaction signal from the skin material layer. The control unit can then use the first interaction signal and the second interaction signal together to improve the accuracy of the touch signal, thereby improving the reliability and accuracy of human-vehicle interaction.
Optionally, the material of the skin material layer is a transparent material, at least one sensing component in the first sensing unit and the second sensing unit includes a vision sensor, an output end of the vision sensor is electrically connected with the control unit, and the actions of the occupant include a posture action of the occupant;
the vision sensor generates the first interaction signal or the second interaction signal in response to the posture action of the occupant.
Optionally, the sensing device further includes a display module, the material of the skin material layer is a transparent material, the display module is disposed between the skin material layer and the first sensing unit or between the skin material layer and the second sensing unit, and the display module is electrically connected to at least one of the first sensing unit, the second sensing unit and the control unit. The display module is used for displaying information corresponding to the first interaction signal; and/or the display module is used for displaying information corresponding to the second interaction signal; and/or the display module is used for displaying information corresponding to the third interaction signal.
Optionally, the information corresponding to the first interactive signal, the information corresponding to the second interactive signal, or the information corresponding to the third interactive signal includes at least one of the following:
feedback information, vehicle speed information, navigation information, power information, driver status information, forward road condition information, in-vehicle temperature information, in-vehicle humidity information, tire pressure information, full vehicle information, out-vehicle temperature information, or humidity information; the feedback information is information fed back to the passenger by the sensing device according to the first interaction signal, the second interaction signal or the third interaction signal.
Optionally, the sensing device further comprises a feedback unit located between the skin material layer and the first sensing unit, or between the skin material layer and the second sensing unit. The feedback unit is used for performing a feedback action in response to the first interaction signal; and/or the feedback unit is used for performing a feedback action in response to the second interaction signal; and/or the feedback unit is used for performing a feedback action in response to the third interaction signal. The feedback action performed by the feedback unit reminds the occupant of the interaction state.
Optionally, the skin material layer comprises a tactile adjustment unit; the tactile adjustment unit is connected with the control unit and is used for adjusting, according to the control signal, the tactile feel of the component surface formed by the skin material layer, which improves the intelligence of human-vehicle interaction and further improves the occupant's experience.
Optionally, the skin material layer comprises a form adjustment unit; the form adjustment unit is connected with the control unit and is used for adjusting, according to the control signal, the form of the component surface formed by the skin material layer, which improves the intelligence of human-vehicle interaction and further improves the occupant's experience.
Optionally, the component corresponding to the third interaction signal includes at least one of:
a vehicle opening-and-closing member, an in-vehicle comfort adjustment member, and an interior/exterior display member;
the control signal is sent to at least one of the vehicle opening-and-closing member, the in-vehicle comfort adjustment member and the interior/exterior display member, so that human-vehicle interaction with the opening-and-closing member, the comfort adjustment member and the display member can be realized, improving the intelligence of the vehicle.
Optionally, the sensing device further comprises a power supply unit and a switch unit, the power supply unit being electrically connected with at least one of the first sensing unit and the second sensing unit through the switch unit; the switch unit is used for controlling whether the power supply unit supplies power signals to the first sensing unit and the second sensing unit. When the occupant switches the switch unit off, human-vehicle interaction through the first sensing unit and the second sensing unit is disabled, giving the occupant a wider choice.
In a second aspect, the embodiment of the present application further provides a vehicle, including the sensing device provided in the first aspect.
According to the technical solution of the present application, the first sensing unit senses the first interaction signal, the second sensing unit senses the second interaction signal, and the control unit controls the component to execute the corresponding action according to the first interaction signal and the second interaction signal. Different interaction signals can thus be sensed simultaneously by the first sensing unit and the second sensing unit, so the sensing device can sense multiple kinds of interaction signals and support richer human-vehicle interaction modes and application scenarios. Moreover, a single interaction signal can be matched with multiple components, so that multi-modal human-vehicle interaction can be realized and the intelligence of the vehicle is improved.
Drawings
FIG. 1 is a schematic structural diagram of a skin material layer provided by the present application;
FIG. 2 is a schematic structural diagram of a sensing device according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a sensing device according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a sensing device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a sensing device according to an embodiment of the present application;
FIG. 6a is a schematic structural diagram of a sensing device according to an embodiment of the present application;
FIG. 6b is a schematic view of a sensing device according to an embodiment of the present application;
FIG. 7a is a schematic structural diagram of a sensing device according to an embodiment of the present application;
FIGS. 7b-7c are schematic views of a sensing device according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a sensing device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples.
The sensing device provided by the embodiments of the present application can be arranged in a terminal. Illustratively, the terminal may be a terminal device such as a motor vehicle, a drone, a rail car, or a bicycle. The terminal may comprise a motor vehicle, such as a smart internet-connected vehicle. When the sensing device is arranged in the terminal, interaction between an occupant and the terminal can be realized, making the terminal more intelligent. The following description takes a terminal comprising a vehicle as an example.
The sensing device of the embodiments of the present application may be provided in a vehicle that has a sensing function, or in another in-vehicle device that performs sensing for the vehicle. Such other devices include, but are not limited to: vehicle-mounted terminals, vehicle-mounted controllers, vehicle-mounted modules, vehicle-mounted components, and other sensors such as vehicle-mounted radars or vehicle-mounted cameras. Besides a vehicle, the sensing approach of the embodiments of the present application can also be applied in other intelligent terminals with a sensing function, or in components of such intelligent terminals. The intelligent terminal may be intelligent transportation equipment, smart home equipment, a robot, and the like.
It should be noted that the terms "first", "second", and the like in the description and claims of the present application and the accompanying drawings are used for distinguishing different objects, and are not used for limiting a specific order. The following embodiments of the present application may be implemented individually, or in combination with each other, and the embodiments of the present application are not limited specifically.
The sensing device of the vehicle in the embodiments of the present application includes a skin material layer. The skin material layer constitutes at least a part of the surface of one or more components of the vehicle and is intended to be touched by an occupant of the vehicle. Specifically, the skin material layer 110 may be a skin material layer of the vehicle interior. Illustratively, the skin material layer 110 may be the skin of interior trim such as seats, windows, displays, the steering wheel, and lights. The material of the skin material layer 110 may be glass, plastic, wood, leather, fabric, metal, and the like. FIG. 1 is a schematic structural diagram of a skin material layer provided in the present application. As shown in fig. 1, the skin material layer 110 may, for example, be the skin of the steering wheel, the skin of the instrument display screen, the skin of the front interior trim, the skin of the center console, and the like.
Fig. 2 is a schematic structural diagram of an induction device according to an embodiment of the present application. As shown in fig. 2, the sensing device includes a skin material layer 110, a first sensing unit 120, a second sensing unit 130, and a control unit 140; the first sensing unit 120 and the second sensing unit 130 are on a side of the skin material layer 110 facing an interior of one or more components of the vehicle. The first sensing unit 120 and the second sensing unit 130 are electrically connected, and at least one of the first sensing unit 120 and the second sensing unit 130 is respectively connected to the control unit 140.
The first sensing unit 120 is used for sensing a first interaction signal from an occupant;
the second sensing unit 130 is used for sensing a second interaction signal from the occupant;
the control unit 140 is configured to generate a third interactive signal in response to the first interactive signal and the second interactive signal, and send a control signal to a component 150 corresponding to the third interactive signal; the control signal is used to control the component 150 to perform an action corresponding to the control signal.
In some embodiments, the at least one sensing component included in the first sensing unit 120 and the at least one sensing component included in the second sensing unit 130 are different types of sensing components; the sensing component may sense the interaction signal.
When the first sensing unit 120 and the second sensing unit 130 have different sensing parts, the first sensing unit 120 and the second sensing unit 130 can be caused to sense different types of interaction signals.
For example, when the first interaction signal is a touch-type interaction signal, the sensing part in the first sensing unit 120 may sense the touch-type interaction signal. When the second interactive signal is a gesture type interactive signal, the sensing part in the second sensing unit 130 may sense the gesture type interactive signal. The first sensing unit 120 and the second sensing unit 130 are electrically connected, so that signal transmission can be realized between the first sensing unit 120 and the second sensing unit 130.
For example, after the first sensing unit 120 senses the first interaction signal, the first interaction signal may be transmitted to the second sensing unit 130; alternatively, the second sensing unit 130 senses a second interaction signal, and may transmit the second interaction signal to the first sensing unit 120.
In some embodiments, the control unit 140 may be a Domain control module of the vehicle (for example, a Domain control module of a smart cabin Domain, or a Domain control module of another Domain), may be a central control unit (CDC), and may be a control module for controlling the first sensing unit and the second sensing unit, which is not limited herein. The first sensing unit 120 and the second sensing unit 130 may be respectively connected to the control unit 140, and at this time, the first sensing unit 120 and the second sensing unit 130 may respectively provide the control unit 140 with a first interaction signal and a second interaction signal.
In a possible implementation manner, the first sensing unit 120 and the second sensing unit 130 are connected in series and then connected to the control unit 140. At this time, the first sensing unit 120 provides the control unit 140 with a first interaction signal through the second sensing unit 130, and the second sensing unit 130 directly provides the control unit 140 with a second interaction signal;
in another possible implementation manner, the second sensing unit 130 provides the second interaction signal to the control unit 140 through the first sensing unit 120, and the first sensing unit 120 directly provides the first interaction signal to the control unit 140.
During interaction in the vehicle interior, the occupant emits an interaction signal as needed, and the first sensing unit 120 activates the second sensing unit 130 according to the interaction signal.
For example, when the occupant issues a first interaction signal, the first sensing unit 120 senses the first interaction signal, the second sensing unit 130 forms a second interaction signal according to the first interaction signal, and the control unit 140 generates a third interaction signal in response to the first interaction signal and the second interaction signal and sends a control signal to the component 150 corresponding to the third interaction signal; the control signal controls the component 150 to perform the action corresponding to the control signal.
For another example, when the occupant issues a second interaction signal, the second sensing unit 130 senses the second interaction signal, the first sensing unit 120 forms a first interaction signal according to the second interaction signal, and the control unit 140 generates a third interaction signal in response to the first interaction signal and the second interaction signal and sends a control signal to the component 150 corresponding to the third interaction signal; the control signal controls the component 150 to perform the action corresponding to the control signal.
The different interaction signals are simultaneously sensed by the first sensing unit 120 and the second sensing unit 130, so that the sensing device can sense various interaction signals, and further, the sensing device can support richer human-vehicle interaction modes and application scenes, and the intelligence of the vehicle is improved.
The component 150 may be any component of the vehicle, and there may be one or more such components. For example, the component 150 may be a mechanism that adjusts any of a seat, a window, a display screen, or a light. In addition, the component 150 may also correspond to the skin material layer 110.
Illustratively, when the skin material layer 110 is a skin material layer of a seat, the component 150 may be an actuator that adjusts the seat. When the layer of skin material 110 is that of a vehicle window, the component 150 may be an actuator for adjusting the vehicle window.
In other embodiments, when the control unit 140 forms the control signal, one interaction signal can be matched with the actions of a plurality of components 150, so as to implement multi-modal human-vehicle interaction and improve the multi-modality and intelligence of the vehicle.
For example, when the interaction signal is intended to control an opening-and-closing member or a hardware function of the vehicle, the interaction may include not only the control of that opening-and-closing member or hardware function, but also human-vehicle interaction such as a voice prompt and interior or exterior display interaction, thereby realizing multi-modal human-vehicle interaction.
For example, the opening-and-closing members of the vehicle may include doors, windows, the sunroof, the tailgate, and so on, and the controllable hardware functions in the vehicle may include seat sliding, seat lifting, air-conditioning wind direction, temperature, audio volume, lighting, and so on.
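A minimal sketch of this multi-modal dispatch, assuming invented signal and component names (the mapping below is purely illustrative, not taken from the patent), might look like this: one third interaction signal fans out to several component actions at once.

```python
# Hypothetical sketch of multi-modal dispatch: one interaction signal is
# matched to actions on several components at once (opening/closing member
# plus a voice prompt plus interior/exterior displays).
from typing import Dict, List, Tuple

MULTI_MODAL_MAP: Dict[str, List[Tuple[str, str]]] = {
    # third interaction signal -> list of (component, action)
    "open_tailgate": [
        ("tailgate", "open"),
        ("voice_module", "say:opening tailgate"),
        ("exterior_display", "show:tailgate opening"),
    ],
    "cool_down_cabin": [
        ("air_conditioner", "set_temp:22"),
        ("seat", "ventilation_on"),
        ("interior_display", "show:cabin cooling"),
    ],
}


def dispatch(third_signal: str) -> List[str]:
    """Return the control messages that would be sent for one interaction."""
    return [f"{component} <- {action}"
            for component, action in MULTI_MODAL_MAP.get(third_signal, [])]


if __name__ == "__main__":
    for msg in dispatch("open_tailgate"):
        print(msg)
```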
It should be noted that fig. 2 exemplarily shows the skin material layer 110 in contact with the first sensing unit 120. In other embodiments, the skin material layer 110 need not contact the first sensing unit 120; it is sufficient that the first sensing unit 120 is on the side of the skin material layer 110 facing the interior of one or more components of the vehicle.
In one possible implementation, the first interaction signal or the second interaction signal may be any one of:
an action of the occupant performed on the surface of the skin material layer or in the space near it;
the position of the sensed action.
Specifically, the interaction signals in the present application (e.g., the first interaction signal and the second interaction signal) may be of several types, and the first interaction signal and the second interaction signal may be different types of interaction signals. Illustratively, an interaction signal may be obtained from an action of the occupant performed on the surface of the skin material layer: touching the skin material layer 110 causes a pressure change in the skin material layer 110, and the first sensing unit 120 or the second sensing unit 130 senses this pressure change, thereby forming the first interaction signal or the second interaction signal.
The actions of the occupant performed on the surface of the skin material layer may include touching, tapping, sliding, and the like. When the action is a touch, the interaction signal may be formed by touching with a limb or with another object. When the action is tapping, the continuity and number of taps may be set to form different interaction signals. When the action is sliding, the direction of the slide (for example, up-down or another working direction) may be set to form different interaction signals.
The interaction signal may also be obtained from an action of the occupant performed in the space near the skin material layer. For example, the occupant performs a preset action in the space near the skin material layer, which produces an electric-field variation in that space; the first sensing unit 120 or the second sensing unit 130 senses this electric-field variation, thereby forming the first interaction signal or the second interaction signal. The actions performed in the space near the skin material layer may include gesture actions, and different gestures can form different interaction signals. For example, a gesture may be a swipe in a particular direction, or a fixed gesture such as "OK".
The interaction signal may also be a signal obtained by sensing the position of an action. For example, when the first interaction signal is obtained by the first sensing unit 120 sensing an action of the occupant performed in the space near the skin material layer, the second sensing unit 130 may sense the position of that action to form the second interaction signal.
For another example, when the second interaction signal is obtained by the second sensing unit 130 sensing an action of the occupant performed in the space near the skin material layer, the first sensing unit 120 may sense the position of that action to form the first interaction signal.
Because the first interaction signal and the second interaction signal can be interaction signals acquired in different forms, when the first sensing unit 120 senses the first interaction signal and the second sensing unit 130 senses the second interaction signal, the sensing device can sense multiple kinds of interaction signals, and can therefore support richer human-vehicle interaction modes and application scenarios, improving the intelligence of the vehicle.
On the basis of the above technical solutions, the first sensing unit and/or the second sensing unit may include an electric field sensor. The electric field sensor senses an interaction signal according to the electric-field change produced by the occupant's interaction; the interaction signals include touch interaction signals and non-touch interaction signals.
In some embodiments, the electric field sensor can sense a change in the electric field in its surrounding area and thereby generate an electrical signal. A touch interaction signal is an interaction signal generated by contact with the skin material layer, while a non-touch interaction signal may be of various types, such as a motion (gesture) interaction signal. Different interaction commands issued by the occupant disturb the electric field around the sensor differently, so the electrical signals output by the electric field sensor differ; the interaction command can therefore be identified from the sensor output, and in the subsequent process the control unit controls the component to act according to the interaction signal, realizing human-vehicle interaction.
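As a rough sketch of this idea (the thresholds and labels below are invented for illustration; the patent does not specify how readings are classified), a peak-based rule could separate touch from non-touch interactions:

```python
# Illustrative only: thresholds and label names are assumptions. The idea is
# that different occupant actions disturb the field around the sensor in
# different ways, and the readings can be mapped back to an interaction type.
from typing import List, Optional


def classify_field_change(samples: List[float],
                          touch_threshold: float = 0.8,
                          gesture_threshold: float = 0.2) -> Optional[str]:
    """Map the peak relative field change to a coarse interaction type."""
    if not samples:
        return None
    peak = max(abs(s) for s in samples)
    if peak >= touch_threshold:
        return "touch_interaction"       # contact with the skin material layer
    if peak >= gesture_threshold:
        return "non_touch_interaction"   # e.g. a hand moving in the nearby space
    return None                          # background noise, no interaction


if __name__ == "__main__":
    print(classify_field_change([0.05, 0.31, 0.27]))  # non_touch_interaction
    print(classify_field_change([0.9, 0.95, 0.4]))    # touch_interaction
```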
Fig. 3 is a schematic structural diagram of another sensing device according to an embodiment of the present disclosure. As shown in fig. 3, the first sensing unit 120 includes a touch film layer 121; the touch film layer 121 is fixedly connected to the skin material layer 110, and the touch film layer 121 is used for sensing a first interaction signal from the skin material layer 110.
Specifically, when the first interaction signal is generated by an action of the occupant formed on the surface of the skin material layer 110, the touch film layer 121 is fixedly connected to the skin material layer 110, so that a force generated by the first interaction signal on the skin material layer 110 is transmitted to the touch film layer 121, the touch film layer 121 may generate a touch signal according to the force, and the touch signal may include an action content and an action position of the first interaction signal.
In some embodiments, the touch signal may be transmitted to the control unit 140, the control unit 140 performs matching according to the touch signal and a pre-stored interaction signal, generates a third interaction signal after the touch signal and the pre-stored interaction signal are successfully matched, and sends a control signal to a component 150 corresponding to the third interaction signal, and the control signal controls the component 150 to execute an action corresponding to the control signal.
In some embodiments, the touch signal may be further transmitted to the second sensing unit 130, the second sensing unit 130 may form a second interaction signal according to the touch signal, the second interaction signal may include information of an action position of the occupant formed on the surface of the skin material layer 110, the second interaction signal is transmitted to the control unit 140, and the control unit 140 may improve accuracy of the touch signal according to the second interaction signal, thereby improving reliability and accuracy of human-vehicle interaction.
Illustratively, the touch film layer may include a touch layer and an in-film electronic circuit; for example, the touch layer may be a capacitance film layer, which may include a plurality of capacitors arranged in an array.
When the first interaction signal is formed by an action of the occupant performed on the surface of the skin material layer 110 and that action acts on the capacitance film layer, the capacitance at the touch position in the capacitance film layer changes. The capacitance-change information is transmitted to the in-film electronic circuit, which can determine the content and position of the first interaction signal from which capacitors changed and by how much, thereby forming the touch signal.
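A minimal sketch of the in-film electronics' job as described above, under the assumption of a small array of per-capacitor change values (grid size and threshold are made up), could recover the touch position like this:

```python
# Hypothetical sketch: given per-capacitor change values from the capacitor
# film layer, recover where the occupant touched.
from typing import List, Optional, Tuple


def locate_touch(delta: List[List[float]],
                 threshold: float = 0.5) -> Optional[Tuple[int, int]]:
    """Return (row, col) of the strongest capacitance change, if any."""
    best: Optional[Tuple[int, int]] = None
    best_value = threshold
    for r, row in enumerate(delta):
        for c, value in enumerate(row):
            if value > best_value:
                best_value = value
                best = (r, c)
    return best


if __name__ == "__main__":
    # 3 x 4 capacitor array; the occupant pressed near row 1, column 2.
    deltas = [[0.0, 0.1, 0.0, 0.0],
              [0.1, 0.3, 0.9, 0.2],
              [0.0, 0.1, 0.2, 0.0]]
    print(locate_touch(deltas))  # (1, 2)
```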
Fig. 4 is a schematic structural diagram of another sensing device according to an embodiment of the present disclosure. As shown in fig. 4, in other embodiments, the second sensing unit 130 may include a touch film layer 121; the touch film layer 121 is fixedly connected to the skin material layer 110, and the touch film layer 121 is used for sensing a second interaction signal from the skin material layer 110.
Specifically, the touch film layer 121 is disposed in the second sensing unit 130, and the touch film layer 121 is fixedly connected to the skin material layer 110, so that a force generated by the second interaction signal on the skin material layer 110 is transmitted to the touch film layer 121, the touch film layer 121 can form a touch signal according to the force, and the touch signal can include an action content and an action position of the second interaction signal.
In some embodiments, the touch signal may be transmitted to the control unit 140, and the control unit 140 matches the touch signal against pre-stored interaction signals, generates a third interaction signal after a successful match, and sends a control signal to the component 150 corresponding to the third interaction signal; the control signal controls the component 150 to perform the action corresponding to the control signal.
In some embodiments, the touch signal may be further transmitted to the first sensing unit 120, the first sensing unit 120 may form a first interaction signal according to the touch signal, the first interaction signal may include motion position information of the occupant formed on the surface of the skin material layer 110, the first interaction signal is transmitted to the control unit 140, and the control unit 140 may improve accuracy of the touch signal according to the first interaction signal, thereby improving reliability and accuracy of human-vehicle interaction.
In other embodiments, the first sensing unit and the second sensing unit may each include a touch film layer, so that both units form a touch signal from the same action of the occupant on the surface of the skin material layer 110. On the basis that the control unit 140 forms a third interaction signal from these touch signals, sends a control signal to the component 150 corresponding to the third interaction signal, and the control signal controls the component 150 to perform the corresponding action, the reliability and accuracy of human-vehicle interaction can be improved.
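One way such cross-checking could work, assuming both units report a touch position for the same action (the tolerance value and function names below are invented), is to accept a touch only when the two readings agree:

```python
# Hypothetical cross-check of two touch readings to improve reliability;
# not part of the patent, just an illustration of the idea.
from typing import Optional, Tuple

Point = Tuple[float, float]


def fuse_touch(first: Optional[Point], second: Optional[Point],
               tolerance: float = 5.0) -> Optional[Point]:
    """Return the averaged position if the two units agree, else None."""
    if first is None or second is None:
        return None
    dx, dy = first[0] - second[0], first[1] - second[1]
    if (dx * dx + dy * dy) ** 0.5 > tolerance:
        return None  # readings disagree; treat the touch as unreliable
    return ((first[0] + second[0]) / 2.0, (first[1] + second[1]) / 2.0)


if __name__ == "__main__":
    print(fuse_touch((10.0, 20.0), (11.0, 21.0)))  # accepted, averaged
    print(fuse_touch((10.0, 20.0), (40.0, 90.0)))  # rejected -> None
```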
Fig. 5 is a schematic structural diagram of another sensing device according to an embodiment of the present disclosure. As shown in fig. 5, the material of the skin material layer 110 is a transparent material, at least one sensing component of the first sensing unit 120 and the second sensing unit 130 includes a vision sensor 122, an output end of the vision sensor 122 is electrically connected to the control unit 140, and the actions of the occupant include: a posture action of the occupant;
the vision sensor 122 generates a first interaction signal or a second interaction signal in response to the occupant's gestural actions.
In some embodiments, fig. 5 exemplarily shows that the first sensing unit 120 includes a vision sensor 122. The material of the skin material layer 110 may be a transparent material, for example glass. When the skin material layer 110 is transparent, the vision sensor 122 can see the occupant's posture action through the skin material layer 110, that is, it can directly acquire the content of the posture action through the layer, so that an interaction signal can be formed from the posture action.
When the sensing part in the first sensing unit 120 includes the vision sensor 122, the vision sensor 122 in the first sensing unit 120 may sense the posture action of the occupant to generate a first interaction signal.
When the sensing part in the second sensing unit 130 includes the vision sensor 122, the vision sensor 122 in the second sensing unit 130 may sense the posture action of the occupant to generate a second interaction signal.
For example, the posture action may be a gesture. A gesture image can be collected by the vision sensor 122, the content of the gesture is then determined from the image, and the gesture can thereby be recognized; in the subsequent process the control unit controls the component's action according to the gesture, realizing human-vehicle interaction.
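Sketching this pipeline without any real computer vision (the recognizer below is a stand-in stub, and all gesture labels and signal names are invented), the recognized gesture label simply maps to an interaction signal for the control unit:

```python
# Sketch only: real gesture recognition would run on camera frames; here a
# stub "recognizer" stands in for the vision sensor's classifier.
from typing import Dict, Optional

GESTURE_TO_SIGNAL: Dict[str, str] = {
    "swipe_left": "previous_track",
    "swipe_right": "next_track",
    "ok_sign": "confirm",
}


def recognize_gesture(frame_summary: str) -> Optional[str]:
    """Stand-in for the vision sensor's classifier (hypothetical)."""
    return frame_summary if frame_summary in GESTURE_TO_SIGNAL else None


def gesture_to_interaction(frame_summary: str) -> Optional[str]:
    gesture = recognize_gesture(frame_summary)
    return GESTURE_TO_SIGNAL.get(gesture) if gesture else None


if __name__ == "__main__":
    print(gesture_to_interaction("ok_sign"))  # confirm
    print(gesture_to_interaction("wave"))     # None (unrecognized)
```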
In some embodiments, the first sensing unit and/or the second sensing unit may further include an electric field sensor and a visual sensor, so that not only can various interaction signals be sensed, but also the sensing accuracy of the first sensing unit and/or the second sensing unit on the interaction signals can be improved through the electric field sensor and the visual sensor, so that the reliability of the first sensing unit and/or the second sensing unit can be improved, and the reliability of the sensing device can be further improved.
It should be noted that the first sensing unit and the second sensing unit may further include other sensors, for example, a pressure sensor and a light sensor, and the like, and are used to realize that the first sensing unit and the second sensing unit sense more kinds of interaction signals, so that the sensing device can support richer interaction modes and interaction scenes, and further improve the intelligence of the vehicle.
Fig. 6a is a schematic structural diagram of another sensing device according to an embodiment of the present disclosure. As shown in fig. 6a, the sensing device further includes a display module 160, the material of the skin material layer 110 is a transparent material, the display module 160 is disposed between the skin material layer 110 and the first sensing unit 120, or the display module 160 is disposed between the skin material layer 110 and the second sensing unit 130, and the display module 160 is electrically connected to at least one of the first sensing unit 120, the second sensing unit 130, or the control unit 140.
Specifically, fig. 6a exemplarily shows that the display module 160 is disposed between the skin material layer 110 and the first sensing unit 120, and is electrically connected with the first sensing unit 120. The display module 160 has a display function. When the material of the skin material layer 110 is transparent, the information displayed by the display module 160 can be transmitted to the passenger through the skin material layer 110, so as to remind the passenger of the current interaction state.
In a possible implementation manner, the display module 160 is configured to display information corresponding to the first interaction signal; for example, when the display module 160 is electrically connected to the first sensing unit 120, the display module 160 may receive a first interaction signal sensed by the first sensing unit 120 and display information corresponding to the first interaction signal according to the first interaction signal.
In another possible implementation manner, the display module 160 is configured to display information corresponding to the second interaction signal; for example, when the display module 160 is electrically connected to the second sensing unit 130, the display module 160 may receive the second interaction signal sensed by the second sensing unit 130 and display the corresponding information according to it.
In another possible implementation manner, the display module 160 is configured to display information corresponding to the third interactive signal. For example, when the display module 160 is electrically connected to the control unit 140, the display module 160 may receive a third interactive signal generated by the control unit 140 and display information corresponding to the third interactive signal according to the third interactive signal.
In another possible implementation manner, the display module 160 is electrically connected to at least two of the first sensing unit 120, the second sensing unit 130 and the control unit 140, and can display corresponding information according to at least two of the first interaction signal, the second interaction signal and the third interaction signal.
For example, in the control unit 140, based on the third interaction signal, it is determined that the information to be displayed by the display module 160 is the mail information, as shown in fig. 6b, the control unit 140 sends a control signal of the mail information to the display module 160 according to the third interaction signal, and the display module 160 may display a corresponding mail, or an icon of the mail, and the like according to the received control signal of the mail information.
In another possible implementation manner, when the display module 160 is connected to the control unit 140, the display module 160 may further receive a control signal generated by the control unit 140, and display information corresponding to the control signal according to the control signal.
A possible implementation manner is that the information corresponding to the first interactive signal, the information corresponding to the second interactive signal, or the information corresponding to the third interactive signal includes at least one of the following:
feedback information, vehicle speed information, navigation information, power information, driver status information, forward road condition information, in-vehicle temperature information, in-vehicle humidity information, tire pressure information, full vehicle information, out-vehicle temperature information, or humidity information; the feedback information is information fed back to the passenger by the sensing device according to the first interaction signal, the second interaction signal or the third interaction signal.
Specifically, the information corresponding to the interactive signal may include feedback information, and the feedback information may be preset information used for reminding the passenger of the current interactive state, where the interactive signal may include a first interactive signal, a second interactive signal, and a third interactive signal.
In some embodiments, when the display module is used for displaying the feedback information, the information fed back to the occupant by the sensing device according to the interaction signal can be displayed. Illustratively, the feedback information may be preset interaction success display information or interaction failure display information, which may include display information of the interaction signal and the sensing result for reminding the occupant of the interaction result.
When the interaction signal is used for checking the state of the vehicle, the information corresponding to the interaction signal may also comprise state information of the vehicle, and the display module may display the vehicle state according to the interaction signal. For example, provided that the sensing device has been authorized to communicate with the relevant in-vehicle components and with networks outside the vehicle, the display module may display at least one of vehicle speed information, navigation information, power information, driver status information, forward road-condition information, in-vehicle temperature information, in-vehicle humidity information, tire pressure information, whole-vehicle information, exterior temperature information, and exterior humidity information according to the interaction signal.
In addition, the display module can be arranged on any trim piece in the vehicle, so that any trim piece can display the state of the vehicle, which adds human-vehicle interaction scenarios and improves the intelligence of in-vehicle information display.
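As a rough sketch of how a display module might choose what to show for a given interaction signal (the signal strings, state keys and values below are placeholders, not from the patent):

```python
# Hypothetical display-selection logic: pick feedback or vehicle-state
# information depending on the interaction signal received.
from typing import Dict

VEHICLE_STATE: Dict[str, str] = {
    "vehicle_speed": "62 km/h",
    "tire_pressure": "2.4 bar",
    "cabin_temperature": "23 C",
}


def render(interaction_signal: str) -> str:
    if interaction_signal.startswith("query:"):
        key = interaction_signal.split(":", 1)[1]
        return f"{key} = {VEHICLE_STATE.get(key, 'unavailable')}"
    if interaction_signal == "interaction_ok":
        return "feedback: interaction recognized"   # feedback information
    return "feedback: interaction not recognized"


if __name__ == "__main__":
    print(render("query:tire_pressure"))
    print(render("interaction_ok"))
```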
In some embodiments, when the display module is a touch display module, the touch display module includes a touch panel, and the touch panel can sense the action of the occupant formed on the surface of the skin material layer to form an interaction signal.
Fig. 7a is a schematic structural diagram of an induction device according to an embodiment of the present disclosure. As shown in fig. 7a, the sensing device further includes a feedback unit 170, and the feedback unit 170 is between the skin material layer 110 and the first sensing unit 120. In another possible implementation, the feedback unit 170 may also be located between the skin material layer 110 and the second sensing unit 130.
The feedback unit 170 is configured to perform a feedback action in response to the first interaction signal; and/or, the feedback unit 170 is configured to perform a feedback action in response to the second interaction signal; and/or the feedback unit 170 is configured to perform a feedback action in response to the third interaction signal.
Fig. 7a exemplarily shows the feedback unit 170 between the skin material layer 110 and the first sensing unit 120. In some embodiments, the feedback unit 170 may be connected to the first sensing unit 120 (as shown in fig. 7a), and the feedback unit 170 may respond to the first interaction signal sensed by the first sensing unit 120 and perform a feedback action to remind the occupant of the interaction state.
In other embodiments, the feedback unit 170 may be connected to the second sensing unit 130, and the feedback unit 170 may respond to the second interaction signal sensed by the second sensing unit 130 and perform a feedback action to alert the occupant of the interaction state.
In other embodiments, the feedback unit 170 may be connected to the control unit 140, and the feedback unit 170 may respond to the third interaction signal generated by the control unit 140 and perform a feedback action for reminding the occupant of the interaction state.
In other embodiments, the feedback unit 170 may be further connected to at least two of the first sensing unit 120, the second sensing unit 130 and the control unit 140, and the feedback unit 170 may respond to at least two of the first interaction signal, the second interaction signal and the third interaction signal and perform a feedback action for reminding the occupant of the interaction state.
In other embodiments, when the feedback unit 170 is connected to the control unit 140, the feedback unit 170 may further respond to a control signal sent by the control unit 140 and perform a feedback action for reminding the occupant of the interaction state.
It should be noted that the feedback unit 170 may be sized as needed, and its size may be the same as or different from that of the skin material layer 110. In addition, the feedback unit 170 may be between the skin material layer 110 and the first sensing unit 120, or located to one side of the skin material layer 110 in the same horizontal plane.
As shown in fig. 7b, the feedback unit 170 includes a vibration feedback component 171, the vibration feedback component 171 is electrically connected to the first sensing unit 120, the second sensing unit 130, or the control unit 140, and the vibration feedback component 171 is configured to vibrate according to the first interaction signal, the second interaction signal, or the third interaction signal, so that the feedback action is vibration.
In one possible implementation manner, as shown in fig. 7c, the feedback unit 170 may further include a voice module 172, and after the feedback unit 170 acquires the interaction signal, the voice module 172 may perform voice information feedback according to the interaction signal. The feedback unit 170 may be on one side of the skin material layer 110, and it is only necessary that the feedback unit 170 can obtain the interaction signal.
For example, after the first interaction signal, the second interaction signal or the third interaction signal is transmitted to the voice module 172, the voice module 172 may announce to the occupant that the interaction signal was sensed successfully. When sensing of the interaction signal fails, the voice module 172 may announce to the occupant that sensing of the interaction signal failed.
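A minimal sketch of such feedback dispatch, assuming a vibration component and a voice module as in figs. 7b-7c (the method names are invented and the actuator calls are stand-ins that simply print):

```python
# Illustrative feedback dispatch; interface and pattern names are assumptions.
class FeedbackUnit:
    def vibrate(self, pattern: str) -> None:
        print(f"vibration feedback: {pattern}")

    def speak(self, text: str) -> None:
        print(f"voice feedback: {text}")

    def on_interaction(self, signal_name: str, sensed_ok: bool) -> None:
        """Remind the occupant of the interaction state."""
        if sensed_ok:
            self.vibrate("short")
            self.speak(f"{signal_name} recognized")
        else:
            self.vibrate("double")
            self.speak(f"{signal_name} not recognized, please try again")


if __name__ == "__main__":
    fb = FeedbackUnit()
    fb.on_interaction("window close gesture", sensed_ok=True)
    fb.on_interaction("seat adjust gesture", sensed_ok=False)
```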
On the basis of the above technical solutions, the skin material layer comprises a tactile adjustment unit; the tactile adjustment unit is connected with the control unit and is used for adjusting, according to the control signal, the tactile feel of the component surface formed by the skin material layer.
Specifically, the tactile adjustment unit may be provided as a vehicle component for adjusting the tactile feel of the skin material layer. For example, the tactile adjustment unit may adjust at least one of the temperature of the skin material layer, the hardness of the skin material layer, and whether its surface feels frosted or smooth.
When the interaction information of the first interaction signal or the second interaction signal is to adjust the tactile feel of the skin material layer, the control unit forms a third interaction signal from the first and second interaction signals; the third interaction signal corresponds to the tactile adjustment unit, so the control unit sends a control signal to the tactile adjustment unit and controls it to adjust the tactile feel of the skin material layer, improving the intelligence of human-vehicle interaction and the occupant's experience.
Optionally, the skin material layer may further include a form adjustment unit; the form adjustment unit is connected with the control unit and is used for adjusting, according to the control signal, the form of the component surface formed by the skin material layer.
In particular, the form adjustment unit may also be a vehicle component for adjusting the form of the skin material layer. Illustratively, the form adjustment unit may adjust the bending state, the color, and the like of the skin material layer.
When the interaction information of the first interaction signal or the second interaction signal is to adjust the form of the skin material layer, the control unit forms a third interaction signal from the first and second interaction signals; the third interaction signal corresponds to the form adjustment unit, so the control unit sends a control signal to the form adjustment unit and controls it to adjust the form of the skin material layer, improving the intelligence of human-vehicle interaction and the occupant's experience.
In one possible implementation manner, the component corresponding to the third interactive signal includes at least one of the following:
a vehicle opening-and-closing member, an in-vehicle comfort adjustment member, and an interior/exterior display member;
the control signal is sent to at least one of the vehicle opening-and-closing member, the in-vehicle comfort adjustment member and the interior/exterior display member.
Specifically, the component corresponding to the third interaction signal is the component in the vehicle that executes the interaction information carried by the first and second interaction signals.
The vehicle opening-and-closing member may include a door, a window, a sunroof, a tailgate, and the like. Because the component corresponding to the third interaction signal includes the vehicle opening-and-closing member, the control unit can form the third interaction signal from the first and second interaction signals, send a control signal to the opening-and-closing member, and control it to perform the corresponding action; human-vehicle interaction with the opening-and-closing member is thereby realized and the functional intelligence of the vehicle improved.
The in-vehicle comfort adjustment member may include a seat slide member, a seat lift member, an air-conditioning wind-direction adjuster, a temperature adjuster, an audio volume adjuster, a light adjuster, and the like. Because the component corresponding to the third interaction signal includes the in-vehicle comfort adjustment member, the control unit can form the third interaction signal from the first and second interaction signals, send a control signal to the comfort adjustment member, and control it to perform the corresponding action; human-vehicle interaction with the in-vehicle comfort adjustment member is thereby realized and the functional intelligence of the vehicle improved.
The interior/exterior display member may include an in-vehicle screen display member, an exterior information display member, and the like. Because the component corresponding to the third interaction signal includes the interior/exterior display member, the control unit can form the third interaction signal from the first and second interaction signals, send a control signal to the display member, and control it to perform the corresponding action; human-vehicle interaction with the interior/exterior display member is thereby realized and the display intelligence of the vehicle improved.
In other embodiments, the components corresponding to the third interaction signal may include at least two of the vehicle opening-and-closing member, the in-vehicle comfort adjustment member and the interior/exterior display member, which further improves the functional and display intelligence of the vehicle and the occupant's experience.
Fig. 8 is a schematic structural diagram of another sensing device according to an embodiment of the present application. As shown in fig. 8, the sensing device further includes a power supply unit 180 and a switch unit 190, wherein the power supply unit 180 is electrically connected to at least one of the first sensing unit 120 and the second sensing unit 130 through the switch unit 190; the switch unit 190 is used for controlling whether the power supply unit 180 supplies power signals to the first sensing unit 120 and the second sensing unit 130.
Specifically, the switch unit 190 may include a plurality of switching devices. The switch unit 190 may be a mechanical switch, whose state the occupant controls manually, or a soft switch such as a transistor, whose state is controlled by an instruction input by the occupant.
In one possible implementation, when at least one of the first sensing unit 120 and the second sensing unit 130 is connected to the power supply unit 180 through a switching device in the switch unit 190, the first sensing unit 120 and the second sensing unit 130 may be connected in series, with only one of them connected to the switch unit 190.
In another possible implementation, the first sensing unit 120 and the second sensing unit 130 may instead be connected in parallel, with both the first sensing unit 120 and the second sensing unit 130 connected to the switch unit 190.
It should be noted that when the first sensing unit 120 and the second sensing unit 130 are connected in parallel, the switch unit 190 can, as needed, control the conduction state between the power supply unit 180 and the first sensing unit 120 separately from the conduction state between the power supply unit 180 and the second sensing unit 130; in this case the switch unit 190 includes at least two switching devices.
When the switch unit 190 is turned on, the power signal provided by the power supply unit 180 is transmitted through the switch unit 190 to the first sensing unit 120 and the second sensing unit 130, so that both units can operate normally according to the command signal.
When the switch unit 190 is turned off, the power signal provided by the power supply unit 180 cannot reach the first sensing unit 120 and the second sensing unit 130, so neither unit is powered. This prevents an occupant from performing human-vehicle interaction through the first and second sensing units, giving the occupant more choice over when the sensing device is active.
In other embodiments, the switch unit 190 controls the conduction state between the power supply unit 180 and the first sensing unit 120 and the conduction state between the power supply unit 180 and the second sensing unit 130 simultaneously. In that case the switch unit 190 may include only a single switching device, so that the two conduction states are always the same.
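As a rough illustration of the switching behaviour described above (a sketch under assumed, hypothetical class names, not the patented design), the Python snippet below models a switch unit that gates the power supply to whichever sensing units are wired to it: one switch device per unit gives the independent control of the parallel case, while a single shared device switches both units together.

class SensingUnit:
    """Stand-in for the first or second sensing unit; it can only sense when powered."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.powered = False

    def set_power(self, on: bool) -> None:
        self.powered = on
        print(f"{self.name} {'is powered and can sense' if on else 'is unpowered and inactive'}")


class SwitchUnit:
    """One switch device gating the power supply to the sensing units wired to it.
    Wiring both units to the same device models shared control; one device per
    unit models independent control of the parallel connection."""
    def __init__(self, *units: SensingUnit) -> None:
        self._units = units

    def turn_on(self) -> None:
        for unit in self._units:
            unit.set_power(True)

    def turn_off(self) -> None:
        for unit in self._units:
            unit.set_power(False)


first = SensingUnit("first sensing unit")
second = SensingUnit("second sensing unit")

# Independent control: one switch device per unit (parallel wiring).
SwitchUnit(first).turn_on()
SwitchUnit(second).turn_off()

# Shared control: a single switch device powers both units together.
SwitchUnit(first, second).turn_off()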
An embodiment of the present application further provides a vehicle that includes the sensing device provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.

Claims (12)

1. A sensing device for interaction in a vehicle, characterized by comprising a skin material layer, a first sensing unit, a second sensing unit and a control unit; the first sensing unit and the second sensing unit are electrically connected, at least one of the first sensing unit and the second sensing unit is connected with the control unit, and the skin material layer is used for forming at least one part of the surface of one or more components of the vehicle and can be touched by an occupant of the vehicle; the at least one sensing component included in the first sensing unit and the at least one sensing component included in the second sensing unit are different types of sensing components;
the first sensing unit and the second sensing unit are on a side of the skin material layer facing an interior of the one or more components of the vehicle;
the first sensing unit is used for sensing a first interaction signal from the occupant;
the second sensing unit is used for sensing a second interaction signal from the occupant;
the control unit is used for responding to the first interaction signal and the second interaction signal, generating a third interaction signal and sending a control signal to a component corresponding to the third interaction signal; the control signal is used for controlling the component to execute the action corresponding to the control signal.
2. The sensing device of claim 1, wherein the first interaction signal or the second interaction signal is any one of:
a signal obtained by sensing an action of the occupant performed on the surface of the skin material layer or in the space near it;
a signal obtained by sensing the position of that action.
3. The sensing device according to claim 1 or 2, wherein the first sensing unit comprises a touch film layer; the touch film layer is fixedly connected with the skin material layer and is used for sensing the first interaction signal from the skin material layer;
and/or the second sensing unit comprises a touch film layer; the touch film layer is fixedly connected with the skin material layer and is used for sensing the second interaction signal from the skin material layer.
4. The sensing device according to any one of claims 1 to 3, wherein the skin material layer is made of a transparent material, at least one sensing component of the first sensing unit or the second sensing unit comprises a visual sensor, an output end of the visual sensor is electrically connected with the control unit, and the action of the occupant comprises a gesture action of the occupant;
the visual sensor generates the first interaction signal or the second interaction signal in response to the gesture action of the occupant.
5. The sensing device according to any one of claims 1 to 4, further comprising a display module, wherein the skin material layer is made of a transparent material, the display module is located between the skin material layer and the first sensing unit or between the skin material layer and the second sensing unit, and the display module is electrically connected with at least one of the first sensing unit, the second sensing unit and the control unit;
the display module is used for displaying information corresponding to the first interaction signal;
and/or the display module is used for displaying information corresponding to the second interaction signal;
and/or the display module is used for displaying information corresponding to the third interaction signal.
6. The sensing device of claim 5, wherein the information corresponding to the first interaction signal, the information corresponding to the second interaction signal, or the information corresponding to the third interaction signal comprises at least one of:
feedback information, vehicle speed information, navigation information, power information, driver status information, forward road condition information, in-vehicle temperature information, in-vehicle humidity information, tire pressure information, whole-vehicle information, exterior temperature information, or exterior humidity information; wherein the feedback information is information fed back to the occupant by the sensing device according to the first interaction signal, the second interaction signal or the third interaction signal.
7. The sensing device of any one of claims 1-6, further comprising a feedback unit between the skin material layer and the first sensing unit, or between the skin material layer and the second sensing unit;
the feedback unit is used for responding to the first interaction signal and executing a feedback action;
and/or the feedback unit is used for responding to the second interaction signal and executing a feedback action;
and/or the feedback unit is used for responding to the third interaction signal and executing a feedback action.
8. The sensing device of any one of claims 1 to 7, wherein the skin material layer comprises a tactile adjustment unit;
the tactile adjustment unit is connected with the control unit and is used for adjusting, according to the control signal, the tactile feel of the component surface formed by the skin material layer.
9. The sensing device of any one of claims 1 to 8, wherein the skin material layer comprises a shape adjustment unit;
the shape adjustment unit is connected with the control unit and is used for adjusting, according to the control signal, the shape of the component surface formed by the skin material layer.
10. The sensing device according to any one of claims 1 to 9, wherein the component corresponding to the third interaction signal comprises at least one of:
a vehicle opening and closing member, an in-vehicle comfort adjusting member, and an interior and exterior display member;
the control signal is sent to at least one of the vehicle opening and closing member, the in-vehicle comfort adjusting member, and the interior and exterior display member.
11. The sensing device according to any one of claims 1 to 10, further comprising a power supply unit and a switch unit, wherein the power supply unit is electrically connected to at least one of the first sensing unit and the second sensing unit through the switch unit; the switch unit is used for controlling the power supply unit to provide a power supply signal to the first sensing unit and the second sensing unit.
12. A vehicle comprising a sensing device according to any one of claims 1 to 11.
CN202122706337.2U 2021-11-05 2021-11-05 Induction device and vehicle Active CN216210968U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202122706337.2U CN216210968U (en) 2021-11-05 2021-11-05 Induction device and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202122706337.2U CN216210968U (en) 2021-11-05 2021-11-05 Induction device and vehicle

Publications (1)

Publication Number Publication Date
CN216210968U true CN216210968U (en) 2022-04-05

Family

ID=80904190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202122706337.2U Active CN216210968U (en) 2021-11-05 2021-11-05 Induction device and vehicle

Country Status (1)

Country Link
CN (1) CN216210968U (en)

Legal Events

Date Code Title Description
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230905

Address after: Room 844, 8th Floor, Building 1, No. 10, Hongda North Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 102600 (Yizhuang Cluster, High end Industrial Zone, Beijing Pilot Free Trade Zone)

Patentee after: Beijing Jidu Technology Co.,Ltd.

Address before: 100176 room 611, floor 6, zone 2, building a, No. 12, Hongda North Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing (Yizhuang group, high-end industrial area of Beijing Pilot Free Trade Zone)

Patentee before: Jidu Technology Co.,Ltd.