WO2022170650A1 - Contactless human-computer interaction system and method - Google Patents

Contactless human-computer interaction system and method

Info

Publication number
WO2022170650A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
action
display panel
display
focus
Prior art date
Application number
PCT/CN2021/078688
Other languages
English (en)
Chinese (zh)
Inventor
汪远
黄磊
Original Assignee
南京微纳科技研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京微纳科技研究院有限公司
Publication of WO2022170650A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Definitions

  • the present application relates to the field of human-computer interaction, and in particular, to a contactless human-computer interaction system and method.
  • Human-computer interaction is a way to realize information interaction between people and devices.
  • people can view the output information of the machine on the display screen, and the machine can obtain the feedback information that people input through devices such as a mouse and keyboard.
  • the display information of the machine can be extended from the display screen to large-screen projection, and even holographic projection.
  • the way a machine obtains feedback information is no longer limited to input devices such as a mouse and keyboard; it can also be information such as actions, sounds, and gestures.
  • the present application provides a contactless human-computer interaction system and method to solve the prior-art problem of poor user experience when a user views display information through holographic projection and needs to perform corresponding actions according to that information to achieve human-computer interaction.
  • the present application provides a contactless human-computer interaction system, including: a display unit, a touch-sensing unit, and an action recognition unit;
  • the display unit is used for displaying the imaging of the interactive content in the interactive area
  • the haptic unit is used to form a resonance focus in the interaction area, and the resonance focus is used to simulate the touch of the interactive content;
  • the motion recognition unit is configured to acquire motion information of the user in the interaction area and feed the motion information back to the display unit and the touch-sensing unit; the display unit adjusts the interactive content according to the motion information, and the haptic unit adjusts the resonance focus according to the motion information.
  • the display unit includes: a first display panel and a second display panel;
  • the first display panel is installed horizontally at the bottom of the contactless human-computer interaction system.
  • the first display panel is connected to the touch sensing unit and the motion recognition unit, and the first display panel is used for displaying the interactive content;
  • the second display panel is installed above the first display panel at an included angle to it; the second display panel is connected with the first display panel and is used to realize the imaging of the interactive content.
  • the display unit is installed below the motion recognition unit, and the first display panel of the display unit is respectively connected with the touch sensing unit and the motion recognition unit;
  • the touch-sensing unit is installed above the motion recognition unit, and the touch-sensing unit is respectively connected to the first display panel of the display unit and the motion recognition unit;
  • the motion recognition unit is installed between the display unit and the touch-sensing unit, and the motion-recognition unit is respectively connected with the first display panel of the display unit and the touch-sensing unit.
  • the display unit is installed below the motion recognition unit, and the first display panel of the display unit is respectively connected with the touch sensing unit and the motion recognition unit;
  • the touch-sensing unit is installed under the motion recognition unit, overlaps with the second display panel of the display unit, and is respectively connected to the first display panel of the display unit and the motion recognition unit;
  • the motion recognition unit is installed above the display unit and the touch-sensing unit, and the motion-recognition unit is respectively connected with the first display panel of the display unit and the touch-sensing unit.
  • the touch sensing unit includes a transparent electrode driving array, a transparent ceramic material vibration unit and a transparent packaging material.
  • the display unit is further configured to adjust the angle between the first display panel and the second display panel according to the position of the interaction area.
  • the second display panel is a transmissive imaging panel TMD.
  • the haptic unit includes an ultrasonic generating array.
  • the motion recognition unit includes a depth camera.
  • the system further includes: an auxiliary unit, where the auxiliary unit includes at least one of a temperature module, an air supply module, and a voice module.
  • the auxiliary unit includes at least one of a temperature module, an air supply module, and a voice module.
  • the imaging manner of displaying the interactive content is three-dimensional imaging.
  • the present application provides a contactless human-computer interaction method, including:
  • the action image sequence is an action image sequence of the user in the interaction area, and the action image sequence is a series of action images;
  • a new resonance focus is formed, and the resonance focus is a contact formed by the touch-sensing unit in the interaction area;
  • new interactive content is displayed, and the interactive content is the content displayed in the interactive area of the display unit.
  • the determining action information according to the action image sequence includes:
  • according to the action image sequence, determine the key points of the target part in each action image in the action image sequence;
  • a user action is determined.
  • forming a new resonance focus according to the action information includes:
  • a new resonance focus is formed according to the new resonance focus arrangement.
  • the displaying new interactive content according to the action information includes:
  • new interactive content is displayed.
  • the displaying new interactive content according to the action information includes:
  • the new interactive content is displayed.
  • the method includes:
  • the interactive content is the content displayed in the interactive area of the display unit
  • a resonance focus is formed, and the resonance focus is used to simulate the tactile sensation of the three-dimensional imaging.
  • the display unit is used to display the three-dimensional imaging of the interactive content in the interactive area;
  • the tactile feedback is obtained when the 3D image is touched;
  • the action recognition unit is used to obtain the action information of the user in the interaction area, and the action recognition unit can also send the action information to the display unit and the haptic unit;
  • the display unit can adjust the interactive content according to the action information;
  • the haptic unit can adjust the resonance focus according to the action information, thereby improving the user experience and the sense of immersion.
  • FIG. 1 is a schematic diagram of a contactless human-computer interaction system according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of another contactless human-computer interaction system provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a contactless human-computer interaction method provided by an embodiment of the present application.
  • Human-computer interaction is a way to realize information interaction between people and devices. Initially, human-computer interaction was achieved through a keyboard, mouse, and monitor. People can view the output information of the machine on the display screen, and the machine can obtain the feedback information that people input through devices such as a mouse and keyboard. With the continuous development of science and technology and the continuous growth of social needs, human-computer interaction has become more and more immersive and diverse. For example, the display information of the machine can be extended from the display screen to large-screen projection, and even holographic projection. Likewise, the way a machine obtains feedback information is no longer limited to input devices such as a mouse and keyboard, but can also be information such as actions, sounds, and gestures.
  • immersive display methods can include three-dimensional projection, Transmissive Mirror Device (TMD), etc.
  • commonly used information feedback methods may include sounds, actions, gestures, and the like.
  • the interaction between the user and the three-dimensional imaging usually relies on the user's imagination to achieve an immersive effect, and there is a problem of poor user experience.
  • the present application proposes a contactless human-computer interaction system.
  • the present application combines auxiliary functions such as vision, touch, and motion recognition to form a contactless human-computer interaction system including a display unit, a haptic unit, a motion recognition unit, and an auxiliary unit.
  • the display unit adopts TMD technology to realize the function of displaying three-dimensional imaging of interactive content in the interactive area.
  • the use of TMD technology can realize imaging without a physical display medium, providing users with realistic visual information.
  • the haptic unit adopts ultrasonic haptic feedback technology, so that the position that needs to be touched in the interactive content can form a resonance focus and generate haptic feedback, so that people can feel the actual shape and touch of the displayed object.
  • the action recognition unit realizes the acquisition of user action images based on depth imaging technologies such as binocular cameras.
  • the action recognition unit can calculate depth information from the action images. Furthermore, according to the depth information, the action recognition unit can recognize the key points of the target part in each action image.
  • the action recognition unit can then perform action modeling according to the key points, obtain an action model, and determine the user action.
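  • as a rough illustration of the pipeline described above, the following Python sketch (a hypothetical stand-in; the patent does not disclose concrete algorithms, and the toy keypoint detector is an assumption) shows the three stages: extracting key points from each depth image, stacking them into per-keypoint trajectories as an action model, and classifying the user action.

    import numpy as np

    def extract_keypoints(depth_frame):
        """Toy stand-in keypoint detector: returns an (N, 3) array of points.

        A real system would run a hand/skeleton keypoint model on the depth
        image; here the nearest pixel is returned as a single 'fingertip'
        keypoint just to keep the sketch self-contained."""
        v, u = np.unravel_index(np.argmin(depth_frame), depth_frame.shape)
        return np.array([[u, v, depth_frame[v, u]]], dtype=float)

    def build_action_model(keypoint_sequence):
        """Action model = per-keypoint trajectories stacked over time, shape (T, N, 3)."""
        return np.stack(keypoint_sequence, axis=0)

    def classify_action(trajectories, move_threshold=20.0):
        """Very coarse classifier: 'hold' if the key points barely move,
        otherwise report the dominant motion direction."""
        displacement = (trajectories[-1] - trajectories[0]).mean(axis=0)
        if np.linalg.norm(displacement) < move_threshold:
            return "hold"
        idx = int(np.argmax(np.abs(displacement)))
        sign = "+" if displacement[idx] > 0 else "-"
        return "move " + sign + "xyz"[idx]

    # Toy usage: ten synthetic depth frames whose nearest point drifts right.
    frames = []
    for t in range(10):
        frame = np.full((120, 160), 1000.0)
        frame[60, 40 + 5 * t] = 300.0
        frames.append(frame)

    model = build_action_model([extract_keypoints(f) for f in frames])
    print(classify_action(model))   # -> "move +x"
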
  • FIG. 1 shows a schematic diagram of a contactless human-computer interaction system provided by an embodiment of the present application.
  • the contactless human-computer interaction system 10 provided in this embodiment includes: a display unit 11, a touch-sensing unit 12 and a motion recognition unit 13.
  • the display unit 11 is located below the action recognition unit 13 , and is used for displaying the imaging of the interactive content in the interactive area 14 .
  • the display unit 11 may include a liquid crystal display (Liquid Crystal Display, LCD) panel.
  • the image realized by the LCD panel can be a two-dimensional plane image.
  • the resonance focus generated by the touch-sensing unit overlaps the two-dimensional plane image. The user can directly touch the resonance focus in the interaction area to realize interaction.
  • the display unit 11 may include a three-dimensional imaging display.
  • the three-dimensional imaging display is used for realizing three-dimensional imaging of the displayed content.
  • a three-dimensional image of the displayed content will be displayed in the interaction area 14 .
  • the display unit 11 displays the three-dimensional image in the interaction area 14, the resonance focus generated by the touch-sensing unit overlaps the three-dimensional image.
  • the display unit 11 may include a first display panel and a second display panel.
  • the first display panel is an LCD panel
  • the second display panel is a transmissive imaging panel TMD.
  • the first display panel is horizontally installed at the bottom of the contactless human-computer interaction system 10.
  • the second display panel is located above the first display panel.
  • the second display panel maintains a certain angle with the first display panel. The included angle can be adjusted according to actual needs, and the adjustment range of the angle is 0-45 degrees.
  • the first display panel is used for displaying interactive content.
  • the second display panel is connected to the first display panel, and is used for displaying the interactive content of the first display panel in the interactive area 14 in a three-dimensional imaging manner.
  • the display unit 11 may also determine the angle between the first display panel and the second display panel according to the position of the interaction area 14 . Furthermore, the display unit 11 can adjust the second display panel according to the angle. The position of the interaction area 14 can be determined according to actual needs.
  • the position of the interaction area 14 may be determined according to the angle between the first display panel and the second display panel.
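  • for orientation only: a transmissive mirror device typically forms its mid-air image roughly plane-symmetric to the source about the TMD plate, which is why the panel angle and the position of the interaction area are coupled. The 2D geometry sketch below illustrates that relationship; the plate angle, plate origin and source point are assumed values, not taken from the patent.

    import numpy as np

    def reflect_about_plate(point_xy, plate_angle_deg, plate_origin=(0.0, 0.0)):
        """Reflect a 2D point about a line (the TMD plate seen edge-on) passing
        through plate_origin and tilted by plate_angle_deg from horizontal.
        The mirrored point approximates where the mid-air image of a source
        point on the first display panel appears."""
        theta = np.radians(plate_angle_deg)
        along = np.array([np.cos(theta), np.sin(theta)])   # unit vector along the plate
        origin = np.asarray(plate_origin, dtype=float)
        p = np.asarray(point_xy, dtype=float) - origin
        p_par = np.dot(p, along) * along                   # component along the plate
        p_perp = p - p_par                                 # component normal to the plate
        return origin + p_par - p_perp                     # flip the normal component

    # A source point 10 cm below the plate centre, plate tilted 30 degrees (assumed):
    print(reflect_about_plate((0.05, -0.10), plate_angle_deg=30.0))
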
  • the display unit 11 is connected to the touch sensing unit 12 and the action recognition unit 13 respectively.
  • the connection method may be a connection line.
  • the display unit 11 may be connected to the touch-sensing unit 12 and the motion recognition unit 13 through connection lines such as data cables and network cables.
  • the connection mode may also be a wireless connection.
  • the display unit 11 may be connected to the touch sensing unit 12 and the motion recognition unit 13 through a wireless network, Bluetooth, or the like.
  • the haptic unit 12 is located above the action recognition unit 13, and is used to form a resonance focus in the interaction area 14, and the resonance focus is used to simulate the haptic feeling of the displayed content.
  • the haptic unit 12 may be installed on the upper end of the side wall of the contactless human-computer interaction system 10.
  • the angle between the haptic unit 12 and the side wall of the contactless human-computer interaction system 10 can be adjusted according to actual needs.
  • the installation angle of the haptic unit 12 is related to the angle of the second display panel.
  • the angle between the touch-sensitive unit 12 and the second display panel is in the range of 90-135 degrees.
  • the haptic unit 12 may be an ultrasonic generating array.
  • the haptic unit 12 transmits ultrasonic waves, the ultrasonic waves form a resonance focus in the interaction area 14 .
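  • for context, an ultrasonic array usually produces such a focus by driving each transducer with a phase offset chosen so that all wavefronts arrive at the focal point in phase. The sketch below illustrates only this focusing principle; the array layout, element pitch and 40 kHz carrier are assumptions rather than details from the patent.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s in air
    CARRIER_HZ = 40_000.0    # a common ultrasonic haptics frequency (assumed)

    def focusing_phases(element_positions, focal_point):
        """Per-element phase offsets (radians) so emissions from all elements
        arrive at focal_point in phase, forming a pressure focus there."""
        distances = np.linalg.norm(element_positions - focal_point, axis=1)
        # Elements closer to the focus are delayed so every wavefront covers
        # the same effective path length.
        delays = (distances.max() - distances) / SPEED_OF_SOUND
        return (2.0 * np.pi * CARRIER_HZ * delays) % (2.0 * np.pi)

    # A 16 x 16 array with 10 mm pitch in the z = 0 plane (assumed layout).
    xs, ys = np.meshgrid(np.arange(16) * 0.01, np.arange(16) * 0.01)
    elements = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

    focus = np.array([0.075, 0.075, 0.20])   # 20 cm above the array centre
    print(focusing_phases(elements, focus).shape)   # (256,)
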
  • the signal carried by the ultrasonic wave is preferably a corresponding music signal; using a music signal places the user in a pleasant sound background and improves the experience.
  • the haptic feedback may include shape, material, fineness, and the like.
  • the haptic unit 12 can simulate the tactile sensation of different materials by controlling the frequency and amplitude of the resonance focus.
  • the haptic unit 12 can also form the tactile sensation of a specific shape through the resonance focus, and the specific shape may be static.
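  • tactile texture at the focus is commonly rendered by amplitude-modulating the ultrasonic carrier at a lower, skin-perceivable frequency, with different modulation frequencies and depths standing in for different materials. The sketch below only illustrates generating such a drive signal; the specific frequencies and depths, and the mapping from parameters to materials, are assumptions (the patent itself prefers a music signal as the carried signal).

    import numpy as np

    def modulated_drive(duration_s, carrier_hz=40_000.0, mod_hz=200.0,
                        mod_depth=0.8, sample_rate=1_000_000):
        """Amplitude-modulated carrier for one transducer channel.

        carrier_hz : ultrasonic carrier supplying the radiation force
        mod_hz     : modulation frequency the skin actually perceives
        mod_depth  : 0..1; a larger depth tends to feel 'rougher'
        """
        t = np.arange(int(duration_s * sample_rate)) / sample_rate
        envelope = 1.0 - mod_depth * 0.5 * (1.0 + np.cos(2.0 * np.pi * mod_hz * t))
        return envelope * np.sin(2.0 * np.pi * carrier_hz * t)

    # Two hypothetical "materials": smooth (shallow, slow) vs rough (deep, fast).
    smooth = modulated_drive(0.01, mod_hz=100.0, mod_depth=0.3)
    rough = modulated_drive(0.01, mod_hz=250.0, mod_depth=0.9)
    print(smooth.shape, rough.shape)
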
  • the motion recognition unit 13 is installed in the middle of the contactless human-computer interaction system 10, below the touch-sensing unit 12, and above the display unit.
  • the action recognition unit 13 is used for acquiring the action information of the user in the interaction area 14 and feeding back the action information to the display unit 11 and the touch sensing unit 12 .
  • the display unit adjusts the interactive content according to the action information.
  • the haptic unit adjusts the resonance focus according to the motion information.
  • the motion recognition unit 13 and the haptic unit 12 are installed on the same side wall of the contactless human-computer interaction system 10.
  • the motion recognition unit 13 can also be installed in other positions, for example, above the touch sensing unit 12, below the second display panel, above the interaction area 14, below the interaction area 14, etc., which are not limited in this application.
  • the installation of the action recognition unit 13 only needs to ensure that the actions of the user in the interaction area 14 can be accurately acquired.
  • the action recognition unit 13 may be a depth camera, such as a structured light camera, a time-of-flight camera, or a binocular stereo camera, and the like.
  • the action recognition unit 13 can capture a sequence of action images of the user in the interaction area 14 . Each action image in the sequence of action images is a depth image.
  • the action recognition unit 13 can determine the action information of the user in the interaction area 14 through the action recognition algorithm.
  • the content displayed in the interaction area 14 is a three-dimensional image of a steering wheel.
  • the haptic unit 12 can form a resonance focus in the interaction area, so that the user can get haptic feedback when touching the steering wheel in the interaction area 14 .
  • the user can hold the steering wheel.
  • the action recognition unit 13 records the actions of the user in the interaction area 14 in real time. The action may include holding the steering wheel, turning the steering wheel, clicking buttons on the steering wheel, and the like.
  • the action recognition unit 13 can detect the action information of the user holding the steering wheel through the action image sequence.
  • the action recognition unit 13 may detect the action information of the user turning the steering wheel to the right through the action image sequence.
  • the contactless human-computer interaction system 10 may further include an auxiliary unit.
  • the auxiliary unit may include a temperature module, an air supply module, a voice module, and the like.
  • the temperature module is used to regulate the temperature of the interaction area 14 and its surroundings.
  • the air supply module is used to cooperate with the interactive content to realize the air supply operation. For example, when the virtual scene is a drive, the air supply module is used to implement an air supply operation according to the vehicle speed.
  • the voice module can be used to output voice information, such as playing audio files, playing music, and so on.
  • the voice module can also be used to obtain voice information, such as obtaining voice information fed back by the user to implement voice interaction.
  • the contactless human-computer interaction system includes a display unit, a touch sensing unit and a motion recognition unit.
  • the display unit is used for displaying the three-dimensional imaging of the interactive content in the interactive area.
  • the haptic unit is used to form a resonance focus in the interaction area according to the interaction content. The resonant focus allows the user to get haptic feedback when touching the three-dimensional image.
  • the action recognition unit is used to obtain the action information of the user in the interaction area.
  • the motion recognition unit may also send the motion information to the display unit and the haptic unit.
  • the display unit can adjust the interactive content according to the action information.
  • the haptic unit can adjust the resonance focus according to the motion information.
  • in the present application, by using the display unit and the haptic unit, tactile and visual simulation are realized in the same area, and three-dimensional imaging with tactile feedback is obtained.
  • the realization of the three-dimensional imaging with haptic feedback improves the authenticity of the three-dimensional imaging and improves the immersive experience effect of the user.
  • the realization of the tactile feedback makes the user's actions in the interaction process more in line with the actual actions, thereby improving the recognition accuracy of the user's actions by the action recognition module.
  • the accurate recognition of the user's action by the action recognition module enables the display unit and the touch-sensing unit to more accurately realize the scene change according to the user's action information, thereby further improving the user's immersive experience.
  • FIG. 2 shows a schematic diagram of another contactless human-computer interaction system provided by an embodiment of the present application.
  • the contactless human-computer interaction system 10 provided in this embodiment includes: a display unit 11, a touch-sensing unit 12 and a motion recognition unit 13.
  • the display unit 11 and the motion recognition unit 13 are implemented similarly to the display unit 11 and the motion recognition unit 13 in the embodiment of FIG. 1 , and will not be repeated in this embodiment.
  • the haptic unit 12 may also be located below the action recognition unit.
  • the touch sensing unit 12 overlaps with the second display panel of the display unit 11 .
  • the touch sensing unit 12 can be located above the second display panel.
  • the haptic unit 12 can be adjusted together with the second display panel, so as to ensure that the resonance focus is formed in the interaction area 14 and that the tactile feedback generated by the resonance focus is consistent with the three-dimensional imaging.
  • the haptic unit 12 may be an ultrasonic emitting array made of optically transparent material.
  • the touch-sensing unit 12 may include a transparent electrode driving array, a transparent ceramic vibration unit, a transparent packaging material, and the like. The use of optically transparent materials enables the haptic unit to focus the sound field without affecting the interactive content displayed by the display unit, and to realize tactile imaging in the corresponding interaction area.
  • by combining the touch-sensing unit with the second display panel, the touch-sensing unit does not need to be adjusted separately when the angle of the second display panel is changed, which improves the adjustment efficiency of the contactless human-computer interaction system.
  • combining the touch-sensing unit with the second display panel also avoids deviation between the resonance focus and the three-dimensional imaging caused by angular misalignment, thereby improving the user experience.
  • FIG. 3 shows a flowchart of a contactless human-computer interaction method provided by an embodiment of the present application.
  • the contactless human-computer interaction system includes a display unit, a touch-sensing unit and an action recognition unit.
  • the contactless human-computer interaction method, performed by the display unit, the touch-sensing unit and the motion recognition unit of the contactless human-computer interaction system, may specifically include the following steps:
  • the user can interact with the three-dimensional imaging in the interactive area.
  • the action recognition unit acquires the action image sequence of the user in real time.
  • the motion image sequence includes a series of motion images collected by the motion recognition unit at a certain frequency.
  • the frequency at which the motion recognition unit acquires motion images may be 30 frames per second, 50 frames per second, etc., which is not limited in this application.
  • the motion image sequence may be a motion image sequence including motion images of a certain duration.
  • the motion image sequence may include a 1-second motion image, a 2-second motion image, etc., which is not limited in this application.
  • the motion image sequence may be a motion image sequence including a preset number of motion images.
  • the motion recognition unit regards the preset number of motion images as a motion image sequence.
  • the preset number can be a positive integer.
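  • one simple way to realize the preset-number sequence described above is a sliding window over incoming frames; the buffer below is a hypothetical sketch, with the window size of 30 frames (roughly one second at 30 frames per second) an assumed choice.

    from collections import deque

    class ActionImageBuffer:
        """Collects frames and emits an action image sequence once a preset
        number of frames has accumulated (sliding window)."""

        def __init__(self, preset_count=30):
            self.preset_count = preset_count
            self.frames = deque(maxlen=preset_count)

        def add(self, frame):
            self.frames.append(frame)
            if len(self.frames) == self.preset_count:
                return list(self.frames)   # one complete action image sequence
            return None

    # Usage: at 30 frames per second each full window covers about one second.
    buffer = ActionImageBuffer(preset_count=30)
    for frame_id in range(90):
        sequence = buffer.add("frame-%d" % frame_id)
        if sequence is not None:
            pass   # hand the sequence to the keypoint / action-modeling pipeline
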
  • the action recognition unit may include a depth camera.
  • the action image obtained by the action recognition unit using the depth camera is a depth image.
  • the motion recognition unit processes each motion image in the motion image sequence.
  • the motion recognition unit may acquire motion information in each motion image, and determine motion information of the motion image sequence according to the motion information in each motion image.
  • the motion recognition unit sends the motion information to the touch sensing unit.
  • the action information may include key points, changes in key points, user actions, object movement, and the like.
  • the three-dimensional imaging content is a steering wheel
  • the user's action is to turn the steering wheel to the right.
  • the action information may include hand key points, changes in the hand key points, steering wheel key points, changes in the steering wheel key points, the user's rotation to the right, the user's 45-degree rotation to the right, the steering wheel rotating to the right, the steering wheel rotating 45 degrees to the right, and so on.
  • the step of determining the action information of the action image sequence by the action recognition unit may include:
  • Step 1 According to the action image sequence, determine the key points of the target part in each action image in the action image sequence.
  • the action recognition unit analyzes each action image in the action image sequence.
  • the action recognition unit may include a keypoint recognition algorithm.
  • the action recognition unit recognizes the key points of the target part of the user in the action image through the key point recognition algorithm.
  • the key points may include hand joints, hand contours, etc., and the key points are determined according to the key point identification algorithm.
  • the key point identification algorithm can be an existing algorithm or an improved algorithm.
  • the action recognition unit can recognize the key point of the user's hand in each action image through the key point recognition algorithm.
  • the keypoint recognition algorithm may also be used to identify target objects in the action image, such as keypoints of a steering wheel.
  • the key points may include feature points on the steering wheel.
  • Step 2 Perform action modeling according to the key points of the target part in each action image to obtain an action model.
  • the motion recognition unit performs motion modeling on the motion image sequence according to the key points of each motion image in the motion image sequence.
  • the motion model obtained by the motion modeling may include the motion trajectory of each key point.
  • Step 3 Determine the user action according to the action model.
  • the action recognition unit determines the user action in the action image sequence according to the action model. Or, the action recognition unit determines the change of the target object in the action image sequence according to the action model.
  • the action recognition unit may also predict the user action at the next moment according to the action model.
  • the action recognition unit may also predict the action tendency of the user according to the action model. The user action and action tendency at the next moment can help the display unit and the haptic unit to calculate the interaction content and resonance focus at the next moment in advance, so that the change process of the 3D imaging is more natural and the delay is reduced.
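  • for the steering-wheel example used throughout, Steps 1 to 3 can be made concrete as follows: the action model is the trajectory of each hand key point, and the user action (how far the wheel was turned) is the angle those trajectories sweep around the wheel hub. The sketch below is a hypothetical illustration of that estimate; the hub position and the synthetic key points are assumptions.

    import numpy as np

    def estimate_rotation_deg(trajectories, center):
        """Signed in-plane rotation (degrees) swept by key points around
        `center` between the first and last frame.

        trajectories : (T, N, 2) keypoint positions over time
        center       : (2,) rotation centre, e.g. the steering-wheel hub
        """
        first = trajectories[0] - center
        last = trajectories[-1] - center
        delta = np.degrees(np.arctan2(last[:, 1], last[:, 0])
                           - np.arctan2(first[:, 1], first[:, 0]))
        delta = (delta + 180.0) % 360.0 - 180.0   # wrap to (-180, 180]
        return float(delta.mean())

    def rot(a):
        return np.array([[np.cos(a), -np.sin(a)],
                         [np.sin(a), np.cos(a)]])

    # Synthetic data: two grip key points rotating 45 degrees clockwise about the hub.
    hub = np.array([0.0, 0.0])
    grips = np.array([[0.15, 0.0], [-0.15, 0.0]])
    angles = np.radians(np.linspace(0.0, -45.0, 10))   # negative = clockwise
    trajectories = np.stack([grips @ rot(a).T for a in angles])
    print(round(estimate_rotation_deg(trajectories, hub), 1))   # -> -45.0
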
  • the touch sensing unit receives the motion information sent by the motion recognition unit.
  • the haptic unit determines a new resonance focus according to the action information, and forms the new resonance focus in the interaction area.
  • the step of the haptic unit determining a new resonance focus according to the motion information may include:
  • Step 1 Determine the change of the resonance focus according to the change of the key point.
  • Step 2 According to the change of the resonance focus, a new resonance focus is formed.
  • the three-dimensional imaging content is a steering wheel
  • the action information is that the user turns the steering wheel 45 degrees to the right.
  • the haptic unit rotates the position of each resonance focus of the steering wheel by 45 degrees in the clockwise direction according to the motion information.
  • the resonance focus of the steering wheel has a changing process.
  • the haptic unit can change the rotation process of the steering wheel at a frequency of updating every 5 degrees.
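  • the 5-degree incremental update described above could be realized by rotating every focal point about the wheel axis in small steps; the sketch below is a hypothetical illustration, with the rim layout, hub position and choice of rotation axis assumed.

    import numpy as np

    def rotate_about_z(points, angle_deg, center):
        """Rotate an (N, 3) set of focal points by angle_deg about a z-axis
        through `center` (the steering-wheel hub)."""
        a = np.radians(angle_deg)
        rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                        [np.sin(a), np.cos(a), 0.0],
                        [0.0, 0.0, 1.0]])
        return (points - center) @ rot.T + center

    def incremental_rotation(foci, total_deg, step_deg, center):
        """Yield intermediate focal-point layouts every `step_deg` so the
        haptic rendering of the turning wheel changes smoothly."""
        current = foci
        sign = 1.0 if total_deg >= 0 else -1.0
        for _ in range(int(abs(total_deg) // step_deg)):
            current = rotate_about_z(current, sign * step_deg, center)
            yield current

    # Foci on the wheel rim (assumed layout); the user turns the wheel 45 degrees.
    rim = np.array([[0.10, 0.0, 0.25], [-0.10, 0.0, 0.25],
                    [0.0, 0.10, 0.25], [0.0, -0.10, 0.25]])
    hub = np.array([0.0, 0.0, 0.25])
    layouts = list(incremental_rotation(rim, total_deg=-45.0, step_deg=5.0, center=hub))
    print(len(layouts))   # 9 intermediate layouts, one every 5 degrees
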
  • the step of determining the new resonance focus by the haptic unit according to the motion information may further include:
  • Step 1 Determine a new resonance focus arrangement according to a user action.
  • Step 2 forming a new resonance focus according to the new resonance focus arrangement.
  • for example, when the user action is pushing open a door, the haptic unit can acquire a new arrangement of resonance focal points in the area where the door is located, so as to match the new scene after the door is opened in the three-dimensional imaging.
  • the step of determining the new resonance focus by the haptic unit may further include:
  • Step 1 Acquire interactive content, where the interactive content is the content displayed in the interactive area of the display unit.
  • the touch sensing unit acquires the interactive content from the display unit.
  • the interactive content is the interactive content to be displayed by the display unit or the interactive content currently displayed.
  • Step 2. Determine focus information according to the interactive content, where the focus information is used to describe the resonance focus.
  • the touch sensing unit determines, according to the interactive content, an area in the interactive content that may interact with the user.
  • the haptic unit acquires the preset shape and preset tactile sense of the area.
  • the haptic unit determines focus information according to the preset shape and the preset tactile sense. The focus information is used to describe the resonance focus and instruct the haptic unit to generate the resonance focus.
  • Step 3 According to the focus information, a resonance focus is formed, and the resonance focus is used to simulate the tactile sensation of three-dimensional imaging.
  • after determining the focus information, the touch-sensing unit generates a resonance focus according to the focus information.
  • the tactile sensation formed by the resonance focus is the shape and tactile sensation described by the focus information.
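  • taken together, Steps 1 to 3 suggest that the focus information can be represented as a set of focal-point positions plus tactile parameters derived from the preset shape and preset tactile sense of an interactive region. The sketch below is a hypothetical data-structure illustration for a ring-shaped region such as a steering-wheel rim; the radius, height, modulation frequency and amplitude are assumed values.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class FocusInfo:
        points: np.ndarray   # (N, 3) resonance-focus positions in the interaction area
        mod_hz: float        # modulation frequency standing in for the preset tactile sense
        amplitude: float     # relative drive amplitude, 0..1

    def focus_info_for_ring(center, radius, height, n_points=24,
                            mod_hz=150.0, amplitude=0.8):
        """Focus information for a ring-shaped region, e.g. a steering-wheel rim."""
        angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
        points = np.column_stack([center[0] + radius * np.cos(angles),
                                  center[1] + radius * np.sin(angles),
                                  np.full(n_points, height)])
        return FocusInfo(points=points, mod_hz=mod_hz, amplitude=amplitude)

    # The display unit reports a wheel rim of radius 0.15 m, 0.25 m above the array.
    info = focus_info_for_ring(center=(0.0, 0.0), radius=0.15, height=0.25)
    print(info.points.shape, info.mod_hz)   # (24, 3) 150.0
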
  • S104 Display new interactive content according to the action information, where the interactive content is the content displayed in the interactive area of the display unit.
  • the display unit receives the motion information sent by the motion recognition unit.
  • the display unit determines new interactive content according to the action information, and displays the new interactive content in the interactive area.
  • the step of the display unit determining the new interactive content according to the action information may include:
  • Step 1 Determine the change of the interactive content according to the change of the key point.
  • Step 2 Display the new interactive content according to the change of the interactive content.
  • the three-dimensional imaging content is a steering wheel
  • the action information is that the user turns the steering wheel 45 degrees to the right.
  • the display unit adjusts the angle of the steering wheel to rotate 45 degrees to the right according to the motion information.
  • the display content of the steering wheel has a changing process.
  • the display unit may change the rotation process of the steering wheel at a frequency of updating once every 5 degrees.
  • the step of determining the new interactive content by the display unit according to the action information may further include:
  • Step 1 Determine new interactive content according to user actions.
  • Step 2 Display new interactive content.
  • for example, when the user action is pushing open a door, the display unit can acquire the new interactive content for the area where the door is located, and display the opened door and the new interactive content of that area in the next scene.
  • the user interacts with the three-dimensional imaging in the interaction area.
  • the action recognition unit acquires the action image sequence of the user in the interaction area.
  • the motion recognition unit recognizes each motion image in the motion image sequence, and determines the user's motion information.
  • the motion recognition unit sends the motion information to the display unit and the touch sensing unit.
  • the haptic unit forms a new resonance focus according to the motion information.
  • the display unit displays new interactive content according to the action information.
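  • read together, the flow above amounts to a per-sequence loop in which the recognition result is fanned out to both the display unit and the haptic unit. The stub classes and method names in the sketch below are hypothetical and only summarize the control flow, not the patent's implementation.

    class MotionRecognitionUnit:
        """Stub: in a real system this wraps the depth camera and the recognizer."""
        def capture_sequence(self):
            return ["depth-frame"] * 30                         # the action image sequence
        def recognize(self, sequence):
            return {"action": "turn_right", "angle_deg": 45.0}  # the action information

    class HapticUnit:
        def update_focus(self, action_info):
            print("forming new resonance foci for", action_info["action"])

    class DisplayUnit:
        def update_content(self, action_info):
            print("displaying new interactive content for", action_info["action"])

    def interaction_step(recognizer, display, haptics):
        """One pass: capture, recognize, then fan the action information out
        to both the display unit and the haptic unit."""
        sequence = recognizer.capture_sequence()
        action_info = recognizer.recognize(sequence)
        haptics.update_focus(action_info)
        display.update_content(action_info)

    interaction_step(MotionRecognitionUnit(), DisplayUnit(), HapticUnit())
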
  • by using the display unit and the haptic unit, tactile and visual simulation are realized in the same area, and three-dimensional imaging with tactile feedback is obtained.
  • the realization of the three-dimensional imaging with haptic feedback improves the authenticity of the three-dimensional imaging and improves the immersive experience effect of the user.
  • the realization of the tactile feedback makes the user's actions in the interaction process more in line with the actual actions, thereby improving the recognition accuracy of the user's actions by the action recognition module.
  • the accurate recognition of the user's action by the action recognition module enables the display unit and the touch-sensing unit to more accurately realize the scene change according to the user's action information, thereby further improving the user's immersive experience.
  • the display unit is described by taking three-dimensional imaging as an example of the display mode, but those skilled in the art should understand that, according to the description in the specification, the display mode of the display unit may also be a two-dimensional flat image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a contactless human-computer interaction system and method. The system comprises a display unit, a haptic unit and an action recognition unit. The display unit is used to display an image of interactive content in an interaction area. The haptic unit is used to form a resonance focus in the interaction area according to the interactive content, the resonance focus enabling a user to obtain tactile feedback when touching the image. The action recognition unit is used to acquire action information about the user in the interaction area, and can also send the action information to the display unit and the haptic unit. The display unit can adjust the interactive content according to the action information, and the haptic unit can adjust the resonance focus according to the action information. By means of the method of the present invention, the user experience is improved and the sense of immersion is enhanced.
PCT/CN2021/078688 2021-02-09 2021-03-02 Contactless human-computer interaction system and method WO2022170650A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110179345.8 2021-02-09
CN202110179345.8A CN114911338A (zh) 2021-02-09 2021-02-09 Contactless human-computer interaction system and method

Publications (1)

Publication Number Publication Date
WO2022170650A1 (fr)

Family

ID=82762177

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078688 WO2022170650A1 (fr) 2021-02-09 2021-03-02 Contactless human-computer interaction system and method

Country Status (2)

Country Link
CN (1) CN114911338A (fr)
WO (1) WO2022170650A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699235A (zh) * 2013-12-05 2015-06-10 浙江大学 基于超声波的三维空间成像交互方法及系统
DE102014015087A1 (de) * 2014-10-11 2016-04-14 Daimler Ag Betreiben eines Steuersystems für einen Kraftwagen sowie Steuersystem für einen Kraftwagen
CN110740896A (zh) * 2017-07-04 2020-01-31 宝马股份公司 用于运输工具的用户界面以及具有用户界面的运输工具
CN110147161A (zh) * 2019-03-29 2019-08-20 东南大学 基于超声波相控阵的多指绳索力触觉反馈装置及其反馈方法
CN111752389A (zh) * 2020-06-24 2020-10-09 京东方科技集团股份有限公司 交互系统、交互方法和机器可读存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117369649A (zh) * 2023-12-05 2024-01-09 山东大学 基于本体感觉的虚拟现实交互系统及方法
CN117369649B (zh) * 2023-12-05 2024-03-26 山东大学 基于本体感觉的虚拟现实交互系统及方法

Also Published As

Publication number Publication date
CN114911338A (zh) 2022-08-16

Similar Documents

Publication Publication Date Title
US20200409529A1 (en) Touch-free gesture recognition system and method
US9575594B2 (en) Control of virtual object using device touch interface functionality
JP6392747B2 (ja) ディスプレイ装置
JP2021508115A (ja) 空中触覚システムと人との相互作用
WO2022012194A1 (fr) Procédé et appareil d'interaction, dispositif d'affichage et support de stockage
EP3007441A1 (fr) Procédé d'affichage interactif, procédé et système de commande pour obtenir l'affichage d'une image holographique
JP2011022984A (ja) 立体映像インタラクティブシステム
JP2018142313A (ja) 仮想感情タッチのためのシステム及び方法
JP2018113025A (ja) 触覚によるコンプライアンス錯覚のためのシステム及び方法
WO2022170650A1 (fr) Système et procédé d'interaction homme-machine sans contact
US11620790B2 (en) Generating a 3D model of a fingertip for visual touch detection
JP2006012184A (ja) 表示装置、情報処理装置、及びその制御方法
CN215068137U (zh) 无接触人机交互系统
JP6803971B2 (ja) 視覚触覚統合呈示装置
JPH08129447A (ja) 3次元座標入力方法および3次元座標入力装置
WO2015121963A1 (fr) Dispositif de commande de jeu
JP2022047549A (ja) 情報処理装置、情報処理方法、及び、記録媒体
JP2004185488A (ja) 座標入力装置
TWI757941B (zh) 影像處理系統以及影像處理裝置
JP2004310351A (ja) 座標入力装置
WO2017079910A1 (fr) Procédé et système d'interaction homme-machine en réalité virtuelle basés sur les gestes
US20200356249A1 (en) Operating user interfaces
CN104679237A (zh) 一种基于电磁感应的全息交互装置、方法及感应笔
WO2017116426A1 (fr) Dispositif électronique amovible
Bongers et al. Improving gestural articulation through active tactual feedback in musical instruments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21925306

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21925306

Country of ref document: EP

Kind code of ref document: A1