CN111652155A - Human body movement intention identification method and system

Human body movement intention identification method and system

Info

Publication number
CN111652155A
Authority
CN
China
Prior art keywords
human body, motion, coordinate, coordinates, body movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010500206.6A
Other languages
Chinese (zh)
Inventor
王兴坚
张卿
王少萍
苗忆南
安麦灵
张超
张敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202010500206.6A (Critical)
Publication of CN111652155A (Critical)
Legal status: Pending (Current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/008 Manipulators for service tasks
    • B25J11/009 Nursing, e.g. carrying sick persons, pushing wheelchairs, distributing drugs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Abstract

The invention discloses a method and a system for identifying human motion intention. The method comprises the following steps: acquiring motion coordinates during human body movement; determining the coordinates of the sight line attention point during movement through a wearable glasses type eye tracker; recognizing the coordinates of obstacles in the visual field scene through a trained convolutional neural network model; and predicting the human body movement yaw angle information through a trained recurrent neural network model according to the motion coordinates, the attention point coordinates and the obstacle coordinates. The method can quickly and accurately identify the walking direction of the human body and thereby provide accurate wearer motion information to an exoskeleton robot.

Description

Human body movement intention identification method and system
Technical Field
The invention relates to the technical field of computer pattern recognition, and in particular to a method and a system for identifying human motion intention.
Background
With the continuous progress of science and technology, machines participate more and more in human life. Mechanical exoskeletons can be used in medical treatment, rehabilitation training, military assistance and other settings. A powered exoskeleton can improve individual combat capability, while an assistive exoskeleton can compensate for a patient's walking ability and assist in rehabilitation training.
Efficient human-machine cooperative work requires extremely strong human-computer interaction capability and intelligent control. However, conventional human-computer interaction methods still do not effectively solve problems such as redundant robot motion, accumulated motion error in the human-machine interaction system, and poor user interaction experience. As a participant in the human-machine interaction system, the human control system is currently the highest-level, most complex and most intelligent system known. The most effective way to improve human-computer interaction capability is to imitate human beings and organically integrate multiple subject fields, such as biology, control technology, sensor data fusion and machine learning, so as to obtain an intelligent and friendly robot platform. An intelligent human-computer interaction strategy can be divided into three stages: human motion intention estimation, robot assistance control, and stability control. Capturing and analyzing the human motion intention qualitatively and quantitatively, and mapping it quickly and accurately to the input of a compliance control system, is the precondition and basis for providing the wearer with a natural, comfortable, reasonable, healthy and friendly experience; it enables a semi-automatic control strategy with the exoskeleton-wearing operator as the main control model and an active intelligent control strategy based on a human-robot coordinated control model. During the interactive power-assisted movement of a lower-limb exoskeleton robot and its wearer, capturing the human body motion intention in real time, quickly and accurately, is the key research difficulty for the perception system.
Disclosure of Invention
The invention aims to provide a method and a system for identifying human motion intention, which are used for accurately and quickly obtaining the human motion intention.
In order to achieve the purpose, the invention provides the following scheme:
a method for recognizing human body movement intention, the method comprising:
acquiring a motion coordinate in the motion of a human body;
determining coordinates of a sight line attention point in human motion through a wearable glasses type eye tracker;
recognizing the coordinates of obstacles in a visual field scene in the motion of a human body through a trained convolutional neural network model;
and predicting the human body movement yaw angle information through a trained recurrent neural network model according to the movement coordinate, the attention point coordinate and the obstacle coordinate.
Optionally, the acquiring a motion coordinate in the motion of the human body specifically includes:
acquiring video information in a monocular camera worn on a human body;
acquiring motion information of an IMU sensor worn on a human body;
and processing the video information and the motion information by adopting a visual inertial odometer scheme to obtain motion coordinates in the motion of the human body.
Optionally, the determining, by a wearable glasses-type eye tracker, coordinates of a sight line attention point in human motion specifically includes:
acquiring video information of pupils of human eyes through the wearable glasses type eye tracker;
and determining the coordinates of the sight line attention points in the human body movement according to the video information of the pupils of the human eyes.
Optionally, the recognizing, through the trained convolutional neural network model, of the coordinates of obstacles in the visual field scene in human motion specifically includes:
acquiring visual field scene video information in human body movement through the wearable glasses type eye tracker;
identifying each frame of image in the visual field scene video information through a trained convolutional neural network model, and determining an obstacle in the image;
and marking the obstacles in each frame of image with a rectangular box to obtain the coordinates of the obstacles in the image.
Optionally, the method further includes: and synchronizing the motion coordinate, the attention point coordinate and the obstacle coordinate into equally spaced signals by adopting a spline interpolation method.
The invention also provides a system for identifying human body movement intention, which comprises:
the motion coordinate acquisition module is used for acquiring motion coordinates in human body motion;
the attention point coordinate determination module is used for determining the coordinates of the sight line attention point in the human motion through a wearable glasses type eye tracker;
the obstacle coordinate identification module is used for identifying obstacle coordinates in a visual field scene in human motion through a trained convolutional neural network model;
and the prediction module is used for predicting the human body movement yaw angle information through the trained recurrent neural network model according to the movement coordinate, the attention point coordinate and the obstacle coordinate.
Optionally, the motion coordinate acquiring module specifically includes:
the first video information acquisition unit is used for acquiring video information in a monocular camera worn on a human body;
the motion information acquisition unit is used for acquiring motion information of the IMU sensor worn on the human body;
and the processing unit is used for processing the video information and the motion information by adopting a visual inertial odometer scheme to obtain motion coordinates in the motion of the human body.
Optionally, the attention point coordinate determination module specifically includes:
the second video information acquisition unit is used for acquiring video information of pupils of human eyes through the wearable glasses type eye tracker;
and the attention point coordinate determining unit is used for determining the coordinates of the sight line attention point in the human body movement according to the video information of the pupils of the human eyes.
Optionally, the obstacle coordinate identification module specifically includes:
the third video information acquisition unit is used for acquiring visual field scene video information in human body movement through the wearable glasses type eye tracker;
the obstacle identification unit is used for identifying each frame of image in the visual field scene video information through a trained convolutional neural network model and determining obstacles in the image;
and the coordinate determination unit is used for marking the obstacles in each frame of image with a rectangular box to obtain the coordinates of the obstacles in the image.
Optionally, the system further includes:
and the interpolation module is used for synchronizing the motion coordinate, the attention point coordinate and the obstacle coordinate into equally spaced signals by adopting a spline interpolation method.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
(1) according to the invention, a module integrating a wearable monocular camera and an IMU is utilized, and a visual inertial odometer scheme is adopted, so that the positioning of a human body in a space environment can be realized without an external positioning system, and the positioning information can be used for calculating the yaw angle information in the human body movement process;
(2) according to the invention, the point of interest attended to by the human eyes is obtained in real time using an eye tracker; from these data, the wearer's degree of attention to an object in the space environment can be analyzed, and, combined with the coordinate point of the identified object, the influence of that object on the walking direction of the human body can be analyzed;
(3) the invention adopts a recurrent neural network as the model for fusing the various kinds of information; its advantage is that a recurrent neural network has memory of historical information and can predict the person's yaw angle at the next moment by combining historical information over a period of time, the current motion information, and the person's attention point in the environment, thereby obtaining the human motion intention accurately and quickly.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for recognizing human body movement intention according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an LSTM recurrent neural network according to an embodiment of the present invention;
fig. 3 is a block diagram of a human motion intention recognition system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a method and a system for identifying human motion intention, which are used for accurately and quickly obtaining the human motion intention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, a method for recognizing human body movement intention includes the following steps:
step 101: and acquiring motion coordinates in the motion of the human body. Specifically, video information in a monocular camera worn on a human body is acquired; acquiring motion information of an IMU sensor worn on a human body; and processing the video information and the motion information by adopting a visual inertial mileage scheme to obtain motion coordinates in the motion of the human body.
In the embodiment of the invention, VINS-Mono is selected as the visual-inertial odometry scheme. First, the intrinsic and extrinsic parameters of the camera are obtained using the Zhang Zhengyou calibration method, while the drift and random errors of the IMU are provided at the factory. Image corner points are identified using the ORB method, feature points are matched between two frames by the K-nearest-neighbor method, and the rotation and translation of the camera are solved by the perspective-n-point method to estimate the camera pose. The IMU drift error is calculated from IMU kinematics using an IMU pre-integration method. The state vector is defined as all camera states within a sliding window, the transformation parameters from the camera to the IMU, and the inverse depths of all 3D points; the observations are the matched feature points. The maximum a posteriori estimate of the state vector given the observations is solved by graph optimization, and the coordinate point of the human body in the space environment is thereby obtained.
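As a rough, self-contained illustration of the feature-tracking front end described above (a sketch only, not the actual VINS-Mono implementation), the following Python snippet detects ORB corners with OpenCV, matches them between two frames by the K-nearest-neighbor method, and recovers the relative camera rotation and translation. The intrinsic matrix K here is a hypothetical placeholder standing in for values from a Zhang Zhengyou checkerboard calibration, and essential-matrix decomposition stands in for the full pre-integration and graph-optimization back end:

```python
import cv2
import numpy as np

# Hypothetical intrinsics; in the described pipeline these come from a
# Zhang Zhengyou checkerboard calibration, not from hard-coded values.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_camera_motion(img1, img2):
    """Estimate rotation R and (up-to-scale) translation t between two frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # K-nearest-neighbor matching with Lowe's ratio test to drop ambiguous pairs.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential-matrix decomposition; a full VIO system would instead feed the
    # matches, IMU pre-integration terms and sliding-window states into a
    # graph optimizer, as the text describes.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```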
Step 102: determining the coordinates of the sight line attention point in human motion through a wearable glasses type eye tracker. Specifically, video information of the pupils of the human eyes is obtained through the wearable glasses type eye tracker, and the coordinates of the sight line attention points in the human body movement are determined from this video information.
A wearable glasses type eye tracker is adopted; after the human body wears the eye tracker, it records video of the surrounding environment from the angle of the human eyes. The wearable eye tracker obtains video information of the pupils of the human eyes using its onboard infrared camera and infrared illumination and, according to an algorithm built into the eye tracker, solves the two-dimensional pixel coordinate point (X, Y) of the plane gazed at by the wearer's eyes within the surrounding-environment video, as well as the three-dimensional coordinate point (X, Y, Z) in a three-dimensional coordinate system with the eye tracker device as the origin.
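The patent relies on the eye tracker's built-in, unspecified algorithm for this step. Purely as a hedged illustration of one common alternative approach, the sketch below fits a second-order polynomial mapping from detected pupil-center coordinates to scene-video gaze coordinates using calibration pairs collected while the wearer fixates known targets; all names and shapes here are assumptions, not the device's actual API:

```python
import numpy as np

def _poly_terms(px, py):
    """Second-order polynomial terms: 1, x, y, xy, x^2, y^2."""
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_gaze_mapping(pupil_xy, scene_xy):
    """Fit a pupil-center -> scene-video coordinate mapping from calibration pairs.

    pupil_xy, scene_xy: (N, 2) arrays gathered during a hypothetical
    calibration routine (wearer fixates N known on-scene targets).
    """
    A = _poly_terms(pupil_xy[:, 0], pupil_xy[:, 1])
    coeff_x, *_ = np.linalg.lstsq(A, scene_xy[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, scene_xy[:, 1], rcond=None)
    return coeff_x, coeff_y

def map_gaze(pupil, coeff_x, coeff_y):
    """Map one pupil-center detection to a gaze point (X, Y) in the scene video."""
    terms = _poly_terms(np.array([pupil[0]]), np.array([pupil[1]]))[0]
    return float(terms @ coeff_x), float(terms @ coeff_y)
```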
Step 103: recognizing the coordinates of obstacles in the visual field scene in human motion through the trained convolutional neural network model. Specifically, the wearable glasses type eye tracker is used to acquire the visual field scene video information in human motion; each frame of image in the video information is identified through the trained convolutional neural network model to determine the obstacles in the image; and the obstacles in each frame of image are marked with a rectangular box to obtain the coordinates of the obstacles in the image.
The glasses type eye tracker records video of the surrounding environment from the angle of the human eyes, and this video stream is input to a convolutional neural network for target detection. The convolutional neural network adopts an existing open-source scheme, is trained in advance on an open-source target-detection data set, and can identify preset label objects in the environment, such as a human body, a chair or a display. The video is divided into single-frame pictures, each frame is input into the convolutional neural network for identification, the plane position of each label object in the frame is marked with a rectangular box, and the center coordinates (x, y) of the identified rectangular box, together with its height h and width w, are taken as the position coordinates (x, y, w, h) of the object observed by the human eyes.
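The patent does not name the open-source detector; as one plausible stand-in, this sketch runs a torchvision Faster R-CNN pretrained on COCO (whose label set already includes person, chair, tv, etc.) on a single frame and converts each detection from corner format into the (x, y, w, h) center/size form described above:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained COCO detector used as a stand-in for the patent's unspecified
# open-source CNN.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect_obstacles(frame_rgb, score_thresh=0.5):
    """Detect objects in one video frame and return (x, y, w, h) boxes.

    frame_rgb: HxWx3 uint8 array, one frame of the scene video.
    """
    out = model([to_tensor(frame_rgb)])[0]
    boxes = []
    for (x1, y1, x2, y2), score in zip(out["boxes"].tolist(), out["scores"].tolist()):
        if score >= score_thresh:
            # Convert corner format (x1, y1, x2, y2) to center/size (x, y, w, h).
            boxes.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0, x2 - x1, y2 - y1))
    return boxes
```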
Step 104: predicting the human body movement yaw angle information through a trained recurrent neural network model according to the motion coordinates, the attention point coordinates and the obstacle coordinates.
The recurrent neural network is a long short-term memory network model. The network input is a single vector formed by concatenating the human eye gaze-point coordinate vector, the obstacle object coordinate vector (x, y, w, h) and the motion coordinate vector. The change of the yaw angle between the next moment and the current moment, solved from the pose coordinate information, serves as the label data corresponding to the current input vector and is fed to the long short-term memory network model. The training step length of the network is set to 3; that is, when training at the current time T, the network combines the fully connected layer vectors output within the network for the historical pose coordinates at times T-1 and T-2, computes a new output through the network at the current time, and outputs the yaw angle value used to predict the direction of human motion.
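A minimal PyTorch sketch of such a fusion network follows. The input dimensions, hidden width, and the way the three-step window is consumed are illustrative assumptions; the patent does not specify the exact layer layout or how the T-1 and T-2 fully connected vectors are combined:

```python
import torch
import torch.nn as nn

class YawPredictor(nn.Module):
    """LSTM fusing gaze, obstacle and motion coordinates into a yaw-angle change."""

    def __init__(self, gaze_dim=3, obstacle_dim=4, motion_dim=3, hidden=64):
        super().__init__()
        in_dim = gaze_dim + obstacle_dim + motion_dim
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # regressed yaw-angle change

    def forward(self, gaze, obstacle, motion):
        # Each input has shape (batch, steps, dim); steps=3 mirrors the
        # patent's training window over times T-2, T-1, T.
        x = torch.cat([gaze, obstacle, motion], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # prediction at the current time T

# Toy usage: one window of three time steps.
model = YawPredictor()
yaw_change = model(torch.randn(1, 3, 3), torch.randn(1, 3, 4), torch.randn(1, 3, 3))
```

Training would regress this output against the labeled yaw-angle change with, for example, a mean-squared-error loss; the bidirectional variant mentioned in the next paragraph would set bidirectional=True and double the head's input width.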
In the embodiment of the present invention, the recurrent neural network is chosen as a long short-term memory (LSTM) network, and fig. 2 shows its network structure. The network is provided with an input (memory) gate and a forget gate, which eliminate the vanishing-gradient problem of recurrent neural networks and give the network the ability to memorize long-term information. The synchronized signal is input into a bidirectional long short-term memory network, and after network processing the yaw angle information required at the current moment according to the trajectory motion is output.
The key component of the LSTM is the memory block, which mainly includes three gates (forget gate, input gate, output gate) and a memory cell. The upper horizontal line within the block diagram is called the cell state. The update relationships of the LSTM network are as follows:
$i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i)$

$f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f)$

$o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_{t-1} + b_o)$

$c_t = f_t * c_{t-1} + i_t * \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)$

$h_t = o_t * \tanh(c_t)$
where $\sigma$ is the sigmoid activation function, $i_t$ is the input gate, $f_t$ is the forget gate, $o_t$ is the output gate, $c_t$ is the cell state, and $h_t$ is the output at time $t$.
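For concreteness, here is a direct NumPy transcription of one update step of these equations (the $W_{c\cdot} c_{t-1}$ terms make this the peephole LSTM variant); the weight container and its shapes are the caller's choice and purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W):
    """One peephole-LSTM update, transcribing the equations above.

    W is a dict of weight matrices/bias vectors keyed as in the subscripts,
    e.g. W["xi"], W["hi"], W["ci"], W["bi"], ... (shapes chosen by the caller).
    """
    i_t = sigmoid(W["xi"] @ x_t + W["hi"] @ h_prev + W["ci"] @ c_prev + W["bi"])
    f_t = sigmoid(W["xf"] @ x_t + W["hf"] @ h_prev + W["cf"] @ c_prev + W["bf"])
    o_t = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev + W["co"] @ c_prev + W["bo"])
    c_t = f_t * c_prev + i_t * np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev + W["bc"])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```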
The signals input into the recurrent neural network model have different frequencies and different phases and must therefore be preprocessed; the information is synchronized into equally spaced signals by a spline interpolation method.
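A minimal sketch of this synchronization step, assuming each signal stream arrives with its own timestamps and using SciPy's cubic spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample(timestamps, values, t_grid):
    """Spline-interpolate one signal channel onto a shared, equally spaced grid.

    timestamps: (N,) sample times of the raw, unevenly timed signal
    values:     (N, D) raw samples (gaze, obstacle or motion coordinates)
    t_grid:     (M,) equally spaced target times shared by all signals
    """
    return CubicSpline(timestamps, values, axis=0)(t_grid)

# Example: bring a 30 Hz gaze stream and a 100 Hz IMU stream onto one 50 Hz grid.
t_grid = np.arange(0.0, 5.0, 0.02)
gaze_sync = resample(np.linspace(0.0, 5.0, 150), np.random.rand(150, 2), t_grid)
imu_sync = resample(np.linspace(0.0, 5.0, 500), np.random.rand(500, 6), t_grid)
```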
As shown in fig. 3, the present invention also provides a human motion intention recognition system, which includes:
a motion coordinate acquiring module 301, configured to acquire motion coordinates in human body motion.
The motion coordinate acquiring module 301 specifically includes:
the first video information acquisition unit is used for acquiring video information in a monocular camera worn on a human body;
the motion information acquisition unit is used for acquiring motion information of the IMU sensor worn on the human body;
and the processing unit is used for processing the video information and the motion information by adopting a visual inertial odometer scheme to obtain motion coordinates in the motion of the human body.
And the attention point coordinate determination module 302 is used for determining the coordinates of the sight line attention point in the human motion through a wearable glasses type eye tracker.
The attention point coordinate determining module 302 specifically includes:
the second video information acquisition unit is used for acquiring video information of pupils of human eyes through the wearable glasses type eye tracker;
and the attention point coordinate determining unit is used for determining the coordinates of the sight line attention point in the human body movement according to the video information of the pupils of the human eyes.
And the obstacle coordinate identification module 303 is used for identifying obstacle coordinates in the visual field scene in the human body movement through the trained convolutional neural network model.
The obstacle coordinate identification module 303 specifically includes:
the third video information acquisition unit is used for acquiring visual field scene video information in human body movement through the wearable glasses type eye tracker;
the obstacle identification unit is used for identifying each frame of image in the visual field scene video information through a trained convolutional neural network model and determining obstacles in the image;
and the coordinate determination unit is used for marking the obstacles in each frame of image with a rectangular box to obtain the coordinates of the obstacles in the image.
And the prediction module 304 is used for predicting the human body movement yaw angle information through the trained recurrent neural network model according to the movement coordinates, the attention point coordinates and the obstacle coordinates.
The system further comprises:
and the interpolation module is used for synchronizing the motion coordinate, the attention point coordinate and the obstacle coordinate into equally spaced signals by adopting a spline interpolation method.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A method for recognizing human body movement intention is characterized by comprising the following steps:
acquiring a motion coordinate in the motion of a human body;
determining coordinates of a sight line attention point in human motion through a wearable glasses type eye tracker;
recognizing the coordinates of obstacles in a visual field scene in the motion of a human body through a trained convolutional neural network model;
and predicting the human body movement yaw angle information through a trained recurrent neural network model according to the movement coordinate, the attention point coordinate and the obstacle coordinate.
2. The method for recognizing human body movement intention according to claim 1, wherein the acquiring of the movement coordinates in the human body movement specifically comprises:
acquiring video information in a monocular camera worn on a human body;
acquiring motion information of an IMU sensor worn on a human body;
and processing the video information and the motion information by adopting a visual inertial odometer scheme to obtain motion coordinates in the motion of the human body.
3. The method for recognizing human body movement intention according to claim 1, wherein the determining the coordinates of the line of sight attention point in the human body movement by a wearable glasses type eye tracker specifically comprises:
acquiring video information of pupils of human eyes through the wearable glasses type eye tracker;
and determining the coordinates of the sight line attention points in the human body movement according to the video information of the pupils of the human eyes.
4. The method for recognizing human body movement intention according to claim 1, wherein the recognizing the coordinates of the obstacles in the visual field scene in the human body movement through the trained convolutional neural network model specifically comprises:
acquiring visual field scene video information in human body movement through the wearable glasses type eye tracker;
identifying each frame of image in the visual field scene video information through a trained convolutional neural network model, and determining an obstacle in the image;
and marking the obstacles in each frame of image with a rectangular box to obtain the coordinates of the obstacles in the image.
5. The method for recognizing human body movement intention according to claim 1, further comprising: and synchronizing the motion coordinate, the attention point coordinate and the obstacle coordinate into equally spaced signals by adopting a spline interpolation method.
6. A system for recognizing human body movement intention, the system comprising:
the motion coordinate acquisition module is used for acquiring motion coordinates in human body motion;
the attention point coordinate determination module is used for determining the coordinates of the sight line attention point in the human motion through a wearable glasses type eye tracker;
the obstacle coordinate identification module is used for identifying obstacle coordinates in a visual field scene in human motion through a trained convolutional neural network model;
and the prediction module is used for predicting the human body movement yaw angle information through the trained recurrent neural network model according to the movement coordinate, the attention point coordinate and the obstacle coordinate.
7. The system for recognizing human body movement intention according to claim 6, wherein the movement coordinate acquiring module specifically comprises:
the first video information acquisition unit is used for acquiring video information in a monocular camera worn on a human body;
the motion information acquisition unit is used for acquiring motion information of the IMU sensor worn on the human body;
and the processing unit is used for processing the video information and the motion information by adopting a visual inertial odometer scheme to obtain motion coordinates in the motion of the human body.
8. The human body movement intention recognition system according to claim 6, wherein the attention point coordinate determination module specifically comprises:
the second video information acquisition unit is used for acquiring video information of pupils of human eyes through the wearable glasses type eye tracker;
and the attention point coordinate determining unit is used for determining the coordinates of the sight line attention point in the human body movement according to the video information of the pupils of the human eyes.
9. The system for recognizing human body movement intention according to claim 6, wherein the obstacle coordinate recognition module specifically comprises:
the third video information acquisition unit is used for acquiring visual field scene video information in human body movement through the wearable glasses type eye tracker;
the obstacle identification unit is used for identifying each frame of image in the visual field scene video information through a trained convolutional neural network model and determining obstacles in the image;
and the coordinate determination unit is used for marking the obstacles in each frame of image with a rectangular box to obtain the coordinates of the obstacles in the image.
10. The human body movement intention recognition system according to claim 6, further comprising:
and the interpolation module is used for synchronizing the motion coordinate, the attention point coordinate and the obstacle coordinate into equally spaced signals by adopting a spline interpolation method.
CN202010500206.6A 2020-06-04 2020-06-04 Human body movement intention identification method and system Pending CN111652155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010500206.6A CN111652155A (en) 2020-06-04 2020-06-04 Human body movement intention identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010500206.6A CN111652155A (en) 2020-06-04 2020-06-04 Human body movement intention identification method and system

Publications (1)

Publication Number Publication Date
CN111652155A true CN111652155A (en) 2020-09-11

Family

ID=72347230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010500206.6A Pending CN111652155A (en) 2020-06-04 2020-06-04 Human body movement intention identification method and system

Country Status (1)

Country Link
CN (1) CN111652155A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9727790B1 (en) * 2010-06-04 2017-08-08 Masoud Vaziri Method and apparatus for a wearable computer with natural user interface
US20150309563A1 (en) * 2013-09-17 2015-10-29 Medibotics Llc Motion Recognition Clothing [TM] with Flexible Electromagnetic, Light, or Sonic Energy Pathways
CN109345542A (en) * 2018-09-18 2019-02-15 重庆大学 A kind of wearable visual fixations target locating set and method
US20200167957A1 (en) * 2018-11-28 2020-05-28 National Taiwan University Correcting method and device for eye-tracking
CN109782902A (en) * 2018-12-17 2019-05-21 中国科学院深圳先进技术研究院 A kind of operation indicating method and glasses
CN110095116A (en) * 2019-04-29 2019-08-06 桂林电子科技大学 A kind of localization method of vision positioning and inertial navigation combination based on LIFT
CN110125909A (en) * 2019-05-22 2019-08-16 南京师范大学镇江创新发展研究院 A kind of multi-information fusion human body exoskeleton robot Control protection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
柳锴 (Liu Kai): "Research on the design method of exoskeleton robots based on coordinated motion characteristics of the human upper limb", China Excellent Master's and Doctoral Dissertations Full-text Database (Doctoral), Information Science and Technology series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986781A (en) * 2020-08-24 2020-11-24 龙马智芯(珠海横琴)科技有限公司 Psychological treatment method and device based on man-machine interaction and user terminal
CN111986781B (en) * 2020-08-24 2021-08-06 龙马智芯(珠海横琴)科技有限公司 Psychological treatment device and user terminal based on human-computer interaction

Similar Documents

Publication Publication Date Title
Rabhi et al. Intelligent control wheelchair using a new visual joystick
CN102981616A (en) Identification method and identification system and computer capable of enhancing reality objects
CN110561399A (en) Auxiliary shooting device for dyskinesia condition analysis, control method and device
Chen et al. Camera networks for healthcare, teleimmersion, and surveillance
Lemley et al. Eye tracking in augmented spaces: A deep learning approach
Tao et al. Trajectory planning of upper limb rehabilitation robot based on human pose estimation
Wei et al. Real-time limb motion tracking with a single imu sensor for physical therapy exercises
Tao et al. Building a visual tracking system for home-based rehabilitation
CN111652155A (en) Human body movement intention identification method and system
Wang et al. Arbitrary spatial trajectory reconstruction based on a single inertial sensor
Zhang et al. Neuromorphic high-frequency 3d dancing pose estimation in dynamic environment
Ramadoss et al. Computer vision for human-computer interaction using noninvasive technology
Liu et al. Ego+ x: An egocentric vision system for global 3d human pose estimation and social interaction characterization
Chella et al. A posture sequence learning system for an anthropomorphic robotic hand
Ahmed et al. Kalman filter-based noise reduction framework for posture estimation using depth sensor
Schmudderich et al. Estimating object proper motion using optical flow, kinematics, and depth information
CN112099330B (en) Holographic human body reconstruction method based on external camera and wearable display control equipment
CN115120250A (en) Intelligent brain-controlled wheelchair system based on electroencephalogram signals and SLAM control
Dang et al. Imitation learning-based algorithm for drone cinematography system
Seo et al. Stereo feature learning based on attention and geometry for absolute hand pose estimation in egocentric stereo views
CN112257771A (en) Epidemic prevention robot vision and hearing collaborative perception model, method and medium
Takač et al. Ambient sensor system for freezing of gait detection by spatial context analysis
Sayed et al. Cognitive assessment in children through motion capture and computer vision: the cross-your-body task
Yang et al. Unconstrained human gaze estimation approach for medium-distance scene based on monocular vision
Wagh et al. Virtual Yoga System Using Kinect Sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200911)