CN111839926B - Wheelchair control method and system shared by head posture interactive control and autonomous learning control - Google Patents

Wheelchair control method and system shared by head posture interactive control and autonomous learning control

Info

Publication number
CN111839926B
CN111839926B
Authority
CN
China
Prior art keywords
control
wheelchair
head
head posture
posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010557433.2A
Other languages
Chinese (zh)
Other versions
CN111839926A (en)
Inventor
赵秦毅
石桑俐
徐国政
高翔
任国建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202010557433.2A priority Critical patent/CN111839926B/en
Publication of CN111839926A publication Critical patent/CN111839926A/en
Application granted granted Critical
Publication of CN111839926B publication Critical patent/CN111839926B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G5/00 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G5/04 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs motor-driven
    • A61G5/10 Parts, details or accessories
    • A61G5/1051 Arrangements for steering
    • A61G2203/00 General characteristics of devices
    • A61G2203/10 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
    • A61G2203/18 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering by patient's head, eyes, facial muscles or voice

Landscapes

  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a wheelchair control method and system in which head posture interactive control and autonomous learning control are shared. A depth image of the user's face is obtained through a Kinect sensor, the head posture is estimated with a random forest combined with an iterative-closest-point method, and a model from head posture recognition to wheelchair control is established. In a structured scene, the user controls the wheelchair through the head posture interaction mode to demonstrate the operation skill along a specific track, and an observation data sequence of the demonstration process is obtained. The obtained operation-skill observation data sequence is represented by a Gaussian mixture model, and the parameters of the Gaussian mixture model are learned with the EM (Expectation-Maximization) algorithm. The operation skill of the robot wheelchair is reproduced by Gaussian mixture regression, realizing autonomous control based on head posture imitation learning. The user judges the current environment information and switches between the head posture interaction control mode and the autonomous learning control mode by pressing a key, realizing shared control of the robot wheelchair. The invention enriches the functions of the robot wheelchair and facilitates travel for elderly and disabled people.

Description

Wheelchair control method and system shared by head posture interactive control and autonomous learning control
Technical Field
The invention belongs to the field of robot learning and human-computer interaction, and particularly relates to a wheelchair control method shared by head posture interaction control and autonomous learning control.
Background
In recent years, with rising living standards and the continuous development of medical and health technology, the quality of life of disabled and elderly people has received more attention. Intelligent mobile robots facilitate the travel of disabled and elderly people and have become a new research hotspot in the field of medical assistive devices. For the disabled, human-computer interaction based on head posture is a new option favored by many consumers and by researchers at home and abroad, with high market and research value. In practice, the working environment of a mobile robot often has to be modeled in a highly complex form, so autonomous navigation can be difficult; this difficulty is exacerbated for real robots with limited computational resources, since such environment models and the associated planning algorithms require significant computing power. Imitation learning, also known as programming by demonstration, is inspired by the ability of humans and animals to acquire new skills by imitation: new skills are imparted to machines through user demonstrations. Compared with a pure human-computer interaction control mode, a shared mode combining human-computer interaction and autonomous control can execute control tasks with higher safety and accuracy, and the introduction of autonomous control reduces the user's workload; compared with a pure autonomous control mode, the shared mode adapts better to complex environments thanks to the user's participation.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a wheelchair control method and system in which head posture interactive control and autonomous learning control are shared, overcoming the disadvantages of traditional autonomous navigation methods in environment-modeling complexity and computing-resource utilization, and allowing the user and the robot to cooperate to jointly complete the control task of the robot wheelchair.
The invention content is as follows: the invention provides a wheelchair control method for head posture interactive control and autonomous learning control sharing, which comprises the following steps:
(1) acquiring a depth image of the user's face through a Kinect sensor, estimating the head posture with a random forest combined with an iterative-closest-point method, and establishing a model from head posture recognition to wheelchair control;
(2) under a structured scene, a user controls the wheelchair to perform operation skill demonstration of a specific track in a head posture interaction mode, and an observation data sequence of a demonstration process is obtained;
(3) representing the obtained operation-skill observation data sequence with a Gaussian mixture model, and learning the parameters of the Gaussian mixture model with the EM (Expectation-Maximization) algorithm;
(4) the robot wheelchair operation skill is reproduced by using a Gaussian mixture regression method, and autonomous control based on head posture simulation learning is realized;
(5) and the user judges the current environment information, and switches between a head posture interaction control mode and an autonomous learning control mode in a key pressing mode to realize the shared control of the robot wheelchair.
Further, the step (1) includes the steps of:
(11) a head posture motion mapping based on the robot wheelchair joystick is established: the head posture is represented by a head orientation vector V, the direction vector between the coordinate origin O and the tip of the nose; the head posture matrix T and the head orientation vector V can be converted into one another;
A circular cross-section is taken in the conical space in front of the head. Suppose that at some moment during head rotation the head orientation vector intersects this cross-section at a point; projecting that point into the motion cross-section of the robot wheelchair joystick yields a point P(x, y). In the head-posture-based interactive control of the robot wheelchair, the steering direction and speed of the wheelchair are determined by the position of P(x, y) within the circular joystick control cross-section;
(12) after the wheelchair speed and steering angle corresponding to the head posture are obtained from the mapping, the speeds of the drive wheels of the robot wheelchair are calculated with the Ackermann-Jeantaud steering model.
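Step (11) can be sketched in code. This is an illustrative reconstruction, not the patent's implementation: the dead zone, the speed limits, and the polar decomposition of P(x, y) into forward and turning components are assumptions.

```python
import math

def joystick_map(x, y, r_max=1.0, v_max=0.6, w_max=1.2, dead_zone=0.15):
    """Map the projected head-orientation point P(x, y) inside the circular
    joystick cross-section to a wheelchair command (linear speed v, turn rate w).
    All numeric limits are illustrative placeholders, not from the patent."""
    r = math.hypot(x, y)
    if r < dead_zone:                  # small head motions near center -> stop
        return 0.0, 0.0
    ux, uy = x / r, y / r              # unit direction of P in the section
    scale = (min(r, r_max) - dead_zone) / (r_max - dead_zone)
    v = v_max * scale * uy             # forward/backward component of P
    w = w_max * scale * (-ux)          # lateral component of P sets turn rate
    return v, w
```

Points farther from the center command larger speeds, mirroring how a conventional joystick deflection works.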
Further, the step (2) comprises the steps of:
(21) acquiring a distance value from a peripheral obstacle to the wheelchair within a range of 360 degrees by using a laser scanning sensor;
(22) acquiring a head position and a posture value based on the head posture mapping of the robot wheelchair operating lever;
(23) and collecting the laser scanning values and the corresponding head position and attitude values at the same frequency, performing segmented demonstration on the demonstration track, and taking a plurality of groups of demonstration data sequences as user operation skill observation data sequences.
Further, the step (3) includes the steps of:
(31) when a user demonstrates operation skills, real-time environmental information and head posture information are obtained through a laser scanning sensor and a Kinect depth sensor, and observation data are collected;
(32) the probability density function of the observed data is modeled as follows:

p(x, y) = Σ_{j=1}^{m} π_j N(x, y; μ_j, Σ_j)

Σ_{j=1}^{m} π_j = 1

μ_j = (μ_jx, μ_jy),  Σ_j = ( (Σ_jx, Σ_jxy), (Σ_jyx, Σ_jy) )

p(x) = Σ_{j=1}^{m} π_j N(x; μ_jx, Σ_jx)

where X and Y are the input and output matrices respectively, π_j is the mixing weight of each Gaussian component, satisfying Σ_{j=1}^{m} π_j = 1, N(x; μ_jx, Σ_jx) is the marginal density function of each Gaussian component in X, μ_jx is the mean of each Gaussian component, and Σ_jx is the covariance of each Gaussian component;
(33) and carrying out cluster analysis on the data through the Gaussian mixture model, and estimating the corresponding parameters with the EM (Expectation-Maximization) algorithm.
Further, the step (4) is realized as follows:
m regression functions are obtained from the conditional distribution theorem for normal vectors; these regression functions are weighted and mixed as

ŷ(x) = Σ_{j=1}^{m} h_j(x) ( μ_jy + Σ_jyx Σ_jx⁻¹ (x − μ_jx) ),  h_j(x) = π_j N(x; μ_jx, Σ_jx) / Σ_{k=1}^{m} π_k N(x; μ_kx, Σ_kx)

to complete the regression analysis of the observation data.
The invention also provides a wheelchair control system in which head posture interactive control and autonomous learning control are shared, comprising a head posture control module, an imitation learning module and a shared control module; the head posture control module estimates the head posture using a random forest on the depth image combined with an iterative-closest-point method, and maps the head motion posture space to the joystick motion space so as to control the motion of the wheelchair; the imitation learning module collects the user's operation-skill demonstrations in a structured scene, models the demonstration process with a Gaussian mixture model, reproduces the robot wheelchair's behavior through Gaussian mixture regression, and realizes autonomous control based on head posture imitation learning; and the shared control module allows the user to switch between control modes by pressing a key.
Beneficial effects: compared with the prior art, the invention has the following advantages: by means of imitation learning, the robot achieves autonomous movement, with advantages over traditional autonomous navigation methods in environment-modeling complexity and computing-resource utilization; the imitation-learning-based shared control system allows the user and the robot to cooperate to jointly complete the control task of the robot wheelchair.
Drawings
FIG. 1 is a schematic diagram of a head motion pose mapping based on a robotic wheelchair joystick in accordance with the present invention;
FIG. 2 is a diagram of the demonstration track in an artificially constructed indoor scene;
FIG. 3 is a flow chart of the imitation learning part of the wheelchair control method in which head posture interaction control and autonomous learning control are shared;
FIG. 4 is a shared control flow diagram of a wheelchair control method in which head-pose interaction control is shared with autonomous learning control;
FIG. 5 is a schematic view of the robot wheelchair structure.
Detailed Description
The technical scheme of the invention is further explained in detail with reference to the accompanying drawings:
the invention provides a robot wheelchair sharing control method based on head posture simulation learning, which specifically comprises the following steps:
step 1: the method comprises the steps of obtaining a depth image of a face of a user through a Kinect sensor, estimating a head posture by adopting a random forest combined closest point iteration method, and establishing a model from the head posture recognition to the wheelchair control.
To address the problems that the existing iterative-closest-point head posture estimation algorithm needs many iterations and easily falls into local optima, while the random forest head posture estimation algorithm lacks accuracy and stability, a head posture estimation method fusing the random forest and iterative-closest-point algorithms is adopted.
In order to control the wheelchair freely with the head posture, a mapping from the head motion posture to robot wheelchair motion control needs to be established. Inspired by the control principle of the joystick of a conventional electric wheelchair, a head posture motion mapping based on the robot wheelchair joystick is established. As shown in FIG. 1, the head posture is represented by a head orientation vector V, the direction vector between the coordinate origin O and the tip of the nose; the head posture matrix T and the head orientation vector V can be converted into one another.
A circular cross-section is taken in the conical space in front of the head. Suppose that at some moment during head rotation the head orientation vector intersects this cross-section at a point; projecting that point into the motion cross-section of the robot wheelchair joystick yields a point P(x, y). In the head-posture-based interactive control of the robot wheelchair, the steering direction and speed of the wheelchair are determined by the position of P(x, y) within the circular joystick control cross-section.
After the wheelchair speed and steering angle corresponding to the head posture are obtained from the mapping, the speeds of the drive wheels of the robot wheelchair are further calculated with the Ackermann-Jeantaud steering model.
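A minimal sketch of this step, under the usual bicycle-model reading of the Ackermann-Jeantaud geometry; the wheelbase and track values are hypothetical, not the patent's dimensions.

```python
import math

def drive_wheel_speeds(v, steer_angle, wheelbase=0.8, track=0.55):
    """Convert a commanded speed v (m/s) and steering angle (rad) into
    left/right drive-wheel speeds. The chassis yaw rate follows the
    bicycle approximation of Ackermann-Jeantaud steering; wheelbase and
    track are placeholder dimensions."""
    omega = v * math.tan(steer_angle) / wheelbase   # chassis yaw rate
    v_left = v - omega * track / 2.0                # inner/outer wheel split
    v_right = v + omega * track / 2.0
    return v_left, v_right
```

A straight command gives equal wheel speeds; a positive steering angle speeds up the outer (right) wheel and slows the inner one.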
Step 2: in a structured scene, the user controls the wheelchair through the head posture interaction mode to demonstrate the operation skill along a specific track, and an observation data sequence of the demonstration process is obtained.
as shown in fig. 3, the user provides a head-pose based interactive control presentation and the robotic wheelchair achieves autonomous motion by simulating learning. Under a structured indoor scene, a user controls the wheelchair along the track shown in fig. 2 to perform operation skill demonstration through an interactive control mode based on head gestures, and an observation data sequence of the demonstration process is obtained.
The head position and posture information is acquired through the head motion posture mapping based on the robot wheelchair joystick. The three laser scanning sensors are each managed by a ROS node, and a designed laser-combination node merges their readings into one ROS message representing the distances from surrounding obstacles to the wheelchair over 360 degrees, centered on the wheelchair; the raw scan data are down-sampled every 22.5 degrees to obtain 16 laser distance values. The user demonstrates in segments along the track 'start-AB-BC-CD-end'; during each segment, observation data sequences {head position and posture, laser scanning distance values} are collected at a fixed frequency, and after the demonstration several groups of demonstration data sequences are obtained as the user operation-skill observation data sequence.
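The 22.5-degree down-sampling described above can be sketched as follows; taking the minimum range inside each sector is an assumption, since the text only says the combined scan is down-sampled every 22.5 degrees into 16 values.

```python
def downsample_scan(ranges, sectors=16):
    """Reduce a combined 360-degree scan (one list of range readings,
    wheelchair-centered) to one distance per sector: 16 sectors of
    22.5 degrees each, matching the observation vector in the text.
    The per-sector minimum (nearest obstacle) is an assumed choice."""
    n = len(ranges)
    out = []
    for j in range(sectors):
        lo, hi = j * n // sectors, (j + 1) * n // sectors
        out.append(min(ranges[lo:hi]))
    return out
```

Each observation sample would then pair this 16-value vector with the head position and posture recorded at the same instant.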
Step 3: the obtained operation-skill observation data sequence is characterized with a Gaussian mixture model, and the parameters of the Gaussian mixture model are learned with the EM (Expectation-Maximization) algorithm.
The probability density function of the user operation-skill observation data is modeled as follows:

p(x, y) = Σ_{j=1}^{m} π_j N(x, y; μ_j, Σ_j)

Σ_{j=1}^{m} π_j = 1

μ_j = (μ_jx, μ_jy),  Σ_j = ( (Σ_jx, Σ_jxy), (Σ_jyx, Σ_jy) )

p(x) = Σ_{j=1}^{m} π_j N(x; μ_jx, Σ_jx)

where X and Y are the input and output matrices respectively, π_j is the mixing weight of each Gaussian component, satisfying Σ_{j=1}^{m} π_j = 1, N(x; μ_jx, Σ_jx) is the marginal density function of each Gaussian component in X, μ_jx is the mean of each Gaussian component, and Σ_jx is the covariance of each Gaussian component.
Cluster analysis is performed on the data through the GMM, the corresponding parameters are estimated with the EM (Expectation-Maximization) algorithm, and m regression functions are obtained from the conditional distribution theorem for normal vectors.
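As a concrete illustration of the EM step, here is a toy one-dimensional expectation-maximization loop for a Gaussian mixture. The patent fits a joint GMM over laser readings and head position/posture; one dimension and the deterministic spread initialization are simplifications for the sketch.

```python
import math

def em_gmm_1d(data, m=2, iters=50):
    """Toy EM for a 1-D mixture of m Gaussians: alternately compute
    responsibilities (E-step) and re-estimate mixing weights pi_j,
    means mu_j and variances var_j (M-step)."""
    lo, hi = min(data), max(data)
    n = len(data)
    pi = [1.0 / m] * m
    mu = [lo + (j + 0.5) * (hi - lo) / m for j in range(m)]   # spread init
    var = [((hi - lo) / m) ** 2 + 1e-6] * m
    for _ in range(iters):
        # E-step: responsibility of component j for each sample
        resp = []
        for x in data:
            w = [pi[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(m)]
            s = sum(w)
            resp.append([wj / s for wj in w])
        # M-step: weighted re-estimation of each component's parameters
        for j in range(m):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = sum(r[j] * (x - mu[j]) ** 2
                         for r, x in zip(resp, data)) / nj + 1e-6
    return pi, mu, var
```

On data drawn from two well-separated clusters, the loop recovers one component per cluster with roughly equal mixing weights.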
Step 4: a Gaussian mixture regression method is used to reproduce the operation skill of the robot wheelchair, realizing autonomous control based on head posture imitation learning.
The regression functions are weighted and mixed as

ŷ(x) = Σ_{j=1}^{m} h_j(x) ( μ_jy + Σ_jyx Σ_jx⁻¹ (x − μ_jx) ),  h_j(x) = π_j N(x; μ_jx, Σ_jx) / Σ_{k=1}^{m} π_k N(x; μ_kx, Σ_kx)

completing the regression analysis of the operation-skill data sequence and realizing imitation learning of head-posture interactive wheelchair control.
When the robot wheelchair reproduces the operation skill, the distances from environmental obstacles to the wheelchair are collected in real time by the laser scanning sensors, and the corresponding head position and posture data are obtained from the regression function of the operation-skill observation data.
The wheelchair drive-wheel speeds are then calculated with the Ackermann-Jeantaud steering model. The left and right drive-wheel speeds obtained directly from the Ackermann-Jeantaud model fluctuate to some extent, so the left and right wheel-speed vectors are smoothed by the least squares method, realizing autonomous control of the robot wheelchair based on Gaussian mixture regression.
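The reproduction and smoothing steps can be sketched together: a scalar-valued Gaussian mixture regression, where each component contributes its conditional mean weighted by its responsibility, followed by a straight-line least-squares fit standing in for the wheel-speed smoothing. The component tuples and the degree-1 fit are illustrative assumptions, not the patent's exact procedure.

```python
import math

def gmr_predict(x, comps):
    """Gaussian mixture regression for scalar input/output. Each component
    is a tuple (pi_j, mu_x, mu_y, s_xx, s_xy): the prediction mixes the
    per-component conditional means mu_y + s_xy/s_xx*(x - mu_x), weighted
    by the responsibilities h_j(x)."""
    dens = [p * math.exp(-(x - mx) ** 2 / (2 * sxx)) / math.sqrt(2 * math.pi * sxx)
            for p, mx, my, sxx, sxy in comps]
    total = sum(dens)
    return sum(d / total * (my + sxy / sxx * (x - mx))
               for d, (p, mx, my, sxx, sxy) in zip(dens, comps))

def smooth_least_squares(values):
    """Replace a noisy wheel-speed sequence by its least-squares straight
    line, a minimal stand-in for the smoothing step above."""
    n = len(values)
    mx = (n - 1) / 2.0
    my = sum(values) / n
    sxx = sum((i - mx) ** 2 for i in range(n))
    b = sum((i - mx) * (v - my) for i, v in enumerate(values)) / sxx
    return [my + b * (i - mx) for i in range(n)]
```

In the reproduction loop, x would be the down-sampled laser vector (here a scalar for brevity), and the regressed output feeds the steering model whose wheel speeds are then smoothed.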
Step 5: the user judges the current environment information and switches between the head posture interaction control mode and the autonomous learning control mode by pressing a key, realizing shared control of the robot wheelchair.
As shown in FIG. 4, the idea of the proposed shared control rule is as follows: when the wheelchair is in a relatively safe driving environment, autonomous control is performed through the head posture interaction skill learned by the robot wheelchair; when the robot wheelchair encounters an obstacle and the driving difficulty is high, the user presses a key and controls the wheelchair in the head-posture-based interactive mode; after a safe driving environment is reached, the wheelchair is switched back to autonomous control by key press, realizing shared control.
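The switching rule above reduces to a small toggle, sketched here; the mode names and the single-button toggle are illustrative, since the patent only specifies key-press switching by the user.

```python
from enum import Enum

class Mode(Enum):
    HEAD_POSTURE = 1   # direct interactive control by the user
    AUTONOMOUS = 2     # learned, imitation-based control

def next_mode(mode, key_pressed):
    """Each key press hands control to the other party: the user takes
    over near obstacles, and gives control back once the environment is
    safe again; without a press the current mode is kept."""
    if key_pressed:
        return Mode.AUTONOMOUS if mode is Mode.HEAD_POSTURE else Mode.HEAD_POSTURE
    return mode
```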
The structure of the robot wheelchair is shown in fig. 5, a Kinect sensor is arranged right in front of a seat, three laser scanning sensors of the same type are selected, two of the laser scanning sensors are respectively arranged on the left front side and the right front side of the wheelchair, the other laser scanning sensor is arranged right behind the wheelchair, and the three laser scanning sensors are in the same plane.
The invention also provides a wheelchair control system in which head posture interactive control and autonomous learning control are shared, comprising a head posture control module, an imitation learning module and a shared control module. The head posture control module estimates the head posture using a random forest on the depth image combined with an iterative-closest-point method, and maps the head motion posture space to the joystick motion space so as to control the motion of the wheelchair; the imitation learning module collects the user's operation-skill demonstrations in a structured scene, models the demonstration process with a Gaussian mixture model, reproduces the robot wheelchair's behavior through Gaussian mixture regression, and realizes autonomous control based on head posture imitation learning; and the shared control module allows the user to switch between control modes by pressing a key.
The head posture control module realizes control of the wheelchair's motion through the head position and posture; this is the precondition, as it alone provides head posture control of wheelchair motion. Next, the imitation learning module collects laser data together with head position and posture data and learns, through the model, the mapping from laser input to head position and posture output; when the robot wheelchair reproduces the behavior, laser data are obtained in real time, the predicted head position and posture are output by the model, and the wheelchair motion is then controlled through the head posture control module, realizing autonomous control; this is the key. Finally, the shared control strategy completes cooperative control between the user and the robot on the basis of the realized head posture control and autonomous control.
For elderly and disabled people, the invention adopts human-computer interaction control based on head posture imitation learning: by learning the head-posture-based interactive wheelchair control process, the user's head posture navigation skill is abstracted and transferred to the robot wheelchair, which reproduces the behavior, realizing imitation learning of head-posture interactive wheelchair control. In addition, the proposed shared control method aims at mutual assistance between the user and the robot wheelchair, sharing control of the wheelchair to jointly complete the control task: when the wheelchair is in a relatively safe driving environment, autonomous control is performed through the head posture interaction skill learned by the robot wheelchair; when an obstacle is encountered and the driving difficulty is high, the user switches to the head-posture-based interactive mode to control the wheelchair until a safe driving environment is reached.
To improve the safety of the robot wheelchair and reduce the user's operating burden, a shared control mode combining human-computer interaction and autonomous navigation is constructed. Compared with a pure human-computer interaction mode, the shared mode executes control intentions with higher safety and accuracy, and the participation of autonomous navigation reduces the user's workload and extends the time the user can use the wheelchair; compared with a pure autonomous navigation mode, the shared mode adapts better to complex environments thanks to the user's participation. Most robot wheelchair shared control schemes operate between a human-computer interaction control mode and an autonomous navigation control mode; the imitation-learning-based autonomous control method proposed here lets the robot wheelchair acquire the user's wheelchair operation skill from the human-computer interactive control process, and shares control between the human-computer interaction mode and the autonomous learning control mode, addressing the insufficient flexibility of existing human-computer interaction control methods.

Claims (5)

1. A wheelchair control method shared by head posture interactive control and autonomous learning control is characterized by comprising the following steps:
(1) acquiring a depth image of the user's face through a Kinect sensor, estimating the head posture with a random forest combined with an iterative-closest-point method, and establishing a model from head posture recognition to wheelchair control;
(2) under a structured scene, a user controls the wheelchair to perform operation skill demonstration of a specific track in a head posture interaction mode, and an observation data sequence of a demonstration process is obtained;
(3) representing the obtained operation-skill observation data sequence with a Gaussian mixture model, and learning the parameters of the Gaussian mixture model with the EM (Expectation-Maximization) algorithm;
(4) the robot wheelchair operation skill is reproduced by using a Gaussian mixture regression method, and autonomous control based on head posture simulation learning is realized;
(5) the user judges the current environment information, and switches between a head posture interaction control mode and an autonomous learning control mode in a key pressing mode to realize the shared control of the robot wheelchair;
the step (1) comprises the following steps:
(11) a head posture motion mapping based on the robot wheelchair joystick is established: the head posture is represented by a head orientation vector V, the direction vector between the coordinate origin O and the tip of the nose; the head posture matrix T and the head orientation vector V can be converted into one another;
A circular cross-section is taken in the conical space in front of the head. Suppose that at some moment during head rotation the head orientation vector intersects this cross-section at a point; projecting that point into the motion cross-section of the robot wheelchair joystick yields a point P(x, y). In the head-posture-based interactive control of the robot wheelchair, the steering direction and speed of the wheelchair are determined by the position of P(x, y) within the circular joystick control cross-section;
(12) after the wheelchair speed and steering angle corresponding to the head posture are obtained from the mapping, the speeds of the drive wheels of the robot wheelchair are calculated with the Ackermann-Jeantaud steering model.
2. The method for controlling a wheelchair, in which head posture interaction control and autonomous learning control are shared, according to claim 1, wherein the step (2) comprises the steps of:
(21) acquiring a distance value from a peripheral obstacle to the wheelchair within a range of 360 degrees by using a laser scanning sensor;
(22) acquiring a head position and a posture value based on the head posture mapping of the robot wheelchair operating lever;
(23) and collecting the laser scanning values and the corresponding head position and attitude values at the same frequency, performing segmented demonstration on the demonstration track, and taking a plurality of groups of demonstration data sequences as user operation skill observation data sequences.
3. The method for controlling a wheelchair, in which head posture interaction control and autonomous learning control are shared, according to claim 1, wherein the step (3) comprises the steps of:
(31) when a user demonstrates operation skills, real-time environmental information and head posture information are obtained through a laser scanning sensor and a Kinect depth sensor, and observation data are collected;
(32) the probability density function of the observed data is modeled as follows:

p(x, y) = Σ_{j=1}^{m} π_j N(x, y; μ_j, Σ_j)

Σ_{j=1}^{m} π_j = 1

μ_j = (μ_jx, μ_jy),  Σ_j = ( (Σ_jx, Σ_jxy), (Σ_jyx, Σ_jy) )

p(x) = Σ_{j=1}^{m} π_j N(x; μ_jx, Σ_jx)

where X and Y are the input and output matrices respectively, π_j is the mixing weight of each Gaussian component, satisfying Σ_{j=1}^{m} π_j = 1, N(x; μ_jx, Σ_jx) is the marginal density function of each Gaussian component in X, μ_jx is the mean of each Gaussian component, and Σ_jx is the covariance of each Gaussian component;
(33) and carrying out cluster analysis on the data through the Gaussian mixture model, and estimating the corresponding parameters with the EM (Expectation-Maximization) algorithm.
4. The method for controlling the wheelchair, which is shared by the interactive head posture control and the autonomous learning control, according to claim 3, wherein the step (4) is implemented as follows:
according to the conditional distribution theorem for normal vectors, m regression functions are obtained, the conditional expectation of each Gaussian component being

$$\hat{y}_{j}=\mu_{jy}+\Sigma_{jyx}\Sigma_{jx}^{-1}\left(x-\mu_{jx}\right)$$

and the regression functions are weighted and mixed to complete the regression analysis of the observed data.
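The weighted mixing of per-component regression functions described in claim 4 is the standard Gaussian mixture regression construction. A sketch, assuming the GMM parameters (`pi`, `mu`, `sigma`) have already been estimated by EM and the first `dx` dimensions of the joint space are the input part:

```python
import numpy as np

def gmr_predict(x, pi, mu, sigma, dx):
    """Gaussian mixture regression: condition the joint GMM on input x
    (first dx dimensions) and return the responsibility-weighted mixture
    of the per-component conditional means -- an illustrative sketch."""
    means, logw = [], []
    for j in range(len(pi)):
        mu_x, mu_y = mu[j][:dx], mu[j][dx:]
        s_xx = sigma[j][:dx, :dx]
        s_yx = sigma[j][dx:, :dx]
        diff = x - mu_x
        # conditional mean: mu_y + S_yx S_xx^{-1} (x - mu_x)
        means.append(mu_y + s_yx @ np.linalg.solve(s_xx, diff))
        # log responsibility of component j for this input
        q = diff @ np.linalg.solve(s_xx, diff)
        logw.append(np.log(pi[j]) - 0.5 * (q + np.linalg.slogdet(s_xx)[1]))
    logw = np.asarray(logw)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return np.sum(w[:, None] * np.asarray(means), axis=0)

# Single-component example: with covariance [[1, .5], [.5, 1]] and zero
# mean, conditioning on x = 1 gives the conditional mean 0.5 * 1 = 0.5.
y = gmr_predict(np.array([1.0]), np.array([1.0]),
                np.array([[0.0, 0.0]]),
                np.array([[[1.0, 0.5], [0.5, 1.0]]]), dx=1)
print(y)  # [0.5]
```

At run time, `x` would be the current laser-scan observation and the output the reproduced operating-lever command.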
5. A wheelchair control system shared by head posture interactive control and autonomous learning control, employing the method of claim 1 and comprising a head posture control module, an imitation learning module, and a shared control module; the head posture control module estimates the head posture from depth images by a random forest combined with the iterative closest point method, and maps the head motion posture space to the operating-lever motion space so as to control the motion of the wheelchair; the imitation learning module has the user demonstrate operation skills in a structured scene, models the demonstration process with a Gaussian mixture model, reproduces the behavior on the robot wheelchair by Gaussian mixture regression, and realizes autonomous control based on head posture imitation learning; and the shared control module allows the user to switch between the different control modes by pressing a key.
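The key-switched mode sharing described in claim 5 can be sketched as a small state machine; the class and mode names are illustrative only:

```python
from enum import Enum

class Mode(Enum):
    HEAD_POSE = 1    # direct head-posture-to-operating-lever control
    AUTONOMOUS = 2   # imitation-learning (GMR) control

class SharedController:
    """Sketch of the shared control module of claim 5: a key press
    toggles between the two control modes."""
    def __init__(self):
        self.mode = Mode.HEAD_POSE

    def on_key_press(self):
        # the user switches modes with a single key, as in claim 5
        self.mode = (Mode.AUTONOMOUS if self.mode is Mode.HEAD_POSE
                     else Mode.HEAD_POSE)

    def command(self, head_cmd, learned_cmd):
        # forward whichever command source is currently active
        return head_cmd if self.mode is Mode.HEAD_POSE else learned_cmd
```

Either control path produces an operating-lever command; the shared controller simply selects which one drives the wheelchair.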
CN202010557433.2A 2020-06-18 2020-06-18 Wheelchair control method and system shared by head posture interactive control and autonomous learning control Active CN111839926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010557433.2A CN111839926B (en) 2020-06-18 2020-06-18 Wheelchair control method and system shared by head posture interactive control and autonomous learning control


Publications (2)

Publication Number Publication Date
CN111839926A CN111839926A (en) 2020-10-30
CN111839926B true CN111839926B (en) 2022-04-12

Family

ID=72986176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010557433.2A Active CN111839926B (en) 2020-06-18 2020-06-18 Wheelchair control method and system shared by head posture interactive control and autonomous learning control

Country Status (1)

Country Link
CN (1) CN111839926B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113101079A (en) * 2021-05-20 2021-07-13 南京邮电大学 Intelligent wheelchair based on multiple constraint conditions, and dynamic sharing control method and system
CN115741670B (en) * 2022-10-11 2024-05-03 华南理工大学 Wheelchair mechanical arm system based on multi-mode signal and machine vision fusion control
CN115804695A (en) * 2023-01-09 2023-03-17 华南脑控(广东)智能科技有限公司 Multi-modal brain-computer interface wheelchair control system integrating double attitude sensors

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392241B (en) * 2014-11-05 2017-10-17 电子科技大学 A head pose estimation method based on hybrid regression
CN105956601B (en) * 2016-04-15 2019-01-29 北京工业大学 A robot Chinese-character writing learning method based on trajectory imitation
CN107616880B (en) * 2017-08-01 2020-10-09 南京邮电大学 Intelligent electric wheelchair implementation method based on electroencephalogram idea and deep learning
CN107621880A (en) * 2017-09-29 2018-01-23 南京邮电大学 A robot wheelchair interactive control method based on an improved head pose estimation method
EP3572910B1 (en) * 2018-05-21 2021-11-24 Vestel Elektronik Sanayi ve Ticaret A.S. Method, system and computer program for remotely controlling a display device via head gestures
CN109459039B (en) * 2019-01-08 2022-06-21 湖南大学 Laser positioning navigation system and method of medicine carrying robot
CN110908377B (en) * 2019-11-26 2021-04-27 南京大学 Robot navigation space reduction method

Also Published As

Publication number Publication date
CN111839926A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111839926B (en) Wheelchair control method and system shared by head posture interactive control and autonomous learning control
Singla et al. Memory-based deep reinforcement learning for obstacle avoidance in UAV with limited environment knowledge
Li et al. Hrl4in: Hierarchical reinforcement learning for interactive navigation with mobile manipulators
Chen et al. Stabilization approaches for reinforcement learning-based end-to-end autonomous driving
CN104589356B Dexterous-hand teleoperation control method based on Kinect human hand motion capture
WO2019076044A1 (en) Mobile robot local motion planning method and apparatus and computer storage medium
CN105787471A (en) Gesture identification method applied to control of mobile service robot for elder and disabled
CN112965081B (en) Simulated learning social navigation method based on feature map fused with pedestrian information
CN107150347A (en) Robot perception and understanding method based on man-machine collaboration
CN111251294A (en) Robot grabbing method based on visual pose perception and deep reinforcement learning
CN109543285B (en) Crowd evacuation simulation method and system integrating data driving and reinforcement learning
CN106373453A (en) Intelligent immersive high-speed train virtual driving behavior evaluation method and simulation system
CN105915987B An implicit interaction method for smart televisions
CN112947081A (en) Distributed reinforcement learning social navigation method based on image hidden variable probability model
CN113741533A (en) Unmanned aerial vehicle intelligent decision-making system based on simulation learning and reinforcement learning
Zhu et al. Human motion generation: A survey
Ye et al. Paval: Position-aware virtual agent locomotion for assisted virtual reality navigation
CN111134974B (en) Wheelchair robot system based on augmented reality and multi-mode biological signals
Dang et al. Path-analysis-based reinforcement learning algorithm for imitation filming
CN116631262A (en) Man-machine collaborative training system based on virtual reality and touch feedback device
CN113359744B (en) Robot obstacle avoidance system based on safety reinforcement learning and visual sensor
CN116430891A (en) Deep reinforcement learning method oriented to multi-agent path planning environment
CN107644686A (en) Medical data acquisition system and method based on virtual reality
CN115294228A (en) Multi-graph human body posture generation method and device based on modal guidance
CN114326826A (en) Multi-unmanned aerial vehicle formation transformation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant