LU101804B1 - An evaluation system and method for digit force training during precision grip under mixed reality environment - Google Patents

An evaluation system and method for digit force training during precision grip under mixed reality environment

Info

Publication number
LU101804B1
Authority
LU
Luxembourg
Prior art keywords
virtual
hand
grip
force
task
Prior art date
Application number
LU101804A
Other languages
French (fr)
Inventor
Ke Li
Original Assignee
Univ Shandong
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Shandong filed Critical Univ Shandong
Application granted granted Critical
Publication of LU101804B1 publication Critical patent/LU101804B1/en


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 - Distances to closest patterns, e.g. nearest neighbour classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Abstract

The present disclosure provides a hand function training system based on mixed reality and a data processing method. Virtual reality technology is used to build an immersive virtual environment around a grip task, and the kinematic and dynamic data produced during the task are recorded to obtain a true and reliable hand function evaluation indicator score. Because the system effectively combines physical entity operations in a real space with virtual task operations in a virtual space, it overcomes the lack of contact feedback in existing VR technology and greatly improves the sense of immediacy and enjoyment during training. At the same time, on the basis of recording hand kinematic parameters, it accurately records dynamic signals such as the forces and torques of the fingers when the hand touches an object in the real space, thereby providing accurate and reliable signals for accurate evaluation of hand function.

Description

AN EVALUATION SYSTEM AND METHOD FOR DIGIT FORCE TRAINING DURING PRECISION GRIP UNDER MIXED REALITY ENVIRONMENT

Field of the Invention

The present disclosure belongs to the field of intelligent control, and relates to a hand function training system based on mixed reality and a signal processing method.
Background of the Invention

The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
Hand function is the most elaborate, flexible and complex motor function of the human limbs, and plays an extremely important role in daily life. Object grip and manipulation are the most important parts of hand function. However, in a variety of central or peripheral neuromuscular diseases, hand function is easily damaged; patients may even be unable to complete the most common daily activities such as gripping and manipulation, which seriously affects their ability to take care of themselves. This problem is particularly prominent for patients with ischemic and hemorrhagic strokes. About 80% of stroke survivors suffer from long-term hand dysfunction, which seriously affects their daily living abilities. Patients who have suffered strokes urgently need adequate and effective rehabilitation training for hand function. However, as far as the inventors know, the existing high-intensity repetitive training for the rehabilitation of hand function in stroke patients mostly targets individual muscles, for example muscle recovery training and joint mobility training in simple exercise forms and single movement scenarios. Such training achieves little objective-oriented functional improvement, and patients are prone to fatigue during monotonous, repetitive training and easily lose interest in active rehabilitation. Therefore, a new hand function rehabilitation system needs to be designed and developed to realize more interesting rehabilitation training that is more in line with daily behavioral activities.
Recent studies have shown that rehabilitation technology based on virtual reality (VR) can form a hand function training environment with realistic life scenarios, and has positive effects in improving the range of motion, strength and movement speed of the hand and in completing grip, manipulation and other hand functions in actual scenarios. However, VR technology has a key disadvantage: when the hands complete operations in a virtual environment, the lack of real touch of actual objects means that users often cannot obtain effective contact perception feedback. The sense of immediacy during training is therefore low, and patients cannot effectively transfer the trained ability to the manipulation of actual physical entities after training. Moreover, training in a VR environment cannot provide dynamic information such as the force and torque produced when the fingers touch an object, so the fingers' force control ability cannot be accurately recorded and evaluated.
Summary of the Invention

In order to solve the above problems, the present disclosure proposes a hand function training system based on mixed reality and a data processing method. The present disclosure overcomes the lack of contact feedback in existing VR technology, can greatly improve the sense of immediacy and enjoyment during training, and at the same time accurately records dynamic signals such as the forces and torques of the fingers when the hand touches an object in a real space, on the basis of recording hand kinematic parameters, thereby providing accurate and reliable signals for accurate evaluation and training of hand function.
According to some embodiments, the present disclosure adopts the following technical solutions:

A hand function training system based on mixed reality includes a test device, a posture acquisition device, and a virtual reality platform, wherein:

the test device includes at least one grip apparatus and a plurality of sensors; the grip apparatus includes a hollow shell having at least two arc regions and a recess; the first arc region is provided with at least one force/torque sensor, and the second arc region is provided with at least four force/torque sensors arranged side by side; the recess and the arc regions are not on the same plane; the force/torque sensors are configured to receive grip information of the corresponding fingers and upload the information to the virtual reality platform;

the posture acquisition device includes a plurality of image acquisition devices for shooting an image of the real scenario and position and posture images of physical entities, the physical entities including a hand/arm and an object to be gripped;

the virtual reality platform includes a processor and a virtual reality device; the processor receives the image of the real scenario to form a virtual object in a matching shape, maps the accurate position and posture data of the physical entity to the virtual object in the virtual space, feeds the data back to the virtual reality device to achieve real-time and accurate matching of the virtual and physical objects in position and posture, and obtains grip parameters of the fingers according to the grip information.

When the hand/arm in the virtual environment grips the virtual object, as reflected by the virtual reality device, the corresponding real hand/arm in the real space also touches the physical entity at the same time, so the user obtains real contact perception feedback and the sense of immediacy of training is greatly improved. At the same time, force/torque sensors are disposed inside the gripped object in the real space to acquire dynamic data during the task, and the processor accurately trains or/and evaluates the hand function of a subject according to the kinematic data of object movement acquired by the camera and the dynamic data of the fingers acquired by the force sensors.
As a possible implementation manner, the first arc region and the second arc region are disposed oppositely or approximately oppositely, which coincides with the regions where the fingers are located when the five fingers of the hand grip the grip apparatus.

As a possible implementation manner, the force/torque sensor(s) is/are disposed inside the first arc region or the second arc region, and the force receiving surfaces thereof match the arc regions. Accordingly, when the user grips the grip apparatus, the corresponding force/torque sensors can acquire grip information of the fingers.
As a possible implementation manner, the force/torque sensors are six-dimensional sensors capable of acquiring the magnitude and direction of force.
As a possible implementation manner, the recess is funnel-shaped; the shell has a certain accommodating space for placing a heavy object to generate a deflection torque.
As a possible implementation manner, the processor performs semantic segmentation of the RGB image acquired by the image acquisition device and deletes, in each segmented image, any pixel that is described for all targets to have a probability smaller than a set value; removes isolated points beyond a set range away from a set of dense points by using a K-nearest neighbor anomaly detection algorithm; calculates a maximum set of continuous points of each segmented target by principal component analysis and removes any abnormal points that do not intersect with the set of points; estimates the posture of the object through an ICP algorithm, given a pre-scanned point cloud and an actually obtained point cloud of the object, and transforms the obtained coordinates of the object relative to the camera into virtual world coordinates relative to the virtual reality device by perspective transformation; and issues a hand function training task, receives the dynamic data and kinematic data of the object acquired during the execution of the task, extracts feature parameters, and performs multiple regression analysis to obtain an evaluation indicator of task completion.

The image acquisition device includes a plurality of depth cameras.
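For illustration only, the isolated-point removal step described above can be sketched as follows. This is a minimal reading of the K-nearest neighbor anomaly detection, assuming NumPy/SciPy; the neighbour count k and the median-based cut-off stand in for the "set range" that the disclosure leaves unspecified.

```python
# Sketch of K-nearest-neighbour isolated-point removal on a segmented
# point cloud; k and max_ratio are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def remove_isolated_points(points: np.ndarray, k: int = 8,
                           max_ratio: float = 3.0) -> np.ndarray:
    """Drop points whose mean distance to their k nearest neighbours is
    far above the typical point density (isolated-point anomalies)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # column 0 is the point itself
    mean_knn = dists[:, 1:].mean(axis=1)
    keep = mean_knn < max_ratio * np.median(mean_knn)
    return points[keep]

# usage: cloud is an (N, 3) array segmented out of one RGB-D frame
cloud = np.random.rand(1000, 3)
clean = remove_isolated_points(cloud)
```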
A data processing method based on the above system includes the following steps:

performing semantic segmentation on the acquired image to remove image noise;

mapping the processed image into a virtual space; and

issuing a hand function training task, receiving the dynamic data and kinematic data of the object acquired during the execution of the task, extracting feature parameters, and performing multiple regression analysis to obtain an evaluation indicator of task completion.
The specific process of removing image noise includes: deleting, for each segmented image, a pixel if the pixel is described for all targets to have a probability smaller than a set value; removing isolated points beyond a set range away from a set of dense points by using a K-nearest neighbor anomaly detection algorithm; calculating a maximum set of continuous points of each segmented target by principal component analysis; and removing any abnormal points that do not intersect with the set of points.

The specific process of mapping the processed image into a virtual space includes: estimating the posture of the object through an ICP algorithm, given a pre-scanned point cloud and an actually obtained point cloud of the object, and transforming the obtained coordinates of the object relative to the camera into virtual world coordinates relative to the virtual reality device by perspective transformation.

As a possible implementation manner, the dynamic data includes an average offset, pressure changes and pressure migration of the gripped positions; the kinematic parameters include a task execution time, a moving path length and an average speed.

As a possible implementation manner, features of the respective parameters are extracted and normalized to form multi-dimensional vectors, multiple regression is performed on the multi-dimensional vectors to obtain an objective function, and an optimization solution is obtained by using a firefly population to obtain a functional evaluation indicator of the hand function.

A computer-readable storage medium stores a plurality of instructions adapted to be loaded by a processor of a terminal device to perform the data processing method.

The terminal device includes a processor and a computer-readable storage medium, wherein the processor is configured to implement instructions, and the computer-readable storage medium is configured to store a plurality of instructions that are adapted to be loaded by the processor to perform the data processing method.

Compared with the prior art, the beneficial effects of the present disclosure are:
The gripped physical entity in the real space is mapped into a virtual space by using a depth camera to form a virtual object in a matching shape; at the same time, the camera acquires accurate position and posture data of the physical entity in the real space and maps them to the virtual object in the virtual space, to achieve real-time and accurate matching of the virtual and physical objects in position and posture.

The user's hand is mapped by using a depth camera, so that the real hand generates a virtual image in the virtual space. The user can observe the relative position relationship between the virtual hand/arm and the virtual object through an immersive helmet. When the virtual arm in the virtual environment touches the virtual object, the corresponding real arm in the real space also touches the physical entity at the same time, so the user can obtain real contact perception feedback, and the sense of immediacy of training is greatly improved.

With the use of the test device, dynamic data during the task are acquired through force/torque sensors inside the gripped object in the real space, and the hand function of a subject is accurately estimated in combination with the kinematic data of object movement acquired by the camera and the dynamic data of the fingers acquired by the force sensors.

Brief Description of the Drawings

The accompanying drawings constituting a part of the present disclosure are used for providing a further understanding of the present disclosure; the schematic embodiments of the present disclosure and the descriptions thereof are used for interpreting the present disclosure, rather than constituting improper limitations to the present disclosure.

Fig. 1 is a schematic diagram of the external structure of the grip test device;

Fig. 2 is a schematic diagram of the internal structure of the grip device;

Fig. 3 is a schematic diagram of a virtual reality scenario;

Fig. 4 is a schematic diagram of the basic flow of test and training;

in which: 1 is the rim of a cup with an inclined inner wall, 2 is a cylindrical wall inside the device, 3 is a base with a hollow bottom, 4 is a semicircular shell on the thumb side, and 5-8 are respectively semicircular shells on the index finger, middle finger, ring finger and little finger; a-e are six-dimensional force/torque sensors, 9 is a semicircular shell on the four-finger side, and 10 is a semicircular shell on the thumb side of the device.
Detailed Description of the Embodiments

The present disclosure will be further illustrated below in conjunction with the accompanying drawings and embodiments.

It should be noted that the following detailed descriptions are exemplary and are intended to provide further descriptions of the present disclosure. All technical and scientific terms used herein have the same meaning as commonly understood by those of ordinary skill in the technical field to which the present application belongs, unless otherwise indicated.

It should be noted that the terms used here are merely used for describing specific embodiments, but are not intended to limit the exemplary embodiments of the present disclosure. As used herein, unless otherwise clearly stated in the context, singular forms are also intended to include plural forms. In addition, it should also be understood that when the terms "comprise" and/or "include" are used in the description, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

A hand function training/evaluation system based on mixed reality includes a grip test device, a depth camera, and a virtual reality platform. The grip test device is configured to acquire grip information of a user when a task is executed, and each force/torque sensor therein is configured to acquire dynamic data of each finger during the task.

The depth camera acquires in real time an image of the gripped physical entity in the real space, as well as accurate position and posture data of the physical entity in the real space, and uploads them to the virtual reality platform.
In this embodiment, the depth camera is disposed and configured relatively flexibly according to the specific scenario space.

The virtual reality platform maps the acquired image of the physical entity to the virtual space to form a virtual object in a matching shape, and maps the accurate position and posture data of the physical entity in the real space to the virtual object in the virtual space, to achieve real-time and accurate matching of the virtual and physical objects in position and posture. At the same time, the image of the user's hand/arm is also mapped, so that the real arm produces a virtual image in the virtual space. The user can observe the relative position relationship between the virtual arm and the virtual object through a virtual reality device (or an interactive device, or the like) such as an immersive helmet. When the virtual arm in the virtual environment touches the virtual object, the corresponding real arm in the real space also touches the physical entity at the same time, so the user can obtain real contact perception feedback, and the sense of immediacy of training is greatly improved.

The hand function of the subject is accurately evaluated according to the kinematic data acquired by the camera and the dynamic data of the fingers acquired by the force sensors.
First, the grip test device is designed to simultaneously test the forces and torques of the five fingertips of the whole hand and the real-time signals of the pressure center points. In order to simulate functional operation processes such as gripping a water cup and drinking water, the instrument is designed as a cylindrical cup, and the rim of the cup is inclined like a funnel to facilitate the pouring of fluid. Five six-dimensional force/torque sensors are disposed in the device, wherein the sensor at the thumb is opposite to the sensors at the remaining four fingers, conforming to the natural grip position of the human hand.

The external structure of the grip test device is shown in Fig. 1: 1 is the rim of a cup with an inclined inner wall, 2 is a cylindrical wall inside the device, 4 is a semicircular shell on the thumb side, 5-8 are respectively semicircular shells on the index finger, middle finger, ring finger and little finger, and 3 is a base with a hollow bottom. The detailed internal structure of the grip device is shown in Fig. 2: 1 is the rim of a cup with an inclined inner wall, 2 is a cylindrical wall inside the device, 3 is a base, a is a six-dimensional force/torque sensor on the thumb side, b-e are six-dimensional force/torque sensors on the four-finger side, 9 is a semicircular shell on the four-finger side for connecting the sensors for finger grip, and 10 is a semicircular shell on the thumb side of the device.
Five six-dimensional force/torque sensors (the six-dimensional force/torque sensor a on the thumb side and the six-dimensional force/torque sensors b-e on the four-finger side) are placed oppositely and fixed on the cylindrical wall 2 inside the grip test device. The outsides of the sensors are closely connected with the semicircular shell 10 on the thumb side and the four semicircular shells 9 on the four-finger side. The device is hollow and can hold fluid. The rim 1 of the cup is designed like a funnel to simulate the process of pouring fluid. The base 3 is hollow and can hold a heavy object to generate a deflection torque.
A virtual reality scenario oriented to a training task is established. In this embodiment, computer-aided design software is used to design corresponding virtual 3D objects and construct a scenario according to common hand operations in daily life, such as gripping a columnar object, or stretching an arm to a specified position and pouring a water cup.

The virtual objects are designed using 3DMAX software and include common objects in daily life, such as a water cup, a knob, a handle and a clip. Spaces for placing precision force/torque sensors are reserved in each object to accurately measure the force and torque when the subject completes the task. The virtual scenario is constructed using Unity 3D software, and the scenario is set according to different tasks to ensure the fun of training while approaching the real situation as much as possible. According to the needs of daily life, different tasks are set in the system, and each task is guaranteed to exercise at least one hand function indicator of the subject, for example, gripping a water cup to pour water into another empty cup.
In this embodiment, noise points are removed from the target object image with a machine learning method, based on an image semantic segmentation algorithm using a fully convolutional neural network (FCN). The position and posture data of the target object are then estimated from the image through an iterative closest point (ICP) algorithm.

Specifically, the posture of the gripped object in reality is estimated using a scenario image captured by an RGB-D camera. First, the RGB image is semantically segmented using FCN-8s based on a VGG architecture. For each image returned by the FCN, if the probability description of a pixel for all targets is smaller than the mean probability minus 3 times the standard deviation, this pixel is deleted. In order to remove noise, isolated points far from a set of dense points are removed by using a K-nearest neighbor anomaly detection algorithm, a maximum set of continuous points of each segmented target is calculated by principal component analysis, and any abnormal points that do not intersect with the set of points are removed. Finally, the posture of the object is estimated through an ICP algorithm. The pre-scanned point cloud and the actually obtained point cloud of the object are denoted as:

$P = \{p_1, p_2, p_3, \dots, p_n\}$, $Q = \{q_1, q_2, q_3, \dots, q_n\}$ (1)

The coordinates of Q are obtained after the rotation and translation of the coordinates of P, where the rotation and translation are denoted by R and t:

$q_i = R p_i + t$ (2)
where R and t are the transformation matrices to be estimated. Considering the effects of error and noise, the problem of estimating the optimal parameters can be reduced to minimizing the following cost function, which can be solved in closed form by a singular value decomposition (SVD) algorithm:

$\min_{R,t} \sum_{i=1}^{n} \| q_i - (R p_i + t) \|^2$ (3)
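As an illustration, the SVD solution for R and t inside one ICP iteration can be sketched as below, assuming the correspondences p_i ↔ q_i have already been established for the current iteration (full ICP alternates this step with nearest-neighbour re-matching until convergence); the NumPy implementation is an assumption, not code from the disclosure.

```python
# Kabsch/SVD step: R, t minimising sum ||q_i - (R p_i + t)||^2 for
# matched point sets P, Q of shape (n, 3).
import numpy as np

def best_rigid_transform(P: np.ndarray, Q: np.ndarray):
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t
```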
Next, the obtained coordinates of the object relative to the camera need to be transformed into virtual world coordinates relative to the subject by perspective transformation. The real world coordinates of the object are perspectively transformed into coordinates of the virtual object by using a perspective transformation algorithm. Specifically, the two-dimensional image coordinates and the corresponding three-dimensional space coordinates are denoted as:

$m = [u, v]^T$, $M = [X, Y, Z]^T$ (4)

The transformation relationship between the two sets of coordinates is:

$s\,\tilde{m} = A\,[R \; t]\,\tilde{M}$ (5)

where A is the internal parameter matrix of the camera, s is a scaling factor, and $[R \; t]$ is the external parameter matrix, that is, the rotation and translation of the world coordinates relative to the coordinates of the camera. The internal parameter matrix is:

$A = \begin{bmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{bmatrix}$ (6)

where f is the focal length of the camera, and $u_0$ and $v_0$ refer to the center of the aperture of the camera. Considering the effects of noise in the real world, the estimated external parameter matrix can be obtained by maximizing a maximum likelihood function which, under the assumption of Gaussian image noise, is equivalent to minimizing the reprojection error:

$\min_{A, R, t} \sum_{i} \| m_i - \hat{m}(A, R, t, M_i) \|^2$ (7)

The external parameter matrix obtained can then be multiplied by the established real world coordinates, which is equivalent to the corresponding rotation and translation, to obtain the virtual world coordinates.
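A minimal sketch of this pinhole mapping is given below; the intrinsic values f, u0 and v0 are assumed sample numbers, and the [R t] applied here is the same rotation and translation that, multiplied onto the real world coordinates, yields the virtual world pose.

```python
# Perspective (pinhole) mapping of equations (5)-(6): [R|t] moves a world
# point into the camera frame and A projects it to pixel coordinates.
import numpy as np

f, u0, v0 = 600.0, 320.0, 240.0          # assumed focal length / principal point
A = np.array([[f, 0.0, u0],
              [0.0, f, v0],
              [0.0, 0.0, 1.0]])

def project(M: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map a 3D world point M to pixel coordinates (u, v)."""
    cam = R @ M + t                      # world -> camera frame
    m = A @ cam                          # homogeneous pixels: s * [u, v, 1]
    return m[:2] / m[2]                  # divide out the scale factor s

# usage with an identity pose
u, v = project(np.array([0.1, 0.2, 1.5]), np.eye(3), np.zeros(3))
```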
The state of the subject's hand function is evaluated according to the kinematic and dynamic data of the subject during the task.
In this embodiment, an optimal feature subset is searched from a feature set by using a firefly algorithm, and multiple linear regression analysis is performed on the found feature subset to obtain effective hand function evaluation results.
Specifically, two main types of data are obtained during the task: the dynamic data $X_d$ acquired by the force/torque sensors and the kinematic data $X_k$ of the object acquired by the camera, respectively expressed as:

$X_d = [F, T]$, $X_k = [P, \theta]$ (8)

where F and T respectively represent a force and a torque, and P and $\theta$ respectively represent a position and a posture angle. Features are extracted for the two signals separately. The features are as follows:

General parameters:

This part of the parameters is obtained by extracting the signals of a total of 12 channels of the above two data at the same time, so each parameter is 12-dimensional.

Mean: the mean reflects the central tendency of the signals throughout the process.

$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ (9)

where n is the length of the signals, and $x_i$ is the i-th value.

Standard deviation: the standard deviation reflects the degree of dispersion of the signals.

$\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}$ (10)

Median frequency: the median frequency reflects the central tendency of the signal power spectra.

$\int_0^{f_{med}} P(f)\, df = \frac{1}{2} \int_0^{f_s/2} P(f)\, df$ (11)

where P(f) is the power spectrum and $f_s$ is the sampling frequency.
Dynamic parameters:

This part of the parameters is extracted from the pressure center signals of the gripped positions, and comprises the average offset, the pressure changes and the pressure migration. The average offset reflects the overall deviation of the pressure centers of the gripped positions during the task, and the pressure changes reflect the variation of the grip pressure throughout the process.
Pressure migration: the pressure migration reflects the range of pressure center changes throughout the process.

$SCOP = \frac{\pi a b}{4}$ (16)

where a and b respectively represent the long axis and the short axis of an ellipse estimated with 95% confidence within the range of pressure center changes.

Kinematic parameters:

There are three parameters in this part, and each is one-dimensional, so there are three dimensions in total.

Time of arrival: the time of arrival describes the time from when the user completes the response command to when the hand arrives at the target.

$RT = T_{end} - T_{begin}$ (17)

where $T_{begin}$ indicates the time when the hand starts to move, and $T_{end}$ indicates the time when the hand arrives at the destination.

Path length: the path length reflects the total movement length of the hand when the user executes the task.

$L = \int_{0}^{end} path(x)\, dx$ (18)

where path(x) represents the movement locus of the hand, and the integral is a curve integral from the origin 0 to the end.

Average speed: the average speed reflects the speed of hand movement during the task.

$\bar{v} = L / RT$ (19)
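The three kinematic parameters reduce to simple operations on the sampled hand trajectory; a discrete sketch of equations (17)-(19) follows, where positions and times are assumed to come from the depth-camera tracking.

```python
# Time of arrival, discrete path length and average speed from an (n, 3)
# array of hand positions and the matching timestamps.
import numpy as np

def kinematic_parameters(positions: np.ndarray, times: np.ndarray):
    rt = times[-1] - times[0]                       # time of arrival, eq. (17)
    steps = np.diff(positions, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()  # path length, eq. (18)
    return rt, path_len, path_len / rt              # average speed, eq. (19)
```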
Task performance parameters:

There are two parameters in this part, each is one-dimensional, so there are two dimensions in total.

Task score: the task score mainly refers to the degree of completion of the task by the user; the highest score is obtained by perfectly completing the task.

Completion time: the completion time refers to the time from the start of the task to the completion of the task as determined by the system, including the time before the instruction is issued.

The 70 extracted features are respectively normalized and sequentially arranged into a 70-dimensional vector:

$E = [e_1, e_2, e_3, \dots, e_{70}]$ (20)

The feature dimensions are relatively high and some features are not helpful for evaluation, so a firefly algorithm is used for feature selection. If the normalized score of a subject's hand function tested using daily life activity scales is Y, and the normalized score obtained by multiple linear regression after feature selection is $\hat{Y}$, the objective of optimization is to maximize the following objective function:

$f = 0.9\,(1 - |Y - \hat{Y}|) + 0.1\,\Bigl(1 - \frac{1}{70}\sum_{i=1}^{70}\delta_i\Bigr)$ (21)

where $\delta_i$ indicates whether the i-th feature is selected; the first item penalizes the accuracy loss and the latter item penalizes the number of features, with 0.9 and 0.1 respectively being their weights in the objective function. Then the firefly population is initialized, and the position of each firefly is randomly initialized to a number from 0 to 1. The initial maximum number of iterations is 300, the maximum attractiveness $\beta_0$ is 1, and the random weight $\alpha$ is 0.2. The attractiveness between fireflies i and j decays with their distance $r_{ij}$ as:

$\beta(r_{ij}) = \beta_0 e^{-\gamma r_{ij}^2}$ (22)

and the position of each firefly in each iteration is updated to:

$p_i = p_i + \beta_0 e^{-\gamma r_{ij}^2}(p_j - p_i) + \alpha\,(rand - 0.5)$ (23)

The iteration continues until the maximum number of iterations is reached or the objective function converges. At this time, an optimal subspace of the feature space is obtained.
For the selected features, multiple linear regression analysis is performed to obtain a function evaluation indicator of hand function:

$\hat{Y} = \sum_{i=1}^{m} c_i e_i$ (24)

where $c_i$ represents the weight of the i-th feature in the selected feature subset and $e_i$ represents the corresponding feature.

Based on the above, kinematic and dynamic data during the task can be recorded with the use of virtual reality technology to build an immersive virtual environment combined with the task; in combination with the grip device, a true and reliable hand function evaluation indicator score is thereby obtained. Because the system can effectively combine physical entity operations in a real space with virtual task operations in a virtual space, it breaks through the limitation that existing VR technology lacks contact feedback, can greatly improve the sense of immediacy and enjoyment during training, and at the same time accurately records dynamic signals such as the forces and torques of the fingers when the hand touches an object in the real space, on the basis of recording hand kinematic parameters, thereby providing accurate and reliable signals for accurate evaluation of hand function.

A person skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of a full hardware embodiment, a full software embodiment, or an embodiment combining software and hardware. In addition, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, etc.) containing computer-usable program codes.

The present disclosure is described with reference to flowcharts and/or block diagrams of the method, the device (system), and the computer program product in the embodiments of the present disclosure. It should be understood that computer program instructions can implement each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams.
These computer program instructions may be provided to a general-purpose computer, a dedicated computer, an embedded processor, or a processor of another programmable data processing device to generate a machine, so that the instructions executed by the computer or the processor of the other programmable data processing device generate an apparatus for implementing the functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams.

These computer program instructions may also be stored in a computer-readable memory that can guide a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory generate a manufactured product including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams.

These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or the other programmable device to generate computer-implemented processing, and the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more processes in the flowcharts and/or one or more blocks in the block diagrams.

The above descriptions are merely preferred embodiments of the present disclosure and are not intended to limit the present disclosure. A person skilled in the art may make various modifications and variations to the present disclosure; any modification, equivalent replacement or improvement made within the spirit and principle of the present disclosure shall fall into the protection scope of the present disclosure.

The specific embodiments of the present disclosure are described above in conjunction with the accompanying drawings, but they do not limit the protection scope of the present disclosure. Those skilled in the art should understand that, based on the technical solution of the present disclosure, various modifications or variations could be made without any creative effort, and these modifications or variations shall fall into the protection scope of the present disclosure.

Claims (10)

Claims
1. A hand function training system based on mixed reality, comprising a test device, a posture acquisition device, and a virtual reality platform, wherein the test device comprises at least one grip apparatus and a plurality of sensors, the grip apparatus comprises a hollow shell having at least two arc regions and a recess, the first arc region is provided with at least one force/torque sensor, and the second arc region is provided with at least four force/torque sensors arranged side by side; the recess and the arc regions are not on the same plane; the force/torque sensors are configured to receive grip information of the corresponding fingers and upload the information to the virtual reality platform; the posture acquisition device comprises a plurality of image acquisition devices for shooting an image of a real scenario and position and posture images of physical entities, and the physical entities comprise a hand/arm and an object to be gripped; the virtual reality platform comprises a processor and a virtual reality device, and the processor receives the image of the real scenario to form a virtual object in a matching shape, maps the accurate position and posture data of the physical entity to the virtual object in the virtual space, feeds the data back to the virtual reality device to achieve real-time and accurate matching of the virtual and physical objects in position and posture, and obtains grip parameters of the fingers according to the grip information.
2. The hand function training system based on mixed reality according to claim 1, wherein the first arc region and the second arc region are disposed oppositely or approximately oppositely, which coincides with the regions where the fingers are located when the five fingers of the hand grip the grip apparatus.
3. The hand function training system based on mixed reality according to claim 1, wherein the force/torque sensor(s) is/are disposed inside the first arc region or the second arc region, and the force receiving surfaces thereof match the arc regions; accordingly, when the user grips the grip apparatus, the corresponding force/torque sensors can acquire grip information of the fingers.
4. The hand function training system based on mixed reality according to claim 1, wherein the force/torque sensors are six-dimensional sensors capable of acquiring the magnitude and direction of force.
5. The hand function training system based on mixed reality according to claim 1, wherein the recess is funnel-shaped; or, the shell has a certain accommodating space for placing a heavy object to generate a deflection torque.
6. A data processing method based on the system according to any one of claims 1-5, comprising the following steps:
performing semantic segmentation on the acquired image to remove image noise;
mapping the processed image into a virtual space; and
issuing a hand function training task, receiving dynamic data and kinematic data of the object acquired during the execution of the task, extracting feature parameters, and performing multiple regression analysis to obtain an evaluation indicator of task completion.
7. The data processing method according to claim 6, wherein the specific process of removing image noise comprises: deleting, for each segmented image, a pixel if the pixel is described for all targets to have a probability smaller than a set value; and removing isolated points beyond a set range away from a set of dense points by using a K-nearest neighbor anomaly detection algorithm, calculating a maximum set of continuous points of each segmented target by principal component analysis, and removing any abnormal points that do not intersect with the set of points.
8. The data processing method according to claim 6, wherein the specific process of mapping the processed image into a virtual space comprises: estimating the posture of the object through an ICP algorithm, setting a pre-scanned point cloud and an actually obtained point cloud of the object, and transforming the obtained coordinates of the object relative to the camera into virtual world coordinates relative to the virtual reality device by perspective transformation.
9. The data processing method according to claim 6, wherein the dynamic data comprises an average offset, pressure changes and pressure migration of the gripped positions; the kinematic parameters comprise a task execution time, a moving path length and an average speed; and features of the respective parameters are extracted and normalized to form multi-dimensional vectors, multiple regression is performed on the multi-dimensional vectors to obtain an objective function, and an optimization solution is obtained by using a firefly population to obtain a function evaluation indicator of the hand function.
10. A computer-readable storage medium, storing a plurality of instructions adapted to be loaded by a processor of a terminal device to perform the data processing method according to any one of claims 6-9.
LU101804A 2019-06-05 2020-05-19 An evaluation system and method for digit force training during precision grip under mixed reality environment LU101804B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910485764.7A CN110211661B (en) 2019-06-05 2019-06-05 Hand function training system based on mixed reality and data processing method

Publications (1)

Publication Number Publication Date
LU101804B1 true LU101804B1 (en) 2020-09-23

Family

ID=67790950

Family Applications (1)

Application Number Title Priority Date Filing Date
LU101804A LU101804B1 (en) 2019-06-05 2020-05-19 An evaluation system and method for digit force training during precision grip under mixed reality environment

Country Status (2)

Country Link
CN (1) CN110211661B (en)
LU (1) LU101804B1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113031756A (en) * 2019-12-09 2021-06-25 华为技术有限公司 Method, device and system for evaluating VR experience presence
CN111145865A (en) * 2019-12-26 2020-05-12 中国科学院合肥物质科学研究院 Vision-based hand fine motion training guidance system and method
CN111950521A (en) * 2020-08-27 2020-11-17 深圳市慧鲤科技有限公司 Augmented reality interaction method and device, electronic equipment and storage medium
CN112712487A (en) * 2020-12-23 2021-04-27 北京软通智慧城市科技有限公司 Scene video fusion method and system, electronic equipment and storage medium
CN113241150A (en) * 2021-06-04 2021-08-10 华北科技学院(中国煤矿安全技术培训中心) Rehabilitation training evaluation method and system in mixed reality environment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9122707B2 (en) * 2010-05-28 2015-09-01 Nokia Technologies Oy Method and apparatus for providing a localized virtual reality environment
KR102272070B1 (en) * 2013-03-18 2021-07-05 코그니센스 인코포레이티드 Perceptual-cognitive-motor learning system and method
CN104571511B (en) * 2014-12-30 2018-04-27 青岛歌尔声学科技有限公司 The system and method for object are reappeared in a kind of 3D scenes
CN105769224B (en) * 2016-03-25 2017-02-22 山东大学 Precise grabbing function testing device based on multidirectional stabilizing deflection torque and analysis method of device
CN106178427A (en) * 2016-08-29 2016-12-07 常州市钱璟康复股份有限公司 A kind of hands functional training based on the mutual virtual reality of many people and assessment system
CN108363494A (en) * 2018-04-13 2018-08-03 北京理工大学 A kind of mouse input system based on virtual reality system
CN108874123A (en) * 2018-05-07 2018-11-23 北京理工大学 A kind of general modular virtual reality is by active haptic feedback system

Also Published As

Publication number Publication date
CN110211661A (en) 2019-09-06
CN110211661B (en) 2021-05-28

Similar Documents

Publication Publication Date Title
LU101804B1 (en) An evaluation system and method for digit force training during precision grip under mixed reality environment
US5429140A (en) Integrated virtual reality rehabilitation system
WO2018196227A1 (en) Evaluation method, device, and system for human motor capacity
CN102243687A (en) Physical education teaching auxiliary system based on motion identification technology and implementation method of physical education teaching auxiliary system
US20210316449A1 (en) Robot teaching by human demonstration
JP2011110621A (en) Method of producing teaching data of robot and robot teaching system
CN107160364A (en) A kind of industrial robot teaching system and method based on machine vision
CN107616898B (en) Upper limb wearable rehabilitation robot based on daily actions and rehabilitation evaluation method
CN104887238A (en) Hand rehabilitation training evaluation system and method based on motion capture
CN109079794B (en) Robot control and teaching method based on human body posture following
CN1947960A (en) Environment-identification and proceeding work type real-man like robot
CN109243575A (en) A kind of virtual acupuncture-moxibustion therapy method and system based on mobile interaction and augmented reality
CN107363834B (en) Mechanical arm grabbing method based on cognitive map
Capsi-Morales et al. Exploring the role of palm concavity and adaptability in soft synergistic robotic hands
CN111433783A (en) Hand model generation method and device, terminal device and hand motion capture method
CN112183316B (en) Athlete human body posture measuring method
Hendrich et al. Multi-sensor based segmentation of human manipulation tasks
Callejas-Cuervo et al. Capture and analysis of biomechanical signals with inertial and magnetic sensors as support in physical rehabilitation processes
US20200320283A1 (en) Determining golf swing characteristics
WO2018207388A1 (en) Program, device and method relating to motion capture
WO2021039642A1 (en) Three-dimensional reconstruction device, method, and program
Orlando et al. Manipulability analysis of human thumb, index and middle fingers in cooperative 3D rotational movements of a small object
JP5061808B2 (en) Emotion judgment method
Tannous et al. Exploring various orientation measurement approaches applied to a serious game system for functional rehabilitation
Richtsfeld et al. Grasping unknown objects based on 2½D range data

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20200923