GB2298501A - Movement detection - Google Patents

Movement detection

Info

Publication number
GB2298501A
GB2298501A (Application GB9417807A)
Authority
GB
United Kingdom
Prior art keywords
movement
walking
user
detected
pattern
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9417807A
Other versions
GB9417807D0 (en)
Inventor
Melvyn Slater
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Queen Mary University of London
Original Assignee
Queen Mary and Westfield College, University of London
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Queen Mary and Westfield College, University of London
Priority to GB9417807A priority Critical patent/GB2298501A/en
Publication of GB9417807D0 publication Critical patent/GB9417807D0/en
Publication of GB2298501A publication Critical patent/GB2298501A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G06V40/25 Recognition of walking or running movements, e.g. gait recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method of detecting a walking movement by the user of a virtual reality system, in order to modify the virtual environment surrounding the user to reflect his apparent movement. A sensor is mounted on the head of the user, and an adaptive learning system (in hardware or software) is trained to recognise the pattern of head movements that correspond to a mimicked walking motion, so that the user can navigate the environment by "walking on the spot".

Description

"Virtual Reality Systems"

Background of the Invention

This invention relates to methods and apparatus for providing interfaces between human users and computer systems, particularly, although not exclusively, adapted for use in so-called "Virtual Reality" systems. In particular, the present invention provides a system for adaptively recognising patterns or sequences of movements by the user in three-dimensional space, and for interpreting them so as to provide suitable inputs to the virtual reality software.
The invention also has more general application in other systems in which it is necessary to monitor a person's movements and detect patterns in those movements, which can then be used to control other processes, for example in remote-control applications.
Interaction between the human user and the computer system in Virtual Reality is usually effected by electromagnetic tracking devices, such as the Polhemus system. The user wears a head-mounted display (HMD) with a sensor on the top, and a receiver provides three-dimensional tracking data of the user's head movements, which is transmitted to the computer system. This is then used by the computer system to continually update the view of the scene presented to the user through the HMD. Similarly, the user holds a pointing device, or wears a data glove, which is used to transmit information about the position and orientation of the user's hand.
A problem is to provide a means for the user to move through the environment. Electromagnetic sensors operate within a small field, so it is not possible for participants to wander around a large virtual environment by physically walking. Several methods have been implemented and discussed in the literature.
A standard solution for navigation in VR is to make use of the hand-held pointing device: VPL used the DataGlove [FOLE 87]: a hand gesture would initiate movement, and the direction of movement would be controlled by the pointing direction. Velocity was controlled as part of the gesture: for example the smaller the angle between thumb and first finger the greater the velocity.
DIVISION's Pro Vision system typically employs a 3D mouse (though it supports gloves as well). Here the direction of movement is determined by gaze, and movement is caused when the user presses a button on the mouse. There are two speeds of travel controlled by a combination of button presses.
In order to give the user the sense of actually walking, rather than the artificial metaphor involved in using the hand, two solutions have been proposed: Iwata [IWAT 92] used a system based on roller skates - the user would stand in a confined area wearing roller skates, and walk using the skates, while staying on the spot. The sensors on the skates would be used to return such foot movements to the computer, which could update the views accordingly.
Brooks [BROO 92] used a treadmill to the same effect - the user would walk on a treadmill, and the walking information transmitted to the computer to update the view.
These two solutions, although giving the user a more naturalistic sense of moving through the environment, require costly and cumbersome additional hardware.
Accordingly, a first aspect of the present invention provides a method of detecting a predetermined pattern of movement in a system which exhibits a complex relationship between the movements of its constituent parts, such as a human being in the process of walking, the method comprising the steps of: (a) determining the pattern of movement of one part of the system over a period of time; and (b) repeatedly applying the detected pattern to an adaptive learning system, so as to progressively adapt it to produce a reliable indication of the subsequent occurrence of the predetermined pattern.
Preferably, the movement is detected in three dimensions, relative to a fixed point, and a series of x, y and z co-ordinates are supplied to the adaptive learning system in turn, so as to "train" it to recognise patterns which fall within a desired envelope of relationships between the values.
In one particular application of the invention, the system is used to recognise a human walking motion, by detecting the position of a sensor on the user's head. In this way, if the user mimics a walking motion, by "walking on the spot", the system can be "trained" to recognise the pattern of head movements which correspond to such walking, and thus, in a virtual reality system, the display seen by the user can be changed to reflect the changes which would be seen, if the user were actually walking in the "virtual space".
One possible embodiment of the "adaptive learning" system comprises a "neural network" which may comprise dedicated hardware circuitry or may be emulated in software.
Of course, it is also possible to detect movement by using a plurality of sensors, for example, one on each leg of the user. In practice, however, this is undesirable for a number of reasons: (1) sensor devices of the required capabilities are expensive; (2) if the number of signals to be processed is increased, the overall speed of operation of the system is reduced; and (3) users are required to attach more pieces of equipment to themselves, which is inconvenient and can also be uncomfortable.
Thus the arrangement in which only one sensor is used is greatly preferable.
It will also be appreciated that other types of "adaptive learning" system could be used. For example, it may be possible to utilise an "evolutionary" program design, in which a program capable of recognising the desired pattern is built up from a series of self-modifying subroutines, successive generations of which are more specifically adapted to the problem in hand. Other methods of pattern recognition could also be used, such as statistical methods including discriminant analysis or cluster analysis.
One embodiment of the invention will now be described in more detail, by way of example, with reference to a "virtual reality" system in which it is required to recognise that a user is "walking on the spot", so as to control the change of a display seen by the user, in accordance with his apparent movements through the virtual space.
2. Pattern Recognition: Recognising "Walking on the Spot" Behaviour in Virtual Reality

The method requires the detection of specific behavioural activity of users - that is, whether they are walking on the spot or doing something else. As an example, we have used a feed-forward neural net to implement a pattern recogniser that detects whether participants are "walking on the spot" or doing something else. However, there are other possible methods of pattern recognition, and artificial intelligence techniques such as genetic algorithms, that would do the job. The neural network that has been implemented as a demonstrable example involves weighted backpropagation, so that changes detected in the weights are given a weight coefficient depending on how great the change is. This is a standard method for pattern recognition described in [HERT 91].
The HMD tracker delivers a stream of position values (x_i, y_i, z_i) from which we compute first differences (Δx_i, Δy_i, Δz_i). We choose a fixed sample of data i = 1, 2, ..., n, and the corresponding delta-coordinates are inputs to the bottom layer of the net, so that there are 3n units at the bottom layer. There are two intermediate layers of m1 and m2 hidden units (m1 < m2), and the top layer consists of a single unit, which outputs either 1, corresponding to "walking on the spot", or 0 for anything else. We obtain training data from a person, which is used to compute the weights for the net.
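The patent gives no source code, so the architecture just described can only be sketched; the following is an illustrative reconstruction (class name, NumPy usage, sigmoid activations and random initialisation are all assumptions, and the default layer sizes follow the values reported later in the text, which are partly garbled in the original):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class WalkNet:
    """Feed-forward net: 3n delta-coordinate inputs, two hidden layers
    of m1 and m2 units, and a single output unit (1 = walking on the spot)."""

    def __init__(self, n=20, m1=8, m2=10, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(3 * n, m1))
        self.b1 = np.zeros(m1)
        self.W2 = rng.normal(scale=0.1, size=(m1, m2))
        self.b2 = np.zeros(m2)
        self.W3 = rng.normal(scale=0.1, size=(m2, 1))
        self.b3 = np.zeros(1)

    def forward(self, deltas):
        # deltas: flattened (dx_i, dy_i, dz_i) for i = 1..n, length 3n
        h1 = sigmoid(deltas @ self.W1 + self.b1)
        h2 = sigmoid(h1 @ self.W2 + self.b2)
        return sigmoid(h2 @ self.W3 + self.b3)[0]  # value in (0, 1); threshold at 0.5
```

In use, a window of the most recent n first differences would be flattened into a length-3n vector and passed to `forward` each time a decision is needed.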
The network is then executed on the ProVision 200 machine that we are using for all of our experiments.
After experimenting with a number of nets we have found that a value of n = 20, m1 = 8 and m2 = 10 gives good results. We have never obtained 100% accuracy from any network, and this would not be expected.
There are two possible kinds of error, equivalent to the Type I and Type II errors of statistical testing, where the null hypothesis is taken as "the person is not walking on the spot". The net may predict that the person is walking when they are not (Type I error) or may predict that the person is not walking when they are (Type II error). The Type I error is the one that causes the most confusion to people, and is also the one that is most difficult to rectify - in the sense that once they have been involuntarily moved from where they want to be, it is almost impossible to "undo" this. Hence our efforts have concentrated on reducing this kind of error. We do not use the output of the net directly, but only change from not moving to moving if a sequence of p 1s is observed, and from moving to not moving if a sequence of q 0s is observed (q < p). In practice we have used p = 4 and q = 2. The best result we have obtained is a correct prediction 97% of the time. The Type I error is typically around 4% and the Type II error around 5%. It is likely that with further investigation of the neural net training method, results will improve.
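The run-length rule above (switch to "moving" only after p consecutive 1s, switch back only after q consecutive 0s, with p = 4 and q = 2) can be sketched as a small filter over the raw net outputs; the function name and list-based interface are illustrative, not the patent's implementation:

```python
def hysteresis_filter(raw_outputs, p=4, q=2):
    """Debounce per-sample 0/1 net outputs into a moving/not-moving state.

    Enter the 'moving' state only after p consecutive 1s, and leave it only
    after q consecutive 0s (q < p), suppressing spurious transitions."""
    state = 0              # start in the 'not walking' state
    ones = zeros = 0       # lengths of the current runs of 1s and 0s
    states = []
    for o in raw_outputs:
        if o == 1:
            ones += 1
            zeros = 0
        else:
            zeros += 1
            ones = 0
        if state == 0 and ones >= p:
            state = 1
        elif state == 1 and zeros >= q:
            state = 0
        states.append(state)
    return states
```

Note the asymmetry: a single stray 1 while standing still never starts movement (reducing Type I errors), while stopping requires only two 0s, so the user regains control quickly.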
The Polhemus Isotrak tracking device we are using returns data to the application at a rate of 28-30 Hz. The overall error is largely caused by the actual output lagging behind the real output by typically 5 samples at the end of each sequence of 1s or 0s.
3. Incorporation into the Virtual Reality System

The following steps are carried out in order to support a person using this "walking on the spot" method. First, the person spends some time (typically 15 minutes) in the VR, where they are asked to "walk on the spot" some of the time, and to do a range of other activities the rest of the time (e.g. bending down, moving around, looking around, and so on). During this period, the data from the HMD that they are wearing is recorded, and segments of data (that is, sequences of coordinate values) are marked as corresponding to "walking on the spot" or "other" activity. This exercise is carried out on the ProVision 200 Virtual Reality system.
The data is then transported to a SUN workstation, and a neural net training program is executed on the data, as described in Section 2. The data (that is, sequences of first differences, each sequence marked as either "walking on the spot" or "other") is presented to the neural net trainer over and over again until the net "learns" - that is, until an equation is established that, for any data sequence, can predict whether that sequence corresponds to "walking on the spot" or to "other".
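The repeated presentation of labelled delta-sequences can be sketched as plain batch backpropagation on squared error, as in standard texts such as [HERT 91]. This is an illustrative reconstruction only: the learning rate, initialisation, epoch count and omission of bias terms are assumptions, and the "weighted" backpropagation variant mentioned in the patent is not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, m1=8, m2=10, lr=2.0, epochs=5000, seed=0):
    """Train a two-hidden-layer sigmoid net on (delta-sequence, label) pairs.

    X: (samples, 3n) array of flattened first differences.
    y: (samples,) array of labels, 1 = 'walking on the spot', 0 = 'other'."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], m1))
    W2 = rng.normal(scale=0.5, size=(m1, m2))
    W3 = rng.normal(scale=0.5, size=(m2, 1))
    for _ in range(epochs):
        # Forward pass through both hidden layers.
        h1 = sigmoid(X @ W1)
        h2 = sigmoid(h1 @ W2)
        out = sigmoid(h2 @ W3)
        # Backward pass: delta rule at each layer (squared-error loss).
        d3 = (out - y[:, None]) * out * (1 - out)
        d2 = (d3 @ W3.T) * h2 * (1 - h2)
        d1 = (d2 @ W2.T) * h1 * (1 - h1)
        # Mean-gradient updates over the batch.
        W3 -= lr * h2.T @ d3 / len(X)
        W2 -= lr * h1.T @ d2 / len(X)
        W1 -= lr * X.T @ d1 / len(X)
    return W1, W2, W3
```

The proportion of correct predictions on held-out sequences then gives the measure of success referred to in the text.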
The proportion of correct predictions gives an indication of the success of the net. This is a standard method for training neural nets.
Once this equation is established, it is incorporated into the dVS software on the ProVision system, at the point in the software where the next view that the user will see is to be computed. At that moment, the past sequence of HMD movements (i.e. first differences of coordinates as described above) is put through the equation, and a prediction is made, also based on previous predictions, as to whether the user is walking on the spot or not. If it is decided that the user is walking on the spot, then the view presented to them is computed on the basis that they have moved forward. The direction of the move forward is determined by the direction of the user's gaze, which is also provided by the HMD.
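The per-frame view update can be sketched as follows. The function name, the step size, and the projection of the gaze vector onto the horizontal plane are assumptions for illustration; the patent states only that the direction of movement is determined by the gaze direction reported by the HMD:

```python
import numpy as np

def next_view(position, gaze, walking, step=0.02):
    """Compute the next viewpoint position for one frame.

    If the (filtered) net decision says the user is walking on the spot,
    translate the viewpoint by `step` along the horizontal component of
    the gaze direction; otherwise leave the position unchanged."""
    if not walking:
        return position
    horiz = np.array([gaze[0], 0.0, gaze[2]])   # drop the vertical component
    norm = np.linalg.norm(horiz)
    if norm == 0.0:
        return position                          # gaze is straight up or down
    return position + step * horiz / norm
```

The "based on previous predictions" part of the text corresponds to feeding the raw net output through the p/q run-length rule of Section 2 before it reaches the `walking` flag used here.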
The above paragraphs describe how a network is trained to fit the personal "walking on the spot" style of a user. In addition, we have arbitrarily designated the walking on the spot style of one person as "standard", and have trained other people to emulate this style, so that they are successfully able to move through the virtual environment relying on the trained network of this person.
4. Possible Benefits

We have carried out scientifically controlled experiments with users, comparing this "walking on the spot" method with the method that involves navigating with a hand-held pointing device (a DIVISION 3D mouse). The results are preliminary, since insufficient people have been through the experimental procedure at the time of writing. We have found that 9 out of 12 people have been able to use the "walking on the spot" method for moving through the environment - that is, the networks were sufficiently trained to correctly recognise their "walking on the spot" and "other" behaviour sufficiently often to allow them to move around successfully. The experiments have pointed out some problems with our data-gathering procedures that we have since rectified, so we expect the proportion of successes to improve over time.
We have found that users probably find it easier and more accurate to move through the virtual environment by using the mouse. It is also less tiring. However, there is some evidence to suggest that this result may be a function of the performance of the neural network: those users whose networks performed very well tended to score the "walking on the spot" method as preferable to the method using the pointing device.
We have found that the walking on the spot method probably greatly enhances the person's sense of presence in the environment - that is, the sense of "being there" in the computer-generated environment rather than in the real world where their physical bodies were located. This is crucial, since it is the sense of presence that Virtual Reality systems uniquely offer, so that anything which enhances the sense of presence is beneficial as a whole. In other words, moving through the virtual environment using the mouse is probably perceived by users as less realistic than walking on the spot.
5. Conclusions

A walking method for navigating through a virtual environment has been described.
An example implementation was presented based on a neural network recognising a user's walking on the spot behaviour pattern, using data gathered from the tracking system for the HMD. Provisional results indicate that such a net can be successfully trained, and that this metaphor may enhance the sense of presence, in comparison to the more usual method of navigation using a hand-held pointing device. Of course, this is based on a small amount of data at the time of writing, though experiments are continuing.
It is less clear whether people prefer this walking method to using the mouse, purely from the point of view of actually getting around the environment. As Brooks (op. cit.) noted in the case of the real treadmill: "The steerable treadmill provided quite a realistic walking experience, and it neatly solved the problem of the limited range of the head sensor on the head-mounted display. Nevertheless, it proved to be too slow a tool for exploring extensive models. The user wore out with the exercise and grew frustrated at the slow pace. The flying metaphors proved more useful for this kind of rapid survey." The utility of any metaphor depends on the application context. Clearly, just as in real life, walking is not a good method for exploring large spaces. It was observed that some of the experimental subjects did become physically tired as a result of walking, and it cannot be recommended for use over long periods. However, it is a cheap additional tool in the range of interface metaphors available in VR, and there are circumstances where the sense of presence would outweigh the costs of relative inefficiency and tiredness. For example, consider an application for training simulation of emergency service personnel in hazardous conditions such as a fire: the fact that users would become tired and frustrated as a result of the additional exercise involved in whole-body movement is realistic. In real life, they would not move around a hazardous environment by using a mouse.
[BROO 92] Brooks, F.P. et al. (1992) Final Technical Report: Walkthrough Project, Six Generations of Building Walkthroughs, Department of Computer Science, University of North Carolina, Chapel Hill, N.C. 27599-3175.
[FOLE 87] Foley, J.D. (1987) Interfaces for Advanced Computing, Scientific American, October, 126-135.
[HERT 91] Hertz, J., A. Krogh, R.G. Palmer (1991) Introduction to the Theory of Neural Computation, Addison-Wesley Publishing Company.
[IWAT 92] Iwata, H. and K. Matsuda (1992) Haptic Walkthrough Simulator: Its Design and Application to Studies on Cognitive Map, The Second International Conference on Artificial Reality and Tele-existence, ICAT 92, 185-192.

Claims (8)

Claims:
1. A method of detecting a predetermined pattern of movement, in a system which exhibits a complex relationship between the movements of its constituent parts, such as a human being in the process of walking, the method comprising the steps of: (a) determining the pattern of movement of one part of the system over a period of time; and (b) repeatedly applying the detected pattern to an adaptive learning system, so as to progressively adapt it to produce a reliable indication of the subsequent occurrence of the predetermined pattern.
2. A method according to claim 1 in which the pattern of movement is detected in three dimensions, relative to a fixed point, and a series of x, y and z co-ordinates are supplied in turn to the adaptive learning system, whereby it can be trained to recognise patterns which fall within a desired envelope of relationships between the values.
3. A method according to claim 1 or claim 2 in which the predetermined pattern of movement is a human walking movement, and the part being detected is the human head, whereby the occurrence of a walking motion can be detected from the corresponding movement of the head.
4. A virtual reality system in which a walking movement or mimicked walking movement by the user is detected by a method according to any of claims 1 to 3, whereby a display of a virtual environment surrounding the user can be modified in accordance with the user's movement or apparent movement.
5. A system according to claim 4 in which the said movement is detected by means of a sensor on the user's head.
6. A system according to claim 4 or claim 5 in which the adaptive learning system comprises a neural network which comprises dedicated hardware circuitry or a software emulation.
7. A method of detecting a predetermined pattern according to claim 1 and substantially as herein described.
8. A virtual reality system according to claim 4 and substantially as herein described.
GB9417807A 1994-09-05 1994-09-05 Movement detection Withdrawn GB2298501A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9417807A GB2298501A (en) 1994-09-05 1994-09-05 Movement detection

Publications (2)

Publication Number Publication Date
GB9417807D0 GB9417807D0 (en) 1994-10-26
GB2298501A true GB2298501A (en) 1996-09-04

Family

ID=10760825

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9417807A Withdrawn GB2298501A (en) 1994-09-05 1994-09-05 Movement detection

Country Status (1)

Country Link
GB (1) GB2298501A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4894777A (en) * 1986-07-28 1990-01-16 Canon Kabushiki Kaisha Operator mental condition detector
US5008946A (en) * 1987-09-09 1991-04-16 Aisin Seiki K.K. System for recognizing image

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004042544A1 (en) * 2002-11-07 2004-05-21 Personics A/S Control system including an adaptive motion detector
WO2004042545A1 (en) * 2002-11-07 2004-05-21 Personics A/S Adaptive motion detection interface and motion detector
WO2007110555A1 (en) * 2006-03-28 2007-10-04 The University Court Of The University Of Edinburgh A method for automatically characterizing the behavior of one or more objects
WO2015037219A1 (en) * 2013-09-13 2015-03-19 Seiko Epson Corporation Head mounted display device and control method for head mounted display device
US9906781B2 (en) 2013-09-13 2018-02-27 Seiko Epson Corporation Head mounted display device and control method for head mounted display device

Also Published As

Publication number Publication date
GB9417807D0 (en) 1994-10-26

Similar Documents

Publication Publication Date Title
Templeman et al. Virtual locomotion: Walking in place through virtual environments
Slater et al. Steps and ladders in virtual reality
Agah Human interactions with intelligent systems: research taxonomy
Boman International survey: Virtual-environment research
Yang et al. Implementation and evaluation of “just follow me”: An immersive, VR-based, motion-training system
Slater et al. Taking steps: the influence of a walking technique on presence in virtual reality
Slater et al. The virtual treadmill: A naturalistic metaphor for navigation in immersive virtual environments
JP2019061707A (en) Control method for human-computer interaction and application thereof
Sturman et al. Hands-on interaction with virtual environments
WO2018120964A1 (en) Posture correction method based on depth information and skeleton information
Xu A neural network approach for hand gesture recognition in virtual reality driving training system of SPG
CN108983636B (en) Man-machine intelligent symbiotic platform system
US11327566B2 (en) Methods and apparatuses for low latency body state prediction based on neuromuscular data
Adamo-Villani et al. Two gesture recognition systems for immersive math education of the deaf
WO2002001336A2 (en) Automated visual tracking for computer access
Wang et al. Feature evaluation of upper limb exercise rehabilitation interactive system based on kinect
CN114082158B (en) Upper limb rehabilitation training system for stroke patient
CN114998983A (en) Limb rehabilitation method based on augmented reality technology and posture recognition technology
Dzikri et al. Hand gesture recognition for game 3D object using the leap motion controller with backpropagation method
Echeverria et al. KUMITRON: Artificial intelligence system to monitor karate fights that synchronize aerial images with physiological and inertial signals
CN115170773A (en) Virtual classroom action interaction system and method based on metauniverse
Balaguer et al. Virtual environments
Wu et al. Omnidirectional mobile robot control based on mixed reality and semg signals
Müezzinoğlu et al. Wearable glove based approach for human-UAV interaction
GB2298501A (en) Movement detection

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)