WO2022180894A1 - Tactile-sensation-expansion information processing system, software, method, and storage medium - Google Patents


Info

Publication number
WO2022180894A1
WO2022180894A1 (PCT/JP2021/031776)
Authority
WO
WIPO (PCT)
Prior art keywords
action
operator
virtual
motion
information
Prior art date
Application number
PCT/JP2021/031776
Other languages
French (fr)
Japanese (ja)
Inventor
Tomosaburo Okazaki (友三郎 岡崎)
Original Assignee
Vessk LLC (合同会社Vessk)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vessk LLC (合同会社Vessk)
Priority to JP2022506693A (published as JPWO2022180894A1)
Publication of WO2022180894A1 publication Critical patent/WO2022180894A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; accessories therefor
    • G06F 3/0346: Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators

Definitions

  • The present invention relates to a tactile-augmented information processing system, software, method, and recording medium, and in particular to one suited to promoting memory consolidation by substantially simulating tactile sensation while eliminating the need for an additional tactile-simulation device connected to the computer.
  • APP: application software.
  • Conventional content, including games, presents images and sounds, and operations are applied to those images and sounds.
  • Real devices: operation panels, controllers, handles, wheels, keyboards, mice, tablets, trackpads, etc.
  • Such real devices, which provide tactile sensation, have conventionally been relied on heavily.
  • The content includes, for example, content displayed by various (video) games and computer APPs (for example, payroll systems, factory production systems, medical diagnosis support systems, musical-instrument performance simulation systems, driving simulation systems, language learning systems, tourism experience systems, etc.).
  • HMD: head-mounted display
  • VR: virtual reality
  • AR: augmented reality
  • The user operates an object displayed in the virtual space shown on the HMD with a hand or finger existing in real space, and thereby applies operations to the content presented by the computer (see, for example, Patent Document 1).
  • Patent Document 1 discloses a technical idea concerning a game device capable of appropriately suppressing vibrations, such as camera shake, that can occur in objects.
  • Position information of a controller facing a display device is sequentially acquired relative to the arrangement information of the object to be controlled in the virtual space, and the displacement of the controller is calculated from the acquired position information.
  • A displacement-correction unit performs correction based on this, but no attention is paid to the operator's tactile sensation when operating the object.
  • Patent Document 2 does not delve into the fundamental idea of simulating the sense of touch, and does not disclose the idea of substantially providing a tactile sensation in real space. In other words, it does not disclose how to establish the technical significance of simulating touch, or what value can be created by applying the laws of nature to that end.
  • Patent Document 1: JP 2008-113845 A. Patent Document 2: Japanese Patent No. 6419932.
  • The present invention enables computer or game operation in a virtual space using, for example, an HMD, without using a special additional device connected to the computer.
  • An object of the present invention is to provide a tactile-augmented information processing system, software, method, and recording medium capable of substantially providing a tactile sensation in real space.
  • To achieve this, the inventor first dug deeply into the technical idea of eliminating the need for an additional tactile-simulation device connected to the computer.
  • The starting point was to define the technical significance of simulating the sense of touch.
  • The inventor arrived at the technical idea of having a computer system strengthen the formation of associations (described later) between visual and/or auditory sensations and tactile sensations in the human brain, thereby promoting memory consolidation. This point is described in detail below.
  • Associative memory (association-based memory) is important for organisms, including humans.
  • A simple example of associative learning is Pavlov's famous dog.
  • Associative memory is formed by constructing higher-order interrelationships among a large number of memories formed by association (see, for example, Non-Patent Document 1).
  • Association means learning the relationship between two different stimuli when they are given together. As an example, when operating a real keyboard, associative learning is established between the content shown on the display (visual stimulus) and the button-pressing operation (tactile stimulus). Viewed from this standpoint, the problem with the conventional technology described above is that, when operating a virtual keyboard displayed on an HMD, no tactile sensation is obtained from the pressing operation, so associative learning is difficult to establish. The present inventor discovered that this is the essence of the problem.
  • Through associative learning, it is possible to acquire advantages such as a stronger memory of the two stimuli, faster response, more reliable response, and the ability to respond unconsciously.
  • Improving the response ability (performance) to a given stimulus translates into improved game skills (for example, quickly and accurately returning a desired action to a given target), improved typing skills in computer operation, improved operating skills for machinery in general (including vehicles such as passenger cars and aircraft, machine tools, and construction machinery), and an improved ability of the operator to respond to changes on a display.
  • The tactile-augmented information processing system comprises: a display unit worn by an operator located near an object having physical substance, or arranged near the operator; a sensor capable of detecting an action of the operator on the object or an action associated with the object; and a display control unit that instructs or specifies the content to be displayed on the display unit.
  • When the action performed by the operator on the object is detected as a detected action, virtual position information in the virtual space corresponding to the action is determined, and action-specifying information corresponding to the action is acquired based on a predetermined correspondence between motion-specifying information and the action-specifying information corresponding to it.
  • An associative-memory-formation promoting unit causes the operator to form an associative memory between the memory of the physical reaction force applied by the object to the action and the memory related to the action-specifying information.
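The processing flow described above (detect an action, resolve its virtual position, look up action-specifying information from a predetermined correspondence, and present it at roughly the same timing as the physical reaction force) can be sketched as follows. This is purely an illustrative sketch, not part of the claimed subject matter; every name, table entry, and mapping below is hypothetical.

```python
# Illustrative sketch of the claimed processing flow (all names hypothetical).
from dataclasses import dataclass


@dataclass
class DetectedAction:
    kind: str        # e.g. "press", "stroke", "rotate"
    real_pos: tuple  # (x, y, z) position in real space


# Predetermined correspondence between motion-specifying information
# and action-specifying information, modeled as a simple lookup table.
ACTION_TABLE = {
    "press": "KEY_INPUT",
    "stroke": "SCROLL",
    "rotate": "DIAL_TURN",
}


def real_to_virtual(real_pos):
    """Map a real-space position to virtual-space coordinates.
    An identity mapping here; a real system would calibrate this."""
    return real_pos


def process(action: DetectedAction):
    """Resolve the virtual position and action-specifying information
    for a detected action, so it can be presented to the operator at
    approximately the same timing as the physical reaction force."""
    virtual_pos = real_to_virtual(action.real_pos)
    action_info = ACTION_TABLE.get(action.kind, "UNKNOWN")
    return virtual_pos, action_info


print(process(DetectedAction("press", (0.1, 0.2, 0.0))))
```

The key point, per the description, is only the pairing of the two stimuli in time; the table contents and coordinate mapping would be application-specific.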
  • When the sensor detects the action, the virtual object can then be displayed superimposed on the real object.
  • Actions: e.g., rightward or leftward arrows.
  • Displayed objects in virtual space.
  • The associative-memory-formation promoting unit detects motions such as opening the hand; stroking with the palm, the back of the hand, or the pad of a finger; moving the hand up, down, left, or right; or rotating the hand.
  • Virtual position information in the virtual space corresponding to the motion is determined, motion-specifying information corresponding to the motion is obtained from a predetermined correspondence table, and the physical reaction force given to the motion by the object is associated with it.
  • The above action: for example, a fingertip rotation action.
  • The resulting event occurs in the virtual space.
  • In this way, the operator/user can master certain actions (such as game operations) on objects in the virtual space.
  • The associative-memory-formation promoting unit may comprise: a virtual-operation-target display control unit that, when the sensor detects that the operator has performed a specific action on the object, or in relation to the object, at a real position in real space, displays the object defined for that specific action at the virtual position in the virtual space corresponding to that real position on the display unit; an operating-subject motion detection unit that, from the action given by the operator to the object in real space, obtains the virtual position information in the virtual space corresponding to the real position information and the motion-specifying information corresponding to the action; and the object, which exerts a physical reaction force on the operator.
  • The sensor may have an imaging function.
  • An imaging unit having an imaging function may further be provided. When the imaging unit detects that the operator has performed a specific action on the object, or in association with the object, at a real position in physical space, the virtual-operation-target display control unit displays the object defined for that specific action at the virtual position in the virtual space corresponding to that real position on the display unit; the operating-subject motion detection unit obtains, from the action given to the object by the operator, the real position information where the object is located in real space, the virtual position information in the virtual space corresponding to that real position information, and the motion-specifying information corresponding to the action; and the object exerts a physical reaction force on the operator.
  • The display unit may have a see-through mechanism.
  • The see-through mechanism means that, in addition to the function of displaying image data generated by the program, the display unit is given transparency so that the wearer can see through it to the physical entity facing the wearer.
  • The display unit may be anything that allows the user to recognize a virtual space; a head-mounted display or AR goggles are preferable, but a device that displays an image, such as a screen or display arranged between the object and the user, may also be used.
  • The sensor has at least one of an optical sensor, a magnetic sensor, a contact sensor, and a distance-image sensor.
  • The object may have elasticity, and the memory related to the physical reaction force may be weighted by that elasticity.
  • The object may have hardness, elasticity, temperature, texture, etc., or a combination thereof, sufficient to provide a tactile sensation for forming the associative memory.
  • The surface need not be horizontal; it may be curved, uneven, or non-horizontal.
  • The system may further comprise an associative-learning detection unit that detects the progress of associative learning based on the detected hand or finger movement.
  • The display control unit may change the content being operated according to the progress of the operation and/or the associative learning on the virtual operation target detected by the associative-learning detection unit.
  • An associative-learning detection unit may measure and estimate the progress of the operation and/or the associative learning with respect to the virtual operation target.
  • The display control unit may change the content being operated according to the progress of the operation and/or the associative learning on the virtual operation target estimated by the associative-learning detection unit.
  • In the eleventh aspect, the visual information and/or auditory information related to the content may be changed according to the degree of progress of the associative learning estimated by the associative-learning detection unit.
  • The estimation of the progress of associative learning may include at least one of: a learning curve; a typing error rate; a game win/loss rate; the degree of progress through a predetermined series of actions; the user's movement speed and reaction speed; a correct-answer rate; accuracy; a degree of completeness calculated by an algorithm, or variables for measuring completeness (for example, a player's openness in Reversi); key-press strength; finger-shape change information for determining key-press strength; and button-press duration information.
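As a concrete illustration of two of these metrics, a typing error rate and a simple learning-curve slope could be computed as below. The specification does not fix any formula; these definitions are assumptions chosen for clarity.

```python
# Hypothetical progress metrics for associative learning (not from the spec).

def typing_error_rate(typed: str, expected: str) -> float:
    """Fraction of positions where the typed character differs from the
    expected one, counting missing or extra keystrokes as errors."""
    if not expected:
        return 0.0
    errors = sum(t != e for t, e in zip(typed, expected))
    errors += abs(len(typed) - len(expected))  # missing/extra keystrokes
    return errors / max(len(typed), len(expected))


def learning_slope(error_rates: list) -> float:
    """Average change in error rate per session; a negative value
    suggests that associative learning is progressing."""
    if len(error_rates) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(error_rates, error_rates[1:])]
    return sum(deltas) / len(deltas)


print(typing_error_rate("helko", "hello"))          # one wrong key in five
print(learning_slope([0.30, 0.22, 0.15, 0.10]))     # error rate falling
```

The display control unit could then switch content difficulty whenever the slope crosses a threshold, matching the behavior described for the eleventh aspect.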
  • The display unit may be a display installed in front of the operator's eyes, visual information related to the content may be presented on the display, and the operator's movement may be detected by the imaging unit and/or the sensor.
  • A tactile-augmented information processing method uses an information processing system comprising a display unit, a sensor capable of detecting the action of an operator located near an object having physical substance, and a control unit for controlling the display unit and the sensor; the operator's action on the object, or in association with the object, is detected as a detection action.
  • The control unit adjusts the size of the object in the virtual space corresponding to the specific action.
  • The adjusted object is displayed on the display unit at the position identified by the virtual position information in the virtual space, superimposed on the real position information in real space associated with the object; a motion given by the operator to the real object is detected as a motion given by the operator to the object in the virtual space; the control unit identifies the type of the detected motion and the real position information in physical space related to the motion; and the object gives the operator a physical reaction force against the motion at the position in real space corresponding to the object in the virtual space.
  • The control unit finds the real position information in physical space for the detected motion and identifies it with the virtual position information in the virtual space.
  • The motion-specifying information and the virtual position information corresponding to the motion are specified, and the operator may form an associative memory between a first memory, in which the operator perceives the physical reaction force given by the object to the motion, and a second memory, in which the operator perceives the motion-specifying information.
  • A tactile-augmented information processing method uses a system comprising a display unit, a sensor capable of detecting the action of an operator located near an object having physical substance, and a control unit for controlling the display unit and the sensor, and comprises: a first step of displaying content to be operated on the display unit; a second step of detecting spatial information about an entity having physical substance; a third step of setting a virtual plane at the position detected in the second step; a fourth step of setting a physically operable range on the set virtual plane; and a fifth step of displaying virtual buttons, game controllers, touch panels, discs, and other operation objects within the operable range set in the fourth step.
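The five steps above can be sketched as a simple pipeline. This is an assumed, minimal rendering of the claimed steps; every function body (plane detection by averaging surface heights, a fixed operable range, a hard-coded control list) is a stand-in, not the specification's method.

```python
# Minimal sketch of the five claimed steps (all function bodies are stand-ins).

def step1_display_content(display):
    display.append("content")  # first step: show the content to be operated


def step2_detect_plane(surface_points):
    # Second step: detect spatial information about a physical entity.
    # Here, simply average the sensed heights of surface points.
    return sum(p[2] for p in surface_points) / len(surface_points)


def step3_set_virtual_plane(z):
    return {"z": z}  # third step: virtual plane at the detected height


def step4_set_operable_range(plane, width=0.4, depth=0.2):
    plane["range"] = (width, depth)  # fourth step: physically reachable area
    return plane


def step5_place_virtual_controls(plane):
    # Fifth step: place virtual operation objects inside the operable range.
    plane["controls"] = ["virtual_keyboard", "virtual_button"]
    return plane


display = []
step1_display_content(display)
plane = step5_place_virtual_controls(
    step4_set_operable_range(
        step3_set_virtual_plane(
            step2_detect_plane([(0, 0, 0.70), (0.1, 0, 0.72)]))))
print(plane)
```

Because the plane is anchored to a detected real surface, every virtual control placed in step 5 sits where the operator's finger will meet physical resistance.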
  • A tactile-augmented information processing program runs on a computer connected to a display unit and a sensor capable of detecting the motion of an operator located near an object having physical substance.
  • The sensor is used to notify the computer that the operator has performed an action on the object or in association with the object.
  • The program comprises: a third means for acquiring action-specifying information corresponding to the detected action, based on a predetermined correspondence between the detected action and the action-specifying information corresponding to it; and a fourth means for presenting to the operator, at substantially the same timing as when the physical reaction force is given, the action-specifying information or information obtained by processing it, in order to cause the operator to form an associative memory between the memory of the physical reaction force given by the object and the memory related to the action-specifying information.
  • The fourth means may further instruct the computer so that the control unit adjusts the size of the object in the virtual space corresponding to the specific action.
  • The adjusted object is displayed on the display unit at the position identified by the virtual position information in the virtual space, superimposed on the real position information in real space associated with the object.
  • A motion given by the operator to the real object is detected as a motion given by the operator to the object in the virtual space, and the control unit identifies the type of the detected motion and the real position information related to the motion.
  • The object gives the operator a physical reaction force against the motion at the position in real space corresponding to the object in the virtual space.
  • When the control unit determines the real position information in physical space related to the detected motion and identifies it with the virtual position information in the virtual space, the motion-specifying information corresponding to the type of the detected motion is specified.
  • The action-specifying information, or information obtained by processing it, is presented to the operator at approximately the same timing as the physical reaction force is applied, so as to cause the operator to form an associative memory between the action-specifying information and a second memory perceived by the operator.
  • A twentieth aspect of the present invention can be implemented as a recording medium storing the tactile-augmented information processing program according to the eighteenth or nineteenth aspect.
  • The display unit refers to a unit that, by displaying content, allows the wearer or a person nearby to form a virtual space in terms of recognition. It preferably has a see-through mechanism and can display the supplied image in front of both eyes. It may be a device that can be seen through an opening or by being transparent, or simply a device that displays an image, such as a screen or display arranged between the object and the user.
  • A sensor that detects an object can detect the outline of a stationary object or the movement of a moving object.
  • A unit: for example, a TV camera.
  • The sensor may include a distance-image sensor capable of obtaining a distance image by outputting, pixel by pixel, the distance to the object from the captured image of the subject.
  • The sensor (for example, a TV camera) captures an image of the outside world and acquires or processes distance information, acceleration information, and the like; via the image display control unit, or directly, this can be displayed on the head-mounted display unit, and the captured image can be stored or transmitted.
  • the display unit and sensor described above may be implemented as, for example, a head mounted display (HMD).
  • The HMD preferably includes at least a display and a sensor (which may include an imaging unit, for example a TV camera), and is more preferably configured to also include a communication interface (I/F).
  • The display control unit comprises: a communication I/F for exchanging signals with the display unit (for example, the display unit of the HMD when implemented as the HMD described above); a CPU that performs calculations; a GPU that accelerates numerical calculation; memory holding the program and data being executed; an external storage device that stores programs and data; and a program that controls at least one of the CPU, GPU, communication I/F, and external storage device so as to transmit and instruct the content information to be processed.
  • The substantially flat portion is preferably a substantially horizontal surface of an object such as a desk, table, wooden box, cardboard box, flat plate, floor, sand, soil, water surface, cushion, or chair. Regardless of type, it includes not only horizontal surfaces but also slightly inclined surfaces, surfaces with small steps or unevenness, and slightly curved surfaces. Besides horizontal surfaces, it may be the side of a wall or box-shaped object, or a vertical or near-vertical surface such as one's own or another person's hand, leg, or belly, or another person's back. Furthermore, an obstacle or the like may be placed within a range that does not interfere with the operation.
  • The subject of operation instruction refers to a part of the body, such as the fingers, of the person using this system, or an operation unit held by the user; the place where they are located is detected and recognized by the sensor and the image display control unit. It is also an object whose motions the sensor can detect, that is, motions perpendicular to the substantially flat portion, or motions moving along the substantially flat portion in the plane direction.
  • a sensor e.g., a TV camera
  • a substantially flat surface in front of the operator, and the operator's fingers, hands
  • a virtual operation range is set by specifying a plurality of points on a substantially plane existing in the real space using a controller or an AR marker or the like placed in the real space.
  • the operator's fingers and hands are displayed in addition to the virtual keyboard, virtual buttons, game images to be operated, etc. within the virtual operation range set in this way.
  • the operator operates the virtual keyboard and virtual buttons on the screen of this HMD, but since the reality is a substantially flat surface that actually exists in the real space, the tactile feedback from it can be obtained without wearing a special device. , can be received as a realistic feeling, and by this, associative learning between visual sense and tactile sense such as game images can be performed, and associative memory can be constructed.
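One way to realize the range-setting described above is to take a few marked points on the real surface and express fingertip positions in the plane's own 2-D coordinates. The specification leaves the method open; the projection below is an assumed implementation, and all names are illustrative.

```python
# Assumed sketch: define a virtual operation range from points marked on a
# real flat surface, then express a fingertip position in plane coordinates.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def plane_coords(origin, corner_u, corner_v, point):
    """Project `point` onto the plane spanned by two edges of the marked
    rectangle, returning (u, v); both lie in [0, 1] when the point is
    inside the virtual operation range."""
    eu, ev = sub(corner_u, origin), sub(corner_v, origin)
    p = sub(point, origin)
    return dot(p, eu) / dot(eu, eu), dot(p, ev) / dot(ev, ev)


# Three AR-marker points on a desk (metres), plus one fingertip position.
origin, corner_u, corner_v = (0, 0, 0.7), (0.4, 0, 0.7), (0, 0.2, 0.7)
u, v = plane_coords(origin, corner_u, corner_v, (0.1, 0.05, 0.7))
inside = 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0
print(u, v, inside)
```

Once fingertips are expressed in (u, v), placing a virtual keyboard is just a matter of tiling key rectangles over the unit square, and every touch lands on the real desk surface.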
  • An existing keyboard may also be used for forming an associative memory.
  • Buttons and virtual keyboards can be placed anywhere within the virtual space's operating range, and can even be changed dynamically by the program. This makes it possible to implement even more complex games; for example, the controller can be changed depending on the operator's situation, which has been difficult to achieve with conventional game equipment using physical controllers. Furthermore, not only virtual buttons and virtual keyboards but all other virtual operation objects can be changed or adjusted, and even the appearance and size of the user's hands (for example, animal hands, robot hands, or translucent hands).
  • In the above, keyboards and buttons were mainly used as the objects operated by the subject of operation instruction, but sticks, levers, sliders, game controllers, touch panels, discs, wheels, handles, control panels, mice, keyboards, stringed instruments, percussion instruments, or any other object that can be operated by an operator and whose movement can be detected by a TV camera or sensor can also be used.
  • With the tactile-augmented information processing system, when operating a computer or game in a virtual space using an HMD or the like, tactile sensation can be substantially provided in real space without using a special additional device connected to the computer, or while reducing the number of additional devices. This promotes associative learning between vision and/or hearing and touch, making it possible to form strong associative memories. Furthermore, the progress of associative learning can be monitored by detecting the movements of fingers and hands.
  • Not only can buttons and virtual keyboards be freely arranged and sized within the virtual space, they can also be changed dynamically by a program without operator intervention, and a program providing such behavior can be supplied.
  • a virtual touch display can be used as a virtual controller.
  • The user can use, in the virtual space, a large touch display that would be very expensive in reality.
  • The virtual keyboard can be adjusted to an optimal size, beyond the restrictions of a physical display, according to the size of the user's hand and so on, and more keys can be arranged than physical restrictions would allow.
  • In conventional virtual- or mixed-reality software, the mainstream virtual-keyboard input method was to select keys floating in the virtual space one by one with a laser (virtual laser pointer) extending into the virtual space from the controller held by the user. In reality, however, operations normally performed by tapping a screen or keyboard with ten fingers were performed with only two controllers, one in each hand, so three or more keys could not be pressed simultaneously and input speed was slow.
  • According to the present invention, the user's "collision determination" against the virtual keyboard or virtual touch panel can be given a "collision" involving the substance of a real, substantially flat surface.
  • This adds to the advantage of the virtual touch display in the present invention. On a real touch display, the number of simultaneous touches that can be processed differs depending on the performance of the pressure-sensitive or capacitive sensor, and price and operability may differ accordingly. Here, simultaneous multi-touch processing (multiple collision processing) is no longer bound by such sensor restrictions: it is replaced by collision-determination (object-distance calculation) processing in software, so the number of simultaneous touches is virtually unlimited. From this point of view, more complex operations and deeper experiences than ever before can be provided.
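The "collision determination by object-distance calculation" described above might look like the following sketch, in which every fingertip is tested against every key. The names and threshold are hypothetical; the point is that the touch count is limited only by software loops, not by any sensor.

```python
# Assumed sketch: simultaneous-touch handling as pure distance computation.

def touched_keys(fingertips, keys, threshold=0.015):
    """Return the names of keys whose centers lie within `threshold`
    (metres) of any fingertip. Every fingertip is tested against every
    key, so the number of simultaneous "touches" has no sensor-imposed
    limit, matching the unlimited multi-touch described in the text."""
    hits = []
    for name, (kx, ky) in keys.items():
        for (fx, fy) in fingertips:
            if ((fx - kx) ** 2 + (fy - ky) ** 2) ** 0.5 <= threshold:
                hits.append(name)
                break  # one touch per key is enough to register it
    return hits


keys = {"A": (0.00, 0.00), "S": (0.02, 0.00), "D": (0.04, 0.00)}
fingertips = [(0.001, 0.001), (0.039, -0.002)]  # two fingers down at once
print(touched_keys(fingertips, keys))
```

Scaling to ten fingers, or a hundred keys, changes only the loop sizes; this is what replaces the pressure-sensitive or capacitive sensor's hardware limit.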
  • multiple virtual touch displays can be arranged, and they can coexist with virtual buttons and virtual controllers.
  • FIG. 1 is a configuration diagram mainly from the hardware side of a haptic augmented information processing system (for example, an image display system) according to an embodiment of the present invention
  • FIG. 1 is a functional configuration diagram from the software side that configures an image display system according to an embodiment of the present invention
  • FIG. 3 is a functional block diagram obtained by adding the configuration of an associative-memory-formation promoting unit 30 to the functional configuration diagram (functional block diagram) of FIG. 2
  • FIG. 4 is a conceptual block diagram showing the associative-memory-formation promoting unit 30 of FIG. 3 enlarged, conceptually illustrating how an associative memory is formed
  • FIG. 4 is a timing flowchart for explaining an operation defining a specific operation according to an embodiment of the present invention
  • FIG. 4 is a conceptual perspective view for explaining actual movements of an operator in specific actions according to one embodiment of the present invention
  • FIG. 4 is a conceptual perspective view for explaining actual movements of an operator in specific actions according to one embodiment of the present invention
  • FIG. 4 is a conceptual perspective view for explaining actual movements of an operator in specific actions according to one embodiment of the present invention
  • FIG. 4 is a conceptual perspective view for explaining actual movements of an operator in specific actions according to one embodiment of the present invention
  • FIG. 4 is a conceptual diagram for explaining a learning progress estimation method according to an embodiment of the present invention
  • 4 is a flowchart for explaining the operation of the image display system according to one embodiment of the present invention
  • The drawings schematically show only the range necessary to describe the present invention and achieve its object, and the description focuses mainly on the relevant parts of the present invention; other parts are based on well-known techniques.
  • FIG. 1 is an overall hardware configuration diagram of a haptic augmented information processing system (for example, an image display system) 100 according to an embodiment of the present invention. Around the configuration diagram are arranged a displayed image and a conceptual perspective view for explaining the range captured by a TV camera, which is one element of the hardware; the figure illustrates the case where an HMD is used.
  • In terms of hardware configuration, the image display system 100 includes an HMD 5 equipped with or incorporating a display 6, a TV camera 7, a sensor 8, and a communication I/F (interface) 9, and an operation control unit 21 having a communication I/F 16, a CPU 17, a GPU 18, a memory 19, and an external storage device 20, connected to the HMD 5 via the communication I/F 9.
  • An operation range on the substantially planar desk 1 (a physical entity) is specified, and a virtual keyboard 2, virtual buttons 3, and an operation instructing subject 4 (a physical entity; both hands are shown as an example) are shown.
  • The operation instructing subject 4 (an entity) can wear an HMD (head-mounted display) 5 (an entity).
  • the virtual keyboard 2 and the virtual buttons 3 are virtual entities from the user's (wearer's) point of view.
  • When the user recognizes the electronic entity displayed as video data on the display, which is one piece of hardware, by wearing another entity, namely the HMD, the object can be recognized as if it exists in empty space. Details of these technical features will be described later.
  • the display 6 preferably has a see-through function.
  • The display image displayed on the display 6 may be a see-through real image, in which the real space in front is seen through the see-through function, or a superimposed image in which a captured image, based on image data captured by the TV camera 7 and displayed on the display 6 so as to overlap the real space, is superimposed on the see-through real image. If the display 6 does not have a see-through function, the display image displayed on the display 6 may be image data captured by the TV camera 7 or processed data thereof.
  • The substantially flat surface (the surface of the desk 1) is detected from the image captured by the TV camera 7, but may alternatively be detected by a sensor 8 with imaging capability.
  • These signals, that is, information related to moving images and/or still images captured by the TV camera 7 (or the sensor 8) and detection information detected by the sensor 8, are sent to the operation control unit 21 via the communication I/F 9.
  • An example of the screen displayed on the display 6 of the HMD 5 is shown as screen 10 in FIG. 1.
  • a virtual plane 12 is also displayed.
  • An operation range is designated on this virtual plane 12, a virtual keyboard 13 and virtual buttons 14 are displayed within that range, and furthermore an image 15 of the operation instructing subject in real space (both hands are shown as an example), overlapping the virtual keyboard 13 and the virtual buttons 14, is displayed.
  • The mode in which the image 15 of the operation instructing subject is displayed may be a real image (the see-through real image described above) perceived directly by the wearer through the see-through mechanism of the HMD 5, or (when the see-through function is absent or turned off) a virtualized image (the captured image described above), or an image in which the real image perceived by the wearer's eyes and the virtualized image are superimposed (the superimposed image described above).
  • Screen data related to the screen 10 displayed on the display 6 of the HMD 5 is generated by the operation control unit 21, sent through the communication I/F 16 to the communication I/F 9 of the HMD 5, and displayed. More specifically, for example, the CPU 17 or GPU 18 reads a program pre-stored in the external storage device 20, all at once or as needed, stores it in the memory 19, reads the stored program from the memory 19, and operates the corresponding hardware according to the program, whereby the screen data can be generated.
  • The operation control unit 21 includes the CPU 17, the GPU 18, the memory 19, and the external storage device 20, and preferably also includes the communication I/F 16, but may be configured with additional necessary devices and components.
  • FIG. 2 is a functional configuration diagram (functional block diagram) from the software aspect constituting the image display system according to one embodiment of the present invention.
  • More specifically, it is a functional configuration diagram showing the configuration of the parts particularly related to the motion of the operation instructing subject and the progress estimation of associated learning.
  • The functions constituting the software of the image display system may include functions other than those shown in FIG. 2; such functions are omitted here.
  • In terms of functional blocks, the motion control unit 21 includes a space detection unit 2122, an operation range setting unit 2123, an operation detection unit 2124, a motion detection unit 2125 of the operation instructing subject, a virtual operation target display control unit 2126, a display control unit 2127, a content display control unit 2128, and an associated learning progress detection unit 2129.
  • The associated learning progress detection unit 2129 may be optional.
  • The image signal (a signal relating to moving image and/or still image information) sent from the TV camera 7 of the HMD 5 is preprocessed by an algorithm described later and detected by the space detection unit 2122.
  • The object detected by the space detection unit 2122 is not limited to a plane and may be a space such as a three-dimensional object.
  • the space detection unit 2122 can specify which part of the image signal captured by the TV camera 7 forms a substantially flat surface.
  • The space detection unit 2122 is realized by an algorithm (software) that operates the hardware of the TV camera 7 and/or the sensor 8 and detects a plane by a known plane-detection algorithm based on the information obtained from the TV camera 7 and/or the sensor 8, in cooperation with that hardware.
  • The space detection unit 2122 can employ, for example, a known plane detection method (for example, Yamato Ideoka, Masuda Nitta, Kiyotaka Kato: "Plane detection in range images using the three-dimensional Hough transform", Proceedings of the Japan Society for Precision Engineering Autumn Meeting (2011)), but is not limited to this.
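As a hedged illustration only (the specification cites a 3D-Hough-transform method and leaves the choice of detector open), the core of any such plane detector is fitting a plane to sampled 3D points and collecting the points that lie on it; a minimal consensus check might look like this, with all coordinates invented:

```python
# Illustrative sketch: a candidate plane is built from three sample
# points, and points near it are collected as the flat surface.

def cross(u, v):
    return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])

def plane_from_points(p0, p1, p2):
    """Unit normal and offset of the plane through three sample points."""
    u = tuple(b - a for a, b in zip(p0, p1))
    v = tuple(b - a for a, b in zip(p0, p2))
    n = cross(u, v)
    norm = sum(c * c for c in n) ** 0.5
    n = tuple(c / norm for c in n)
    d = -sum(nc * pc for nc, pc in zip(n, p0))
    return n, d

def inliers(points, n, d, tol=0.01):
    """Points within tol (metres) of the plane are treated as part of the
    substantially flat surface (e.g. a desk top)."""
    return [p for p in points
            if abs(sum(nc * pc for nc, pc in zip(n, p)) + d) <= tol]

desk = [(x*0.1, y*0.1, 0.7) for x in range(5) for y in range(5)]  # flat desk top
clutter = [(0.2, 0.2, 0.9), (0.3, 0.1, 1.1)]                      # a cup, a hand
n, d = plane_from_points(desk[0], desk[4], desk[20])
print(len(inliers(desk + clutter, n, d)))  # 25
```

A production detector would sample many candidate triples (as in RANSAC) or vote in Hough space, but the inlier test above is the shared building block.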
  • The operation range is determined by the operation range designating unit 2123, whereby the operation instructing subject in the real space is associated with the virtual space, for example, as position information.
  • This set of software components prepares the APP for operation.
  • When the operation range specifying unit 2123 determines the operation range, the range may be specified by the operator, or may be physically specified in advance in the real space (for example, by placing a marker such as an AR marker in the real space beforehand).
  • The operation range specifying unit 2123 is realized by an algorithm (software) that operates (external) hardware such as the TV camera 7 and/or the sensor 8, acquires physical position information of the designated operation range, and correlates (maps) the acquired position information into the virtual space, with the (internal) computer resources (communication I/F 16, CPU 17, GPU 18, memory 19, and/or external storage device 20) and the above-mentioned hardware cooperating with each other.
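The mapping (correlation) step can be sketched as follows; this is a hypothetical illustration, not the specification's implementation, and the corner coordinates and function names are invented. A physical point inside the designated operation range is projected onto the range's own edges to obtain normalized virtual-space coordinates:

```python
# Hypothetical sketch of mapping a physical position within a designated
# operation range to normalized (u, v) coordinates in virtual space.

def make_mapper(origin, x_corner, y_corner):
    """Build a function mapping a physical point (metres, camera frame)
    to (u, v) in [0, 1]^2 on the operation range's plane axes."""
    ex = tuple(b - a for a, b in zip(origin, x_corner))  # range's x edge
    ey = tuple(b - a for a, b in zip(origin, y_corner))  # range's y edge
    lx = sum(c * c for c in ex)
    ly = sum(c * c for c in ey)
    def to_virtual(p):
        rel = tuple(b - a for a, b in zip(origin, p))
        u = sum(a * b for a, b in zip(rel, ex)) / lx     # project on edges
        v = sum(a * b for a, b in zip(rel, ey)) / ly
        return (u, v)
    return to_virtual

# Desk-top range 0.4 m x 0.2 m; a fingertip at its centre maps to (0.5, 0.5):
mapper = make_mapper((0.0, 0.0, 0.7), (0.4, 0.0, 0.7), (0.0, 0.2, 0.7))
u, v = mapper((0.2, 0.1, 0.7))
print(round(u, 6), round(v, 6))  # 0.5 0.5
```

Once such a mapper exists, every detected fingertip position can be expressed in the same coordinates as the virtual keyboard and buttons.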
  • The operation detection unit 2124 is realized by an algorithm (software) that operates hardware such as the TV camera 7 and/or the sensor 8 and causes the computer resources to perform the function of detecting an action, that is, an operation by the operation instructing subject obtained from the TV camera 7 and/or the sensor 8, by specifying the operation through various information obtained as a result of the operation, including position information, in cooperation with that hardware.
  • the motion of the operation instruction subject is detected by the motion detection unit 2125 of the operation subject.
  • The objects to be detected here can include at least one of the following motions: reaction time, reaction speed, the time interval between key touches, key pressing strength, pressing the right or left arrow, rotating a finger, extending a finger, thrusting a finger, pinching the tips of two or more fingers together, forming shapes such as circles and polygons with the fingers, clenching a hand, opening a hand, closing the fingers, spreading the fingers, rubbing the palm, the back of the hand, or the pad of a finger, moving a hand up, down, left, or right, and rotating a hand.
  • The detection can also include at least one piece of specific operation information detected in relation to the specific motion, such as reaction time, reaction speed, the time interval between key touches, key pressing strength, a learning curve, a typing error rate, a game win/loss rate, the progress of a series of prescribed actions, the user's action speed and reaction speed, a correct answer rate, accuracy, a degree of completeness calculated by an algorithm or a variable for determining that degree (for example, the openness of the user's move in reversi), and finger-shape change information for determining key pressing strength.
  • The detection can further include detection for estimating the degree of progress of associated learning, but the detection functions of the motion detection unit 2125 of the operating subject are not limited to these.
  • The motion detection unit 2125 of the operating subject is realized by an algorithm (software), cooperating with hardware such as the TV camera 7 and/or the sensor 8, that detects an action of the operation instructing subject obtained from that hardware as, for example, position information related to the operation, or as position information, time information, and pressure information (pressing-force information corresponding to key pressing strength), and causes the computer resources to automatically identify the detected information as motion information for each motion (for example, a reaction, a key touch, a key press, sliding a slider, flicking a touch panel, rotating a wheel, or playing a musical instrument), together with information such as motion proficiency, motion accuracy, motion completeness, and the degree of divergence from an ideal motion.
  • For this identification, a certain threshold can be set for each type of detected information for each motion corresponding to each piece of motion information, and the actually detected information can be compared against that threshold.
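The threshold comparison above can be sketched minimally as follows; the motion names, the use of force and duration as the detected quantities, and all threshold values are invented for illustration:

```python
# Minimal sketch, assuming per-motion thresholds: detected quantities
# (pressing force, press duration) are compared against thresholds to
# label each detection with a motion type. All values are illustrative.

THRESHOLDS = {
    "key_press": {"force_min": 0.5, "duration_min": 0.05},
    "key_touch": {"force_min": 0.1, "duration_min": 0.0},
}

def classify(force, duration):
    """Return the most demanding motion whose thresholds are all met."""
    if (force >= THRESHOLDS["key_press"]["force_min"]
            and duration >= THRESHOLDS["key_press"]["duration_min"]):
        return "key_press"
    if force >= THRESHOLDS["key_touch"]["force_min"]:
        return "key_touch"
    return "no_motion"

print(classify(force=0.8, duration=0.10))  # key_press
print(classify(force=0.2, duration=0.02))  # key_touch
```

In a real system, each motion type listed earlier would get its own set of detected quantities and thresholds, but the compare-and-label structure stays the same.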
  • A specific action indicates an action predefined on the system. For example, consider the case where the specific action is defined as the action of "touching and stroking an object to form a circle".
  • FIG. 6 is a timing flowchart for explaining the operation that defines the specific action according to one embodiment of the present invention, and FIGS. 7 to 10 are conceptual perspective views. As shown in FIG. 7, the operator performs a specific action on a specific object (for example, substantially drawing a circle on the object "top surface of a desk" about a specific first position P1 as the center) (step 301).
  • the motion detection unit 2125 of the operating subject uses, for example, the following algorithm to systematically determine that an action of "stroking to form a circle", that is, a specific action, has been performed.
  • The center point of the circle related to this specific action is determined using a known algorithm, and from the determined center point of the circle, that is, the specific first position P1, the controller first position Z1 of the virtual controller (in the virtual space) is identified and displayed (FIG. 8) (step 303).
  • Steps 300B and 304 are repeated for a predefined number N of positions related to the specific motion (for example, N = 4 here), and the controller Nth position ZN is specified (identified). The controller first position Z1 through, for example, the controller fourth position Z4 of the virtual controller (in the virtual space), together with a virtual plane defined by Z1 to Z4 (the virtual plane 12 in FIG. 1), are then displayed in the virtual space by the virtual operation target display control unit 2126 (FIG. 9).
  • The algorithm for determining whether a specific action has been performed from the detection information described above can be realized, for example, as follows, but is not limited to this.
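One possible realization of the "stroking to form a circle" determination can be sketched as follows. This is a hedged illustration, since the specification leaves the algorithm open: sampled fingertip positions count as a circle when their distances from their centroid are nearly constant, and the centroid then serves as the candidate center (the specific first position P1).

```python
# Illustrative sketch of one way to decide that sampled fingertip
# positions form a circle and to recover its center.

import math

def detect_circle(points, tolerance=0.1):
    """Return the centre (candidate specific first position P1) if the
    sampled track is approximately circular, else None."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r > 0 and all(abs(r - mean_r) / mean_r < tolerance for r in radii):
        return (cx, cy)
    return None

# A fingertip track circling (0.3, 0.2) with radius 0.1 m:
track = [(0.3 + 0.1 * math.cos(k / 8 * 2 * math.pi),
          0.2 + 0.1 * math.sin(k / 8 * 2 * math.pi)) for k in range(8)]
print(detect_circle(track) is not None)  # True
```

A straight stroke or random scribble fails the constant-radius test and returns None, so the system can distinguish the predefined specific action from incidental hand movement.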
  • The virtual operation target display control unit 2126 compares the position information related to the action of "stroking to form a circle" with the captured image of the real world, whereby the controller first position Z1 to the controller Nth position ZN (N may be 4, for example) of the virtual controller (in the virtual space) are specified (identified) in the virtual space.
  • The virtual operation target display control unit 2126 superimposes a specific function object having a specific function (for example, a game controller or a keyboard) on the real object and displays it in the virtual world at a corresponding size (step 305).
  • More specifically, the controller first position Z1 through, for example, the controller fourth position Z4 are identified, and a plane controller having the thus-defined controller first position Z1 to controller fourth position Z4 as, for example, its four corners is identified, arranged in the virtual space, and displayed on the display 6 (FIG. 10).
  • On this plane controller, operation buttons B1 to B4 (corresponding to the virtual keyboard 13 and the virtual buttons 14 in FIG. 1) are defined, and each operation button is displayed in the virtual space. The position of each operation button may be associated with a position in the physical space for identification.
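The association between physical positions and operation buttons can be sketched as a hit test; the button rectangles, their coordinates, and the function name below are invented for illustration:

```python
# Hypothetical sketch: operation buttons B1-B4 on the plane controller
# are each tied to a rectangle in physical space, so a detected touch
# position identifies the pressed button. Coordinates are invented.

BUTTONS = {                      # (x_min, y_min, x_max, y_max) in metres
    "B1": (0.00, 0.00, 0.10, 0.10),
    "B2": (0.10, 0.00, 0.20, 0.10),
    "B3": (0.00, 0.10, 0.10, 0.20),
    "B4": (0.10, 0.10, 0.20, 0.20),
}

def pressed_button(touch):
    """Map a physical touch position on the desk to a virtual button."""
    x, y = touch
    for name, (x0, y0, x1, y1) in BUTTONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

print(pressed_button((0.15, 0.05)))  # B2
print(pressed_button((0.30, 0.05)))  # None
```

Because the rectangles live in the same coordinates as the detected fingertip positions, pressing the real desk inside a rectangle registers as pressing the corresponding virtual button.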
  • The specific function objects mentioned above are not limited to game controllers; examples include touch panels, track pads, touch displays, toggle switches, button switches, wheels, steering wheels, levers, joysticks, discs, sliders, rotary bodies, control panels, work instruction devices, mice, keyboards, air hockey tables, whack-a-mole machines, keyboard instruments such as pianos, stringed instruments such as guitars, instruments with striking surfaces such as drums, card-game cards and play areas, mahjong tiles, and board game pieces, as long as the object realizes the function of transmitting (inputting) the operator's intention or action to this system.
  • In any of these cases, the position information of the real object is used as a base, and the operator's actions can be detected as motion information for each motion (for example, a "reaction" motion, a "key touch" motion, a "key press" motion, and so on).
  • The "specific action" is not limited to the action of "stroking to form a circle"; any motion that can identify one or more points by a known algorithm can be used, for example closing the fingers, pressing and holding with a fingertip, tapping twice with a fingertip, pointing a finger in a specific direction, drawing a line with a fingertip, or pointing with a controller.
  • The operations specifying one or more points as described above can, by continuation or combination, specify a substantially planar surface and an operation target range or a safety confirmation range on it (so-called "manual calibration" or "clearance confirmation"). Instead of such a continuous action, the substantially planar surface and the operation target range may be specified simultaneously with one action, for example by tracing the periphery of the operation target range with a finger or stroking it with the palm.
  • This may include a step of processing the range detected as a result of stroking or tracing into a circular or polygonal range through a known algorithm, and a step of adjusting the range or size of the operation unit, or the starting position of the hit determination, to deal with detection accuracy (detection deviation or blurring).
  • Alternatively, a specific preliminary action (the user claps the hands some number of times, draws an arbitrary figure such as a circle in the air, emits a specific sound, forms the hand into a specific shape, and so on) is detected as a trigger; the object then indicated (by the user) is targeted as the specific object; and the targeting of the specific object is completed when a specific post-action is detected (which may be the same as the pre-action, or clapping the hands a different number of times than the pre-action, drawing in the air a figure different from the pre-action, emitting a specific sound, and so on). Alternatively, targeting may be performed while the pre-action continues (for example, holding a controller button, waving a hand, pinching the index finger and thumb), and the targeting action is completed upon detecting the end of the pre-action (for example, releasing the button, holding the hand still, opening the hand) or upon detecting an action that triggers completion of the pre-action (for example, pressing another button, rotating the hand, pinching the middle finger and thumb).
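The pre-action/indicate/post-action flow above can be sketched as a small state machine; this is an illustrative model only, and the event names and class name are invented:

```python
# Illustrative state machine: targeting begins on a pre-action, the
# indicated object is held as a candidate while selection is active, and
# targeting completes on a post-action or on the end of the pre-action.

class Targeting:
    def __init__(self):
        self.state = "idle"
        self.target = None

    def on_event(self, event, obj=None):
        if self.state == "idle" and event == "pre_action":    # e.g. hand clap
            self.state = "selecting"
        elif self.state == "selecting" and event == "indicate":
            self.target = obj                                 # user points at object
        elif self.state == "selecting" and event in ("post_action",
                                                     "pre_action_end"):
            self.state = "done"                               # targeting complete
        return self.state

t = Targeting()
t.on_event("pre_action")
t.on_event("indicate", obj="desk_top")
print(t.on_event("post_action"), t.target)  # done desk_top
```

The same machine covers both variants in the text: the "continuing pre-action" variant simply feeds `pre_action_end` instead of `post_action` as the completing event.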
  • the target object is first placed in the virtual space.
  • this adjusting action may include the action of designating one or more points, the action of targeting by the pre-action, the post-action, and the like.
  • The above-mentioned actions performed with the fingers and hands can be replaced by detecting actions of external devices (controllers, input devices based on them, or physical objects in a form whose actions can be detected, such as AR markers) and button input operations.
  • When the virtual operation target display control unit 2126 detects such a series of actions, detection is not limited to the still images described above; detection may also be performed by measuring physical quantities other than still images, such as moving images, sound, or vibration.
  • In this way, an object in the real world (for example, a desk top board, or a cup or doll on a table) is regarded as a game controller, and a specific action (for example, the "stroking to form a circle" action) is performed on it.
  • As a result, the real-world object is regarded as matching the specific function object (for example, a game controller) in terms of position information; in other words, a base (or platform) is generated for linking the real-world object with the corresponding specific function object in the virtual world, that is, for forming the associative memory described later.
  • The wearer can thereby experience the feeling that the intention to operate is transmitted to the system as if operating a game controller, touch panel, work instruction device, musical instrument, or a card or piece in a board game, by realistically applying various operations to the above-mentioned real-world objects.
  • The associative memory formation promoting unit (described later) can also exhibit the effect of improving the precision and finesse of the above operations.
  • In other words, an object in the real world, that is, a specific object with physical substance (which can be arbitrarily specified by the operator each time), can be used as a game controller, and can thus be regarded as an interface between the virtual world and the real world. The operator no longer operates in a void as before, but obtains a real experience accompanied by the reaction force of a physical object, that is, by a physical entity. In short, it is possible to have an experience that brings a sense of touch into the virtual reality of the game.
  • The associative memory formation promoting unit will be described in detail later. Since physical feedback can be obtained in the form of a reaction force from the desk, the stimulation from this feedback is transmitted directly to the human nerves, increasing the sense of immersion.
  • A situation is thus first created as if a specific object in the virtual world, such as a controller, were possessed by such a physical object.
  • This makes possible game, performance, and work experiences accompanied by physical feedback (that is, feeling a reaction force from a physical object; in other words, forming the associative memory described later), as well as learning and training experiences. Defined from the perspective of the wearer during the progress of a game, the experience becomes more realistic, especially when operated with momentary intense movements, as in a sport (so-called e-sports). The operator/user can thus be given a sense of realism, a sense of immersion, and a greater sense of satisfaction than ever before.
  • A user can use a desk or wall (or a cardboard box, a wooden box, a pillar, a musical instrument, a floor, a surface of water, a part of the body such as one's own or another's belly, a doll, a fruit, and so on) as a "controller", and will have a gaming experience (for example) different from anything before.
  • Examples of game content include animal breeding games, farm management games, medieval immersive role-playing games, defense games, shooting games, music games, card games, puzzle games, mahjong games, and board games; corresponding small items include collars, farm equipment, facility models, swords and shields, triggers and firing buttons, musical instruments, cards and play mats, puzzle pieces, mahjong tiles, and game pieces.
  • An object indicating the clearance confirmation result range or the processed range may be displayed, for example an obstacle object such as a fence or a signboard, or a warning object. The object indicating the safe range may be processed and displayed according to the detection result, for example when the user is about to go out of the safe range by mistake or when an operation outside the safe range is detected.
  • The APP display screen, changed in accordance with the operation detected by the operation detection unit 2124 and sent to the APP, is sent to the content display control unit 2128.
  • To change the display of the APP so as to reflect the degree of progress of associated learning (according to another embodiment of the present invention, described later), information from the associated learning progress detection unit 2129 may be used.
  • The content display control unit 2128 is realized by an algorithm (software) that controls the display 6 of the HMD 5, which is hardware, to display the changed APP display screen, in cooperation with that hardware.
  • The function of the associated learning progress detection unit 2129 according to another embodiment of the invention will be described later.
  • Information including an image of the real space captured by the TV camera 7 of the HMD 5 is sent to the virtual operation target display control unit 2126, processed there, merged with the image from the content display control unit 2128 in the display control unit 2127, and sent to the HMD. The necessary information is thereby comprehensively displayed on the screen 10 of the HMD. From the viewpoint of the operator (wearer), a semi-real/semi-virtual space in which virtual objects are mixed into the real space, or a mixed reality space in which the real space and the virtual space are aligned and superimposed, appears before the eyes.
  • The virtual operation target display control unit 2126 is realized by an algorithm (software) that causes the computer resources to pre-edit (for example, scale) the information, including the image of the real space captured by the TV camera 7 hardware, so as to be suitable for the image merge operation (described later), and to control the hardware and computer resources accordingly, in cooperation with the above-mentioned hardware and computer resources.
  • The display control unit 2127 is realized by an algorithm (software) that merges (combines) the image information edited by the virtual operation target display control unit 2126 with the image information from the content display control unit 2128 into combined image information and controls the display of that image information on hardware such as the display 6 of the HMD 5, in cooperation with that hardware.
  • FIG. 3 is a functional block diagram obtained by adding the configuration of the associative memory formation promoting unit 30 to the functional configuration diagram (functional block diagram) of FIG. 2, and FIGS. 4 and 5 are enlarged conceptual block diagrams of the associative memory formation promoting unit 30 in FIG. 3, conceptually depicting how an associative memory is formed.
  • The associative memory formation promoting unit 30 is composed of the operating subject motion detection unit 2125, the virtual operation target display control unit 2126, and the desk 1 (an entity) (strictly speaking, the reaction force the desk 1 gives against an action such as pressing down on it).
  • The desk 1 is given as an example of a real (that is, physically substantial) object; such an object is not limited to a desk and may be any other movable object (a desktop light, document holder, chair, dining table, cardboard box, mahjong table, bed, sofa, flat plate, musical instrument, doll, vegetable, fruit, and so on), a fixed object (a wall, handle, pillar, floor, and so on), or anything else that has physical substance, that is, can give a reaction force (the operator's hands, fingers, stomach, or chest, another person's back, soil, water, and so on).
  • When the operator 4 applies an action (stimulation action) S1 of pressing down on the desk 1 (strictly speaking, the operator 4's index finger 4a, for example, is in direct contact with the desk 1), the desk 1 instantaneously applies a reaction force S2 against S1 to the forefinger 4a. The stimulation by this reaction force S2 is transmitted to the central nerves of the operator 4 via the peripheral nerve network (event S3), and a certain experience (for example, the experience of giving the action S1 to a thing) S4 is formed as a memory in the brain. Substantially concurrently, as shown in FIG. 4, the action S1 is translated/converted, in information-processing terms on this system, into a specific input action (for example, the action of instructing the "controller" to move rightward), and the result of reflecting this specific input action (for example, the object 1601 in the virtual space moving rightward to the position of the object 1603 in response to the "controller") is displayed. When the operator 4 sees this, a specific success experience (for example, the experience that the object 1601 could be moved to the position of the object 1603) S5 is formed as a memory in the brain.
  • As a result, an association S6 is formed between the memories S4 and S5 in the brain.
  • an associated memory S6 of S4 and S5 is formed.
  • In the above description, the operator 4 visually recognizes that the result of translating/converting the action S1 as the specific input action is displayed on the display unit, and S5 is formed as a memory. However, the specific success experience S5 may instead be formed as a memory triggered by any of the human senses other than vision, such as confirmation by hearing, by smell, by touch (the application of a specific stimulus), or by taste, or by any combination of these.
  • In other words, after recognizing the object in the virtual space, the operator performs the input action S1 on the desk 1, a real object in which that virtual object is embodied.
  • Since the memory S4, formed by the reaction force (strictly, by feeling the reaction force) S2, and the memory S5, formed by viewing the image 10 displayed on the display unit (for example, the display 6) as the output of information processing that took the preceding input motion S1 as input, are generated substantially at the same time, a causal relationship between the memory S4 and the memory S5 is recognized in the brain, and an associative memory is formed. In other words, by superimposing the virtual space and the real space through the physical object, an associative memory unique to the present application is formed. When an input action S1 is performed on an object in the virtual space in a void, as in the prior art, there is no such reaction force, and this associative memory is not formed.
  • objects with elasticity, texture, temperature, and smell may be adopted.
  • In that case, the associative memory is formed with elasticity, texture, temperature, and smell added, resulting in a deeper and more diverse associative memory; that is, an associative memory in which memories related to elasticity, texture, temperature, and smell are added to the associative memory described above may be formed.
  • FIG. 11 is a conceptual diagram for explaining a learning progress estimation method according to an embodiment of the present invention; more specifically, a diagram showing the correspondence between learning progress and reaction time (the reciprocal of proficiency). An example of a method for estimating the progress of associated learning will be described using a learning curve with reference to FIG. 11. Actions such as button pressing are measured as the reaction time from the presentation of a stimulus until the button is pressed. Generally, as shown in FIG. 11, the reaction time decreases as learning progresses.
  • as shown in Non-Patent Document 2, by measuring the reaction time from the presentation of a certain stimulus to the pressing of a button, the operator's proficiency in the actions related to that stimulus can be estimated. For example, by measuring the reaction time of pressing a button in a game, it is possible to determine whether the operator is at stage A (an elementary stage of learning progress), stage B (an intermediate stage), or stage C (an advanced stage). In other words, by exploiting the fact that proficiency can be estimated from the reaction time between stimulus presentation and button press, the progress of learning can be estimated, and the display method of the corresponding content can be selected accordingly.
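The stage classification above can be sketched as follows. This is illustrative only: the source defines stages A (elementary), B (intermediate), and C (advanced) but gives no numeric boundaries, so the thresholds below are invented for the sketch.

```python
def estimate_stage(reaction_times_s: list) -> str:
    """Classify learning progress from the mean stimulus-to-button reaction time.

    Thresholds are hypothetical; a real system would calibrate them per task.
    """
    mean_rt = sum(reaction_times_s) / len(reaction_times_s)
    if mean_rt > 0.8:   # slow responses: elementary stage of learning progress
        return "A"
    if mean_rt > 0.4:   # intermediate stage
        return "B"
    return "C"          # fast, well-practiced responses: advanced stage
```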
  • the estimation of progress and proficiency may also use other indices, such as the typing error rate, the game win/loss rate, the progress of a pre-instructed (series of) actions, the user's action speed and reaction speed, the quiz correct-answer rate, the accuracy of actions, the degree of completion of actions calculated by an algorithm or variables for obtaining that degree (for example, the number of chains in a puzzle game, the degree of openness in reversi, the number of waiting tiles in a mahjong game, etc.), a score obtained using the above variables, key-press intensity, and finger-shape change information for obtaining it.
  • FIG. 11 has been explained using a learning curve, but in addition, any index considered appropriate for the APP may be used, including the typing error rate, the game win/loss rate, the progress of a pre-instructed (series of) actions, the user's action speed and reaction speed, the quiz correct-answer rate, the accuracy of actions, the degree of completion of actions calculated by an algorithm or variables for obtaining that degree (for example, the number of chains in a puzzle game, the degree of openness in reversi, the number of waiting tiles in a mahjong game, etc.), a score obtained using the above variables, key-press intensity, finger-shape change information for obtaining it, or any combination thereof.
  • a currently widely used display may be used instead of the HMD 5.
  • FIG. 12 is an example of a flowchart for explaining the operation of the image display system according to one embodiment of the present invention.
  • a space including a substantially planar surface is displayed on the display screen, including the screen of the HMD 5 (step 601).
  • an object having a substantially flat portion is detected by the space detection unit 2122 from the image captured by the TV camera 7 of the HMD 5, and a virtual plane is set at a predetermined position on the display screen 10 and displayed (step 603).
  • the motion detection unit 2125 of the operation instruction subject detects the motion by which the operation instruction subject specifies the operation range, and the operation range is determined on the substantially planar surface (step 605). Further, the virtual operation target display control unit 2126 displays a virtual operation target, including a virtual keyboard and virtual buttons, within the operation range (step 607).
  • the content display control unit 2128 displays a screen (content) generated by application software including games and office software in the background of the virtual operation target (step 609).
  • the operation instruction subject applies an operation to the content, whereby the operation is detected by the operation detection unit 2124 (step 611) and transferred to the application software.
  • the display content is then changed by the content display control unit 2128 (step 613).
  • the motion detection unit 2125 of the operation instruction subject detects the movement of the operation instruction subject with respect to the content (step 615).
  • This motion detection is for obtaining the information necessary for estimating the progress of associative learning.
  • when the motion detection unit 2125 of the operation instruction subject detects the movement of the operation instruction subject,
  • the associative learning progress detection unit 2129 detects the progress of the associative learning (step 617), and, in accordance with the detected operation on the virtual operation target and/or the progress of the associative learning,
  • a change is made to the content being operated (step 619), prompting the operator to perform the next operation.
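The flow of steps 601 to 619 above can be outlined as a sketch. Every function name below is a hypothetical stand-in for the corresponding unit described in the text (space detection unit 2122, operation detection unit 2124, motion detection unit 2125, associative learning progress detection unit 2129, and so on); none of them appears in the disclosure itself.

```python
def image_display_loop(system):
    """Illustrative outline of the flowchart of FIG. 12 (steps 601-619)."""
    system.display_space()                           # step 601: show space with a plane
    plane = system.detect_plane()                    # step 603: set and show virtual plane
    op_range = system.wait_for_range_gesture(plane)  # step 605: user specifies range
    system.show_virtual_controls(op_range)           # step 607: virtual keyboard/buttons
    system.show_content()                            # step 609: APP content behind controls
    while system.running:
        op = system.detect_operation()               # step 611: detect operation
        system.update_content(op)                    # step 613: pass to APP, update display
        motion = system.detect_motion()              # step 615: motion w.r.t. content
        progress = system.estimate_progress(motion)  # step 617: associative learning progress
        system.adapt_content(progress)               # step 619: change content, prompt next op
```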
  • the program can be changed dynamically without the operator's explicit instruction (or without the operator's awareness), so it becomes possible to provide programs, such as complicated games, that could not be realized before.
  • the user's motion target range will fall within the range of the formed object.
  • inserting a safety range check (a "clearance check" or "manual calibration") reduces unintended situations in the real space when the user operates in the virtual space (e.g. knocking over a coffee cup placed on the edge of the desk).
  • This effect can be further enhanced by combining means for displaying the confirmed safety range according to the user's operation (display of boundary lines, fences, etc.).
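A minimal sketch of such a clearance check, under assumed geometry: the user-specified operation range and the detected real surface are modeled as axis-aligned rectangles, and the range is accepted only if it stays a safety margin away from the surface's edges. The rectangle model and margin value are assumptions, not part of the disclosure.

```python
def is_range_safe(op_range, surface, margin=0.05):
    """Return True if op_range lies inside surface with a safety margin.

    Both arguments are (x_min, y_min, x_max, y_max) rectangles in metres;
    margin keeps the range away from edges where objects such as a coffee
    cup might sit.
    """
    rx0, ry0, rx1, ry1 = op_range
    sx0, sy0, sx1, sy1 = surface
    return (rx0 >= sx0 + margin and ry0 >= sy0 + margin and
            rx1 <= sx1 - margin and ry1 <= sy1 - margin)
```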
  • the appearance (shape, layout, etc.) of the displayed operation target can also be changed according to the size of the operation target range.
  • unintended changes in operability and visual discomfort, caused by differences in the size of the operation range specified by the user due to differences in the size of the real space, can be further reduced, and content with a high degree of immersion can be provided.
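One way to realize the appearance adjustment described above can be sketched as a uniform scaling of the displayed operation target to the width of the specified operation range. The nominal keyboard width and the clamp bounds are invented parameters for illustration.

```python
NOMINAL_WIDTH_M = 0.45  # assumed design width of the virtual keyboard layout


def layout_scale(range_width_m: float,
                 min_scale: float = 0.5, max_scale: float = 1.5) -> float:
    """Uniform scale factor for the virtual operation target, clamped so
    operability stays consistent across desks of different sizes."""
    scale = range_width_m / NOMINAL_WIDTH_M
    return max(min_scale, min(max_scale, scale))
```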
  • the description has been given mainly from the viewpoint of a tactile augmented information processing system as a memory fixation promotion system that does not require an additional device simulating the tactile sense; however, the invention may also be embodied as software, as a method for producing a specific effect by the functions detailed above, or as a recording medium on which the software is recorded.
  • with the tactile augmented information processing system, which can be said to be a memory consolidation promoting system that does not require an additional device simulating a tactile sense, there is no need to use such an additional device.
  • associative learning between the visual, auditory, and tactile senses is formed. Therefore, also from the perspective of improving APP usage literacy, the present invention has great industrial applicability and convenience in the information industry, the game industry, the music industry, the tourism industry, the construction industry, the equipment maintenance industry, the manufacturing industry, the education and training industry, the medical industry, medical-related industries, and the like.
  • Step 605: The operation range is determined on the substantially planar surface by the operation instruction subject's action of specifying the operation range.
  • Step 607: A virtual operation target, including a virtual keyboard or virtual buttons, is displayed.
  • Step 609: A screen (content) generated by an APP, including a game or office software, is displayed around the virtual operation target or at the position of the virtual operation target.
  • Step 613: The operation of the operation instruction subject is transferred to the application software, and as a result, a change is applied to the displayed content.
  • Step 615 Movement of the operation instructing subject with respect to the content is detected.
  • Step 617: The motion of the operation instruction subject is detected, and the progress of associative learning is detected.
  • Step 619: In accordance with the operation on the detected virtual operation target and/or the progress of associative learning, the content that is the operation target is changed, and the operator is prompted to perform the next operation.
  • 2122: space detection unit
  • 2123: operation range setting unit
  • 2124: operation detection unit
  • 2125: motion detection unit
  • 2126: virtual operation target display control unit
  • 2127: display control unit
  • 2128: content display control unit
  • 2129: associative learning progress detection unit
  • S1: (stimulus) action
  • S2: reaction force from the desk 1 against S1
  • S3: event in which the stimulus of the reaction force S2 is transmitted to the central nervous system via the peripheral nerve network of the operator 4
  • S4: experiential memory of applying the action S1 to a physical object
  • S5: experiential memory of being able to move the object 601 to the position of the object 603 in the virtual space
  • S6: formed associativity, associative memory

Abstract

Provided are a tactile-sensation-expansion information processing system, software, a method, and a storage medium capable of essentially providing a tactile sensation in a real space without using a special additional device connected to a computer. A tactile-sensation-expansion information processing system according to a first embodiment of the present invention comprises: a display unit which is worn by an operator located near a target object having a physical substance, or which is disposed near the operator; a sensor which can detect an action by the operator with respect to the target object, or an action associated with the target object; a display control unit that instructs or specifies content to be displayed to the display unit; and an associative memory formation promotion unit which, on the basis of the detection, as a detected action, of an action performed by the operator with respect to an object in a virtual space corresponding to the target object, calculates virtual position information in the virtual space corresponding to the action, acquires action specification information corresponding to the action on the basis of a predetermined correspondence relationship for the detected action and action specification information corresponding thereto, and causes the operator to form an associative memory between a memory relating to the action specification information and a memory relating to a physical reaction imparted from the target object in response to the action.

Description

Tactile augmented information processing system, software, method, and recording medium
 The present invention relates to a tactile augmented information processing system, software, method, and recording medium, and in particular to a tactile augmented information processing system, software, method, and recording medium suited to promoting memory consolidation in a manner that simulates the tactile sense, while eliminating the need for an additional computer-connected device that simulates the tactile sense.
 Some application software (APP), including games, presents images and sounds as presented content and lets the user apply operations to those images and sounds. In conventional computer systems, many real devices that allow users to perform such operations with an actual tactile sensation (operation panels, controllers, handles, wheels, keyboards, mice, tablets, trackpads, etc.) have been used.
 Here, the content includes, for example, the content displayed by various (video) games and computer APPs (for example, payroll calculation, factory production systems, medical diagnosis support systems, musical instrument performance simulation systems, driving simulation systems, language learning systems, tourism experience systems, etc.).
 On the other hand, technologies that use virtual reality (VR) and augmented reality (AR) by means of a head-mounted display unit, a so-called head-mounted display (HMD), have begun to be used in various fields. In this case, the user (wearer) operates an object displayed in the virtual space shown on the HMD with a hand or fingers existing in the real space, thereby applying operations to computer-presented content (for example, Patent Document 1).
 However, in operation on an HMD, although the correspondence between objects in the virtual space and the hands and fingers in the real space is established within the HMD, the hands and fingers in the real space move in empty space. Consequently, even when an object in the virtual space is manipulated, no tactile sensation of having manipulated it can be obtained, and the operator inevitably feels no physical response, as if cutting through thin air, in a sensation of emptiness.
 In this respect, Patent Document 1, for example, discloses a technical idea concerning a game device capable of appropriately suppressing vibrations, such as hand shake, that can occur in an object. However, although Patent Document 1 discloses the technical idea of sequentially acquiring position information of a controller facing a display device relative to the placement information, in the virtual space, of the object to be controlled, calculating the displacement of the controller from the acquired position information, and having a displacement correction unit perform correction based on it, it pays no attention to the operator's tactile sensation when operating the object.
 On the other hand, the technical idea of providing the operation part in a virtual space, with the tactile operation means being the keyboard of a musical instrument, is disclosed, for example, in Patent Document 2. However, Patent Document 2 does not delve into the fundamental idea of simulating the sense of touch, and does not disclose the idea of substantially providing a tactile sense within the real space. In other words, it neither establishes the technical significance of simulating the sense of touch nor discloses a technical idea of how to exploit the laws of nature concerning that significance to create value.
Patent Document 1: JP 2008-113845 A; Patent Document 2: Japanese Patent No. 6419932
 In view of the problems of the prior art described above, it is an object of the present invention to provide a tactile augmented information processing system, software, method, and recording medium capable of substantially providing a tactile sense within the real space, without using a special additional device connected to the computer, when operating a computer or game in a virtual space, for example through an HMD.
 The inventor started by digging deeply into the technical idea of eliminating the need for an additional computer-connected device that simulates the tactile sense; that is, the starting point was to define the technical significance of simulating the tactile sense. Through repeated trial and error, the inventor arrived at a technical idea exploiting a law of nature: a computer system that strengthens the association formation (described later) between the visual and/or auditory senses and the tactile sense in the human brain, thereby promoting memory consolidation. This point will be described in detail below.
 First, the inventor focused on the importance of the tactile sense in operation. After various considerations, the inventor conceived of applying the concept of associative memory to this field, in particular to information processing that exploits the tactile sense in operation. For organisms including humans, learning by association and the memory formed by it (associative memory) are important; a simple example of associative learning is the famous Pavlov's dog, and in humans, a high degree of interrelation is said to be constructed among the many memories formed by association, forming associative memory (for example, Non-Patent Document 1).
 "Association" refers to learning the relationship between two different stimuli when they are given. For example, when operating a real keyboard, associative learning is established between the content displayed on the display (vision) and the button-pressing operation (touch). From this viewpoint, the problem of the prior art described above can be redefined as follows: when operating a virtual keyboard displayed on an HMD, the tactile sensation of the button-pressing operation cannot be obtained, so that establishing associative learning becomes difficult. The inventor discovered that this lies at the essence of the problem.
 Through associative learning, advantages such as stronger memory of the two stimuli, faster response speed, more reliable responses, and the ability to respond unconsciously can be obtained. In particular, the ability to improve response performance when the same stimulus is given leads to improvement of game skills (for example, the ability to return a desired action to a given target quickly and accurately), of typing skills in computer operation, of operation skills for machines in general (including vehicles such as passenger cars and aircraft, machine tools, construction and tool machines, etc.), and of the operator's ability to act in response to changes on the display. Developing this further, the inventor elevated it into the creation of a technical idea: by experiencing an information system that exploits the natural law that memory of two stimuli is strengthened through associative learning, the operator's performance can be improved, or such improvement can be fostered, in information processing fields in which processing advances through human operation (in particular, the game field, the typing field, and manufacturing fields in general, including machine operation, mentioned above).
 That is, in order to solve the problems described above, a tactile augmented information processing system according to a first aspect of the present invention comprises: a display unit worn by an operator located near a target object having a physical substance, or disposed near the operator; a sensor capable of detecting an action of the operator on the target object or an action associated with the target object; a display control unit that instructs or specifies content to be displayed on the display unit; and an associative memory formation promotion unit that, upon an action performed by the operator on an object in the virtual space corresponding to the target object being detected as a detected action, calculates virtual position information in the virtual space corresponding to the action, acquires action-specifying information corresponding to the action based on a predetermined correspondence between detected actions and their corresponding action-specifying information, and causes the operator to form an associative memory between the memory of the physical reaction force applied by the target object in response to the action and the memory of the action-specifying information.
 With the above configuration, when the operator designates an object having a specific physical substance and performs a certain action, the sensor detects that action. Then, when the operator performs an action (for example, pressing a rightward or leftward arrow, rotating a fingertip, stretching a finger, thrusting a finger out, pinching with the tips of two or more fingers, forming a figure such as a circle or polygon with the fingers, clenching the hand, opening the hand, stroking with the palm, the back of the hand, or the pad of a finger, moving the hand up, down, left, or right, or rotating the hand) on an object in the virtual space superimposed on that target object (for example, a game controller, touch panel, disc, operation panel, mouse, keyboard, piano, guitar, drum, wheel, handle, lever, or stick), the sensor detects the action, whereupon the associative memory formation promotion unit determines the virtual position information in the virtual space corresponding to the action, acquires the action-specifying information corresponding to the action from a predetermined correspondence table, and forms an associative memory between the physical reaction force applied by the target object in response to the action and the acquired action-specifying information. As a result, an associative memory is formed within the operator between the action (for example, a fingertip rotation) and the event it causes in the virtual space (for example, the movement speed in the virtual space being doubled), so that the operator/user can become more proficient in certain actions (for example, game operations) on objects in the virtual space.
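The lookup from a detected action to its action-specifying information via a predetermined correspondence table, described above, can be sketched as follows. The grid keyboard geometry, the key pitch, and the table contents are all invented for illustration; the disclosure only requires that some predetermined correspondence exist.

```python
from typing import Optional

# Hypothetical correspondence table: grid cell on the real surface -> virtual key.
KEY_TABLE = {(0, 0): "A", (1, 0): "B", (0, 1): "C", (1, 1): "D"}
KEY_SIZE_M = 0.05  # assumed key pitch in metres


def action_specifying_info(touch_x_m: float, touch_y_m: float) -> Optional[str]:
    """Map a detected touch position on the real surface (e.g. the desk)
    to the action-specifying information (here, which virtual key it hits)."""
    cell = (int(touch_x_m // KEY_SIZE_M), int(touch_y_m // KEY_SIZE_M))
    return KEY_TABLE.get(cell)  # None when the touch falls outside the layout
```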
 This mechanism is now described in more detail. From the operator's point of view, compared with performing an input action on an object in the virtual space in empty space as in the prior art, a reaction force from an object having a physical substance (for example, a desk, a wall, or one's other hand) is applied to the hand or fingers, and that reaction force is instantly transmitted to the operator's brain. Therefore, an associative memory is formed between the reaction force (strictly, the feeling of the reaction force) and the output resulting from information processing that takes the preceding input action as input (for example, a character in the virtual space moving sideways). When input is made into empty space, there is no such reaction force, so one of the two elements necessary for associative memory formation is missing and no associative memory is formed. The present configuration therefore contributes to steadier memory formation than input into empty space. Applied to real life, this leads to effects such as proficiency in game control techniques and improvement of keyboard input speed and accuracy. In other words, when one performs an action, there is physical pressure and substance in the response to that action, and a memory with associativity is formed between that substance and the information processing result, so that one can confirm for oneself that one has performed the action. Putting this technical idea into practical use contributes to the improvement of certain human skills (techniques, crafts).
 As a second aspect of the present invention, in the first aspect, the associative memory formation promotion unit may comprise: a virtual operation target display control unit that, when the sensor detects that the operator has performed a specific action on the target object, or in relation to the target object, at a real position in the real space, displays the object defined for that specific action at a virtual position in the virtual space on the display unit corresponding to the real position; an operating subject motion detection unit that obtains, from the action the operator gives to the object, real position information of where the target object is located in the real space corresponding to the action, virtual position information in the virtual space corresponding to that real position information, and action-specifying information corresponding to the action; and the target object having a physical substance, which applies to the operator a physical reaction force against the action given by the operator.
 As a third aspect of the present invention, in the first or second aspect, the sensor may have an imaging function.
 As a fourth aspect of the present invention, the first aspect may further comprise an imaging unit having an imaging function, and the associative memory formation promotion unit may comprise: a virtual operation target display control unit that, when the imaging unit detects that the operator has performed a specific action on the target object, or in relation to the target object, at a real position in the real space, displays the object defined for that specific action at a virtual position in the virtual space on the display unit corresponding to the real position; an operating subject motion detection unit that obtains, from the action the operator gives to the object, real position information of where the target object is located in the real space corresponding to the action, virtual position information in the virtual space corresponding to that real position information, and action-specifying information corresponding to the action; and the target object having a physical substance, which applies to the operator a physical reaction force against the action given by the operator.
 As a fifth aspect of the present invention, in any one of the first to fourth aspects, the display unit may have a see-through mechanism. Here, a see-through mechanism is one that, in addition to the function of displaying image data generated by a program, provides transparency so that the wearer can see through the display unit the physical entities facing the wearer.
 As a sixth aspect of the present invention, in any one of the first to fifth aspects, the display unit may be anything capable of forming a virtual space in the user's perception; for example, a head-mounted display or AR goggles are suitable, but a device that displays an image, such as a screen or a display, may also be placed between the target object and the user.
 As a seventh aspect of the present invention, in any one of the first to sixth aspects, the sensor may have at least one of an optical sensor, a magnetic sensor, a contact sensor, and a range image sensor.
 As an eighth aspect of the present invention, in any one of the first to seventh aspects, the target object may have elasticity, and the memory of the physical reaction force may be augmented by a memory of that elasticity.
 As a ninth aspect of the present invention, in any one of the first to eighth aspects, the target object may have any one, or a combination, of hardness, elasticity, temperature, texture, and the like that gives a tactile sensation sufficient for forming the associative memory. Its surface may also be not only horizontal but curved, uneven, or non-horizontal.
 As a tenth aspect of the present invention, in any one of the first to ninth aspects, the system may further comprise an associative learning detection unit that detects the progress of associative learning from detected hand and finger movements, and the display control unit may change the content that is the operation target in accordance with the operation on the virtual operation target detected by the associative learning detection unit and/or the progress of the associative learning.
 本発明の第11の態様として、第1~第9の態様のうちのいずれかの態様において、前記仮想的操作対象に対する操作および/または連合学習の進捗を計測し推定する連合学習検出部をさらに備え、前記表示制御部は、前記連合学習検出部によって推定された仮想的操作対象に対する操作および/または連合学習の進捗に応じて、操作対象であるコンテンツを変化させるとしてもよい。 As an eleventh aspect of the present invention, in any one of the first to ninth aspects, the system may further comprise an associative-learning detection unit that measures and estimates the progress of the operation on the virtual operation target and/or the associative learning, and the display control unit may change the content to be operated according to the operation on the virtual operation target and/or the progress of associative learning estimated by the associative-learning detection unit.
 本発明の第12の態様として、第11の態様において、前記連合学習検出部にて推定された連合学習の進捗度合いに応じて、前記コンテンツに係る視覚情報および/または聴覚情報を変化させるとしてもよい。 As a twelfth aspect of the present invention, in the eleventh aspect, the visual information and/or auditory information related to the content may be changed according to the degree of progress of the associative learning estimated by the associative-learning detection unit.
 本発明の第13の態様として、第11もしくは第12の態様において、前記連合学習の進捗の推定には、学習曲線、タイピングミス率、ゲームの勝敗率、定められた一連の動作の進捗度、使用者の動作速度や反応速度、正答率、正確性、アルゴリズムによって算出される完成度あるいは完成度を測るための変数(例えば、リバーシにおけるプレイヤーの打ち手の開放度など)、キー押し強度、キー押し強度を求めるための指形の変化情報、ボタンの押し時間情報のうちの少なくともいずれかを含むとしてもよい。 As a thirteenth aspect of the present invention, in the eleventh or twelfth aspect, the estimation of the progress of the associative learning may include at least one of: a learning curve, a typing-error rate, a game win/loss rate, the degree of progress through a predetermined series of actions, the user's movement speed or reaction speed, a correct-answer rate, accuracy, a degree of completion calculated by an algorithm or variables for measuring such a degree of completion (for example, the degree of openness of a player's moves in Reversi), key-press strength, finger-shape change information for determining key-press strength, and button-press duration information.
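As a non-limiting illustration of how several of the metrics enumerated above could be combined into a single estimate of associative-learning progress, the following sketch is offered. The function name, the choice of metrics, the equal weighting, and the baseline constant are all illustrative assumptions by the editor, not values specified in this disclosure.

```python
# Hypothetical sketch: combine a typing-error rate, mean reaction time,
# and correct-answer rate into one progress score in [0, 1].
# All weights and constants below are illustrative assumptions.

def estimate_progress(error_rate, mean_reaction_s, accuracy,
                      baseline_reaction_s=1.0):
    """Return a progress score in [0, 1]; higher means learning is further along."""
    # Fewer typing errors -> higher score.
    error_score = 1.0 - min(max(error_rate, 0.0), 1.0)
    # Faster reactions relative to an assumed baseline -> higher score.
    speed_score = min(baseline_reaction_s / max(mean_reaction_s, 1e-6), 1.0)
    # Correct-answer rate contributes directly.
    acc_score = min(max(accuracy, 0.0), 1.0)
    # Equal weighting is an arbitrary choice for illustration.
    return (error_score + speed_score + acc_score) / 3.0

# A beginner versus a practiced operator:
novice = estimate_progress(error_rate=0.4, mean_reaction_s=2.0, accuracy=0.5)
expert = estimate_progress(error_rate=0.05, mean_reaction_s=0.6, accuracy=0.95)
```

A score of this kind could then drive the content changes described in the tenth to twelfth aspects, for example by raising game difficulty as the score approaches 1.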
 本発明の第14の態様として、第3もしくは第4の態様において、前記表示部は前記操作者の眼前に設置されたディスプレイであり、前記ディスプレイに前記コンテンツに係る視覚情報が提示され、前記撮像部および/または前記センサによって前記操作者の動きを検知することができるとしてもよい。 As a fourteenth aspect of the present invention, in the third or fourth aspect, the display unit may be a display installed in front of the operator's eyes, visual information related to the content may be presented on the display, and the operator's movements may be detected by the imaging unit and/or the sensor.
 また、上記課題を解決するために、本発明の第15の態様に係る触覚拡張情報処理方法は、表示部と、物理的実体を持つ対象物の近傍に所在する操作者の動作を検出できるセンサと、前記表示部及び前記センサを制御する制御部とを備えた情報処理システムにおいて、前記操作者が前記対象物に対してもしくは前記対象物と関連づけて動作をしたことが検知動作として検出される第1のステップと、前記検出された前記検知動作に係り前記対象物と関連づけられる現実空間上の現実位置情報に対応する仮想空間上の仮想位置情報を前記制御部が同定する第2のステップと、検知動作とこれに対応する動作特定情報とについて予め定められた対応関係に基づいて前記動作に対応する動作特定情報が取得される第3のステップと、前記動作に対して前記対象物から与えられる物理的反力に係る記憶と前記動作特定情報に係る記憶との間の連合記憶を前記操作者に形成させる第4のステップとを備える。 Further, in order to solve the above problems, a tactile-sensation-expansion information processing method according to a fifteenth aspect of the present invention is performed in an information processing system comprising a display unit, a sensor capable of detecting the actions of an operator located near an object having physical substance, and a control unit that controls the display unit and the sensor, and comprises: a first step in which an action performed by the operator on, or in association with, the object is detected as a sensed action; a second step in which the control unit identifies virtual position information in a virtual space corresponding to real position information in the real space that relates to the detected sensed action and is associated with the object; a third step in which action-specifying information corresponding to the action is acquired based on a predetermined correspondence between sensed actions and their action-specifying information; and a fourth step in which the operator is caused to form an associative memory between the memory of the physical reaction force given by the object in response to the action and the memory of the action-specifying information.
 本発明の第16の態様として、第15の態様において、前記第4のステップは、前記制御部が前記特定動作に対応する仮想空間上のオブジェクトの大きさを調整し該調整された後の調整後オブジェクトを前記表示部の前記仮想空間上の仮想位置情報と同定された位置であって前記対象物と関連づけられる現実空間上の現実位置情報と重畳される位置に表示させ、前記操作者によって現実の前記対象物に対して与えられる動作を前記仮想空間上のオブジェクトに対して前記操作者から与えられた動作として検出し該検出された動作の種類及び該動作に係る現実空間上の現実位置情報を前記制御部が特定するとともに、前記仮想空間上のオブジェクトに対応する現実空間上の位置において前記対象物が前記動作に対する物理的反力を前記操作者に対して与え、前記検出された前記動作に係る現実空間上の現実位置情報を前記制御部が割り出して仮想空間上の仮想位置情報と同定したうえで前記検出された前記動作の種類に応じた動作が行われたとして該動作の種類に係る動作特定情報及び該動作に対応する仮想位置情報を特定し、前記動作に対して前記対象物から与えられる物理的反力を前記操作者が知覚した第1の記憶と前記動作特定情報を前記操作者が知覚した第2の記憶との間の連合記憶を前記操作者に形成させる、としてもよい。 As a sixteenth aspect of the present invention, in the fifteenth aspect, the fourth step may be such that: the control unit adjusts the size of an object in the virtual space corresponding to the specific action and causes the display unit to display the adjusted object at a position identified by the virtual position information in the virtual space, the position being superimposed on the real position information in the real space associated with the physical object; an action given by the operator to the real physical object is detected as an action given by the operator to the object in the virtual space, and the control unit specifies the type of the detected action and the real position information in the real space related to the action; the physical object gives the operator a physical reaction force against the action at the position in the real space corresponding to the object in the virtual space; the control unit determines the real position information in the real space related to the detected action, identifies it with virtual position information in the virtual space, and, treating the action as one corresponding to the detected type, specifies the action-specifying information for that type and the virtual position information corresponding to the action; and the operator is thereby caused to form an associative memory between a first memory in which the operator perceived the physical reaction force given by the physical object in response to the action and a second memory in which the operator perceived the action-specifying information.
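The first through fourth steps of the method aspect above can be sketched as a minimal processing flow. The coordinate transform, the action table, and all names below are illustrative assumptions by the editor; in an actual system, the transform would come from calibration between the sensor's real-space frame and the virtual space.

```python
# Minimal sketch of the first-to-fourth steps (illustrative assumptions only).

# Step 3's predetermined correspondence: sensed action -> action-specifying info.
ACTION_TABLE = {"tap": "key_press", "slide": "swipe"}

def real_to_virtual(real_pos, offset=(0.0, 0.0, 0.0)):
    """Step 2: identify the virtual-space position for a real-space position.
    A simple translation stands in for an actual calibration here."""
    return tuple(r + o for r, o in zip(real_pos, offset))

def process_detected_action(action_kind, real_pos):
    """Steps 2-4 for one sensed action (step 1 is the sensor's detection itself)."""
    virtual_pos = real_to_virtual(real_pos)       # step 2
    info = ACTION_TABLE.get(action_kind)          # step 3
    # Step 4: `info` would be presented at substantially the same timing as
    # the physical reaction force from the real object, so that the operator
    # associates the tactile memory with the action-specifying information.
    return {"virtual_pos": virtual_pos, "action_info": info}

result = process_detected_action("tap", (0.1, 0.2, 0.0))
```

The point of the sketch is the pairing in step 4: the returned `action_info` is what gets presented in synchrony with the real tactile feedback.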
 さらに、上記課題を解決するために、本発明の第17の態様に係る触覚拡張情報処理方法は、表示部と、物理的実体を持つ対象物の近傍に所在する操作者の動作を検出できるセンサと、前記表示部及び前記センサを制御する制御部とを備えた情報処理システムにおいて、前記表示部に操作対象となるコンテンツを表示する第1のステップと、前記操作者の前の実空間内にある物理的実体を伴う実体物に係る空間情報を検出する第2のステップと、前記第2のステップにおいて検出された位置に仮想的平面を設定する第3のステップと、前記第3のステップにおいて設定された仮想的平面に物理的に操作可能な範囲を設定する第4のステップと、前記第4のステップにおいて設定された操作可能な範囲内に仮想的ボタン、ゲームコントローラ、タッチパネル、ディスク、操作盤、マウス、キーボード、楽器、ホイール、ハンドル、レバー、スティックのうち少なくともいずれかを含む仮想的操作対象を前記仮想空間内に表示する第5のステップと、コンテンツの変化に応じて、前記操作者による前記仮想的操作対象に対する操作に対して実空間内で反力を与えつつ、前記操作を検出する第6のステップと、前記仮想的操作対象に対する前記操作における操作指示主体の動きを検出する第7のステップと、前記第7のステップにおいて検出された前記操作指示主体の動きによって連合学習の進捗を検出する第8のステップと、前記第8のステップにおいて検出された前記仮想的操作対象に対する操作および/または連合学習の進捗に応じて、操作対象であるコンテンツを変化させる第9のステップとを備える。 Furthermore, in order to solve the above problems, a tactile-sensation-expansion information processing method according to a seventeenth aspect of the present invention is performed in an information processing system comprising a display unit, a sensor capable of detecting the actions of an operator located near an object having physical substance, and a control unit that controls the display unit and the sensor, and comprises: a first step of displaying content to be operated on the display unit; a second step of detecting spatial information about a physical entity present in the real space in front of the operator; a third step of setting a virtual plane at the position detected in the second step; a fourth step of setting a physically operable range on the virtual plane set in the third step; a fifth step of displaying in the virtual space, within the operable range set in the fourth step, a virtual operation target including at least one of a virtual button, a game controller, a touch panel, a disc, an operation panel, a mouse, a keyboard, a musical instrument, a wheel, a handle, a lever, and a stick; a sixth step of detecting the operator's operation on the virtual operation target, in accordance with changes in the content, while a reaction force is given in the real space in response to that operation; a seventh step of detecting the movement of the operation-instructing subject in the operation on the virtual operation target; an eighth step of detecting the progress of associative learning from the movement of the operation-instructing subject detected in the seventh step; and a ninth step of changing the content to be operated according to the operation on the virtual operation target and/or the progress of associative learning detected in the eighth step.
 さらにまた、上記課題を解決するために、本発明の第18の態様に係る触覚拡張情報処理プログラムは、表示部と、物理的実体を持つ対象物の近傍に所在する操作者の動作を検出できるセンサと、前記表示部及び前記センサを制御する制御部とを備えた情報処理システムにおいて、コンピュータに、前記操作者が前記対象物に対してもしくは前記対象物と関連づけて動作をしたことを前記センサに検知動作として検出させる第1の手段と、前記センサによって検出された前記検知動作に係り前記対象物と関連づけられる現実空間上の現実位置情報に対応する仮想空間上の仮想位置情報を前記制御部に同定させる第2の手段と、検知動作とこれに対応する動作特定情報とについて予め定められた対応関係に基づいて前記動作に対応する動作特定情報を取得させる第3の手段と、前記動作に対して前記対象物から与えられる物理的反力に係る記憶と前記動作特定情報に係る記憶との間の連合記憶を前記操作者に形成させるべく、前記物理的反力が与えられるのと略同タイミングで前記動作特定情報、または前記動作特定情報を処理して得られる情報を前記操作者に対して提示させる第4の手段と、として機能させることを特徴とする。 Furthermore, in order to solve the above problems, a tactile-sensation-expansion information processing program according to an eighteenth aspect of the present invention causes a computer, in an information processing system comprising a display unit, a sensor capable of detecting the actions of an operator located near an object having physical substance, and a control unit that controls the display unit and the sensor, to function as: first means for causing the sensor to detect, as a sensed action, an action performed by the operator on, or in association with, the object; second means for causing the control unit to identify virtual position information in a virtual space corresponding to real position information in the real space that relates to the sensed action detected by the sensor and is associated with the object; third means for acquiring action-specifying information corresponding to the action based on a predetermined correspondence between sensed actions and their action-specifying information; and fourth means for presenting to the operator the action-specifying information, or information obtained by processing it, at substantially the same timing as the physical reaction force is given, so as to cause the operator to form an associative memory between the memory of the physical reaction force given by the object in response to the action and the memory of the action-specifying information.
 本発明の第19の態様として、第18の態様において、前記第4の手段は、前記コンピュータに、さらに、前記制御部が前記特定動作に対応する仮想空間上のオブジェクトの大きさを調整し該調整された後の調整後オブジェクトを前記表示部の前記仮想空間上の仮想位置情報と同定された位置であって前記対象物と関連づけられる現実空間上の現実位置情報と重畳される位置に表示させ、前記操作者によって現実の前記対象物に対して与えられる動作を前記仮想空間上のオブジェクトに対して前記操作者から与えられた動作として検出し該検出された動作の種類及び該動作に係る現実空間上の現実位置情報を前記制御部が特定するとともに、前記仮想空間上のオブジェクトに対応する現実空間上の位置において前記対象物が前記動作に対する物理的反力を前記操作者に対して与え、前記検出された前記動作に係る現実空間上の現実位置情報を前記制御部が割り出して仮想空間上の仮想位置情報と同定したうえで前記検出された前記動作の種類に応じた動作が行われたとして該動作の種類に係る動作特定情報及び該動作に対応する仮想位置情報を特定し、前記動作に対して前記対象物から与えられる物理的反力を前記操作者が知覚した第1の記憶と前記動作特定情報を前記操作者が知覚した第2の記憶との間の連合記憶を前記操作者に形成させるように、前記物理的反力が与えられるのと略同タイミングで前記動作特定情報、または前記動作特定情報を処理して得られる情報を前記操作者に対して提示させるように機能させるものとしてもよい。 As a nineteenth aspect of the present invention, in the eighteenth aspect, the fourth means may further cause the computer to function such that: the control unit adjusts the size of an object in the virtual space corresponding to the specific action and causes the display unit to display the adjusted object at a position identified by the virtual position information in the virtual space, the position being superimposed on the real position information in the real space associated with the physical object; an action given by the operator to the real physical object is detected as an action given by the operator to the object in the virtual space, and the control unit specifies the type of the detected action and the real position information in the real space related to the action; the physical object gives the operator a physical reaction force against the action at the position in the real space corresponding to the object in the virtual space; the control unit determines the real position information in the real space related to the detected action, identifies it with virtual position information in the virtual space, and, treating the action as one corresponding to the detected type, specifies the action-specifying information for that type and the virtual position information corresponding to the action; and the action-specifying information, or information obtained by processing it, is presented to the operator at substantially the same timing as the physical reaction force is given, so as to cause the operator to form an associative memory between a first memory in which the operator perceived the physical reaction force given by the physical object in response to the action and a second memory in which the operator perceived the action-specifying information.
 本発明の第20の態様として、第18もしくは第19の態様に係る触覚拡張情報処理プログラムが記憶された記録媒体として実現することができる。 As a twentieth aspect of the present invention, the invention can be realized as a recording medium storing the tactile-sensation-expansion information processing program according to the eighteenth or nineteenth aspect.
 なお、上記において、表示部とは、コンテンツを表示することで装着者もしくは近傍所在者が仮想空間を認識上形成することができるものをいい、たとえば、外界の周囲視野を遮断し、好適にはシースルー機構を有し、両眼の眼前に、供給される画像を表示することができるものが好適であるが、片方の眼前のみに画像が表示されるものや、一部、外界の周囲視野が開放されまたは透過されて視認できるもの、あるいは単純に対象物と使用者の間にスクリーンやディスプレイなど映像を表示させる装置を配置したものであってもよい。 In the above, the display unit refers to a unit that, by displaying content, allows the wearer or a person nearby to perceptually form a virtual space. A unit that blocks the peripheral field of view of the outside world, preferably has a see-through mechanism, and can display the supplied images in front of both eyes is preferable; however, it may be one that displays an image in front of only one eye, one that leaves part of the peripheral field of view of the outside world open or transparent and visible, or simply a device that displays images, such as a screen or display, placed between the object and the user.
 物体を検知するセンサとしては、固定された物体の外形を検知したり、移動する物体の動きを検知したりできるものであって、可視光や赤外線によって検知を行う光学センサや単数もしくは複数の撮像部(たとえばTVカメラ)が好適であるが、その他の磁気センサや接触センサ、筋電センサ、あるいは加速度センサ等であってもよいし、上記センサいずれかの組み合わせであっても良い。また、上記センサには、被写体の撮影画像から物体までの距離をピクセル単位で出力させることで距離画像が得られる距離画像センサを含むことができる。 A sensor for detecting an object is one that can detect the outline of a stationary object or the movement of a moving object. An optical sensor that performs detection using visible light or infrared light, or one or more imaging units (for example, TV cameras), are preferable, but other sensors such as magnetic sensors, contact sensors, myoelectric sensors, or acceleration sensors may also be used, as may any combination of the above sensors. The sensor may also include a distance image sensor that obtains a distance image by outputting, in units of pixels, the distance from a captured image of the subject to the object.
 上記の場合のセンサ(例えばTVカメラ等)は、外界の画像を撮影したり、距離情報あるいは加速度情報等を取得あるいは加工したりして、画像表示制御部を通して、あるいは、直接、頭部装着型表示部に表示したり、撮影した画像を記憶あるいは伝送することができるように構成されており、頭部装着型表示部に内蔵されていることが好適であるが、内蔵されなくても近傍に配置されていてもよい。 In the above case, the sensor (for example, a TV camera) is configured to capture images of the outside world, acquire or process distance information, acceleration information, and the like, display them on the head-mounted display unit either through the image display control unit or directly, and store or transmit the captured images. It is preferably built into the head-mounted display unit, but may instead be placed nearby rather than built in.
 上述した表示部及びセンサを、たとえばヘッドマウントディスプレイ(HMD)として実現してもよい。この場合、HMDは、好適には、少なくともディスプレイ、センサ(撮像部(たとえばTVカメラ)を含むことができる)を有して構成され、さらに好適には通信インターフェース(I/F)を有して構成される。 The display unit and sensor described above may be realized as, for example, a head-mounted display (HMD). In this case, the HMD preferably comprises at least a display and a sensor (which may include an imaging unit, for example a TV camera), and more preferably also comprises a communication interface (I/F).
 HMDのディスプレイには操作者が必要とするすべての視覚的情報が表示され、TVカメラ及びセンサによって以下に示す動作制御部に対して送られるべきHMD外の情報が撮像され、通信I/FによってHMDと動作制御部との間の信号のやり取りが行われる。 All visual information required by the operator is displayed on the HMD's display; the TV camera and sensor capture information outside the HMD to be sent to the operation control unit described below; and signals are exchanged between the HMD and the operation control unit through the communication I/F.
 表示制御部は、ハードウエア的には、表示部(たとえば上記のHMDとして実現される場合においては、HMDのディスプレイ部)との信号のやり取りを行う通信I/F、演算を行うCPU、数値演算を高速に行うGPU、実行中のプログラムやデータが置かれるメモリ、プログラムやデータを保存する外部記憶装置を備えて構成され、ソフトウエア的には、表示部に対して、表示部に表示されるべきコンテンツ情報を送信・指示するように上記CPU、GPU、通信I/F、外部記憶装置の少なくともいずれかを制御するプログラムを備えて構成される。 In hardware terms, the display control unit comprises a communication I/F for exchanging signals with the display unit (for example, the HMD's display when realized as the HMD described above), a CPU for performing computations, a GPU for performing numerical computations at high speed, memory holding the programs and data being executed, and an external storage device for storing programs and data. In software terms, it comprises a program that controls at least one of the CPU, GPU, communication I/F, and external storage device so as to transmit to the display unit, and direct it to display, the content information to be displayed.
 これに対して、HMDの外部には略平面部が実在し、この上で操作者はキーボード、ボタン、楽器、操作盤、マウス、タッチパネル、レバーの操作などを行う。略平面部は現実に存在する机、テーブル、木箱、段ボール箱、平板、床部、砂地、土壌、水面、クッション、椅子などのような物体の、概ね水平な面が好適であるが、物体の種類を問わず、また、水平面でなく、やや傾斜した面、表面に小さな段差や凹凸を有する面、多少湾曲している面なども含むものとし、更に、水平面だけでなく、壁面や箱状の物体の側面、あるいは自身や他者の手、足、腹、他者の背中など、鉛直または鉛直に近い面であってもよい。さらに、操作の邪魔にならない範囲に、障害物となる小物等が設置されていてもよい。 Outside the HMD, on the other hand, a substantially flat surface physically exists, on which the operator performs operations such as those of a keyboard, buttons, musical instruments, an operation panel, a mouse, a touch panel, or levers. The substantially flat surface is preferably the roughly horizontal surface of a real object such as a desk, table, wooden box, cardboard box, flat board, floor, sand, soil, water surface, cushion, or chair, but the type of object does not matter, and it also includes slightly inclined surfaces, surfaces with small steps or unevenness, and somewhat curved surfaces. Furthermore, it may be not only a horizontal surface but also a vertical or near-vertical surface such as a wall, the side of a box-shaped object, or one's own or another person's hand, leg, or belly, or another person's back. In addition, small objects that act as obstacles may be placed within a range that does not interfere with the operation.
 操作指示主体とは、本システムを利用する者の手指などの身体の一部や、利用者の保持する操作部などを指し、それらが置かれた場所がセンサ及び画像表示制御部によって検知、認識され、また、それらの動作、すなわち、略平面部に垂直方向の動き、あるいは、略平面部を面方向に移動させる動きなどをセンサに検知させることができる物体である。 The operation-instructing subject refers to a part of the body, such as the fingers of the person using this system, or an operation unit held by the user. It is an object whose location can be detected and recognized by the sensor and the image display control unit, and whose movements, that is, movements perpendicular to the substantially flat surface or movements along that surface in its plane direction, can be detected by the sensor.
 上記ハードウエア構成において、HMDに内蔵、装着あるいは近傍に配置されているセンサ(例えばTVカメラ)によって操作者の前にある略平面(机等)が認識あるいは撮像され、操作者の指、手、コントローラ、あるいは現実空間に置かれるARマーカー等によって実空間に存在する略平面上の複数の点が指定されることによって仮想的操作範囲が設定される。 In the above hardware configuration, a sensor (for example, a TV camera) built into, attached to, or placed near the HMD recognizes or images a substantially flat surface (such as a desk) in front of the operator, and a virtual operation range is set by designating a plurality of points on that surface in the real space using the operator's fingers or hands, a controller, an AR marker placed in the real space, or the like.
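As one way the designated points could define the operable range, the following sketch derives a simple bounding rectangle in the plane's 2D coordinates and tests whether a touch falls inside it. The axis-aligned bounding box is an illustrative assumption by the editor; an actual implementation might instead fit an arbitrary quadrilateral or polygon from the designated points.

```python
# Hypothetical sketch: derive an operable range from points the operator
# designates on the real surface, expressed in the plane's 2D coordinates.

def operation_range(points):
    """Return ((min_x, min_y), (max_x, max_y)) bounding the designated points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def in_range(point, bounds):
    """True if `point` lies inside the operable range `bounds`."""
    (min_x, min_y), (max_x, max_y) = bounds
    return min_x <= point[0] <= max_x and min_y <= point[1] <= max_y

# Four corners tapped on a desk surface (meters, illustrative values):
corners = [(0.0, 0.0), (0.4, 0.02), (0.38, 0.25), (0.01, 0.24)]
bounds = operation_range(corners)
```

Virtual buttons and keyboards would then be laid out only within `bounds`, which also helps avoid unintended obstacles outside the designated range.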
 HMDの画面には、こうして設定された仮想的操作範囲内に仮想的キーボード、仮想的ボタン、操作対象となるゲーム画像などに加え、操作者の指や手が表示される。操作者はこのHMDの画面の仮想的キーボードや仮想的ボタンに対して操作を行うが、その実態は実空間に実在する略平面であるので、それからの触覚フィードバックを特別な装置を装着することなく、リアルな感触として受けることができ、これによってゲーム画像などの視覚と触覚との連合学習を行い、連想記憶を構築することができる。 On the HMD screen, the operator's fingers and hands are displayed within the virtual operation range set in this way, together with a virtual keyboard, virtual buttons, game images to be operated, and so on. The operator operates the virtual keyboard and virtual buttons on the HMD screen, but because what actually underlies them is a substantially flat surface that really exists in the real space, the operator receives realistic tactile feedback from it without wearing any special device. This enables associative learning between that tactile sensation and visual information such as game images, building an associative memory.
 本発明の上記各態様によれば、(連合記憶を形成するためのものとして)実在するキーボードが不要になるため、机の上のスペースが解放され、キーボードと書類などの紙類との共存の問題が解消される。その上、紙類の存在する場所に仮想キーボードを設定することも可能となり、作業効率とスペース効率が著しく改善される。 According to each of the above aspects of the present invention, a real keyboard (as a means of forming an associative memory) becomes unnecessary, freeing space on the desk and resolving the problem of a keyboard competing for space with papers such as documents. Moreover, a virtual keyboard can even be set up where papers are located, significantly improving work efficiency and space efficiency.
 さらに本発明の技術思想に基づけば、仮想ボタンや仮想キーボードにおける学習の進捗度の推定を行うことが可能となるので、例えば学習度合いに応じたゲームの難易度を調節することが可能となり、ゲーム技能の向上促進が図られるだけでなく、これまで実現できなかった複雑なゲームを提供することが可能になる。 Furthermore, based on the technical idea of the present invention, it becomes possible to estimate the degree of learning progress on virtual buttons and virtual keyboards, so that, for example, the difficulty of a game can be adjusted according to the degree of learning. This not only promotes the improvement of game skills but also makes it possible to provide complex games that could not be realized before.
 上記の場合、仮想ボタンや仮想キーボードは仮想空間の操作範囲のどこに置くこともでき、さらにプログラムがダイナミックに変更することすら可能となる。これによってさらに複雑なゲームを実現することが可能になる。例えば、操作者の状況に応じてコントローラを変えることが可能である。これは、物理コントローラを持つ従来のゲーム体感設備では実現が難しかった。更には、仮想ボタンや仮想キーボードだけでなく他のあらゆる仮想操作対象、さらには使用者の手の見た目や大きさすらも変更あるいは調整することが可能である(例えば、動物の手、ロボットの手、半透明の手などに変更可能である)。 In the above case, virtual buttons and virtual keyboards can be placed anywhere within the operation range of the virtual space, and the program can even change them dynamically, making it possible to realize even more complex games. For example, the controller can be changed according to the operator's situation, which was difficult to achieve with conventional game equipment using physical controllers. Furthermore, not only virtual buttons and virtual keyboards but any other virtual operation target, and even the appearance and size of the user's hands, can be changed or adjusted (for example, into animal hands, robot hands, or translucent hands).
 なお、上記では、主に操作指示主体が操作する対象としてキーボードとボタンで説明を行ったが、それ以外にも、スティック、レバー、スライダー、ゲームコントローラ、タッチパネル、ディスク、ホイール、ハンドル、操作盤、マウス、鍵盤、弦楽器、打楽器など、操作指示主体が操作可能で、TVカメラやセンサでその動きを検出できるものなら何でもよい。 In the above description, a keyboard and buttons were mainly used as examples of objects operated by the operation-instructing subject, but anything else that the operation-instructing subject can operate and whose movement can be detected by a TV camera or sensor may be used, such as a stick, lever, slider, game controller, touch panel, disc, wheel, handle, operation panel, mouse, piano keyboard, stringed instrument, or percussion instrument.
 さらに、ソフトウエアにてAPP毎に異なる仮想コントローラを生成したり内容を変更することにより、APP毎に異なるハードウエアの入れ替えをせずとも、触覚を伴うコンテンツの操作が可能となり、コスト削減と利便性の向上が実現する。 Furthermore, by generating a different virtual controller for each application in software, or by changing its contents, content can be operated with a tactile sensation without swapping in different hardware for each application, achieving cost reduction and improved convenience.
 本発明の態様に係る触覚拡張情報処理システム、方法、プログラム、記憶媒体では、たとえばHMDなどによる仮想空間内でのコンピュータやゲームに対する操作において、コンピュータに接続された特別な付加的装置を用いずに、あるいは付加的装置の数を減らしながらも、現実空間内の触覚を実質的に提供し、それによって視覚および/または聴覚と触覚の連合学習の促進が図られ、強固な連想記憶を形成することが可能となる。さらに、指や手などの動きを検出することによって連合学習の進捗を監視することも可能となる。すなわち、HMDを含む仮想空間においてキーボードやボタン押しで生じる触覚を与えることができ、これによって視覚情報及び/または聴覚情報と触覚との間の連合学習を促進し、強固な連想記憶を形成することが可能となるばかりでなく、学習の進捗を監視することによっていっそう連合学習の促進と強固な連想記憶の形成を行うことが可能となる。 With the tactile-sensation-expansion information processing system, method, program, and storage medium according to aspects of the present invention, when operating a computer or a game in a virtual space such as one presented by an HMD, a tactile sensation in the real space can be substantially provided without using a special additional device connected to the computer, or while reducing the number of such additional devices. This promotes associative learning between visual and/or auditory sensations and tactile sensations, making it possible to form a strong associative memory. Moreover, the progress of associative learning can be monitored by detecting the movements of fingers, hands, and the like. That is, the tactile sensation produced by pressing a keyboard or button can be provided in a virtual space including one presented by an HMD, which not only promotes associative learning between visual and/or auditory information and tactile sensations and forms a strong associative memory, but also, by monitoring the progress of learning, further promotes associative learning and the formation of a strong associative memory.
 さらに、これまで操作者に触覚を与えるために必要であった別途付加的装置が不要になる。あるいは、付加的装置の数を減らしたり付加的装置を簡易的なものに変更したりできる。その結果、ゲームごとに異なるコントローラを用意していた従来状況に対して、柔軟性や利便性の向上、さらにはコスト削減という極めて大きな効果を奏する。 Furthermore, the separate additional devices previously required to give the operator a tactile sensation become unnecessary; alternatively, the number of additional devices can be reduced, or they can be replaced with simpler ones. As a result, compared with the conventional situation of preparing a different controller for each game, this yields extremely large benefits in flexibility, convenience, and cost reduction.
 さらに、仮想ボタンや仮想キーボードの仮想空間内の配置や大きさが自由であるばかりでなく、プログラムによって操作者に無断でダイナミックに変更可能であるので、これまで実現できなかった複雑なゲームなどのプログラムを提供することが可能となる。 Furthermore, not only can virtual buttons and virtual keyboards be freely placed and sized within the virtual space, they can also be changed dynamically by the program without the operator's involvement, making it possible to provide programs, such as complex games, that could not be realized before.
 加えて、仮想空間で障害物となりかねない机などの物体について、その平面部に仮想オブジェクトを形成することで、利用者の大きな動きが不要となり、周囲の家具にぶつかるなどのリスクが低減される。さらに、平面部内に操作範囲を指定することで、意図せぬ障害物(例えば机上面端に置かれたコーヒーカップなど)にぶつかるリスクも、同様に軽減される。 In addition, for objects such as desks that could become obstacles in the virtual space, forming a virtual object on their flat surface removes the need for large user movements and reduces risks such as colliding with surrounding furniture. Furthermore, designating the operation range within the flat surface likewise reduces the risk of hitting unintended obstacles (for example, a coffee cup placed at the edge of the desk surface).
 さらに、仮想コントローラとして、仮想タッチディスプレイを使用することができる。この場合、現実には非常に高価となる、大きなサイズのタッチディスプレイを、使用者は仮想空間内で使用できる。 Furthermore, a virtual touch display can be used as a virtual controller. In this case, the user can use, within the virtual space, a large touch display that would be very expensive in reality.
 さらに、仮想タッチディスプレイサイズを変えることも可能になり、従来のスマートフォンやタブレットでは難しかった、ディスプレイ内に表示される仮想キーボード等のユーザーインターフェースのサイズも、仮想タッチディスプレイのサイズ可変性と同様に、自在に調整可能になる。 It also becomes possible to change the size of the virtual touch display, and the size of user interfaces displayed within it, such as a virtual keyboard, which was difficult to adjust on conventional smartphones and tablets, likewise becomes freely adjustable, just as the virtual touch display's size is variable.
 たとえば、使用者の手のサイズ等に応じて仮想キーボードサイズをディスプレイサイズの制約を超えた最適なものに調整することも可能になるし、物理的制約を超えた多数のキーを配置することもできる。 For example, the virtual keyboard can be adjusted to an optimal size beyond the constraints of any display size according to the size of the user's hands, and a larger number of keys can be laid out than physical constraints would allow.
 以下、仮想タッチパネル及び仮想キーボード、ひいては仮想空間内におけるユーザーインターフェース全般に関する操作性と本発明の新規性、進歩性についてさらに詳述する。従来の仮想もしくは複合現実ソフトウエアにおける仮想キーボードは、仮想空間上に浮かんだキーを、使用者が持つコントローラ等から仮想空間内に伸びるレーザー(仮想レーザーポインター)で一つ一つ選択していく文字入力方式が主流であった。しかしこれは、現実では10本の指で画面やキーボードを叩いて行う操作を、両手のコントローラ2本のみで行っているため、3キー以上の同時押しショートカット操作等が実現できないことや、入力速度が遅いことが問題となっていた。 The operability of the virtual touch panel and virtual keyboard, and of user interfaces in virtual space in general, together with the novelty and inventive step of the present invention, are described in further detail below. In conventional virtual- or mixed-reality software, the mainstream virtual-keyboard input method had the user select keys floating in the virtual space one by one with a laser (a virtual laser pointer) extending into the virtual space from a hand-held controller. However, because operations that in reality are performed by striking a screen or keyboard with ten fingers were performed with only two hand controllers, shortcut operations requiring three or more keys pressed simultaneously could not be realized, and slow input speed was a problem.
 それを解決しようと、現実のキーボードやタッチパネルを模したオブジェクトを仮想空間内に再現する試みがあったが、そのタッチパネルやキーボードには現実に「実体」が無いため、手が突き抜けてしまい、操作しづらい上に隣のキーや別のUIボタンも同時に反応してしまうなどの誤選択(誤操作)も頻発していた。この誤操作を減らすため、仮想空間におけるユーザーインターフェースでは、空中に浮かぶパネルに一定時間継続して触れた場合のみ入力を受け付ける(もしくは、パネルに触れたまま使用者が外部コントローラのボタンを押す)という手法も多く取られているが、これは結局入力速度が犠牲になってしまっていた。 To solve this, there were attempts to reproduce objects imitating real keyboards and touch panels in the virtual space, but because those touch panels and keyboards had no real "substance", the hands passed straight through them; they were difficult to operate, and erroneous selections (erroneous operations), such as adjacent keys or other UI buttons reacting at the same time, occurred frequently. To reduce such erroneous operations, user interfaces in virtual space have often adopted the approach of accepting input only when a panel floating in the air is touched continuously for a certain time (or the user presses a button on an external controller while touching the panel), but this ultimately sacrificed input speed.
 本発明により、仮想キーボードや仮想タッチパネルに対する使用者の「衝突判定」に現実の略平面という実体を伴う「衝突」を与えることができる。それによって、これまで一般的だった仮想現実もしくは複合現実ソフトウエアで問題となっていた、現実空間での実体を伴わない故の上述したような操作性の悪さを、特別な付加装置無しで一挙に解決することができる。 According to the present invention, the user's "collision detection" against a virtual keyboard or virtual touch panel can be given a "collision" accompanied by the substance of a real, substantially flat surface. This resolves, all at once and without special additional equipment, the poor operability described above that has been a problem in hitherto common virtual- or mixed-reality software due to the absence of a physical counterpart in the real space.
 さらに、現実の実体を押したり撫でたりといった操作は、同様の操作を物理的抵抗のない空中で行った場合と比べて、「ブレ」が減る。このブレの減少によって、誤操作が減るだけでなく、「キーの押し時間」「キーに触れてから動かした方向」等といった細かな操作情報を、使用者の意図に近い状態で取得できる。これにより、従来の物理キーボードでは難しいもしくは高コストになっていた、長押しでキーの見た目や挙動を変えることができるキーボードの実現が容易になる。 In addition, operations such as pressing or stroking a real entity exhibit less "wobble" than the same operations performed in mid-air without physical resistance. This reduction in wobble not only decreases erroneous operations but also makes it possible to acquire fine-grained operation information, such as the key-press duration and the direction of movement after touching a key, in a state close to the user's intention. This makes it easy to realize a keyboard whose keys change appearance or behavior on a long press, which was difficult or costly with conventional physical keyboards.
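The fine-grained operation information mentioned above, press duration and post-touch movement direction, could be extracted from touch-down and touch-up events along the lines of the following sketch. The thresholds, the function name, and the four-way direction scheme are illustrative assumptions by the editor, not values specified in this disclosure.

```python
# Hypothetical sketch: classify a touch as tap / long press / directional
# swipe from its down/up timestamps and positions on the surface plane.
import math

def classify_press(t_down, t_up, p_down, p_up,
                   long_press_s=0.5, move_threshold_m=0.01):
    """Return ('tap'|'long_press'|'swipe', direction-or-None)."""
    duration = t_up - t_down
    dx, dy = p_up[0] - p_down[0], p_up[1] - p_down[1]
    if math.hypot(dx, dy) > move_threshold_m:
        # The finger moved after touching the key: treat as a swipe and
        # quantize its direction into four sectors (+y assumed "up").
        angle = math.degrees(math.atan2(dy, dx))
        direction = ("right" if -45 <= angle < 45 else
                     "up" if 45 <= angle < 135 else
                     "down" if -135 <= angle < -45 else "left")
        return ("swipe", direction)
    return ("long_press" if duration >= long_press_s else "tap", None)
```

A key that changes its appearance or behavior on a long press would simply branch on the first element of the returned tuple.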
 また、タッチディスプレイにおける(文字)入力方法として既知の「スワイプ操作」「フリック入力」「グライド入力」等といった手法を発展させる形で、スマートフォンやタブレットの画面サイズの制約を超えた、複雑でダイナミックな操作を要する「スワイプ操作」「フリック入力」「グライド入力」も、本発明においては、可能になる。 In addition, by extending methods known as (character) input methods on touch displays, such as "swipe operation", "flick input", and "glide input", the present invention also makes possible swipe operations, flick input, and glide input that require complex, dynamic movements beyond the screen-size constraints of smartphones and tablets.
 さらに本発明における仮想タッチディスプレイの利点を付け加える。それは、現実のタッチディスプレイは、感圧センサーや静電容量センサーの性能等によって同時タッチ可能数などが異なり、それにより、価格や操作性も異なる場合があるが、本発明における仮想タッチディスプレイでの複数同時タッチの処理(複数衝突処理)は、そういったセンサーの制約は関係なくなり、ソフトウエア上での衝突判定(オブジェクト距離計算)処理数に置き換わることから、同時タッチ可能数は事実上無制限と言え、この観点からも、これまで以上に複雑な操作と、深い体験の提供が可能になるといえる。 A further advantage of the virtual touch display in the present invention should be added. With a real touch display, the number of simultaneous touches it can register varies with the performance of its pressure-sensitive or capacitive sensors, and its price and operability may vary accordingly. In the present invention, however, the processing of multiple simultaneous touches (multiple-collision processing) is freed from such sensor constraints and is replaced by the number of collision-detection (object distance calculation) operations performed in software, so the number of simultaneous touches is effectively unlimited. From this viewpoint as well, it becomes possible to provide more complex operations and deeper experiences than ever before.
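The software collision detection (object distance calculation) described above can be sketched as follows: every fingertip is tested against every virtual button by plain distance comparison, so the touch count is bounded only by computation, not by any sensor. The radius, coordinates, and names are illustrative assumptions by the editor.

```python
# Hypothetical sketch: multi-touch resolved purely by distance calculations
# between tracked fingertips and virtual buttons on the surface plane.

def touched_buttons(fingertips, buttons, radius=0.02):
    """Return the names of buttons within `radius` of any fingertip.
    Squared distances avoid the square root in the inner loop."""
    hits = set()
    for fx, fy in fingertips:
        for name, (bx, by) in buttons.items():
            if (fx - bx) ** 2 + (fy - by) ** 2 <= radius ** 2:
                hits.add(name)
    return hits

buttons = {"A": (0.0, 0.0), "B": (0.1, 0.0), "C": (0.2, 0.0)}
# Ten fingertips pressing at once poses no special problem in software:
fingers = [(0.0, 0.0), (0.1, 0.005), (0.2, 0.0)] + [(0.5, 0.5)] * 7
```

Because this is a nested loop over fingertips and buttons, three or more simultaneous key presses are handled identically to one, which is the basis of the "effectively unlimited" claim above.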
 もちろん、仮想タッチディスプレイを複数配置する事もできるし、仮想ボタンや仮想コントローラと共存させることもできる。 Of course, multiple virtual touch displays can be arranged, and they can coexist with virtual buttons and virtual controllers.
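As an illustrative sketch only (not part of the claimed embodiment), the software-side collision determination described above can be outlined as follows. The data model, function names, and the 5 mm tolerance are assumptions for illustration: each fingertip is a 3-D point and the virtual touch display is a rectangle in the plane z = 0, so the simultaneous-touch count is bounded only by how many distance checks are performed.

```python
# Minimal sketch (assumed data model): a "touch" is any fingertip whose
# distance to the virtual display plane is within TOUCH_EPS and whose
# projection falls inside the display rectangle.

TOUCH_EPS = 0.005  # 5 mm tolerance for detection jitter (assumed value)

def detect_touches(fingertips, width, height):
    """Return the fingertips currently touching the virtual display."""
    touches = []
    for x, y, z in fingertips:
        if abs(z) <= TOUCH_EPS and 0.0 <= x <= width and 0.0 <= y <= height:
            touches.append((x, y))
    return touches

# Ten fingers pressed at once: no sensor limit, just ten distance checks.
tips = [(0.02 * i, 0.05, 0.001) for i in range(10)]
print(len(detect_touches(tips, width=0.30, height=0.20)))  # -> 10
```

Because each touch reduces to one distance computation, adding more simultaneous contacts only adds more loop iterations, which is the sense in which the count is "virtually unlimited."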
本発明の一実施形態に係る触覚拡張情報処理システム(たとえば画像表示システム)の主としてハードウエア面からの構成図である。 FIG. 1 is a configuration diagram, mainly from the hardware side, of a haptic augmented information processing system (for example, an image display system) according to an embodiment of the present invention.
本発明の一実施形態に係る画像表示システムを構成するソフトウエア面からの機能構成図である。 FIG. 2 is a functional configuration diagram, from the software side, of an image display system according to an embodiment of the present invention.
図2に係る機能構成図(機能ブロック図)に、連合記憶形成促進部30の構成を加えた機能ブロック図である。 FIG. 3 is a functional block diagram in which the configuration of the associative-memory-formation promoting unit 30 is added to the functional configuration diagram (functional block diagram) of FIG. 2.
図3の連合記憶形成促進部30を拡大表記するとともに連合記憶が形成される様子を概念的に描いた構成ブロック概念図である。 FIG. 4 is a conceptual block diagram showing the associative-memory-formation promoting unit 30 of FIG. 3 in enlarged form and conceptually illustrating how an associative memory is formed.
図3の連合記憶形成促進部30を拡大表記するとともに連合記憶が形成される様子を概念的に描いた構成ブロック概念図である。 FIG. 5 is a conceptual block diagram showing the associative-memory-formation promoting unit 30 of FIG. 3 in enlarged form and conceptually illustrating how an associative memory is formed.
本発明の一実施形態に係る特定動作を定義する動作について説明するためのタイミングフローチャートである。 FIG. 6 is a timing flowchart for explaining the operations that define a specific action according to an embodiment of the present invention.
本発明の一実施形態に係る特定動作における операция操作者の実際の動きを説明するための概念的斜視図である。 FIG. 7 is a conceptual perspective view for explaining the actual movements of the operator in a specific action according to an embodiment of the present invention.
本発明の一実施形態に係る特定動作における操作者の実際の動きを説明するための概念的斜視図である。 FIG. 8 is a conceptual perspective view for explaining the actual movements of the operator in a specific action according to an embodiment of the present invention.
本発明の一実施形態に係る特定動作における操作者の実際の動きを説明するための概念的斜視図である。 FIG. 9 is a conceptual perspective view for explaining the actual movements of the operator in a specific action according to an embodiment of the present invention.
本発明の一実施形態に係る特定動作における操作者の実際の動きを説明するための概念的斜視図である。 FIG. 10 is a conceptual perspective view for explaining the actual movements of the operator in a specific action according to an embodiment of the present invention.
本発明の一実施形態に係る学習進捗度の推定方法を説明するための概念図である。 FIG. 11 is a conceptual diagram for explaining a method of estimating the degree of learning progress according to an embodiment of the present invention.
本発明の一実施形態に係る画像表示システムの動作を説明するためのフローチャートである。 FIG. 12 is a flowchart for explaining the operation of the image display system according to an embodiment of the present invention.
 以下、図面を参照し、本発明の第1の実施形態にかかる触覚拡張情報処理システム(たとえば画像表示システム)について説明する。なお、以下では本発明の目的を達成するための説明に必要な範囲を模式的に示し、本発明の該当部分の説明に必要な範囲を主に説明することとし、説明を省略する箇所については公知技術によるものとする。 A haptic augmented information processing system (for example, an image display system) according to the first embodiment of the present invention will be described below with reference to the drawings. The following description schematically covers only the scope needed to explain how the object of the present invention is achieved, focusing on the relevant parts of the invention; matters whose description is omitted are assumed to follow known techniques.
 図1は、本発明の一実施形態に係る触覚拡張情報処理システム(たとえば画像表示システム)100のハードウエアの全体構成図を中心に配置したうえで、ハードウエアを構成する一要素たるディスプレイ上に表示されるイメージ図及び同ハードウエアを構成する一要素たるTVカメラが撮像する範囲を説明するための概念的斜視図を周囲に配置した図であり、同図ではHMDを用いた場合を例示的に示している。同図に示されるように、画像表示システム100は、ハードウエア構成的には、ディスプレイ6、TVカメラ7、センサ8、及び、通信I/F(インターフェース)9が装備もしくは内蔵されるHMD5と、HMD5と通信I/F9を介して接続された、通信I/F16、CPU17、GPU18、メモリ19、及び、外部記憶装置20を備える動作制御部21とを備えて構成される。 略平面である(実体としての)机1の上の操作範囲が指定され、その内部に仮想的キーボード2、仮想的ボタン3、及び(実体としての)操作指示主体4(例示的に両手を示す)の位置関係が示される。(実体としての)操作指示主体4は(実体としての)HMD(ヘッドマウントディスプレイ)5を装着することができる。いうまでもなく、仮想的キーボード2、仮想的ボタン3は、利用者(装着者)の観点からはヴァーチャルな存在であるが、情報処理的には、情報処理装置というハードウエア(実体)をソフトウエアという別の実体が動作させることによってハードウエアの一であるディスプレイ上に映像データとして表示される電子的実体を利用者(装着者)という人間が認識することで、あたかも虚空に実体が存在するかのように認識できる対象である。これらの技術的特徴の詳細については後述する。 FIG. 1 is an overall hardware configuration diagram of a haptic augmented information processing system (for example, an image display system) 100 according to an embodiment of the present invention. It is a view in which a displayed image and a conceptual perspective view for explaining a range captured by a TV camera, which is one element of the same hardware, are arranged around, and in the same figure, the case where an HMD is used is exemplified. showing. As shown in the figure, the image display system 100 includes, in terms of hardware configuration, a display 6, a TV camera 7, a sensor 8, and an HMD 5 equipped with or incorporated with a communication I/F (interface) 9; It comprises an operation control unit 21 having a communication I/F 16, a CPU 17, a GPU 18, a memory 19, and an external storage device 20, which are connected to the HMD 5 via a communication I/F 9. An operation range on a substantially planar desk 1 (as an entity) is specified, and a virtual keyboard 2, a virtual button 3, and an operation instructing subject 4 (as an entity) (both hands are shown as an example) ) are shown. An operation instructing subject 4 (as an entity) can wear an HMD (head mounted display) 5 (as an entity). 
Needless to say, the virtual keyboard 2 and the virtual buttons 3 are virtual from the user's (wearer's) point of view. In information-processing terms, however, they are electronic entities: software (one kind of entity) operates the hardware of the information processing device (another kind of entity) so that they are displayed as video data on the display, which is one element of that hardware, and the human user (wearer) who recognizes them perceives them as if they existed in empty space. Details of these technical features will be described later.
 机1を含む操作者4の前方の様子はHMD5に装着されたTVカメラ7によって撮像され、HMD5のディスプレイ6に表示される。この場合、ディスプレイ6は好適にはシースルー機能を備える。ディスプレイ6がシースルー機能を備えている場合には、ディスプレイ6に表示される表示画像は、前方の現実空間がシースルー機能を通して眼前に見えるシースルー現実画像であっても、或いは、TVカメラ7によって撮像されたイメージデータをもとに現実空間と重畳されるようにディスプレイ6に表示される撮像画像と当該シースルー現実画像とが重畳された重畳画像であってもよい。ディスプレイ6がシースルー機能を備えていない場合には、ディスプレイ6に表示される表示画像は、TVカメラ7によって撮像されたイメージデータ、またはそれを加工したものであってもよい。(机1の表面である)略平面はTVカメラ7によって撮像される画像から検出されるが、代替的に、撮像機能を備えたセンサ8によって検出されることとしてもよい。これらの信号、すなわち、TVカメラ7(もしくはセンサ8)によって撮像される動画像及び/もしくは静止画像に係る情報、並びに、センサ8によって検出される検出情報、は通信I/F9を介して動作制御部21に送られる。 The scene in front of the operator 4, including the desk 1, is imaged by the TV camera 7 attached to the HMD 5 and displayed on the display 6 of the HMD 5. The display 6 preferably has a see-through function. When the display 6 has a see-through function, the image presented on the display 6 may be a see-through real image, in which the real space ahead is seen directly through the see-through function, or a superimposed image in which that see-through real image is overlaid with a captured image displayed on the display 6, based on the image data captured by the TV camera 7, so as to be superimposed on the real space. When the display 6 has no see-through function, the image presented on the display 6 may be the image data captured by the TV camera 7, or data derived by processing it. The substantially flat surface (the surface of the desk 1) is detected from the image captured by the TV camera 7; alternatively, it may be detected by a sensor 8 equipped with an imaging function. These signals, namely the information on the moving images and/or still images captured by the TV camera 7 (or the sensor 8) and the detection information detected by the sensor 8, are sent to the operation control unit 21 via the communication I/F 9.
 HMD5のディスプレイ6に表示される画面の一例を(図1の)画面10として示す。実行中のAPP(ここでは、たとえばゲーム画像を例示的に示す)の画面(厳密には、仮想空間上に現出される画像)11が表示され、略平面から生成された操作者の操作が行われる仮想的平面12も表示される。この仮想的平面上12には操作範囲が指定されており、その範囲内に仮想的キーボード13及び仮想的ボタン14が表示され、されにそれにオーバーラップして操作指示主体(例示的に両手を示す)の実空間上の画像15が表示される。 操作指示主体の画像15が表示される態様としてはHMD5のシースルー機構によって装着者の視覚でとらえられる実画像(上述したシースルー現実画像)でもよいし、或いは、(シースルー機能がない、もしくはオフになっている場合には)実画像の代わりにたとえば(図示しない)プログラムによってハードウエアが一定の動作をすることでディスプレイ6に画像として表示される仮想化した画像(上述した撮像画像もしくは当該撮像画像に対して一定の加工を加えた加工済撮像画像)であってもよく、もしくは、装着者の視覚でとらえられる実画像と当該仮想化した画像とが重畳された画像(上述した重畳画像)であってもよい。 An example of the screen displayed on the display 6 of the HMD 5 is shown as screen 10 (in FIG. 1). A screen (strictly speaking, an image appearing in a virtual space) 11 of an APP being executed (here, for example, a game image is exemplified) is displayed, and an operator's operation generated from a substantially plane is displayed. A virtual plane 12 is also displayed. An operation range is designated on this virtual plane 12, and a virtual keyboard 13 and virtual buttons 14 are displayed within the range, and furthermore, an operation instruction subject (both hands are shown as an example) that overlaps the virtual keyboard 13 and the virtual button ) on the real space is displayed. The mode in which the image 15 of the main subject of the operation instruction is displayed may be a real image (the above-mentioned see-through real image) captured visually by the wearer by the see-through mechanism of the HMD 5, or (the see-through function is absent or turned off). Instead of a real image, for example, a virtualized image (the above-described captured image or Alternatively, it may be an image in which a real image perceived by the wearer's eyes and the virtualized image are superimposed (the superimposed image described above). may
 HMD5のディスプレイ6に表示される画面10に係る画面データは、動作制御部21によって生成され、その通信I/F16を通ってHMD5の通信I/F9に送られて表示される。その生成は、より具体的には、たとえば、外部記憶装置20に予め記憶されている一定のプログラムをCPU17もしくはGPU18が一括してもしくは都度読み取ってメモリ19に格納し、CPU17もしくはGPU18がメモリ19に格納されたプログラムを読取り、これに沿って該当するハードウエアを動作させることで上記画面データを生成する、という態様をとることができる。 Screen data related to the screen 10 displayed on the display 6 of the HMD 5 is generated by the operation control unit 21, sent to the communication I/F 9 of the HMD 5 through the communication I/F 16, and displayed. More specifically, for example, the CPU 17 or GPU 18 reads a certain program pre-stored in the external storage device 20 all at once or each time, stores it in the memory 19, and the CPU 17 or GPU 18 stores it in the memory 19. The screen data can be generated by reading the stored program and operating the corresponding hardware according to the program.
 動作制御部21は上述したように、CPU17、GPU18、メモリ19、外部記憶装置20を備えて構成され、好適には通信I/F16も備えるが、さらに必要な装置やコンポーネントを付加したものとして構成されてもよい。 As described above, the operation control unit 21 comprises the CPU 17, the GPU 18, the memory 19, and the external storage device 20, and preferably also the communication I/F 16; it may further be configured with additional devices and components as needed.
 図2は、本発明の一実施形態に係る画像表示システムを構成するソフトウエア面からの機能構成図(機能ブロック図)であり、特に、図1で示したハードウエアを制御するためのソフトウエアのうち、特に操作指示主体の動き及び連合学習の進捗推定に係る部分の構成を示す機能構成図である。すなわち、画像表示システムを構成するソフトウエアを構成する機能は同図に示したもの以外にあり得るが、これらは公知のもの或いはいわゆる当業者にとって通常の知識の範囲内のものを用いることができ、ここでは図示を省略されるものもあり得る。図2に示されるように、ハードウエア構成として(図1に示される)動作制御部21は、機能ブロック的には、空間検出部2122、操作範囲設定部2123、操作検出部2124、操作指示主体の動き検出部2125、仮想的操作対象表示制御部2126、表示制御部2127、コンテンツ表示制御部2128、及び、連合学習進捗検出部2129、を少なくとも備えて構成される。なお、連合学習進捗検出部2129はオプションとしてもよい。 FIG. 2 is a functional configuration diagram (functional block diagram), from the software side, of the image display system according to an embodiment of the present invention; in particular, among the software for controlling the hardware shown in FIG. 1, it shows the configuration of the parts concerned with the motion of the operation-instructing subject and with estimating the progress of associative learning. That is, the software of the image display system may include functions other than those shown in the figure, but known functions, or functions within the ordinary knowledge of a person skilled in the art, can be used for these, and some are omitted from the figure here. As shown in FIG. 2, in terms of functional blocks the operation control unit 21 (shown as a hardware configuration in FIG. 1) comprises at least a space detection unit 2122, an operation range setting unit 2123, an operation detection unit 2124, a motion detection unit 2125 for the operation-instructing subject, a virtual operation target display control unit 2126, a display control unit 2127, a content display control unit 2128, and an associative learning progress detection unit 2129. The associative learning progress detection unit 2129 may be optional.
 HMD5のTVカメラ7より送られてきた画像信号(上述した動画像情報及び/もしくは静止画像情報に係る信号)は後述するアルゴリズムによって前処理される結果、操作者の前方に位置する略平面が空間検出部2122によって検出される。代替的に、空間検出部2122が検出する対象は立体物等の空間(3次元体)であってもよい。いずれにしても、空間検出部2122により、TVカメラ7によって撮像された画像信号のうちのどの部分が略平面を形成するものであるかが特定されることができる。空間検出部2122は、TVカメラ7及び/もしくはセンサ8というハードウエアと、かかるハードウエアを動作させて、TVカメラ7及び/もしくはセンサ8から得られる情報をもとに公知の平面検出アルゴリズムによって平面を検出する機能もしくは公知の空間検出アルゴリズムによって空間を検出する機能をコンピュータ資源に果たさせるプログラム(ソフトウエア)とが協働することによって実現される。すなわち、空間検出部2122は、たとえば、センサ(たとえば距離画像センサ)から得られた距離画像に対して3次元ハフ変換を用いた平面検出法(たとえば、出岡大和,新田益大,加藤清敬:「3次元ハフ変換を用いた距離画像における面検出」 精密工学会秋季大会学術講演会講演論文集(2011))によるアルゴリズムによって実現されることができるが、これに限定されるものではない。 The image signal sent from the TV camera 7 of the HMD 5 (the signal relating to the moving-image and/or still-image information described above) is preprocessed by an algorithm described later, with the result that the substantially flat plane located in front of the operator is detected by the space detection unit 2122. Alternatively, what the space detection unit 2122 detects may be a space (three-dimensional body) such as a solid object. In either case, the space detection unit 2122 can specify which part of the image signal captured by the TV camera 7 forms the substantially flat plane. The space detection unit 2122 is realized by the cooperation of hardware, namely the TV camera 7 and/or the sensor 8, with a program (software) that operates this hardware and causes computer resources to perform the function of detecting a plane by a known plane-detection algorithm, or a space by a known space-detection algorithm, based on the information obtained from the TV camera 7 and/or the sensor 8. That is, the space detection unit 2122 can be realized, for example, by an algorithm based on a plane-detection method using a three-dimensional Hough transform on a range image obtained from a sensor (for example, a range-image sensor) (for example, 出岡大和, 新田益大, 加藤清敬: 「3次元ハフ変換を用いた距離画像における面検出」, 精密工学会秋季大会学術講演会講演論文集 (2011)), but it is not limited to this.
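As an illustrative sketch only: the paragraph above cites a 3-D Hough-transform plane-detection method, but any known plane-detection algorithm may serve. The following sketch substitutes a simpler RANSAC-style fit (a different, well-known technique, named here plainly) to illustrate the same idea of finding the plane supported by the most 3-D points; all names and tolerances are assumptions.

```python
# RANSAC-style plane detection from 3-D sensor points (illustrative sketch).
import random

def fit_plane(p1, p2, p3):
    # Plane through three points: unit normal n and offset d, with n . x = d.
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = tuple(c / norm for c in n)
    return n, sum(n[i] * p1[i] for i in range(3))

def detect_plane(points, iters=200, tol=0.01, seed=0):
    """Return (plane model, inlier points) for the best-supported plane."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        model = fit_plane(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers
```

On a point cloud dominated by a desk surface, the returned inlier set identifies exactly which sensed points form the substantially flat plane, which is the role the space detection unit 2122 plays above.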
 操作範囲は、操作範囲指定部2123によって決定され、これによって実空間内にある操作指示主体が仮想空間内にたとえば位置情報として対応付けられる。以上の一連のソフトウエアコンポーネントによって、APPを操作する準備が整えられる。操作範囲指定部2123が操作範囲を決定するにあたっては、操作者によって指定される態様でも、あるいは、現実空間において予め物理的に指定される(たとえば、現実空間に予めマーカーのようなもの(たとえばARマーカー)が置かれることで操作範囲が指定されている)態様であってもよい。たとえば操作者によって指定されることで操作範囲が決定される態様の場合には、操作範囲指定部2123は、TVカメラ7及び/もしくはセンサ8という(外部)ハードウエアを動作させて、操作者によって指定される操作範囲の物理的位置情報を獲得させて、当該獲得された位置情報を仮想空間内に対応付け(マッピング処理)するという機能を(内部)コンピュータ資源(通信I/F16、CPU17、GPU18、メモリ19、及び/もしくは、外部記憶装置20)に果たさせるアルゴリズム(ソフトウエア)と上記ハードウエアとが協働することによって実現される。 The operation range is determined by the operation range designating unit 2123, whereby the subject of the operation instruction in the real space is associated with the virtual space, for example, as position information. This set of software components prepares the APP for operation. When the operation range specifying unit 2123 determines the operation range, it may be specified by the operator, or may be physically specified in advance in the real space (for example, a marker in advance in the real space (for example, an AR The operation range may be specified by placing a marker)). For example, in the case where the operation range is determined by being specified by the operator, the operation range specifying unit 2123 operates the (external) hardware such as the TV camera 7 and/or the sensor 8 to (Internal) computer resources (communication I/F 16, CPU 17, GPU 18) have a function of acquiring physical position information of a designated operation range and correlating (mapping processing) the acquired position information in the virtual space. , memory 19, and/or external storage device 20) and the above-mentioned hardware cooperate with each other.
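As an illustrative sketch only: the mapping (マッピング処理) described above, in which the physical position information of the designated operation range is associated with the virtual space, can be reduced to a change of coordinates. The function names and the choice of a parallelogram model for the operation range are assumptions for illustration.

```python
# Map a physical point inside a designated operation range (given by an
# origin corner and two adjacent corners) to normalized virtual-plane
# coordinates (s, t) in [0, 1] x [0, 1]. Illustrative sketch only.

def make_mapper(origin, u_corner, v_corner):
    ux, uy = u_corner[0] - origin[0], u_corner[1] - origin[1]
    vx, vy = v_corner[0] - origin[0], v_corner[1] - origin[1]
    det = ux * vy - uy * vx  # nonzero when the two edges are independent
    def to_virtual(p):
        px, py = p[0] - origin[0], p[1] - origin[1]
        # Solve p = s*u + t*v for the normalized coordinates (s, t).
        s = (px * vy - py * vx) / det
        t = (ux * py - uy * px) / det
        return s, t
    return to_virtual

# A 2 m x 1 m desk region mapped to the unit square of the virtual plane.
m = make_mapper(origin=(0.0, 0.0), u_corner=(2.0, 0.0), v_corner=(0.0, 1.0))
print(m((1.0, 0.5)))  # -> (0.5, 0.5)
```

Once such a mapper is established, every subsequently detected fingertip position in the real operation range has a well-defined counterpart in the virtual space.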
 その後、起動したAPPに応じて操作指示主体によって操作が行われ、その操作が操作検出部2124によって検出され、APPに渡される。これによって実キーボードや実ボタンを操作したのと同等の効果がAPPに与えられる。操作検出部2124は、TVカメラ7及び/もしくはセンサ8というハードウエアを動作させて、たとえばTVカメラ7及び/もしくはセンサ8から得られる操作指示主体による操作というアクションを当該操作に係る位置情報等(位置情報を始め、操作の結果得られる各種情報を含んでもよい。以下同じ。)を通じて当該操作に係る動作を特定することで操作検出する機能をコンピュータ資源に果たさせるアルゴリズム(ソフトウエア)と上記ハードウエアとが協働することによって実現される。 After that, an operation is performed by the operation instruction subject according to the activated APP, and the operation is detected by the operation detection unit 2124 and passed to the APP. This gives the APP the same effect as operating a real keyboard or real button. The operation detection unit 2124 operates hardware such as the TV camera 7 and/or the sensor 8, and detects an action, that is, an operation by an operation instructing subject obtained from the TV camera 7 and/or the sensor 8, for example, based on position information ( The algorithm (software) that causes the computer resources to perform the function of detecting the operation by specifying the operation related to the operation through various information obtained as a result of the operation, including location information. It is realized by cooperating with hardware.
 操作指示主体の動きは操作主体の動き検出部2125によって検出される。ここでの検出対象には、反応時間、反応速度、キータッチの時間間隔、キー押し強度、右向きまたは左向き矢印押圧動作、指先回転動作、指を伸ばす動作、指を突き出す操作、2本以上の指の先を合わせてつまむ動作、指で円や多角形などの図形を作る動作、手をにぎる動作、手を開く動作、指を閉じる動作、指を開く動作、手のひらや手の甲や指の腹で撫でる動作、手を上下左右に動かす動作、手を回転させる動作などの特定動作のうち少なくとも一つを含むことができ、さらに、特定動作に関連して検出される反応時間、反応速度、キータッチの時間間隔、キー押し強度、学習曲線、タイピグミス率、ゲームの勝敗率、定められた一連の動作の進捗度、使用者の動作速度や反応速度、正答率、正確性、アルゴリズムによって算出される完成度あるいは完成度を求めるための変数(例えば、リバーシにおける使用者の打ち手の開放度など)、キー押し強度を求めるための指形の変化情報、などの特定動作情報のうちの少なくとも一つを含むことができ、また、検出には連合学習の進捗度を推定するための検出を含むことができるが、操作主体の動き検出部2125の検出機能はこれらに限定されるものではない。操作主体の動き検出部2125は、TVカメラ7及び/もしくはセンサ8というハードウエアを動作させて、たとえばTVカメラ7及び/もしくはセンサ8から得られる操作指示主体による操作というアクションを、当該操作に係る位置情報等として、或いはたとえばこれらにキー押し強度に対応する押圧力情報を加えた位置情報及び時間情報並びに押圧力情報として、検出し、こうして検出された検出情報を、各動作(たとえば、反応、キータッチ、キー押下、スライダーのスライド、タッチパネルのフリック、ホイールの回転、楽器の演奏等の動作、等)ごとの動作情報(たとえば、時間、動作方向、動作強度、動作距離、回転、動作速度、動作習熟度、動作の正確性、動作の完成度、理想とする動作との乖離度合い等の情報)として自動的に特定する機能をコンピュータ資源に果たさせるアルゴリズム(ソフトウエア)と上記ハードウエアとが協働することによって実現される。この場合の検出情報をもとに動作情報を自動的に特定するについては、たとえば、各動作情報に該当する動作ごとに検出情報の各種類ごと一定の閾値を定め、実際に検出された検出情報をもとに振り分ける公知のアルゴリズムによることができるが、これに限定されることなく「検出された情報をもとに、各検出値ごとに予め定義された動作情報に振り分けられることで自動的に対応する動作情報を特定する」機能を実現する公知のあらゆるアルゴリズムによってもよい。 The motion of the operation instruction subject is detected by the motion detection unit 2125 of the operation subject. The objects to be detected here include reaction time, reaction speed, time interval between key touches, key pressing strength, right or left arrow pressing, finger rotation, finger extension, finger thrust, and two or more fingers. Pinch the tips of the fingers together, create shapes such as circles and polygons with your fingers, hold your hands, open your hands, close your fingers, open your fingers, and rub your palms, backs of your hands, and pad of your fingers. At least one of a motion, a motion of moving a hand up, down, left and right, and a motion of rotating a hand can be included, and the reaction time, reaction speed, and key touch detected in relation to the specific motion can be included. 
time intervals, key-press intensity, learning curves, typing-error rate, game win/loss rate, the degree of progress through a prescribed series of actions, the user's movement speed and reaction speed, correct-answer rate, accuracy, a degree of completion calculated by an algorithm or variables for determining such a degree of completion (for example, the openness of the user's moves in Reversi), and finger-shape change information for determining key-press intensity. The detection may also include detection for estimating the progress of associative learning; however, the detection functions of the motion detection unit 2125 of the operating subject are not limited to these. The motion detection unit 2125 operates hardware such as the TV camera 7 and/or the sensor 8 and detects the action of an operation by the operation-instructing subject, obtained for example from the TV camera 7 and/or the sensor 8, as position information and the like, or as position information, time information, and pressing-force information, the last corresponding to the key-press intensity. The detection information thus obtained is automatically identified as motion information (for example, information on time, motion direction, motion intensity, motion distance, rotation, motion speed, motion proficiency, motion accuracy, motion completeness, degree of deviation from an ideal motion, and so on) for each motion (for example, a reaction, a key touch, a key press, a slide of a slider, a flick on a touch panel, a rotation of a wheel, the playing of a musical instrument, and so on); this is realized by the above hardware cooperating with an algorithm (software) that causes computer resources to perform this function. To automatically identify the motion information from the detection information in this case, for example, a fixed threshold may be set for each type of detection information for each motion corresponding to each item of motion information, and the actually detected information sorted accordingly by a known algorithm. Without being limited to this, however, any known algorithm realizing the function of "automatically identifying the corresponding motion information by sorting the detected information, for each detected value, into predefined motion information" may be used.
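As an illustrative sketch only: the threshold rule described above, in which each detected measurement is routed to a predefined action label by per-type thresholds, can be written as an ordered rule table. The labels, field names, and threshold values here are assumptions for illustration, not values from the embodiment.

```python
# Ordered rule table: the first predicate that matches a detection record
# determines its action label (illustrative thresholds).
THRESHOLDS = [
    ("key_press", lambda d: d["contact_ms"] >= 40 and d["depth_mm"] >= 3.0),
    ("key_touch", lambda d: d["contact_ms"] >= 40),
    ("reaction",  lambda d: True),  # fallback bucket for any brief contact
]

def classify(detection):
    """Sort one detection record into predefined motion information."""
    for label, predicate in THRESHOLDS:
        if predicate(detection):
            return label
    return "unknown"

print(classify({"contact_ms": 120, "depth_mm": 4.2}))  # -> key_press
```

Swapping the rule table for a learned classifier would be one realization of the "any known algorithm" alternative mentioned above.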
 次に、上記において、本発明の一実施形態に係る「特定動作」につき説明する。特定動作とは、システム上に予め定義される特定性を備えた動作を示す。たとえば、「対象物に接して撫でて〇(サークル形)を形成する」動作を特定動作と定義している場合を例にとる。図6は、本発明の一実施形態に係る特定動作を定義する動作について説明するためのタイミングフローチャートであり、図7~図10は、この動作における操作者の実際の動きを説明するための概念的斜視図である。図7に示すように、操作者が特定の対象物を対象として特定動作を行った(たとえば、「机の上面」という対象物に対して特定の第1位置P1を略中心とする略円周を描くようにして、「撫でて〇を形成する」という動作を行う)こと(ステップ300A)を操作主体の動き検出部2125が検出する(ステップ301)。すると、操作主体の動き検出部2125は、上述の検出された検出情報からたとえば下記のアルゴリズムを用いることでシステム的に「撫でて〇を形成する」動作、したがって特定動作、が行われたと判定し(ステップ302)、この特定動作に係る円の中心点を公知のアルゴリズムを用いて割出し、こうして割り出された円の中心点、即ち特定の第1位置P1、を仮想コントローラの(仮想空間における)コントローラ第1位置Z1と同定し(図8)これを表示する(ステップ303)。これを特定動作に係る位置数としてたとえば予め定義された数N(たとえばここでは「4」であるとして説明する)の分だけ繰り返す(ステップ300B、ステップ304)と、現実の特定の第1位置P1~特定の第N位置PN(Nはたとえば4であってもよい)に対応する仮想コントローラの(仮想空間における)コントローラ第1位置Z1~コントローラ第N位置ZN(Nはたとえば4であってもよい)が特定(同定)され、(現実空間の)特定の第1位置P1~特定の(たとえば)第4位置P4と重畳させるように、仮想コントローラの(仮想空間における)コントローラ第1位置Z1~(たとえば)コントローラ第4位置Z4及び、かかるZ1~Z4によって画される仮想的平面(図1の仮想的平面12)が仮想的操作対象表示制御部2126によって仮想空間上に表示される(図9)。 Next, the "specific action" according to one embodiment of the present invention will be described above. A specific action indicates a specific action predefined on the system. For example, let's take the case where the specific action is defined as the action of "touching and stroking an object to form a circle". FIG. 6 is a timing flowchart for explaining the operation that defines the specific operation according to one embodiment of the present invention, and FIGS. is a perspective view. As shown in FIG. 7, when an operator performs a specific action on a specific object (for example, the object "top surface of a desk" is substantially circled about a specific first position P1 as the center). (Step 301). Then, the motion detection unit 2125 of the operating subject, based on the detected detection information described above, uses, for example, the following algorithm to systematically determine that an action of "stroking to form a circle", that is, a specific action, has been performed. 
(step 302). The center point of the circle involved in this specific action is then determined using a known algorithm, and the determined center point, that is, the specific first position P1, is identified with the controller first position Z1 (in the virtual space) of the virtual controller (FIG. 8) and displayed (step 303). When this is repeated (steps 300B, 304) a predefined number of times N, as the number of positions involved in the specific action (here N is taken to be 4 by way of example), the controller first position Z1 through the controller N-th position ZN (in the virtual space; N may be, for example, 4) of the virtual controller, corresponding to the real specific first position P1 through the specific N-th position PN, are specified (identified), and the virtual operation target display control unit 2126 displays in the virtual space the controller first position Z1 through (for example) the controller fourth position Z4, superimposed on the specific first position P1 through the specific (for example) fourth position P4 (in the real space), together with the virtual plane defined by Z1 to Z4 (the virtual plane 12 of FIG. 1) (FIG. 9).
 上述の、検出情報から特定動作が行われたことの判定についてのアルゴリズムとしては、たとえば、次のようなものによって実現できるが、これに限定されるものではない。すなわち、仮想的操作対象表示制御部2126は、当該「撫でて〇(サークル)を形成する」動作に係る位置情報と現実世界の撮像画像とを照合することで、当該「撫でて〇を形成する」動作によって(現実世界の中でたとえば装着者によって物理的に)撫でて〇が形成された現実世界の対象物(現実対象物:たとえば装着者眼前の机上面)を特定する、すなわち、当該現実対象物の仮想世界における仮想的位置情報を特定する(当該現実対象物と仮想世界における位置情報とを紐づける)ことができる(ステップ303)。 The algorithm for determining whether a specific action has been performed from the detection information described above can be realized, for example, by the following, but is not limited to this. In other words, the virtual operation target display control unit 2126 compares the position information related to the action of “stroking to form a circle” with the captured image in the real world, so that the “stroking to form a circle” is performed. "Specify the real-world object (real-world object: for example, the desk surface in front of the wearer) that has been stroked (physically by the wearer in the real world, for example) to form a circle, that is, the real-world It is possible to identify the virtual position information of the object in the virtual world (associating the real object with the position information in the virtual world) (step 303).
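As an illustrative sketch only: one simple realization of judging that a "stroke to form a circle" has been performed, and of finding its center point (step 302/303), is to test the stroked path for roundness and closure. The function name and both tolerance values are assumptions for illustration, not part of the claimed algorithm.

```python
# Decide whether a stroked 2-D fingertip path forms a circle; if so, return
# its estimated center (the specific first position P1). Illustrative sketch.

def detect_circle(path, roundness_tol=0.2, closure_tol=0.25):
    cx = sum(p[0] for p in path) / len(path)
    cy = sum(p[1] for p in path) / len(path)
    radii = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in path]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return None
    # Roundness: every sample must lie near the mean radius.
    if max(abs(r - mean_r) for r in radii) > roundness_tol * mean_r:
        return None
    # Closure: the stroke must end near where it began.
    sx, sy = path[0]
    ex, ey = path[-1]
    if ((ex - sx) ** 2 + (ey - sy) ** 2) ** 0.5 > closure_tol * mean_r:
        return None
    return cx, cy  # circle detected; centroid serves as its center
```

A straight stroke fails the roundness test and returns `None`, while a roughly circular stroke yields the center point that is then identified with the controller first position Z1.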
 次に、ステップ304以降の動作の説明を続行する。図9に示されるように、仮想コントローラの(仮想空間における)コントローラ第1位置Z1~コントローラ第N位置ZN(Nはたとえば4であってもよい)が特定(同定)されたものが仮想空間上に表示されたのと略同時、もしくはかかる表示がされた後に、仮想的操作対象表示制御部2126は、特定の機能を有する特定機能オブジェクト(たとえばゲームコントローラやキーボード)を現実対象物と重畳する位置及び大きさに仮想世界として表示させる(ステップ305)。具体的には、たとえば、特定機能オブジェクトがゲームコントローラであった場合には、(現実空間の)特定の第1位置P1~特定の第4位置P4と重畳させる位置にコントローラ第1位置Z1~(たとえば)コントローラ第4位置Z4を同定し、こうして画されるコントローラ第1位置Z1~コントローラ第4位置Z4をたとえば四隅とする平面コントローラを仮想空間上に同定・配置するとともに、ディスプレイ6上に表示させる(図10)。このとき、特定機能オブジェクトがゲームコントローラの場合には、たとえば操作ボタンB1~B4(図1では、仮想的キーボード13及び仮想的ボタン14に該当)を定義し、それぞれの操作ボタンの仮想空間上の位置と現実空間上の位置とを対応付けて同定するようにしてもよい。また、いうまでもないことであるが、上述した特定機能オブジェクトはゲームコントローラに限定されることはなく、たとえば、タッチパネル、トラックパッド、タッチディスプレイ、トグル型スイッチ、ボタンスイッチ、ホイール、ハンドル、レバー、ジョイスティック、ディスク、スライダー、回転体、操作盤、作業指示装置、マウス、キーボード、エアホッケー台、もぐらたたき台、ピアノ等の鍵盤付き楽器、ギター等の弦楽器、ドラム等の打面のある楽器、カードゲームのカードとプレイエリア、麻雀ゲームの牌、ボードゲームの駒など、操作者の意思もしくは動作を本システムに伝達する(入力する)機能を実現するものである限りいかなるものであってもよい。 Next, the explanation of the operation after step 304 will be continued. As shown in FIG. 9, the controller first position Z1 to controller Nth position ZN (N may be 4, for example) of the virtual controller (in the virtual space) are specified (identified) in the virtual space. , or after such display, the virtual operation target display control unit 2126 superimposes a specific function object having a specific function (for example, a game controller or a keyboard) on the real object. And the size is displayed as a virtual world (step 305). Specifically, for example, if the specific function object is a game controller, the controller first position Z1 to ( For example, the controller 4th position Z4 is identified, and a plane controller having the controller 1st position Z1 to the controller 4th position Z4 defined in this way, for example, as the four corners is identified and arranged in the virtual space, and displayed on the display 6. (Fig. 10). 
At this time, when the specific function object is a game controller, operation buttons B1 to B4 (corresponding to the virtual keyboard 13 and the virtual buttons 14 in FIG. 1), for example, may be defined, and the position of each operation button in the virtual space may be identified in association with its position in the real space. Needless to say, the specific function object described above is not limited to a game controller; it may be anything that realizes the function of transmitting (inputting) the operator's intention or actions to the present system, for example a touch panel, track pad, touch display, toggle switch, button switch, wheel, steering wheel, lever, joystick, disc, slider, rotating body, control panel, work instruction device, mouse, keyboard, air-hockey table, whack-a-mole table, keyboard instrument such as a piano, stringed instrument such as a guitar, instrument with a striking surface such as a drum, the cards and play area of a card game, the tiles of a mahjong game, the pieces of a board game, and so on.
 以降は、この現実対象物の位置において装着者によって動作が与えられる(たとえば、装着者がボタンB1~B4のうちの特定のボタンを押下する)たびに、当該現実対象物の位置情報をベースにして各動作(たとえば、「反応」動作、「キータッチ」動作、「キー押下」動作、等)ごとの動作情報として検出することができる。なお、「特定動作」は上記「撫でて〇を形成する」動作に限定されることなく、たとえば、指を閉じる、指先で長押しする、指先で2回叩く、指を特定の方向に向ける、指先で線を描く、コントローラで指し示すなどの、公知のアルゴリズムによって1点以上を特定できる動作であれば何でもよい。さらに、上述のような1点以上を特定させる動作は、連続させるもしくは組み合わせることで略平面とその中にある操作対象範囲あるいは安全確認範囲を指定すること(いわゆる「手動キャリブレーション」「クリアランス確認」を兼ねること)ができるが、その連続動作の代わりに、例えば指で操作対象範囲の周囲をなぞる、または手のひらで撫でることで、1動作で略平面と操作対象範囲を同時に指定できるようにしても良い。これには、撫でたりなぞったりした結果検出された範囲を、既知のアルゴリズムを通して円形や多角形の範囲に加工するステップや、検出精度(検出のズレやブレ)に対応するために操作部の範囲、大きさ、あるいは当たり判定開始位置を調整するステップが含まれていても良い。 After that, each time the wearer gives an action at the position of this real object (for example, the wearer presses a specific button among the buttons B1 to B4), the position information of the real object is used as a base. can be detected as motion information for each motion (for example, a “reaction” motion, a “key touch” motion, a “key press” motion, etc.). It should be noted that the "specific action" is not limited to the action of "stroking to form a circle", and includes, for example, closing the finger, pressing and holding with the fingertip, tapping twice with the fingertip, pointing the finger in a specific direction, Any motion that can identify one or more points by a known algorithm, such as drawing a line with a fingertip or pointing with a controller, can be used. Furthermore, the operation to specify one or more points as described above is to specify a substantially plane and an operation target range or a safety confirmation range in it by continuing or combining (so-called "manual calibration", "clearance confirmation" ), but instead of that continuous action, for example, by tracing the periphery of the operation target range with a finger or stroking it with the palm, it is possible to specify an approximately plane and the operation target range at the same time with one action. good. 
This may include a step of processing the range detected as a result of the stroking or tracing into a circular or polygonal range through a known algorithm, and a step of adjusting the range, the size, or the hit-determination start position of the operation part to cope with detection accuracy (detection deviation or jitter).
Furthermore, a specific preliminary action (the user clapping the hands an arbitrary number of times, drawing an arbitrary figure such as a circle in the air, uttering a specific sound, forming the hand into a specific shape, and so on) may serve as the trigger, after which the object subsequently indicated by the user is treated as the specific object, and detection of a specific concluding action (which may be the same as the preliminary action, or may differ from it, e.g., clapping a different number of times, drawing a different figure such as a triangle in the air, or uttering a different sound) then completes the act of designating the specific object. Alternatively, the object may be designated only while the preliminary action continues (for example, while a controller button is held down, the hand keeps waving, or the index finger and thumb remain pinched), and the designating action is completed upon detection of the end of the preliminary action (for example, releasing the button, bringing the hand to rest, or opening the hand) or of an action that triggers its completion (for example, pressing another button, rotating the hand, or pinching the middle finger and thumb). Furthermore, the results of the space detection and (virtual) position information given in advance by hardware or software may be combined or processed so that the target object is first displayed in the virtual space and then adjusted after the fact, for example by selecting and moving the object's vertices, by pushing the top, bottom, left, or right of the object with one or both hands to move it, or by operating buttons corresponding to scaling or moving the object. This adjusting action may itself include the above-described action of designating one or more points, the action of designating an object by a preliminary action, a concluding action, and the like. In addition, the above-described actions performed with the fingers or hands can be replaced by motion detection of an external device (a controller or equivalent input device, or a physical object whose motion can be detected, such as an AR marker) or by button input operations.
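The preliminary-action / concluding-action sequence described above can be sketched as a small state machine. This is an illustrative sketch under assumed event names ("draw_circle", "draw_triangle", etc.), not the patent's actual interface:

```python
class ObjectDesignator:
    """Minimal state machine for the pre-action -> designation ->
    post-action sequence: IDLE -> ARMED -> DONE."""

    def __init__(self, pre_action="draw_circle", post_action="draw_triangle"):
        # The event names are placeholders for whatever the motion
        # detection unit actually reports.
        self.pre_action = pre_action
        self.post_action = post_action
        self.state = "IDLE"
        self.target = None

    def on_event(self, event, indicated_object=None):
        if self.state == "IDLE" and event == self.pre_action:
            self.state = "ARMED"  # preliminary action acts as the trigger
        elif self.state == "ARMED":
            if indicated_object is not None:
                self.target = indicated_object  # object indicated by the user
            if event == self.post_action and self.target is not None:
                self.state = "DONE"  # concluding action completes designation
        return self.state
```

The hold-to-designate variant (designation only while a pinch or button press continues) would replace the `DONE` transition with one fired by the end-of-pre-action event.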
Further, when the virtual operation target display control unit 2126 detects such a series of actions, the detection is not limited to detection via the above-described still images; in addition to, or instead of, such detection, detection may be performed by measuring physical quantities other than still images, such as moving images, sound, or vibration.
Defining this from the wearer's point of view, a real-world object (for example, a desk surface or a cup or doll on a table) is regarded as a game controller, and a specific action (for example, a "stroke to form a circle" action) is performed on it. When the system detects that this specific action has been performed, the real-world object is treated as positionally identical with a specific functional object (for example, a game controller), or the real-world object and the corresponding specific functional object in the virtual world are linked; that is, a base (or platform) is generated on which the associative memory described later can be formed. After this specific action (for example, the "stroke to form a circle" action) is performed, with the specific action as the trigger, and combined with the effect of associative memory formation described later, the wearer can thereafter experience the sensation that his or her operating intention is conveyed to the system by actually applying various operations to the above real-world object, with the feel of operating a game controller, touch panel, work instruction device, musical instrument, or the cards and pieces of a board game. In terms of the technical concept of the present invention, the associative memory formation promoting unit (described later) can also exhibit the effect of improving the precision and dexterity of these operations. Psychologically, it is as if a real-world object can be used as, for example, a game controller; that is, a specific object with physical substance (which the operator can designate arbitrarily each time) can be regarded as an interface between the virtual world and the real world, such as a controller. Instead of the conventional experience of operating in empty space, the operator obtains a real experience accompanied by the reaction force of a physical entity. In other words, it becomes possible to have an experience that brings a sense of touch into the virtual reality of a game. The associative memory formation promoting unit will be described in detail later; for example, since the game operator can obtain physical feedback in the form of a reaction force from the desk by actually "pressing" (stroking, flicking, sliding, etc.) the thing identified with the specific object (for example, the desk surface), the stimulus from this feedback is transmitted directly to the human nerves, heightening the sense of immersion, and also stimulating the associative memory described later (yielding, for example, dexterity, speed, and precision in performing certain operations). Furthermore, in addition to the reaction force, feedback such as elasticity, friction, temperature differences, temperature changes, tapping sounds from the object, and odors released by the action (for example, a fragrance applied to the desk), depending on the material of the identified real object and the user's action, can likewise be incorporated to enhance immersion and associative memory.
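As a minimal sketch of the linkage described above — a detected real-world object being bound to a virtual functional object so that touches on the real object become controller input — the following registry is illustrative only; all class, method, and argument names are assumptions, not the patent's API:

```python
class ObjectBinder:
    """Links a real-world object (e.g. a desk surface) to a virtual
    functional object (e.g. a game controller) once the specific
    triggering action has been detected."""

    def __init__(self):
        self.bindings = {}  # real-object id -> {"virtual": ..., "pose": ...}

    def on_specific_action(self, real_object_id, pose, virtual_object):
        # Positionally identify the virtual object with the real one: the
        # virtual controller inherits the real object's pose, so later
        # touches on the real object can be interpreted as controller input.
        self.bindings[real_object_id] = {"virtual": virtual_object,
                                         "pose": pose}

    def translate_touch(self, real_object_id, contact_point):
        binding = self.bindings.get(real_object_id)
        if binding is None:
            return None  # object has not yet been "possessed"
        return (binding["virtual"], contact_point)  # forwarded as input
```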
Therefore, the wearer can, of his or her own volition, cause a specific object in the virtual world, such as a controller, to "possess" a physical object, first creating a situation as if the controller had taken up residence in that object, and can then continue a game experience, performance experience, work experience, learning experience, or training experience accompanied by physical feedback (that is, feeling the reaction force from the physical object; in other words, the formation of the associative memory described later). Defined from the viewpoint of, for example, a wearer in the middle of a game, this yields a more vivid sense of operation, particularly when operating with momentary, intense, sport-like (so-called e-sports) movements. This embodiment therefore gives the operator/user an unprecedented sense of realism and immersion, and greater satisfaction. In other words, the user can use, for example, a desk or a wall (or a cardboard box, wooden box, pillar, musical instrument, floor, water surface, part of the body such as one's own or another person's belly, a doll, fruit, etc.) as a "controller," obtaining a gaming experience (for example) unlike anything before. Because anything found anywhere in the real world (a desk, table, etc.) can thus serve as a game "controller," in the game industry this can be applied to experiential games, for example arcade games, to give a more powerful game experience or one accompanied by a mysterious sensation. In the case of home games, for example, the optimal controller (for example, a collar, farm tools, a facility model, a sword and shield, a trigger or firing button, a musical instrument, cards and a play mat, puzzle pieces, mahjong tiles, small items such as game pieces, etc.) can be selected to match the game content (for example, an animal-raising game, farm-management game, medieval immersive role-playing game, defense game, shooting game, music game, card game, puzzle game, mahjong game, board game, etc.).
In the above game, when, for example, a desk serves as the specific object, a process may be interposed in which the wearer performs an action confirming that there are no obstacles on the desk (also called "clearance confirmation" or "manual calibration"). In that case, subsequent operation can proceed on the premise that, for example, the desk surface is relatively safe. Applied to a game, the center of the game's actions and operations is concentrated on the desk surface, so the highly tactile game experience described above can be realized in an environment whose safety is ensured even within a relatively narrow space. Further, to reinforce the safety ensured by this clearance confirmation, an object indicating the clearance-confirmed range, or a range derived from it (for example, an obstacle object such as a fence or signboard, or a warning object), may be displayed. The object indicating the safe range may also be processed and displayed according to the detection result when the user is about to stray outside the safety-confirmed range by mistake or when an out-of-range operation is detected.
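The out-of-range detection described above can be sketched as a simple classification of the hand position against the clearance-confirmed range. This is an assumed, illustrative implementation; the rectangle representation and the warning-band `margin` are not from the patent:

```python
def check_safe_range(point, safe_rect, margin=0.05):
    """Classify a hand position against the clearance-confirmed range.

    safe_rect: (xmin, ymin, xmax, ymax) obtained by manual calibration.
    margin: assumed width of a warning band just inside the boundary.
    Returns "ok", "warn" (near the edge -> display a warning object),
    or "out" (out-of-range operation detected).
    """
    x, y = point
    xmin, ymin, xmax, ymax = safe_rect
    if not (xmin <= x <= xmax and ymin <= y <= ymax):
        return "out"
    if (x - xmin < margin or xmax - x < margin or
            y - ymin < margin or ymax - y < margin):
        return "warn"
    return "ok"
```

The "warn" result is where a fence or signboard object would be drawn or re-styled according to the detection result.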
Meanwhile, the APP display screen, changed as a result of what the operation detection unit 2124 detected and sent to the APP, is sent to the content display control unit 2128. At this time, when the display of the APP is to be changed to reflect the degree of progress of associative learning (according to another embodiment of the present invention, described later), information from the associative learning progress detection unit 2129 may be used. Here, the content display control unit 2128 is realized by the cooperation of hardware (the display 6 of the HMD 5) with an algorithm (software) that causes computer resources to perform the function of controlling that hardware so as to display the changed APP display screen. The function of the associative learning progress detection unit 2129 according to another embodiment of the invention will be described later.
Information including images of the real space captured by the TV camera 7 of the HMD 5 is sent to the virtual operation target display control unit 2126, processed there, then merged in the display control unit 2127 with the image from the content display control unit 2128, and sent to the HMD. The necessary information is thereby displayed on the HMD screen 10 in an integrated manner. Defined from the viewpoint of the operator (wearer), a half-real/half-virtual space in which virtual objects are mixed into the real space — or a merged reality space in which the real space and the virtual space are aligned and then superimposed — appears before the eyes. The virtual operation target display control unit 2126 is realized by the cooperation of the above hardware and computer resources with an algorithm (software) that causes the computer resources to perform the function of pre-editing the information, including the real-space images captured by the TV camera 7, so as to suit the image-merging operation described below (for example, matching the scale). The display control unit 2127 is realized by the cooperation of hardware with an algorithm (software) that causes computer resources to perform the function of merging (combining) the image information edited by the virtual operation target display control unit 2126 with the image information from the content display control unit 2128 into combined image information, and of controlling the hardware (the display 6 of the HMD 5) to display the combined image information thus obtained.
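The merging performed by the display control unit can be illustrated with a simple per-pixel blend. This is a generic sketch of image compositing, not the patent's actual merging algorithm; the flat pixel-list representation is an assumption for brevity:

```python
def merge_frames(camera_px, content_px, content_alpha):
    """Per-pixel merge of the (pre-edited) camera image with the APP
    content image.

    camera_px, content_px: equal-length flat lists of (r, g, b) tuples,
    i.e. already scale-matched by the pre-editing step.
    content_alpha: per-pixel opacity of the virtual content
    (0.0 = show reality, 1.0 = show virtual content).
    """
    merged = []
    for cam, app, a in zip(camera_px, content_px, content_alpha):
        merged.append(tuple(round(a * c_app + (1 - a) * c_cam)
                            for c_cam, c_app in zip(cam, app)))
    return merged
```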
Here, returning to FIG. 2, the details of the formation of associative memory (also called "associative learning") according to one embodiment of the present invention will be described. FIG. 3 is a functional block diagram obtained by adding the configuration of the associative memory formation promoting unit 30 to the functional configuration diagram (functional block diagram) of FIG. 2, and FIGS. 4 and 5 are conceptual block diagrams that enlarge the associative memory formation promoting unit 30 of FIG. 3 and conceptually depict how associative memory is formed. As shown in FIGS. 3 and 4, the associative memory formation promoting unit 30 is structurally formed of the operating subject's motion detection unit 2125, the virtual operation target display control unit 2126, and the desk 1 as a physical entity (strictly speaking, the reaction force the desk 1 gives against an action such as pressing on it). Here, the desk 1 is given as one example of a real object (that is, one having physical substance); such an object is not limited to a desk and may be any other movable object (a desk light, document tray, chair, low table, cardboard box, mahjong table, bed, sofa, flat board, musical instrument, doll, vegetable, fruit, etc.), a fixed object (a wall, handle, pillar, floor, etc.), or anything else (the operator's hand, finger, belly, or chest, another person's back, soil, water, etc.), so long as it has physical substance, that is, can give a reaction force.
As shown in FIG. 4, when the operator 4 applies an action (stimulus action) S1 of pressing on the desk 1 (strictly, the action S1 is applied to the desk 1 with, for example, the index finger 4a of the operator 4 in direct contact with it), the desk 1 instantaneously applies a reaction force S2 against S1 to the index finger 4a. The stimulus from this reaction force S2 is then transmitted via the peripheral nerve network of the operator 4 to the operator 4's central nervous system (event S3), with the result that a certain experience (for example, the experience of applying the action S1 to a physical object) S4 is formed as a memory in the brain. At approximately the same time, as shown in FIG. 5, outside the operator 4, through the above-described operation of the operating subject's motion detection unit 2125 and the virtual operation target display control unit 2126 as the information processing of this system, the action S1 is translated/converted on this system into a specific input action (for example, an action instructing the "controller" to advance rightward); when the result reflecting this specific input action (that is, for example, the object 1601 in the virtual space being moved to the position of the object 1603 by the rightward advance effected via the "controller") is displayed on the display unit (for example, the display 6 of FIG. 1) and the operator 4 visually recognizes it, a specific success experience (for example, the experience that the object 1601 could be moved to the position of the object 1603) S5 is formed as a memory in the brain. Since the experience S4 and the success experience S5 arise approximately simultaneously as described above, an association S6 is formed in the brain between the memories of S4 and S5. As a result, an associative memory S6 of S4 and S5 is formed. In the above, the example was described in which the specific success experience S5 is formed as a memory on the occasion of the operator 4 visually recognizing that the result of translating/converting the action S1 as a specific input action is displayed on the display unit; however, the formation of S5 as a memory is not limited to visual recognition. For example, the specific success experience S5 may be formed as a memory on the occasion of any of the human senses other than vision — a result confirmed by hearing, a result confirmed by smell, a result confirmed by touch (the application of a specific stimulus), a result confirmed by taste — or any combination of them.
The above aspect will be explained from the operator's point of view. With the recognition that the input action S1 is being performed on an object in the virtual space, the input action S1 is performed on the desk 1, the real object in which that virtual object is, so to speak, embodied. At this instant, a reaction force S2 from an object having physical substance (for example, a tabletop, a wall, one's other hand, the desk 1, etc.) is applied to the (hand or) finger 4a, so the experience of sensing that reaction force S2 is immediately transmitted from the peripheral nerves of the finger 4a into the operator's brain (S3), forming a certain memory S4. Meanwhile, a memory S5 is formed from that reaction force S2 (strictly, the feeling of that reaction force) together with the visual recognition of the image 10 displayed on the display unit (for example, the display 6) as the output of the information processing that took the preceding input action S1 as input (for example, a certain character in the virtual space moving sideways). Since the memory S4 and the memory S5 arise at approximately the same time, a causal relationship, that is, an association 301, between them is recognized in the brain, with the result that an associative memory is formed between the memory S4 and the memory S5. In other words, by superimposing the virtual space and the real space through a physical object, the associative memory unique to the present application is formed. When the input action S1 is performed on an object in the virtual space in empty space, as in the prior art, the above-described reaction force is absent, so one of the two elements necessary for forming the associative memory does not exist and no associative memory is formed. Therefore, the above aspect of the present application contributes to steadier memory formation than the empty-space case. Applied to real life, this leads to effects such as mastery of game control techniques, improvement in keyboard input speed and accuracy, improvement in driving skills, acquisition and improvement of machine operation skills, and acquisition and improvement of musical instrument playing skills. In other words, when one performs a certain action, there exists an entity that produces a response to that action — physical pressure — and an associative memory is formed linking the memory of that physical pressure with the memory of perceiving (by sight, hearing, smell, touch, taste, etc.) the information processing result, yielding a structure in which one can confirm for oneself that one performed the action; by actually using this technical idea, it contributes to the improvement of certain human actions (skills and techniques). From a different viewpoint, this makes associative learning in virtual reality possible.
In the above, an object having elasticity, texture, temperature, or smell may be adopted as the target object. By using an object with suitable elasticity, texture, temperature, or smell, memories of elasticity, texture, temperature, or smell are added to the formation of the associative memory, so that a deeper and more diverse associative memory — that is, an associative memory in which memories of elasticity, texture, temperature, and smell are further weighted onto the associative memory described above — may be formed.
Next, the function of the associative learning progress detection unit 2129 according to another embodiment of the invention will be described in detail. In general, the degree of progress can be estimated using a learning curve (though progress estimation is not limited to this). FIG. 11 is a conceptual diagram for explaining a learning progress estimation method according to one embodiment of the present invention; more specifically, it shows the correspondence between learning progress and reaction time (the reciprocal of proficiency). Using this figure, an example of a method for estimating the progress of associative learning will be explained with a learning curve. An action such as a button press is measured as, for example, the reaction time from the presentation of a stimulus until the button is pressed; in general, as shown in FIG. 11, the reaction time of a button press becomes shorter as learning progresses (see, for example, Non-Patent Document 2), so by measuring the reaction time from the presentation of a certain stimulus until the button is pressed, the operator's proficiency in the action related to that stimulus can be estimated. For example, by measuring the reaction time of button presses in a game, it is possible to estimate whether the operator is at stage A of learning progress (an elementary stage), stage B (an intermediate stage), or stage C (an advanced stage). That is, by noting that proficiency can be estimated from the reaction time between stimulus presentation and button press, the progress of learning can be estimated, and a corresponding method of displaying content can be selected. Besides this, other methods can also be used to estimate progress and proficiency, such as the typing-error rate, game win/loss rate, degree of progress through a pre-instructed action (or series of actions), the user's movement speed or reaction speed, the rate of correct quiz answers, the accuracy of actions, the degree of completion of an action calculated by an algorithm or the variables used to determine it (for example, the number of chains in a puzzle game, the degree of openness in Reversi, the shanten number in a mahjong game, etc.), a score derived from such variables, key-press strength, or finger-shape change information for determining it.
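The stage classification above can be sketched as a threshold rule on recent reaction times, following the learning-curve idea of FIG. 11. The threshold values here are illustrative assumptions only; the patent does not specify them:

```python
def estimate_stage(reaction_times_ms, a_threshold=600, c_threshold=350):
    """Estimate the learning-progress stage (A/B/C) from recent
    button-press reaction times in milliseconds.

    Longer reaction times indicate an earlier stage of learning;
    a_threshold and c_threshold are assumed tuning constants.
    """
    mean_rt = sum(reaction_times_ms) / len(reaction_times_ms)
    if mean_rt >= a_threshold:
        return "A"  # elementary stage: long reaction times
    if mean_rt <= c_threshold:
        return "C"  # advanced stage: short reaction times
    return "B"      # intermediate stage
```

The returned stage could then drive the load design described below, e.g. selecting a lighter task for stage A and a heavier one for stage C.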
When a learning curve is used — for example, in a game — measuring button-press times makes it possible to estimate whether the operator is at stage A, stage B, or stage C of learning progress, and the progress speed of the game can then be controlled according to the proficiency level.
Based on the progress estimated in this way, an appropriate load can be imposed on the operator. That is, for example, an operator estimated to be at stage A of learning progress is presented with a task imposing a lighter load, while an operator estimated to be at stage C is presented with a task imposing a considerably heavier load; by setting (designing) the learning memory for stimulus-related actions according to the progress of learning and incorporating this into the system, a tactile-simulation-type memory consolidation promotion system that exploits the strength of associative memory can be realized. Load design based on the estimated progress thus makes it possible to promote the consolidation of associative memory (dotted line in FIG. 11) and to form robust associative memory — in this case, memory with shorter reaction times, that is, memory that is hard to forget and fast to respond (broken line in FIG. 11). If this is used, for example, in the game industry, proficiency in game operation can be measured (based on learning progress estimation) and put to use in setting game content, difficulty, environment, and the like at a level matching that proficiency.
Although FIG. 11 was explained using a learning curve, any index considered appropriate for the APP may be used instead, including the typing-error rate, game win/loss rate, degree of progress through a pre-instructed action (or series of actions), the user's movement speed or reaction speed, the rate of correct quiz answers, the accuracy of actions, the degree of completion of an action calculated by an algorithm or the variables used to determine it (for example, the number of chains in a puzzle game, the degree of openness in Reversi, the shanten number in a mahjong game, etc.), a score derived from such variables, key-press strength or finger-shape change information for determining it, or any combination thereof.
Also, in the above description, a display of the kind currently in wide use may be used instead of the HMD 5. In this case, a physical keyboard can be made unnecessary by arranging a TV camera and sensors externally; just as in the HMD case, the externally arranged TV camera and sensors allow a virtual keyboard to be set on a desk or the like, against which the operator's operation-instructing subject can perform key operations and button-press operations.
Next, the operation and action of the image display system according to another embodiment of the invention, configured as described above, will be explained. FIG. 12 is an example of a flowchart for explaining the operation of the image display system according to one embodiment of the present invention.
First, a space including a substantially flat surface is displayed on the display screen, including the HMD 5 screen (step 601). Next, the space detection unit 2122 detects an object having a substantially flat portion from the image captured by the TV camera 7 of the HMD 5, and a virtual plane is set and displayed at a predetermined position on the display screen 10 (step 603).
Next, the motion detection unit 2125 of the operation-instructing subject detects an action by which the operation-instructing subject designates an operation range, and the operation range is determined on the substantially flat surface (step 605). Further, the virtual operation target display control unit 2126 displays a virtual operation target, including a virtual keyboard and virtual buttons, within the operation range (step 607).
Here, the content display control unit 2128 displays a screen (content) generated by application software, including games and office software, in the background of the virtual operation target (step 609). Through this process, when the operation-instructing subject applies an operation to the content, the operation detection unit 2124 detects the operation (step 611), the operation of the operation-instructing subject is transferred to the application software, and as a result the content display control unit 2128 changes the displayed content (step 613).
 次に、操作指示主体の動き検出部2125によって、コンテンツに対する操作指示主体の動きが検出される(ステップ615)。この動きの検出は、連合学習の進捗を推定するために必要な情報を得るためのものであり、この情報には、反応時間、反応速度、キータッチの時間間隔、キー押し強度に係る各情報が含まれてよいが、これらに限定されるものではない。仮想空間のオブジェクトに対するキー押し強度の測定方法の例として、キーを軽く押したときと強く押したときの指先の形状(例えば反り具合、指先-第1関節・第2関節の角度)の違いや変化を使用できる。次に、操作指示主体の動き検出部2125によって操作指示主体の動きが検出され、連合学習進捗検出部2129によって、連合学習の進捗が検出され(ステップ617)、検出した仮想的操作対象に対する操作および/または連合学習の進捗に応じて、操作対象であるコンテンツに変化が生じ(ステップ619)、次の操作が操作者に促される。 Next, the motion detection unit 2125 of the operation-instructing subject detects the movement of the operation-instructing subject with respect to the content (step 615). This motion detection obtains the information necessary for estimating the progress of associative learning; this information may include, but is not limited to, reaction time, reaction speed, time intervals between key touches, and key-press strength. As an example of how to measure key-press strength for an object in the virtual space, differences or changes in fingertip shape between a light press and a strong press (for example, the degree of bending, or the angles between the fingertip and the first and second joints) can be used. Next, the motion detection unit 2125 detects the movement of the operation-instructing subject, the associative learning progress detection unit 2129 detects the progress of associative learning (step 617), and, according to the detected operation on the virtual operation target and/or the progress of associative learning, the content being operated changes (step 619), prompting the operator to perform the next operation.
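The fingertip-shape cue described above could be turned into a simple classifier along the following lines; the joint-angle representation and the threshold values are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical sketch of estimating key-press strength from the change in
# fingertip bend angle between contact onset and the peak of the press
# (the "degree of bending" cue mentioned above). A fingertip pressed hard
# against a rigid surface flattens or bends back more than a light tap.
# Threshold values (degrees) are illustrative assumptions.

def press_strength(angle_at_contact_deg, angle_at_peak_deg):
    """Classify a virtual key press as 'light', 'medium', or 'strong'
    from how much the fingertip joint angle changes while pressing."""
    bend_change = abs(angle_at_peak_deg - angle_at_contact_deg)
    if bend_change < 5.0:        # fingertip barely deforms: light touch
        return "light"
    elif bend_change < 15.0:     # moderate deformation
        return "medium"
    return "strong"              # pronounced flattening / hyperextension
```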
 以上述べたように、本発明の一実施形態及び/もしくは別の実施形態に係る画像表示システムによれば、たとえばHMDなどによる仮想空間内でのコンピュータやゲームに対する操作において、コンピュータに接続された特別な付加的装置を用いずに、現実空間内の触覚を実質的に提供し、それによって視覚および/または聴覚と触覚の連合学習の促進が図られ、強固な連想記憶を形成することが可能となる。これらは、操作者・利用者の観点からすれば、仮想空間内のオブジェクトを操作する場合でも、実体的感覚を伴った操作触覚を得ることができる、という独自の効果を奏するものということができる。このため、より臨場感、インパクトをもった(たとえばゲーム世界、タイピング課題、ライブハウス、観光施設、工場研修施設等における)仮想もしくは拡張世界での操作体験をもたらすことができる。 As described above, according to the image display system of one embodiment and/or another embodiment of the present invention, when operating a computer or a game in a virtual space through, for example, an HMD, tactile sensations in real space can be provided substantially without any special additional device connected to the computer, thereby promoting associative learning between visual and/or auditory sensations and tactile sensations and making it possible to form a strong associative memory. From the operator's or user's point of view, this yields the unique effect that an operating tactile sensation accompanied by a sense of physical substance can be obtained even when manipulating an object in the virtual space. As a result, a more realistic and impactful operating experience can be provided in virtual or augmented worlds (for example, in game worlds, typing tasks, live music venues, tourist facilities, and factory training facilities).
 また、上記態様によれば、指や手などの(操作者の)動きを検出することによって連合学習の進捗を監視することも可能となる。すなわち、HMDを含む仮想空間においてキーボードやたとえばボタン押下等の一定動作・所作・操作で生じる触覚を与えることができ、これによって視覚情報(これと代替的に、或いはこれに結合させて、聴覚情報、嗅覚情報、味覚情報、触覚情報等)に係る記憶と上記一定動作・所作・操作で生じる触覚に係る記憶との間の連合学習を促進し、一層強固な連想記憶を形成することが可能となる。さらにこの場合、学習の進捗を監視することによっていっそう連合学習の促進と強固な連想記憶の形成を行うことが可能となる。 According to the above aspect, it is also possible to monitor the progress of associative learning by detecting the (operator's) movements of fingers, hands, and the like. That is, in a virtual space including an HMD, the tactile sensation generated by a certain action, gesture, or operation, such as pressing a keyboard key or a button, can be given; this promotes associative learning between memories of visual information (or, alternatively or in combination, auditory, olfactory, gustatory, tactile, or other information) and memories of the tactile sensation generated by that action, gesture, or operation, making it possible to form an even stronger associative memory. Furthermore, in this case, monitoring the progress of learning makes it possible to further promote associative learning and form a still more robust associative memory.
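The learning-progress monitoring described above could, for example, be estimated from metrics such as error rate and reaction time (both listed among usable indicators in claim 13). A minimal sketch, with assumed weighting and normalization constants:

```python
# Hypothetical sketch: estimate associative-learning progress from a
# window of recent trials, combining typing-error rate and mean reaction
# time. The equal weighting and the 2-second reaction-time ceiling are
# illustrative assumptions, not taken from the disclosure.

def learning_progress(errors, reaction_times_s, max_rt_s=2.0):
    """Return a progress score in [0, 1]; higher means the visual-tactile
    association is better learned (fewer errors, faster reactions).
    `errors` is a list of 0/1 flags, `reaction_times_s` is in seconds."""
    if not errors or not reaction_times_s:
        return 0.0
    accuracy = 1.0 - sum(errors) / len(errors)
    mean_rt = sum(reaction_times_s) / len(reaction_times_s)
    speed = max(0.0, 1.0 - min(mean_rt, max_rt_s) / max_rt_s)
    return 0.5 * accuracy + 0.5 * speed   # equal weighting (assumed)
```

A score of this kind could then drive the content changes of step 619, e.g. advancing to harder tasks once the score exceeds a threshold.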
 またさらに、これまで操作者に触覚を与えるために必要であった別途付加的装置が不要になり、その結果ゲームごとに異なるコントローラが不要になるので、柔軟性、利便性、コスト削減という、経済上、リソース上、極めて大きな効果を奏する。 Furthermore, the separate additional device previously required to give the operator a tactile sensation becomes unnecessary, and as a result a different controller for each game is no longer needed; this yields flexibility, convenience, and cost reduction, which are extremely large benefits in terms of economy and resources.
 さらに、上記態様によれば、仮想ボタンや仮想キーボードの仮想空間内の配置や大きさが自由であるばかりでなく、プログラムによって操作者に無断で(或いは、操作者が意識することない状態で)ダイナミックに変更可能であるので、これまで実現できなかった複雑なゲームなどのプログラムを提供することが可能となる。 Furthermore, according to the above aspect, not only can virtual buttons and the virtual keyboard be freely arranged and sized in the virtual space, but the program can also change them dynamically without notifying the operator (or without the operator being aware of it), making it possible to provide programs, such as complex games, that could not previously be realized.
 加えて、仮想空間で障害物となりかねない机などの物体について、その平面部に仮想オブジェクトを形成することで、利用者の動作対象範囲は、この形成されたオブジェクトの範囲内に収まることになる。これにより、利用者の大きな動きが不要となり、周囲の家具にぶつかるなどのリスクが低減される。また、オブジェクトとその範囲の指定において、安全範囲確認(「クリアランス確認」あるいは「手動キャリブレーション」)を挟むことで、使用者が仮想空間で操作を行ううえでの現実範囲内において意図せぬ事態が発生する(例えば、机上端に置かれたコーヒーカップを倒す)リスクを減らすことができる。この効果は、確認した安全範囲を使用者の操作内容に応じて表示する手段(境界線や柵などの表示)と組み合わせることで、さらに強化できる。
また、動作対象範囲の広さに応じて、表示する操作対象の見た目(形状やレイアウト等)を変えることもできる。これにより、現実空間の広さの違い等に由来する、使用者の指定する操作範囲の広さの違いによる意図せぬ操作性の変化と映像的違和感をさらに減らした、没入感の高いコンテンツの提供が可能である。
In addition, by forming a virtual object on the flat portion of an object such as a desk that could become an obstacle in the virtual space, the user's motion target range is kept within the range of the formed object. This eliminates the need for large movements by the user and reduces risks such as colliding with surrounding furniture. In addition, inserting a safety range check (a "clearance check" or "manual calibration") when specifying an object and its range reduces the risk of unintended incidents occurring within the real-space range in which the user operates in the virtual space (for example, knocking over a coffee cup placed at the edge of the desk). This effect can be further enhanced by combining it with means for displaying the confirmed safety range according to the user's operation (such as displaying boundary lines or fences).
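A minimal sketch of the clearance check described above, assuming rectangular plane bounds and a fixed inset margin (both assumptions for illustration):

```python
# Hypothetical sketch of a "clearance check": inset the detected flat
# region by a safety margin and test whether a planned operation range
# stays wholly within the resulting safe region, so that contacts near
# the desk edge (e.g. next to a coffee cup) are rejected.
# The margin value (metres) is an illustrative assumption.

def safe_region(plane_min, plane_max, margin=0.05):
    """Inset the detected plane bounds by a clearance margin."""
    (x0, y0), (x1, y1) = plane_min, plane_max
    return (x0 + margin, y0 + margin), (x1 - margin, y1 - margin)

def range_is_safe(op_min, op_max, plane_min, plane_max, margin=0.05):
    """True if the user-specified operation range lies entirely inside
    the margin-inset safe region of the detected surface."""
    (sx0, sy0), (sx1, sy1) = safe_region(plane_min, plane_max, margin)
    (ox0, oy0), (ox1, oy1) = op_min, op_max
    return sx0 <= ox0 and sy0 <= oy0 and ox1 <= sx1 and oy1 <= sy1
```

When the check fails, the system could prompt the user to re-specify the range or display the safe boundary, as in the boundary-line display mentioned above.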
The appearance (shape, layout, etc.) of the displayed operation target can also be changed according to the size of the motion target range. This makes it possible to provide highly immersive content that further reduces unintended changes in operability and visual discomfort caused by differences in the size of the operation range specified by the user, which in turn arise from differences in the size of the real space.
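Adapting the operation target's appearance to the range size could be as simple as selecting a layout variant by available width; the breakpoint values are illustrative assumptions:

```python
# Hypothetical sketch: pick a virtual-keyboard layout variant according
# to the width of the operation range the user specified, so operability
# and appearance stay consistent across differently sized real spaces.
# Breakpoint widths (metres) are illustrative assumptions.

def choose_layout(range_width_m):
    """Return a layout name suited to the available surface width."""
    if range_width_m < 0.25:
        return "compact"    # fewer, larger keys for narrow desks
    elif range_width_m < 0.45:
        return "standard"
    return "full"           # full keyboard with number row
```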
 なお、上記においては、主に触覚を模擬する付加的装置が不要な記憶固定化促進システムとしての触覚拡張情報処理システムの観点から説明したが、本発明は、上記において詳述した機能をコンピュータ資源に実行させるためのソフトウエアとして、或いは、上記において詳述した機能によって特定の効果を招来する方法として、さらには、当該ソフトウエアが記録された記録媒体として、それぞれ実現することが可能である。 Although the above description has been given mainly from the viewpoint of a tactile augmented information processing system as a memory consolidation promoting system that requires no additional device simulating the tactile sense, the present invention can also be realized as software that causes computer resources to execute the functions detailed above, as a method that brings about specific effects through those functions, or as a recording medium on which such software is recorded.
 上述したように、本発明の一態様に係る、触覚を模擬する付加的装置が不要な記憶固定化促進システムともいえる触覚拡張情報処理システムによれば、触覚を模擬する付加的装置を用いることなく、操作者の前にある略平面を使ってキー押しなどの触覚を与えることができ、これによってこれまで以上にコンピュータの利用シーンを広げ、HMD等を用いた仮想空間上での操作性と没入感の向上が図られる。また、それだけでなく、視覚や聴覚と触覚との連合学習が形成される。したがって、本発明は、APPの利用リテラシーの向上を提供することができる等の観点からいっても、情報産業、ゲーム産業、音楽産業、観光産業、建設業、設備メンテナンス業、製造業、教育研修産業、或いは医療業、医療関連産業等において、多大な産業上の利用可能性及び利便性を有する。 As described above, according to the tactile augmented information processing system of one aspect of the present invention, which can also be regarded as a memory consolidation promoting system requiring no additional device that simulates the tactile sense, tactile sensations such as key presses can be given using the substantially flat surface in front of the operator, without any tactile-simulating additional device. This broadens the scenes in which computers can be used and improves operability and the sense of immersion in virtual spaces using an HMD or the like. In addition, associative learning between the visual or auditory senses and the tactile sense is formed. Therefore, also from the viewpoint of improving APP-usage literacy, the present invention has great industrial applicability and convenience in the information industry, game industry, music industry, tourism industry, construction industry, equipment maintenance industry, manufacturing industry, education and training industry, medical industry, medical-related industries, and the like.
1:略平面
2:仮想キーボード
3:仮想ボタン
4:操作者の実空間内における操作指示主体(手で例示)
5:HMD
6:ディスプレイ
7:TVカメラ
8:センサ
9:通信I/F
10:HMDの表示画面
11:APPの表示画面
12:仮想的平面
13:キーボード
14:ボタン
15:実空間内の操作指示主体(手を例示)の仮想空間内での表示
16:通信I/F
17:CPU
18:GPU
19:メモリ
20:外部記憶装置
21:動作制御部
601:HMD画面を含む表示画面に、略平面を含む空間の表示ステップ
603:撮影された画像から略平面部を有する物体が検出され、表示画面の所定の位置に仮想的平面が設定・表示されるステップ
605:操作指示主体による操作範囲の指定動作によって、操作範囲が略平面上に決定されるステップ
607:操作範囲内に仮想的キーボードや仮想的ボタンを含む仮想的操作対象を表示するステップ
609:ゲームや事務系ソフトを含むAPPによって生成された画面(コンテンツ)が仮想的操作対象の周辺に表示される もしくは、仮想的操作対象の位置に応じて表示されるステップ
611:コンテンツに対して操作指示主体が操作を与えることにより、操作主体の操作が検出される、或いは、コンテンツに対して操作指示主体が触覚を伴って操作を与え、その操作が検出される、ステップ
613:操作指示主体の操作がアプリケーションソフトウエアに転送され、その結果、表示コンテンツに変化が与えられるステップ
615:コンテンツに対する操作指示主体の動きが検出されるステップ
617:操作指示主体の動きが検出され、連合学習の進捗が検出されるステップ
619:検出した仮想的操作対象に対する操作および/または連合学習の進捗に応じて、操作対象であるコンテンツに変化が生じて次の操作が操作者に促されるステップ
2122:空間検出部
2123:操作範囲設定部
2124:操作検出部
2125:操作主体の動き検出部
2126:仮想的操作対象表示制御部
2127:表示制御部
2128:コンテンツ表示制御部
2129:連合学習進捗検出部
S1:(刺激)動作
S2:S1に対する机1からの反力
S3:反力S2による刺激が操作者4の末梢神経網を経由して中枢神経に伝達されるイベント
S4:動作S1を実体物に対して与えるという体験記憶
S5:仮想空間上でのオブジェクト601がオブジェクト603の位置に移動させることができたという体験記憶
S6:形成される連合性、連合記憶
1: Substantially flat surface
2: Virtual keyboard
3: Virtual button
4: Operation-instructing subject in the operator's real space (exemplified by a hand)
5: HMD
6: Display
7: TV camera
8: Sensor
9: Communication I/F
10: Display screen of the HMD
11: Display screen of the APP
12: Virtual plane
13: Keyboard
14: Button
15: Display in the virtual space of the operation-instructing subject (exemplified by a hand) in real space
16: Communication I/F
17: CPU
18: GPU
19: Memory
20: External storage device
21: Operation control unit
601: Step of displaying a space including a substantially flat surface on a display screen including the HMD screen
603: Step in which an object having a substantially flat portion is detected from the captured image, and a virtual plane is set and displayed at a predetermined position on the display screen
605: Step in which the operation range is determined on the substantially flat surface by the operation-instructing subject's action of specifying the operation range
607: Step of displaying a virtual operation target, including a virtual keyboard and virtual buttons, within the operation range
609: Step in which a screen (content) generated by an APP, including games and office software, is displayed around the virtual operation target, or displayed according to the position of the virtual operation target
611: Step in which the operation of the operating subject is detected when the operation-instructing subject gives an operation to the content, or in which the operation-instructing subject gives an operation to the content accompanied by a tactile sensation and that operation is detected
613: Step in which the operation of the operation-instructing subject is transferred to the application software and, as a result, the displayed content is changed
615: Step in which the movement of the operation-instructing subject with respect to the content is detected
617: Step in which the movement of the operation-instructing subject is detected and the progress of associative learning is detected
619: Step in which, according to the detected operation on the virtual operation target and/or the progress of associative learning, the content being operated changes and the operator is prompted to perform the next operation
2122: Space detection unit
2123: Operation range setting unit
2124: Operation detection unit
2125: Operating-subject motion detection unit
2126: Virtual operation target display control unit
2127: Display control unit
2128: Content display control unit
2129: Associative learning progress detection unit
S1: (Stimulus) action
S2: Reaction force from the desk 1 in response to S1
S3: Event in which the stimulus from reaction force S2 is transmitted to the central nervous system via the peripheral nerve network of the operator 4
S4: Experiential memory of giving action S1 to a physical object
S5: Experiential memory that object 601 in the virtual space could be moved to the position of object 603
S6: Associativity and associative memory that are formed

Claims (20)

  1.  物理的実体を持つ対象物の近傍に所在する操作者によって装着されもしくは前記操作者の近傍に配置された表示部と、
     前記操作者による前記対象物に対する動作もしくは前記対象物と関連づけられる動作を検知できるセンサと、
     前記表示部に対して表示されるべきコンテンツを指示もしくは特定する表示制御部と、
     前記対象物に対応する仮想空間上のオブジェクトに対して前記操作者が行う動作が検知動作として検知されたのに基づき前記動作に対応する前記仮想空間上の仮想位置情報を割り出し、検知動作とこれに対応する動作特定情報とについて予め定められた対応関係に基づいて前記動作に対応する動作特定情報を取得し、前記動作に対して前記対象物から与えられる物理的反力に係る記憶と前記動作特定情報に係る記憶との間の連合記憶を前記操作者に形成させる連合記憶形成促進部と
     を備えることを特徴とする触覚拡張情報処理システム。
    a display unit worn by an operator located in the vicinity of an object having physical substance or arranged in the vicinity of said operator;
    a sensor capable of detecting motion of the operator with respect to or associated with the object;
    a display control unit that instructs or specifies content to be displayed on the display unit;
    an associative memory formation promoting unit that, based on an action performed by the operator on an object in the virtual space corresponding to the target object being detected as a detected action, determines virtual position information in the virtual space corresponding to the action, acquires action specifying information corresponding to the action on the basis of a predetermined correspondence between detected actions and corresponding action specifying information, and causes the operator to form an associative memory between a memory of the physical reaction force given by the target object in response to the action and a memory of the action specifying information.
  2.  前記連合記憶形成促進部は、
     前記操作者が前記対象物に対して或いは前記対象物に関連させて現実空間上の現実位置において特定動作を行ったことを前記センサが検出したときに前記特定動作に対して定義づけられた前記オブジェクトを前記表示部における前記現実位置に対応する前記仮想空間上の仮想位置に表示させる仮想的操作対象表示制御部と、
     前記操作者が前記オブジェクトに対して与える前記動作から前記動作に対応する前記現実空間において前記対象物が所在する現実位置情報及び該現実位置情報に対応する前記仮想空間上の仮想位置情報並びに前記動作に対応する動作特定情報を得る操作主体動作検出部と、
     前記物理的実体を持つ対象物であって、前記操作者が与える前記動作に対する物理的反力を前記操作者に与える対象物と
     を備えることを特徴とする請求項1記載の触覚拡張情報処理システム。
    The associative memory formation promoting unit comprises:
    a virtual operation target display control unit that, when the sensor detects that the operator has performed a specific action on the target object, or in relation to the target object, at a real position in real space, displays the object defined for that specific action at a virtual position in the virtual space on the display unit corresponding to the real position;
    an operating subject motion detection unit that obtains, from the action the operator gives to the object, real position information where the target object is located in the real space corresponding to the action, virtual position information in the virtual space corresponding to that real position information, and action specifying information corresponding to the action; and
    the target object having physical substance, which gives the operator a physical reaction force in response to the action given by the operator;
    the tactile augmented information processing system according to claim 1.
  3.  前記センサは撮像機能を有することを特徴とする請求項1もしくは2記載の触覚拡張情報処理システム。 The tactile augmented information processing system according to claim 1 or 2, wherein the sensor has an imaging function.
  4.  撮像機能を有する撮像部をさらに備え、
     前記連合記憶形成促進部は、
     前記操作者が前記対象物に対して或いは前記対象物に関連させて現実空間上の現実位置において特定動作を行ったことを前記撮像部が検出したときに前記特定動作に対して定義づけられた前記オブジェクトを前記表示部における前記現実位置に対応する前記仮想空間上の仮想位置に表示させる仮想的操作対象表示制御部と、
     前記操作者が前記オブジェクトに対して与える前記動作から前記動作に対応する前記現実空間において前記対象物が所在する現実位置情報及び該現実位置情報に対応する前記仮想空間上の仮想位置情報並びに前記動作に対応する動作特定情報を得る操作主体動作検出部と、
     前記物理的実体を持つ対象物であって、前記操作者が与える前記動作に対する物理的反力を前記操作者に与える対象物と
     を備えることを特徴とする請求項1記載の触覚拡張情報処理システム。
    further comprising an imaging unit having an imaging function,
    wherein the associative memory formation promoting unit comprises:
    a virtual operation target display control unit that, when the imaging unit detects that the operator has performed a specific action on the target object, or in relation to the target object, at a real position in real space, displays the object defined for that specific action at a virtual position in the virtual space on the display unit corresponding to the real position;
    an operating subject motion detection unit that obtains, from the action the operator gives to the object, real position information where the target object is located in the real space corresponding to the action, virtual position information in the virtual space corresponding to that real position information, and action specifying information corresponding to the action; and
    the target object having physical substance, which gives the operator a physical reaction force in response to the action given by the operator;
    the tactile augmented information processing system according to claim 1.
  5.  前記表示部は、シースルー機構を有することを特徴とする請求項1~4のうちのいずれか1項記載の触覚拡張情報処理システム。 The tactile augmented information processing system according to any one of claims 1 to 4, wherein the display unit has a see-through mechanism.
  6.  前記表示部は、使用者に仮想空間を認識上形成できるものであることを特徴とする請求項1~5のうちのいずれか1項記載の触覚拡張情報処理システム。 The tactile augmented information processing system according to any one of claims 1 to 5, characterized in that the display unit can form a virtual space for the user to perceive.
  7.  前記センサは、光学センサ、磁気センサ、接触センサ、距離画像センサのうちの少なくとも一つを有することを特徴とする請求項1~6のうちのいずれか1項記載の触覚拡張情報処理システム。 The tactile augmented information processing system according to any one of claims 1 to 6, wherein the sensor has at least one of an optical sensor, a magnetic sensor, a contact sensor, and a distance image sensor.
  8.  前記対象物は弾性を備え、
     前記物理的反力に係る記憶には前記弾性に係る記憶が加重される
     ことを特徴とする請求項1~7のうちのいずれか1項記載の触覚拡張情報処理システム。
    the object is elastic,
    The tactile augmented information processing system according to any one of claims 1 to 7, wherein the memory related to the physical reaction force is weighted by the memory related to the elasticity.
  9.  前記対象物は、前記連合記憶の形成に十分な触覚を与える硬さ、弾性、温度、テクスチャのいずれか一つまたはその組み合わせを持ち、湾曲、凹凸、水平以外のうちの少なくともいずれかである請求項1~8のうちのいずれか1項記載の触覚拡張情報処理システム。 The tactile augmented information processing system according to any one of claims 1 to 8, wherein the target object has any one or a combination of hardness, elasticity, temperature, and texture giving a tactile sensation sufficient for forming the associative memory, and is at least one of curved, uneven, or non-horizontal.
  10.  検出した手や指の動きによって連合学習の進捗を検出する連合学習検出部をさらに備え、
     前記表示制御部は、前記連合学習検出部が検出した仮想的操作対象に対する操作および/または連合学習の進捗に応じて、操作対象であるコンテンツを変化させることを特徴とする請求項1~9のうちのいずれか1項記載の触覚拡張情報処理システム。
    further comprising an associative learning detection unit that detects the progress of associative learning from detected hand and finger movements,
    wherein the display control unit changes the content being operated according to the operation on the virtual operation target detected by the associative learning detection unit and/or the progress of associative learning; the tactile augmented information processing system according to any one of claims 1 to 9.
  11.  前記仮想的操作対象に対する操作および/または連合学習の進捗を計測し推定する連合学習検出部をさらに備え、
     前記表示制御部は、前記連合学習検出部によって推定された仮想的操作対象に対する操作および/または連合学習の進捗に応じて、操作対象であるコンテンツを変化させることを特徴とする請求項1~9のうちのいずれか1項記載の触覚拡張情報処理システム。
    further comprising an associative learning detection unit that measures and estimates the progress of operations on the virtual operation target and/or of associative learning,
    wherein the display control unit changes the content being operated according to the operation on the virtual operation target and/or the progress of associative learning estimated by the associative learning detection unit; the tactile augmented information processing system according to any one of claims 1 to 9.
  12.  前記連合学習検出部にて推定された連合学習の進捗度合いに応じて、前記コンテンツに係る視覚情報および/または聴覚情報を変化させることを特徴とする請求項11記載の触覚拡張情報処理システム。 The tactile augmented information processing system according to claim 11, wherein visual information and/or auditory information related to the content is changed according to the degree of progress of associative learning estimated by the associative learning detection unit.
  13.  前記連合学習の進捗の推定には、学習曲線、タイピングミス率、ゲームの勝敗率、定められた一連の動作の進捗度、使用者の動作速度や反応速度、正答率、正確性、アルゴリズムによって算出される完成度あるいは完成度を測るための変数、キー押し強度、キー押し強度を求めるための指形の変化情報、ボタンの押し時間情報のうちの少なくともいずれかを含むことを特徴とする請求項11もしくは12記載の触覚拡張情報処理システム。 The tactile augmented information processing system according to claim 11 or 12, wherein the estimation of the progress of associative learning includes at least one of: a learning curve, typing-error rate, game win/loss rate, degree of progress through a predetermined series of actions, the user's action speed or reaction speed, correct-answer rate, accuracy, an algorithm-calculated degree of completion or a variable for measuring the degree of completion, key-press strength, finger-shape change information for obtaining key-press strength, and button press-duration information.
  14.  前記表示部は前記操作者の眼前に設置されたディスプレイであり、
     前記ディスプレイに前記コンテンツに係る視覚情報が提示され、
     前記撮像部および/または前記センサによって前記操作者の動きを検知することができる、請求項3もしくは4記載の触覚拡張情報処理システム。
    The display unit is a display installed in front of the operator,
    visual information relating to the content is presented on the display;
    The tactile augmented information processing system according to claim 3 or 4, wherein the imaging unit and/or the sensor can detect movement of the operator.
  15.  表示部と、物理的実体を持つ対象物の近傍に所在する操作者の動作を検出できるセンサと、前記表示部及び前記センサを制御する制御部とを備えた情報処理システムにおいて、
     前記操作者が前記対象物に対してもしくは前記対象物と関連づけて動作をしたことが検知動作として検出される第1のステップと、
     前記検出された前記検知動作に係り前記対象物と関連づけられる現実空間上の現実位置情報に対応する仮想空間上の仮想位置情報を前記制御部が同定する第2のステップと、
     検知動作とこれに対応する動作特定情報とについて予め定められた対応関係に基づいて前記動作に対応する動作特定情報が取得される第3のステップと、
     前記動作に対して前記対象物から与えられる物理的反力に係る記憶と前記動作特定情報に係る記憶との間の連合記憶を前記操作者に形成させる第4のステップと
     を備えることを特徴とする触覚拡張情報処理方法。
    An information processing system comprising a display unit, a sensor capable of detecting actions of an operator located near a physical object, and a control unit controlling the display unit and the sensor,
    a first step in which a motion of the operator with respect to the object or in association with the object is detected as a detected motion;
    a second step of identifying, by the control unit, virtual position information in a virtual space corresponding to real position information in a real space associated with the object in relation to the detected sensing action;
    a third step of acquiring motion specifying information corresponding to the motion based on a predetermined correspondence relationship between the sensed motion and motion specifying information corresponding thereto;
    and a fourth step of causing the operator to form an associative memory between a memory of the physical reaction force given by the target object in response to the action and a memory of the action specifying information; a tactile augmented information processing method characterized by comprising these steps.
  16.  前記第4のステップは、
     前記制御部が前記特定動作に対応する仮想空間上のオブジェクトの大きさを調整し該調整された後の調整後オブジェクトを前記表示部の前記仮想空間上の仮想位置情報と同定された位置であって前記対象物と関連づけられる現実空間上の現実位置情報と重畳される位置に表示させ、
     前記操作者によって現実の前記対象物に対して与えられる動作を前記仮想空間上のオブジェクトに対して前記操作者から与えられた動作として検出し該検出された動作の種類及び該動作に係る現実空間上の現実位置情報を前記制御部が特定するとともに、前記仮想空間上のオブジェクトに対応する現実空間上の位置において前記対象物が前記動作に対する物理的反力を前記操作者に対して与え、
     前記検出された前記動作に係る現実空間上の現実位置情報を前記制御部が割り出して仮想空間上の仮想位置情報と同定したうえで前記検出された前記動作の種類に応じた動作が行われたとして該動作の種類に係る動作特定情報及び該動作に対応する仮想位置情報を特定し、
     前記動作に対して前記対象物から与えられる物理的反力を前記操作者が知覚した第1の記憶と前記動作特定情報を前記操作者が知覚した第2の記憶との間の連合記憶を前記操作者に形成させる
     ことを特徴とする請求項15記載の触覚拡張情報処理方法。
    The fourth step comprises:
    causing the control unit to adjust the size of an object in the virtual space corresponding to the specific action and to display the adjusted object at the position on the display unit identified as the virtual position information in the virtual space, superimposed on the real position information in real space associated with the target object;
    detecting an action given by the operator to the real target object as an action given by the operator to the object in the virtual space, the control unit specifying the type of the detected action and the real position information in real space related to the action, while the target object gives the operator a physical reaction force in response to the action at the position in real space corresponding to the object in the virtual space;
    the control unit determining the real position information in real space related to the detected action, identifying it with virtual position information in the virtual space, and then specifying, on the premise that an action of the detected type was performed, action specifying information related to the type of the action and virtual position information corresponding to the action; and
    causing the operator to form an associative memory between a first memory in which the operator perceived the physical reaction force given by the target object in response to the action and a second memory in which the operator perceived the action specifying information;
    the tactile augmented information processing method according to claim 15.
  17.  表示部と、物理的実体を持つ対象物の近傍に所在する操作者の動作を検出できるセンサと、前記表示部及び前記センサを制御する制御部とを備えた情報処理システムにおいて、
     前記表示部に操作対象となるコンテンツを表示する第1のステップと、
     前記操作者の前の実空間内にある物理的実体を伴う実体物に係る空間情報を検出する第2のステップと、
     前記第2のステップにおいて検出された位置に仮想的平面を設定する第3のステップと、
     前記第3のステップにおいて設定された仮想的平面に物理的に操作可能な範囲を設定する第4のステップと、
     前記第4のステップにおいて設定された操作可能な範囲内に仮想的ボタン、ゲームコントローラ、タッチパネル、ディスク、操作盤、マウス、キーボード、楽器、ホイール、ハンドル、レバー、スティックのうち少なくともいずれかを含む仮想的操作対象を前記仮想空間内に表示する第5のステップと、
     コンテンツの変化に応じて、前記操作者による前記仮想的操作対象に対する操作に対して実空間内で反力を与えつつ、前記操作を検出する第6のステップと、
     前記仮想的操作対象に対する前記操作における操作指示主体の動きを検出する第7のステップと、
     前記第7のステップにおいて検出された前記操作指示主体の動きによって連合学習の進捗を検出する第8のステップと、
     前記第8のステップにおいて検出された前記仮想的操作対象に対する操作および/または連合学習の進捗に応じて、操作対象であるコンテンツを変化させる第9のステップと
     を備えることを特徴とする触覚拡張情報処理方法。
    An information processing system comprising a display unit, a sensor capable of detecting actions of an operator located near a physical object, and a control unit controlling the display unit and the sensor,
    a first step of displaying content to be operated on the display unit;
    a second step of detecting spatial information relating to entities with physical entities in real space in front of the operator;
    a third step of setting a virtual plane at the position detected in the second step;
    a fourth step of setting a physically operable range on the virtual plane set in the third step;
    a fifth step of displaying, in the virtual space, a virtual operation target including at least one of virtual buttons, a game controller, a touch panel, a disc, an operation panel, a mouse, a keyboard, a musical instrument, a wheel, a steering handle, a lever, and a stick within the operable range set in the fourth step;
    a sixth step of detecting the operation while applying a reaction force in real space to the operation of the virtual operation target by the operator in response to a change in content;
    a seventh step of detecting a movement of an operation instructing subject in the operation on the virtual operation target;
    an eighth step of detecting the progress of associative learning based on the movement of the operation instructing subject detected in the seventh step;
    and a ninth step of changing the content being operated according to the operation on the virtual operation target and/or the progress of associative learning detected in the eighth step; a tactile augmented information processing method characterized by comprising these steps.
  18.  表示部と、物理的実体を持つ対象物の近傍に所在する操作者の動作を検出できるセンサと、前記表示部及び前記センサを制御する制御部とを備えた情報処理システムにおいて、
     コンピュータに、
     前記操作者が前記対象物に対してもしくは前記対象物と関連づけて動作をしたことを前記センサに検知動作として検出させる第1の手段と、
     前記センサによって検出された前記検知動作に係り前記対象物と関連づけられる現実空間上の現実位置情報に対応する仮想空間上の仮想位置情報を前記制御部に同定させる第2の手段と、
     検知動作とこれに対応する動作特定情報とについて予め定められた対応関係に基づいて前記動作に対応する動作特定情報を取得させる第3の手段と、
     前記動作に対して前記対象物から与えられる物理的反力に係る記憶と前記動作特定情報に係る記憶との間の連合記憶を前記操作者に形成させるべく、前記物理的反力が与えれるのと略同タイミングで前記動作特定情報、または前記動作特定情報を処理して得られる情報を前記操作者に対して提示させる第4の手段と
     として機能させることを特徴とする触覚拡張情報処理プログラム。
    An information processing system comprising a display unit, a sensor capable of detecting actions of an operator located near a physical object, and a control unit controlling the display unit and the sensor,
    to the computer,
    a first means for causing the sensor to detect, as a detection action, that the operator has performed an action with respect to the object or in association with the object;
    a second means for causing the control unit to identify virtual position information in the virtual space corresponding to the real position information in the real space associated with the object related to the sensing action detected by the sensor;
    a third means for acquiring action specifying information corresponding to the action based on a predetermined correspondence relationship between the detected action and the action specifying information corresponding thereto;
    and a fourth means for presenting the action specifying information, or information obtained by processing the action specifying information, to the operator at substantially the same timing as the physical reaction force is applied, so as to cause the operator to form an associative memory between a memory of the physical reaction force given by the target object in response to the action and a memory of the action specifying information; a tactile augmented information processing program characterized by causing a computer to function as the above means.
  19.  前記第4の手段は、前記コンピュータに、さらに、
     前記制御部が前記特定動作に対応する仮想空間上のオブジェクトの大きさを調整し該調整された後の調整後オブジェクトを前記表示部の前記仮想空間上の仮想位置情報と同定された位置であって前記対象物と関連づけられる現実空間上の現実位置情報と重畳される位置に表示させ、
     前記操作者によって現実の前記対象物に対して与えられる動作を前記仮想空間上のオブジェクトに対して前記操作者から与えられた動作として検出し該検出された動作の種類及び該動作に係る現実空間上の現実位置情報を前記制御部が特定するとともに、前記仮想空間上のオブジェクトに対応する現実空間上の位置において前記対象物が前記動作に対する物理的反力を前記操作者に対して与え、
     前記検出された前記動作に係る現実空間上の現実位置情報を前記制御部が割り出して仮想空間上の仮想位置情報と同定したうえで前記検出された前記動作の種類に応じた動作が行われたとして該動作の種類に係る動作特定情報及び該動作に対応する仮想位置情報を特定し、
     前記動作に対して前記対象物から与えられる物理的反力を前記操作者が知覚した第1の記憶と前記動作特定情報を前記操作者が知覚した第2の記憶との間の連合記憶を前記操作者に形成させるように、前記物理的反力が与えれるのと略同タイミングで前記動作特定情報、または前記動作特定情報を処理して得られる情報を前記操作者に対して提示させる
     ように機能させることを特徴とする請求項18記載の触覚拡張情報処理プログラム。
    The fourth means further causes the computer to:
    have the control unit adjust the size of an object in the virtual space corresponding to the specific action and display the adjusted object at the position on the display unit identified as the virtual position information in the virtual space, superimposed on the real position information in real space associated with the target object;
    detect an action given by the operator to the real target object as an action given by the operator to the object in the virtual space, the control unit specifying the type of the detected action and the real position information in real space related to the action, while the target object gives the operator a physical reaction force in response to the action at the position in real space corresponding to the object in the virtual space;
    have the control unit determine the real position information in real space related to the detected action, identify it with virtual position information in the virtual space, and then specify, on the premise that an action of the detected type was performed, action specifying information related to the type of the action and virtual position information corresponding to the action; and
    present the action specifying information, or information obtained by processing the action specifying information, to the operator at substantially the same timing as the physical reaction force is applied, so as to cause the operator to form an associative memory between a first memory in which the operator perceived the physical reaction force given by the target object in response to the action and a second memory in which the operator perceived the action specifying information;
    the tactile augmented information processing program according to claim 18.
  20.  A storage medium storing the tactile-sensation-expansion information processing program according to claim 18 or 19.
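The flow recited in the claims above (a motion on a real object is detected, its real position is identified with a virtual position, the motion type is specified, and motion-specifying information is presented at substantially the same timing as the physical reaction force) can be sketched as follows. This is purely an illustrative sketch and not the patented implementation; all function and class names here are hypothetical.

```python
# Illustrative sketch of the claimed flow (hypothetical names throughout):
# map a motion on a real object into virtual space, specify its type,
# and present motion-specifying information at roughly the same timing
# as the physical reaction force, supporting an associative memory.

from dataclasses import dataclass


@dataclass
class DetectedMotion:
    kind: str             # e.g. "tap", "stroke"
    real_position: tuple  # (x, y, z) in real space


def real_to_virtual(real_position, offset=(0.0, 0.0, 0.0)):
    """Identify real position information with virtual position
    information (a simple translation here; a real system would
    calibrate this mapping from the superimposed display)."""
    return tuple(r + o for r, o in zip(real_position, offset))


def specify_motion_info(motion, virtual_position):
    """Produce motion-specifying information for the detected motion."""
    return {"kind": motion.kind, "virtual_position": virtual_position}


def on_physical_contact(motion, present):
    """Called when the real object returns a physical reaction force.
    Presenting the motion-specifying information at substantially the
    same timing is what the claim relies on for associative memory."""
    vpos = real_to_virtual(motion.real_position)
    info = specify_motion_info(motion, vpos)
    present(info)  # e.g. a visual or auditory cue alongside the felt force
    return info


info = on_physical_contact(DetectedMotion("tap", (0.1, 0.2, 0.3)), print)
```

In this sketch, timing alignment is achieved simply by presenting the cue inside the contact callback; an actual system would need to bound the latency between the felt reaction force and the presented cue.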
PCT/JP2021/031776 2021-02-24 2021-08-30 Tactile-sensation-expansion information processing system, software, method, and storage medium WO2022180894A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022506693A JPWO2022180894A1 (en) 2021-02-24 2021-08-30

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-027536 2021-02-24
JP2021027536 2021-02-24

Publications (1)

Publication Number Publication Date
WO2022180894A1 true WO2022180894A1 (en) 2022-09-01

Family

ID=83048740

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/031776 WO2022180894A1 (en) 2021-02-24 2021-08-30 Tactile-sensation-expansion information processing system, software, method, and storage medium

Country Status (2)

Country Link
JP (1) JPWO2022180894A1 (en)
WO (1) WO2022180894A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010067062A (en) * 2008-09-11 2010-03-25 Ntt Docomo Inc Input system and method
JP2013114375A (en) * 2011-11-28 2013-06-10 Seiko Epson Corp Display system and operation input method
WO2018146922A1 * 2017-02-13 2018-08-16 Sony Corporation Information processing device, information processing method, and program
JP2020144233A * 2019-03-06 2020-09-10 Hitachi, Ltd. Learning assisting system, learning assisting device, and program


Also Published As

Publication number Publication date
JPWO2022180894A1 (en) 2022-09-01

Similar Documents

Publication Publication Date Title
Seinfeld et al. User representations in human-computer interaction
Cheng et al. Sparse haptic proxy: Touch feedback in virtual environments using a general passive prop
Martínez et al. Identifying virtual 3D geometric shapes with a vibrotactile glove
Bowman et al. 3d user interfaces: New directions and perspectives
Iwata et al. Project FEELEX: adding haptic surface to graphics
US9694283B2 (en) Method and apparatus for tracking of a subject in a video game
Mendes et al. Mid-air interactions above stereoscopic interactive tables
US8232989B2 (en) Method and apparatus for enhancing control of an avatar in a three dimensional computer-generated virtual environment
Rietzler et al. Conveying the perception of kinesthetic feedback in virtual reality using state-of-the-art hardware
Orozco et al. The role of haptics in games
Sadihov et al. Prototype of a VR upper-limb rehabilitation system enhanced with motion-based tactile feedback
Parisi Game interfaces as bodily techniques
Deng et al. Multimodality with eye tracking and haptics: a new horizon for serious games?
Freeman et al. The role of physical controllers in motion video gaming
Ariza et al. Ring-shaped haptic device with vibrotactile feedback patterns to support natural spatial interaction
KR20150097050A (en) learning system using clap game for child and developmental disorder child
Maggiorini et al. Evolution of game controllers: Toward the support of gamers with physical disabilities
Chapoulie et al. Finger-based manipulation in immersive spaces and the real world
WO2022180894A1 (en) Tactile-sensation-expansion information processing system, software, method, and storage medium
Richard et al. Multivibes: What if your vr controller had 10 times more vibrotactile actuators?
Young et al. Usability testing of video game controllers
Chien et al. Gesture-based head-mounted augmented reality game development using leap motion and usability evaluation
Yusof et al. Virtual Block Augmented Reality Game Using Freehand Gesture Interaction
Liao et al. Playing games with your mouth: Improving gaming experience with EMG supportive input device
Clark Understanding Hand Interactions and Mid-Air Haptic Responses within Virtual Reality and Beyond.

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022506693

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21927990

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21927990

Country of ref document: EP

Kind code of ref document: A1