WO2019022102A1 - Activity assistant method, program, and activity assistant system - Google Patents


Info

Publication number
WO2019022102A1
Authority
WO
WIPO (PCT)
Prior art keywords
activity
menu
target person
information
subject
Prior art date
Application number
PCT/JP2018/027797
Other languages
French (fr)
Japanese (ja)
Inventor
圭佑 中村
法上 司
Original Assignee
Panasonic Intellectual Property Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2017144008A external-priority patent/JP2019024580A/en
Priority claimed from JP2017144007A external-priority patent/JP2019024579A/en
Priority claimed from JP2017184111A external-priority patent/JP2019058285A/en
Application filed by Panasonic Intellectual Property Management Co., Ltd.
Publication of WO2019022102A1

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H1/00: Apparatus for passive exercising; Vibrating apparatus; Chiropractic devices, e.g. body impacting devices, external devices for briefly extending or aligning unbroken bones
    • A61H1/02: Stretching or bending or torsioning apparatus for exercising
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/22: Social work or social welfare, e.g. community support activities or counselling services

Definitions

  • the present disclosure generally relates to an activity support method, a program, and an activity support system, and more particularly, to an activity support method, a program, and an activity support system for supporting a target person's activity.
  • an exercise support system that supports the exercise of a user is known, and is disclosed, for example, in Patent Document 1.
  • the exercise support system described in Patent Document 1 includes a wrist device and a chest device, an imaging device, a network server, and a user terminal.
  • the wrist device and the chest device acquire sensor data during the running operation of the user.
  • the imaging device acquires a running image in synchronization with sensor data and the like.
  • the network server processes and analyzes the sensor data and the running video, compares the user's video with that of an elite runner, and creates advice data that includes, for each teaching item, an index superimposed on the comparative video and advice text corresponding to the teaching item.
  • the user terminal displays the advice data in a predetermined display form via the network.
  • in this exercise support system, the user (target person) is provided with advice on the running motion (activity) that the user performs spontaneously, but it is unknown whether that activity is appropriate for the target person. Therefore, this exercise support system has a problem in that it is difficult to present the subject with appropriate activities to perform.
  • the present disclosure has been made in view of the above, and it is an object of the present disclosure to provide an activity support method, a program, and an activity support system that can easily present to a target person an appropriate activity to be performed.
  • the activity support method includes a generation step, an acquisition step, and a presentation step.
  • the generation step is a step of generating an activity menu of the subject by one or more processors based on the input physical information.
  • the acquisition step is a step of acquiring the activity menu generated in the generation step via a network.
  • the presenting step is a step of presenting the activity menu acquired in the acquiring step.
  • a program according to an aspect of the present disclosure is a program for causing one or more processors to execute the above-described activity support method.
  • An activity support system includes a generation unit, an acquisition unit, and a presentation unit.
  • the generation unit generates an activity menu of the subject by one or more processors based on the input physical information.
  • the acquisition unit acquires the activity menu generated by the generation unit via a network.
  • the presentation unit presents the activity menu acquired by the acquisition unit.
  • FIG. 1 is a flowchart illustrating an example of an operation (activity support method) of an activity support system according to an embodiment of the present disclosure.
  • FIG. 2 is a conceptual diagram showing an example of the operation at the facility in the above-described activity support system.
  • FIG. 3 is a conceptual diagram showing an example of the operation at the user's home in the above-described activity support system.
  • FIG. 4 is a block diagram showing the configuration of the above-mentioned activity support system.
  • FIGS. 5A to 5C are conceptual diagrams showing an example of an activity menu presented by the presentation unit in the above-described activity support system.
  • FIG. 6 is a flowchart showing another example of the operation (activity support method) of the above-described activity support system.
  • FIGS. 7A to 7C are conceptual diagrams showing the target person performing rehabilitation using the facility system in the above-described activity support system.
  • FIGS. 8A to 8C are conceptual diagrams showing a plurality of subjects performing rehabilitation using the facility system in the above-mentioned activity support system.
  • the activity support method is a method for supporting the activity of the target person 200.
  • the “activity” in the present disclosure means the behavior of the target person 200 in daily life in general. That is, “activity” includes not only exercise of the target person 200, such as rehabilitation and training, but also actions such as the target person 200 taking in nutrition from food, and mental activities such as circle (club) activities of the target person 200.
  • the “rehabilitation” referred to in the present disclosure means physical or psychological training performed, for example, to enable an independent daily life, targeting a person whose physical ability, cognitive function, or the like has declined due to aging, illness, or injury.
  • the activity support method described below is a method for supporting the rehabilitation of the subject person 200.
  • the activity support method includes a generation step S2, an acquisition step S4, and a presentation step S5.
  • the generation step S2 is a step of generating the activity menu M1 (see FIGS. 5A to 5C) of the subject 200 by one or more processors based on the input physical information.
  • the “physical information” in the present disclosure includes, for example, the age, sex, height, weight, BMI (Body Mass Index), exercise capacity, and presence or absence of a physical or mental disease of the subject 200.
  • the “exercise ability” is the ability of the subject 200 to move the body (hands, feet, neck, waist, etc.), and includes, for example, grip strength and the ability to maintain posture (the time for which the subject can keep standing on one foot).
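As an illustration of how such physical information might be held in software, the sketch below defines a simple record. The field names, units, and the BMI formula shown (weight in kilograms divided by height in metres squared) are assumptions for this example; the disclosure itself does not specify a data format.

```python
from dataclasses import dataclass

@dataclass
class PhysicalInfo:
    """Illustrative record of a subject's physical information
    (field names and units are assumptions, not from the disclosure)."""
    age: int
    sex: str
    height_m: float          # height in metres
    weight_kg: float         # weight in kilograms
    grip_strength_kg: float  # grip strength, an exercise-ability measure
    one_leg_stand_s: float   # time the subject can stand on one foot, seconds

    @property
    def bmi(self) -> float:
        # BMI = weight (kg) / height (m) squared
        return self.weight_kg / (self.height_m ** 2)

# example subject
info = PhysicalInfo(age=72, sex="F", height_m=1.55, weight_kg=52.0,
                    grip_strength_kg=18.0, one_leg_stand_s=12.5)
print(round(info.bmi, 1))  # prints 21.6
```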
  • the “activity menu” in the present disclosure is a menu presented to the target person 200 in order to instruct the target person 200 on a specific activity or the like.
  • the input of the physical information is performed at a facility 1 such as a rehabilitation center (hereinafter, also referred to as a “first place”).
  • the input of the physical information is performed by measuring the motion of the object person 200 as shown in FIG. 2 as an example.
  • the input of physical information may be performed in the form of a question using an input device such as a keyboard or a microphone, for example.
  • the input of the physical information is performed by the target person 200 without the assistance of a therapist such as a physical therapist, an occupational therapist, or a speech therapist, but may instead be performed by the target person 200 with the assistance of a therapist.
  • the therapist, or an agent of the target person 200 such as a family member of the target person 200, may also perform the input instead of the target person 200.
  • in the generation step S2, one or more processors generate, based on the physical information input at the facility 1, an activity menu M1 of rehabilitation activities that can be carried out at the home 4 (hereinafter also referred to as the “second place”).
  • the activity menu M1 includes, for example, menus for training various movements of the target person 200 required in daily life, such as walking, standing on one foot, rising from a lying position, standing up and sitting down, and stepping up and down a platform.
  • the “rising movement” in the present disclosure means the movement of the target person 200 rising from a lying state.
  • the “stand-up movement” in the present disclosure means the movement of the target person 200 rising from the chair and / or the movement of sitting on the chair.
  • the activity menu M1 includes, for example, a recipe of cooking for obtaining nutrients necessary to restore or maintain the health of the subject 200.
  • “generation” in the present disclosure includes generating a new activity menu M1 based on physical information, and changing a part of the existing activity menu M1 based on physical information.
  • “generation” includes selecting an activity menu M1 suitable for the target person 200 from a plurality of existing activity menus M1 based on physical information.
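The “generation by selection” described above can be sketched as a lookup over a catalogue of existing menus. The catalogue contents, the use of single-leg stand time as the deciding input, and the thresholds are all invented for illustration; the disclosure does not define a concrete selection rule.

```python
# Illustrative catalogue of pre-existing activity menus, keyed by an
# assumed exercise-capacity level (menu names are invented examples).
MENU_CATALOGUE = {
    "low":    ["seated leg raises", "protein-rich soup recipe"],
    "medium": ["squats", "single-leg standing practice", "salad recipe"],
    "high":   ["platform step-ups", "walking programme"],
}

def generate_menu(one_leg_stand_s: float) -> list:
    """Select an activity menu M1 from the existing menus based on one
    piece of physical information (single-leg stand time, in seconds).
    The thresholds below are assumptions for illustration only."""
    if one_leg_stand_s < 5.0:
        level = "low"
    elif one_leg_stand_s < 20.0:
        level = "medium"
    else:
        level = "high"
    return MENU_CATALOGUE[level]

print(generate_menu(12.5))  # a subject with moderate balance ability
```

Changing part of an existing menu, the other form of “generation” the disclosure mentions, would simply edit the returned list before presenting it.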
  • the activity menu M1 generated in the generation step S2 is uploaded to the server 2 via the network N1 and stored in the server 2 (see FIG. 4).
  • the acquisition step S4 is a step of acquiring the activity menu M1 generated in the generation step S2 via the network N1.
  • the obtaining step S4 is performed at the home 4 of the target person 200, as shown in FIG.
  • the acquisition step S4 is executed by the target person 200 operating the operation terminal 3 owned by the target person 200 and downloading the activity menu M1 stored in the server 2 via the network N1.
  • the operation terminal 3 is, for example, a portable information terminal such as a tablet terminal or a smartphone, a personal computer (including a laptop type), a television receiver, various wearable terminals such as a watch type, or a dedicated device.
  • the presenting step S5 is a step of presenting the activity menu M1 acquired in the acquiring step S4.
  • the presentation step S5 is performed at the home 4 of the target person 200, as in the acquisition step S4.
  • in the presentation step S5, the activity menu M1 downloaded to the operation terminal 3 is output from the operation terminal 3 by voice, for example, or is displayed on the display unit 34 of the operation terminal 3 as an image (still image or moving image) (see FIGS. 5A to 5C).
  • the generation step S2 is performed at the first location, and the acquisition step S4 and the presentation step S5 are performed at the second location remote from the first location.
  • in the present embodiment, it is possible to present, at the home 4 of the target person 200, the activity menu M1 generated from the physical information that the target person 200 input at the facility 1. Therefore, the present embodiment has the advantage that it is easy to present to the target person 200 an appropriate activity menu M1, that is, an activity to be performed.
  • the activity support system 100 which is a system for implementing the activity support method according to the present embodiment, will be described below with reference to FIGS.
  • the activity support system 100 includes a facility system 10 provided in the facility 1, a server 2, and an operation terminal 3.
  • the facility system 10, the server 2 and the operation terminal 3 are connected to each other via the network N1.
  • although the server 2 is described as not being a component of the activity support system 100 in the present embodiment, the server 2 may be included among the components of the activity support system 100.
  • the server 2 is not an essential component, and may be omitted as appropriate.
  • the facility system 10 includes a first input unit 11, a first processing unit 12, a first communication unit 13, and a first storage unit 14.
  • the facility system 10 is implemented mainly with a computer system having one or more processors and a memory.
  • the functions of the first input unit 11, the first processing unit 12, and the first communication unit 13 are realized by one or more processors executing appropriate programs.
  • the program may be pre-recorded in a memory, or may be provided through a telecommunication line such as the Internet or in a non-transitory recording medium such as a memory card.
  • the first input unit 11 is an input device for the subject person 200 to input physical information.
  • the physical information input to the first input unit 11 is provided to the first processing unit 12.
  • a sensor device 111 and a display device 112 are connected to the facility system 10.
  • a communication method between the facility system 10 and each of the sensor device 111 and the display device 112 is, for example, bidirectional wire communication via a network such as a LAN (Local Area Network).
  • the communication system between the facility system 10 and the sensor device 111 or the display device 112 is not limited to wired communication, and may be wireless communication.
  • the sensor device 111 is a device that detects the position of the object person 200 in the detection space and the posture of the object person 200.
  • the “detection space” referred to in the present disclosure is a space of an appropriate size defined by the sensor device 111, and the target person 200 is in the detection space when inputting the physical information to the first input unit 11.
  • the sensor device 111 is installed, for example, on a wall surface 300 in a room. Since an image is projected on the wall surface 300 by the display device 112 as described later, basically, the object person 200 faces the wall surface 300 side (the sensor device 111 side).
  • the sensor device 111 includes a plurality of sensors such as a camera (image sensor) and a depth sensor.
  • the sensor device 111 further includes a processor or the like that performs appropriate signal processing on the outputs of the plurality of sensors.
  • the sensor device 111 detects a captured image of the target person 200, the position of the target person 200 including the lateral direction and the depth direction (the front-rear direction of the target person 200), and the posture of the target person 200. That is, the sensor device 111 detects the position (including the center-of-gravity position) of the target person 200 in the horizontal plane. Furthermore, the sensor device 111 detects, for example, whether the target person 200 is bent forward or backward, and in which direction and by how much joints such as the back, the waist, and the knees are bent.
  • the information on the position of the target person 200 in the detection space and the information on the posture of the target person 200 detected by the sensor device 111 are given to the first input unit 11 of the facility system 10 as the physical information of the target person 200. In this manner, by having the target person 200 take a specific posture (here, standing on one foot) and checking whether the target person 200 can maintain the specific posture for a predetermined time, it is possible to measure whether the target person 200 has sufficient muscle strength and sufficient joint flexibility.
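A minimal sketch of this measurement logic follows. The joint names, the mean-absolute-deviation metric, and the pass thresholds are assumptions for illustration; the patent does not specify how the deviation from the reference posture is computed.

```python
def posture_deviation(observed: dict, reference: dict) -> float:
    """Mean absolute difference between observed and reference joint
    angles, in degrees (joint names and metric are illustrative)."""
    return sum(abs(observed[j] - reference[j]) for j in reference) / len(reference)

def evaluate_one_leg_stand(hold_time_s: float, deviation_deg: float,
                           required_s: float = 10.0,
                           max_dev_deg: float = 15.0) -> bool:
    """True if the subject both held the posture for the required time
    and stayed close enough to the reference posture. The thresholds
    are assumptions, not values from the disclosure."""
    return hold_time_s >= required_s and deviation_deg <= max_dev_deg

# illustrative reference posture and one observation from the sensor
ref = {"knee": 170.0, "hip": 175.0, "back": 180.0}
obs = {"knee": 160.0, "hip": 172.0, "back": 178.0}
dev = posture_deviation(obs, ref)        # (10 + 3 + 2) / 3 = 5.0 degrees
print(evaluate_one_leg_stand(12.5, dev)) # prints True
```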
  • the display device 112 is, for example, a projector device that projects an image on a part (screen area 301) of the indoor wall surface 300.
  • the display device 112 is attached to, for example, a ceiling surface in a room.
  • the display device 112 projects an arbitrary full-color image on a screen area 301 set below the sensor device 111 on the wall surface 300.
  • the display device 112 can project not only the wall surface 300 but also an image on a floor surface, a ceiling surface, a dedicated screen, or the like.
  • the display device 112 is not limited to the configuration for displaying a two-dimensional video, and may display a three-dimensional video using a technique such as 3D (three dimensions) projection mapping, for example.
  • the reverse video 302 and the sample video 303 of the target person 200 are displayed in the screen area 301.
  • the reverse video 302 is a video obtained by horizontally reversing a video of the entire body of the subject 200 captured from the front of the subject 200 with the camera of the sensor device 111.
  • the size and display position of the reverse video 302 in the screen area 301 are adjusted so that the reverse video 302 is displayed substantially in real time and appears the same as the image (mirror image) of the target person 200 in a mirror.
  • the sample image 303 is an image defining a movement (posture and the like) which becomes an example when inputting physical information.
  • a stick figure indicating the correct posture for “one-foot standing” is displayed as the sample video 303 so as to be superimposed on the reverse video 302.
  • the first processing unit 12 (hereinafter also referred to as the “generation unit 12”) has a function of executing the above-described generation step S2. That is, based on the physical information of the target person 200 input to the first input unit 11, the generation unit 12 generates the activity menu M1 of the target person 200 with one or more processors. In the present embodiment, the first processing unit 12 evaluates the exercise ability of the target person 200 from the difference (the magnitude of the deviation) between the posture of the target person 200 input as physical information and the reference posture specified by reference data, and from the length of time for which the target person 200 can maintain the one-foot standing posture.
  • the first processing unit 12 generates the activity menu M1 for the target person 200 by selecting an activity menu M1 from the plurality of activity menus M1 stored in the first storage unit 14 in accordance with the evaluation of the exercise ability of the target person 200.
  • the first processing unit 12 has a function of executing an updating step S9 (see FIG. 6) of updating the activity menu M1 to be generated.
  • the updating step S9 is a step of updating the activity menu M1 by one or more processors based on the activity result (described later) of the subject 200 acquired in the result acquiring step S6 (described later).
  • the “activity result” in the present disclosure is the result of the subject person 200 executing the presented activity menu M1.
  • the update step S9 will be described in detail in "(3.2) Evaluation of activity result of target person and update of activity menu" described later.
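Although the details of update step S9 are deferred to section (3.2), a plausible shape of such an update can be sketched as follows. The completed-ratio input and the plus/minus 20 percent adjustment rule are assumptions invented for this example, not part of the disclosure.

```python
def update_menu(menu: dict, completed_ratio: float) -> dict:
    """Sketch of update step S9: adjust the exercise volume in the
    activity menu based on the subject's activity result. Both the
    completed_ratio input and the 20% adjustment rule are assumptions."""
    updated = dict(menu)
    if completed_ratio >= 0.9:       # menu was easy: raise the load
        updated["squat_reps"] = int(menu["squat_reps"] * 1.2)
    elif completed_ratio < 0.5:      # menu was too hard: lower the load
        updated["squat_reps"] = max(1, int(menu["squat_reps"] * 0.8))
    return updated

menu = {"squat_reps": 10}
print(update_menu(menu, 1.0))  # prints {'squat_reps': 12}
print(update_menu(menu, 0.3))  # prints {'squat_reps': 8}
```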
  • the first communication unit 13 is a communication interface for communicating with the server 2 or the operation terminal 3 via the network N1.
  • the communication method between the first communication unit 13 and the server 2 or the operation terminal 3 is bidirectional wireless communication.
  • the communication method between the first communication unit 13 and the server 2 or the operation terminal 3 is not limited to wireless communication, and may be wired communication.
  • the first communication unit 13 transmits the activity menu M1 generated by the first processing unit 12 to the server 2 via the network N1.
  • the first storage unit 14 includes, for example, a rewritable nonvolatile memory such as an EEPROM (Electrically Erasable Programmable Read-Only Memory).
  • the first storage unit 14 stores a plurality of activity menus M1 that can be selected when the first processing unit 12 generates the activity menu M1.
  • the server 2 includes a second communication unit 21, a second processing unit 22, and a second storage unit 23.
  • the server 2 is implemented mainly with a computer system having one or more processors and a memory.
  • the functions of the second communication unit 21 and the second processing unit 22 are realized by one or more processors executing appropriate programs.
  • the program may be pre-recorded in a memory, or may be provided through a telecommunication line such as the Internet or in a non-transitory recording medium such as a memory card.
  • the second communication unit 21 is a communication interface for communicating with the facility system 10 or the operation terminal 3 via the network N1.
  • the communication scheme between the second communication unit 21 and the facility system 10 or the operation terminal 3 is bidirectional wireless communication.
  • the communication method between the second communication unit 21 and the facility system 10 or the operation terminal 3 is not limited to wireless communication, and may be wired communication.
  • the second communication unit 21 communicates with the first communication unit 13 of the facility system 10 via the network N1 to receive the activity menu M1 generated by the first processing unit 12.
  • the second communication unit 21 is controlled by the second processing unit 22 to transmit the activity result of the target person 200 described later and the evaluation on the activity result to the facility system 10 via the network N1.
  • the second communication unit 21 is controlled by the second processing unit 22 to transmit the activity menu M1 for the target person 200 stored in the second storage unit 23 (that is, the activity menu M1 generated by the first processing unit 12) to the operation terminal 3 via the network N1.
  • the second communication unit 21 communicates with the third communication unit 31 via the network N1 to receive the activity result input to the operation terminal 3.
  • the second processing unit 22 has a function of executing an evaluation step S8 (see FIG. 6) for evaluating the activity result of the object person 200.
  • the evaluation step S8 is a step in which one or more processors evaluate the activity of the target person 200 based on the activity menu M1 generated in the generation step S2 and the activity result of the target person 200 acquired in the result acquisition step S6 (described later).
  • the evaluation step S8 will be described in detail later in “(3.2) Evaluation of activity result of target person and update of activity menu”.
  • the second storage unit 23 includes, for example, an auxiliary storage device such as a hard disk drive (HDD) or a solid state drive (SSD).
  • the second storage unit 23 stores the activity menu M1 received by the second communication unit 21 in association with identification information (user ID) for identifying the target person 200.
  • the second storage unit 23 stores the activity result received by the second communication unit 21 in association with the identification information of the target person 200.
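The storage behavior described here can be sketched with in-memory dictionaries standing in for the HDD/SSD of the second storage unit 23. The class and method names are illustrative, not from the disclosure.

```python
class MenuStore:
    """Sketch of the second storage unit 23: activity menus and
    activity results are stored in association with the subject's
    identification information (user ID)."""
    def __init__(self):
        self._menus = {}     # user_id -> activity menu M1
        self._results = {}   # user_id -> list of activity results

    def store_menu(self, user_id: str, menu: list):
        self._menus[user_id] = menu

    def get_menu(self, user_id: str) -> list:
        # unknown subjects get an empty menu rather than an error
        return self._menus.get(user_id, [])

    def store_result(self, user_id: str, result: dict):
        self._results.setdefault(user_id, []).append(result)

store = MenuStore()
store.store_menu("user-0001", ["squats", "salad recipe"])
print(store.get_menu("user-0001"))  # prints ['squats', 'salad recipe']
```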
  • the operation terminal 3 includes a third communication unit 31, a third processing unit 32, a second input unit 33, and a display unit 34.
  • the operation terminal 3 is implemented mainly with a computer system having one or more processors and a memory, and is, as an example, a general-purpose tablet terminal. Dedicated application software is installed on the operation terminal 3, and when this application software is activated, the functions of the third communication unit 31, the third processing unit 32, the second input unit 33, and the display unit 34 are realized.
  • the operation terminal 3 has a touch panel display, and the touch panel display realizes a function of receiving an operation of the target person 200 and a function of displaying information on the target person 200.
  • the touch panel display is configured of, for example, a liquid crystal display or an organic EL (Electro Luminescence) display.
  • the operation terminal 3 determines that an object such as a button has been operated by detecting an operation (tap, swipe, drag, etc.) on the object on the screen displayed on the touch panel display.
  • the touch panel display functions as a user interface that receives an operation input from the target person 200. That is, in the present embodiment, the touch panel display of the operation terminal 3 implements the functions of the second input unit 33 and the display unit 34.
  • the third communication unit 31 is a communication interface for communicating with the facility system 10 or the server 2 via the network N1.
  • the communication method between the third communication unit 31 and the facility system 10 or the server 2 is bidirectional wireless communication.
  • the third communication unit 31 communicates with the second communication unit 21 of the server 2 via the network N1 to receive the activity menu M1 for the target person 200 stored in the server 2.
  • the third communication unit 31 (hereinafter, also referred to as “acquisition unit 31”) has a function of executing the above-described acquisition step S4. That is, the acquisition unit 31 acquires the activity menu M1 generated by the first processing unit (generation unit) 12 via the network N1.
  • upon receiving a request operation from the target person 200 at the second input unit 33, the third processing unit 32 requests the server 2 to transmit the activity menu M1 linked to the target person 200.
  • the third communication unit 31 receives the activity menu M1 from the server 2 via the network N1.
  • alternatively, the target person 200 reads a QR code (registered trademark) distributed at the facility 1 using the built-in camera of the operation terminal 3, and accesses a URL (Uniform Resource Locator) included in the code.
  • the third communication unit 31 receives the activity menu M1 from the server 2 via the network N1.
  • the activity menu M1 transmitted from the server 2 is the menu stored in the second storage unit 23 of the server 2. That is, in the present embodiment, in the acquisition step S4, the activity menu M1 is acquired from the second storage unit 23 (storage unit) that stores the activity menu M1 generated in the generation step S2. In other words, in the acquisition step S4, the activity menu M1 generated in the generation step S2 is not acquired directly from the facility system 10, but is acquired from the server 2 in which it is temporarily stored.
  • therefore, the activity menu M1 need not be presented to the target person 200 at the time when the activity menu M1 is generated at the facility 1; it can be presented to the target person 200 after a predetermined time has elapsed from that time. In addition, if the history of the activity menus M1 presented to the target person 200 in the past is stored in the second storage unit 23, the target person 200 can refer to that history by making a request to the server 2.
  • the third processing unit 32 (hereinafter, also referred to as “presentation unit 32”) has a function of executing the above-described presentation step S5. That is, the presentation unit 32 presents the activity menu M1 acquired by the third communication unit (acquisition unit) 31.
  • the activity menu M1 is presented to the target person 200 by being displayed on the display unit 34 of the operation terminal 3 as shown in, for example, FIGS. 5A to 5C.
  • an exercise menu M11 and a cooking menu M12 are displayed on the display unit 34 as the activity menu M1.
  • the exercise menu M11 is, for example, a menu of exercises, such as push-ups, squats, and single-leg standing, for training one or more specific parts of the body of the subject 200 that should be strengthened.
  • the display unit 34 displays, as an exercise menu M11, text and a figure (including a picture) for explaining the exercise method to be executed by the subject 200.
  • the cooking menu M12 is, for example, a recipe for a dish, such as a salad, suitable for supplementing nutrition that the subject 200 is lacking.
  • the display unit 34 displays, as the cooking menu M12, text and a diagram (including a photo) for explaining the food to be prepared by the subject.
  • the display unit 34 displays, as the exercise menu M11, a moving image explaining the exercise to be performed by the subject 200. This moving image is displayed, for example, when the target person 200 performs a specific operation on the operation terminal 3, such as touching the exercise menu M11 while the image shown in FIG. 5A is displayed on the display unit 34. Further, in the example shown in FIG. 5C, the display unit 34 displays, as the cooking menu M12, a moving image explaining the dish to be prepared by the target person 200. This moving image is displayed, for example, when the target person 200 performs a specific operation on the operation terminal 3, such as touching the cooking menu M12 while the image shown in FIG. 5A is displayed on the display unit 34.
  • Operation: (3.1) Presentation of Activity Menu to Target Person
  • the target person 200 or an agent of the target person 200 inputs physical information of the target person 200 (step S1).
  • the physical information of the target person 200 is input to the first input unit 11 of the facility system 10.
  • one or more processors of the first processing unit 12 generate an activity menu M1 based on the physical information of the object person 200 input to the first input unit 11 (generation step S2).
  • the generated activity menu M1 is transmitted to the server 2 via the first communication unit 13 and the network N1.
  • when the server 2 receives the activity menu M1 at the second communication unit 21, the second processing unit 22 associates the received activity menu M1 with the target person 200 and stores it in the second storage unit 23 (step S3).
• The target person 200 operates the operation terminal 3 at home 4 to download the activity menu M1 from the server 2 (acquisition step S4).
  • the activity menu M1 acquired from the server 2 is stored in the memory of the operation terminal 3.
  • the target person 200 operates the operation terminal 3 at home 4 and causes the display unit 34 to display the activity menu M1, whereby the activity menu M1 is presented to the target person 200 (presentation step S5). Therefore, the target person 200 can carry out the activity menu M1 at home 4 while looking at the activity menu M1 displayed on the display unit 34.
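The flow of steps S1 through S5 above can be sketched in a few lines of Python. This is a minimal illustration only: the identifiers (`generate_activity_menu`, `Server`) and the menu rules are hypothetical, not part of the disclosure.

```python
# Hypothetical sketch of steps S1-S5: physical information entered at the
# facility (step S1) yields an activity menu (generation step S2) that is
# stored on the server linked to the subject (step S3) and later downloaded
# at home (acquisition step S4) for presentation (step S5).

def generate_activity_menu(physical_info):
    """Generation step S2: derive a menu from physical information.
    The rules below are invented for illustration."""
    menu = []
    if physical_info.get("leg_strength", 100) < 50:
        menu.append("exercise menu M11: squat")
    if physical_info.get("vegetable_intake", 100) < 30:
        menu.append("cooking menu M12: salad recipe")
    return menu or ["exercise menu M11: walking"]

class Server:
    """Server 2: keeps each activity menu associated with its subject."""
    def __init__(self):
        self._menus = {}
    def upload(self, subject_id, menu):      # step S3
        self._menus[subject_id] = menu
    def download(self, subject_id):          # acquisition step S4
        return self._menus[subject_id]

physical_info = {"leg_strength": 40, "vegetable_intake": 25}   # step S1
server = Server()
server.upload("subject-200", generate_activity_menu(physical_info))
menu_at_home = server.download("subject-200")
print(menu_at_home)   # presentation step S5 would show this on display unit 34
```

The point of the sketch is only the division of roles: generation at the facility, storage on the server keyed by subject, retrieval from a different place.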
• According to the present embodiment, the activity menu M1 generated when the target person 200 inputs physical information at the facility 1 can be presented at the home 4 of the target person 200. Therefore, the present embodiment has an advantage that it is easy to present the target person 200 with an appropriate activity menu M1, that is, an activity to be performed. Moreover, in the present embodiment, the target person 200 only needs to input physical information at the facility 1 and need not draw up the activity menu M1 by himself or herself.
• The present embodiment also has an advantage that the target person 200 can easily carry out the activity menu M1 at a place different from the facility 1 (here, the home 4 of the target person 200). For example, even if the subject 200 carries out the activity menu M1 with the assistance of a therapist at the facility 1, the subject 200 may be unable to take the activity menu M1 home, or may forget the acquired activity menu M1. In such a case, it is difficult for the target person 200 to carry out the same activity menu M1 at home 4. In addition, for example, when the facility 1 lacks the space needed to carry out the activity menu M1, or when the facility 1 is easily accessible to the public, the target person 200 cannot carry out the activity menu M1 at the facility 1. In such a case, the target person 200 has no opportunity to receive the activity menu M1 at the facility 1 in the first place.
• In contrast, in the present embodiment the target person 200 can carry out the activity menu M1 at home 4. Further, the therapist does not have to instruct the target person 200 on the activity menu M1, which has the additional advantage of reducing the burden on the therapist.
• The target person 200 implements the activity menu M1 while looking at the activity menu M1 displayed on the display unit 34, and then operates the operation terminal 3 to input the result of performing the activity menu M1 (that is, the activity result). For example, if the activity menu M1 is the exercise menu M11, the target person 200 inputs the activity result by entering into the operation terminal 3 a moving image capturing himself or herself performing the exercise menu M11. Likewise, if the activity menu M1 is the cooking menu M12, the target person 200 inputs the activity result by entering into the operation terminal 3 an image of the dish prepared based on the cooking menu M12. Alternatively, the target person 200 may input, as the activity result, the fact that the activity menu M1 has been performed, the time zone in which it was performed, or the like.
• Step S6 (hereinafter also referred to as "result acquisition step S6") is a step of acquiring the activity result of the target person 200 based on the activity menu M1 generated in the generation step S2.
• Step S7 (hereinafter also referred to as "storage step S7") is a step of storing the activity result acquired in the result acquisition step S6 in the second storage unit (storage unit) 23.
  • the second processing unit 22 of the server 2 evaluates the activity of the subject 200 by one or more processors based on the acquired activity result of the subject 200 (evaluation step S8).
• For example, the second processing unit 22 compares the moving image of the target person 200 performing the exercise menu M11 with a reference moving image of a trainer performing the exercise menu M11, using image analysis technology or the like, and thereby evaluates the implementation accuracy of the exercise menu M11 by the target person 200.
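As a rough illustration of this comparison, the two moving images can be reduced to aligned sequences of a single joint angle and scored by their mean deviation. The scoring formula and names below are simplifying assumptions, not the actual image-analysis method.

```python
def implementation_accuracy(subject_angles, reference_angles):
    """Score 0-100: compare the subject's joint-angle sequence against the
    trainer's reference sequence; a larger mean deviation lowers the score.
    (Hypothetical simplification of the video comparison in step S8.)"""
    if len(subject_angles) != len(reference_angles):
        raise ValueError("sequences must be aligned to the same frame count")
    mean_dev = sum(abs(s - r) for s, r in zip(subject_angles, reference_angles)) / len(reference_angles)
    return max(0.0, 100.0 - mean_dev)

reference = [10, 30, 60, 30, 10]   # trainer performing exercise menu M11
subject = [12, 28, 55, 33, 11]     # target person 200 performing the same motion
print(implementation_accuracy(subject, reference))
```

In practice the angle sequences would come from pose estimation on the two videos; only the deviation-based scoring idea is illustrated here.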
  • the second processing unit 22 links the evaluation result to the target person 200 and stores the result in the second storage unit 23. Further, the second processing unit 22 transmits the evaluation result to the facility system 10 via the network N1 by controlling the second communication unit 21.
• When the first processing unit 12 of the facility system 10 receives the evaluation result at the first communication unit 13, the first processing unit 12 updates the activity menu M1 by one or more processors based on the received evaluation result (in other words, the activity result of the target person 200) (update step S9). That is, even when the same physical information is input at the facility 1, the generated activity menu M1 differs before and after the update. Of course, the update may result in no change to the activity menu M1.
• The first processing unit 12 updates the activity menu M1 by, for example, performing machine learning using the evaluation results (the activity results of the target persons 200). For example, suppose a plurality of target persons 200 use the facility 1. In this case, the first processing unit 12 may refer to the evaluation results of the plurality of target persons 200 and preferentially present the activity menu M1 that was presented to the majority of them. In addition, for example, when the first processing unit 12 obtains evaluation results showing that many target persons 200 have difficulty performing an exercise included in the activity menu M1, the first processing unit 12 may update the activity menu M1 by removing the exercise or adding an exercise of lower difficulty in its place.
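One way such an update could be realized is the simple rule below: an exercise that a majority of subjects found difficult is removed, or replaced by an easier one if a substitute is known. This is an invented sketch of the idea, not the machine-learning procedure itself.

```python
def update_activity_menu(menu, evaluations, easier=None, threshold=0.5):
    """Update step S9 sketch: drop exercises whose "difficult" rate among
    the evaluation results exceeds the threshold, substituting an easier
    exercise when one is available."""
    easier = easier or {}
    updated = []
    for exercise in menu:
        results = evaluations.get(exercise, [])
        fail_rate = results.count("difficult") / len(results) if results else 0.0
        if fail_rate > threshold:
            if exercise in easier:
                updated.append(easier[exercise])   # replace with easier exercise
        else:
            updated.append(exercise)               # keep as is
    return updated

menu = update_activity_menu(
    ["squat", "one-foot standing"],
    {"squat": ["ok", "difficult", "difficult"]},
    easier={"squat": "chair squat"},
)
print(menu)
```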
• As described above, in the present embodiment the degree of implementation of the activity menu M1 by the target person 200 is evaluated. Therefore, for example, the evaluation result can be displayed on the display unit 34 of the operation terminal 3 in response to a request from the target person 200, which has the advantage of improving the motivation of the target person 200. Further, in the present embodiment, the evaluation result (the activity result of the target person 200) is fed back to the facility system 10 to update the activity menu M1, which has the advantage of making it easier to present an appropriate activity menu M1 to the target person 200.
• The physical information is detected when the target person 200 executes the instruction menu that the facility system 10 presents to the target person 200 at the facility 1. That is, when the target person 200 executes the instruction menu, the information on the position of the target person 200 in the detection space and the information on the posture of the target person 200 detected by the sensor device 111 are given to the first input unit 11 as physical information.
• The "instruction menu" in the present disclosure is any menu selected from a plurality of rehabilitation menus, and is presented to the target person 200 to instruct the target person 200 on a specific training or the like.
  • a plurality of rehabilitation menus are classified into a plurality of first stage menus and a plurality of second stage menus.
• Each of the plurality of first stage menus is, for example, a training menu for a motion itself that the target person 200 performs in daily life, such as a walking motion.
• Each of the plurality of second stage menus is, for example, a training menu for the element operations necessary for motions that the target person 200 performs in daily life.
  • the “element operation” in the present disclosure is an individual operation after decomposition when the operation performed by the target person 200 in daily life is decomposed into a plurality of elements.
• For example, the walking motion is decomposed into a plurality of element operations including a motion to bend the hip joint, a motion to stretch the hip joint, a motion to bend the knee joint, a motion to stretch the knee joint, a motion to bend the ankle, a motion to stretch the ankle, a motion to swing the arm forward, a motion to swing the arm backward, and the like.
  • a plurality of element movements are defined for each part of the body and for each movement of each part.
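The decomposition above can be represented as a table keyed by body part and movement, which flattens into the individual element operations. The data structure is a hypothetical representation; its contents follow the walking example.

```python
# Element operations for the walking motion, per body part and movement.
WALKING_ELEMENTS = {
    "hip joint": ["bend", "stretch"],
    "knee joint": ["bend", "stretch"],
    "ankle": ["bend", "stretch"],
    "arm": ["swing forward", "swing backward"],
}

def element_operations(motion_table):
    """Flatten the table into individual (part, movement) element operations."""
    return [(part, move) for part, moves in motion_table.items() for move in moves]

ops = element_operations(WALKING_ELEMENTS)
print(len(ops))
```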
• In this case, the facility system 10 first performs screening by asking the target person 200 questions, as shown in FIG. 7A. At this time, the facility system 10 causes questions for the target person 200 to be displayed in the screen area 301, and acquires the answers that the target person 200 inputs to the operation terminal 5.
• The operation terminal 5 is, for example, a tablet terminal or a smartphone, and has a function of presenting information to the target person 200 (display and/or voice output), a function of communicating with the facility system 10, a function of receiving operations by the target person 200, and the like.
  • the facility system 10 acquires data for determining the physical ability of the subject 200 in addition to the information on the attributes of the subject 200 (hereinafter, referred to as "third information").
• The facility system 10 determines the necessity of confirming physical ability based on the result of the screening. For example, when it is judged from the screening that there is no problem with physical ability in daily life, confirmation of physical ability is judged to be unnecessary.
  • the facility system 10 executes a pre-menu selection process.
  • the “pre-menu” in the present disclosure is a menu to be executed by the target person 200 prior to the instruction menu in order to select (determine) the instruction menu.
  • the facility system 10 selects one of the first stage menus suitable for the subject 200 from among the plurality of first stage menus stored in the first storage unit 14 based on the result of the screening.
• Here, the first storage unit 14 stores, in addition to the plurality of first stage menus, a conditional expression for selecting a pre-menu from the result of the screening, and the facility system 10 selects the pre-menu according to this conditional expression.
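Such a conditional expression could take the form of ordered rules mapping screening answers to a first stage menu, as in the sketch below. The rule contents and field names are invented for illustration, not taken from the disclosure.

```python
# Hypothetical pre-menu selection rules: each rule pairs a condition on the
# screening result with the first stage menu to select when it holds.
PRE_MENU_RULES = [
    (lambda s: s.get("falls_last_year", 0) > 0, "one-foot standing"),
    (lambda s: not s.get("walks_daily", True), "walking"),
]
DEFAULT_PRE_MENU = "one-foot standing"

def select_pre_menu(screening):
    """Return the first stage menu whose condition matches the screening."""
    for condition, menu in PRE_MENU_RULES:
        if condition(screening):
            return menu
    return DEFAULT_PRE_MENU

print(select_pre_menu({"falls_last_year": 2}))
```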
  • the facility system 10 controls the display device 112 based on pre-information representing the selected pre-menu.
  • the display device 112 displays, on the screen area 301, the pre-menu to be performed by the subject 200 and the support information to support the execution of the pre-menu by the subject 200.
• In this example, the menu "one foot standing" is selected as the pre-menu, and the reverse video 302 of the target person 200 and the sample video 303 are displayed in the screen area 301 as support information.
• The sample video 303 is generated from the reference data stored in the first storage unit 14 in association with the rehabilitation menu (first stage menu) selected as the pre-menu, and is an image that defines an exemplary movement (posture, etc.) in the pre-menu.
  • a stick picture indicating the correct posture in the “one-foot standing” is displayed as the sample video 303 so as to be superimposed on the reverse video 302.
  • the sensor device 111 detects the movement of the target person 200, and the facility system 10 executes an instruction menu selection process.
• In the selection process of the instruction menu, the facility system 10 first acquires, from the sensor device 111, information (hereinafter referred to as "first information") on the motion of the target person 200 operating according to the pre-menu. Furthermore, the facility system 10 acquires information (hereinafter referred to as "second information") for proposing one of the plurality of rehabilitation menus as the instruction menu. In the present input example, the facility system 10 acquires, from the first storage unit 14, second information for selecting one of the plurality of second stage menus as the instruction menu. At this time, the facility system 10 acquires, as the second information, reference data defining at least an exemplary movement in the pre-menu.
• Then, the facility system 10 evaluates the motion of the target person 200 based on the first information and the second information, and further on the third information acquired in the screening.
  • the facility system 10 digitizes the difference (magnitude of deviation) between the posture of the subject 200 and the reference posture specified by the reference data, and evaluates the operation of the subject 200 by this difference.
• When the menu "one foot standing" is selected as the pre-menu as shown in FIG. 7B, for example, the length of time for which the target person 200 can maintain the one-leg standing posture is also added to the evaluation of the motion of the target person 200.
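A numeric form of this evaluation might combine the digitized posture deviation with the hold time, as sketched below; the weights and the target time are illustrative assumptions, not values from the disclosure.

```python
def evaluate_one_foot_stand(posture_deviation_deg, hold_seconds, target_seconds=30):
    """Combine the posture deviation (degrees from the reference posture)
    and the hold time into a single 0-100 score. Weights are invented."""
    posture_score = max(0.0, 100.0 - 5.0 * posture_deviation_deg)
    hold_score = min(100.0, 100.0 * hold_seconds / target_seconds)
    return 0.5 * posture_score + 0.5 * hold_score

print(evaluate_one_foot_stand(posture_deviation_deg=4, hold_seconds=15))
```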
  • the facility system 10 determines (selects) an instruction menu based on the evaluation result of the target person 200 operating according to the pre-menu.
  • the facility system 10 controls the display device 112 based on the instruction information representing the selected instruction menu.
  • the display device 112 displays, on the screen area 301, an instruction menu to be executed by the subject 200 and support information for supporting the execution of the instruction menu by the subject 200.
• In the example of FIG. 7C, the menu "foot raising in the lateral direction" is selected as the instruction menu, and the reverse video 302 and the marker 304 are displayed in the screen area 301 as support information.
• The marker 304 is, for example, an image imitating a ball, and is displayed around the foot of the target person 200 in the reverse video 302.
  • the facility system 10 can, for example, instruct the target person 200 to kick the ball represented by the marker 304, thereby prompting the target person 200 to move the foot in the lateral direction.
• In addition to the instruction information, the facility system 10 also controls the display device 112 based on effect information indicating an effect that the target person 200 is expected to obtain by executing the instruction menu represented by the instruction information. Therefore, in addition to the instruction menu and the support information, the display device 112 displays, in the screen area 301, effect information indicating the effect expected from executing the instruction menu.
• In the example of FIG. 7C, as the effect that the target person 200 is expected to obtain by executing the instruction menu of "foot raising in the lateral direction", effect information such as "being able to keep the body straight" is displayed.
  • the sensor device 111 detects the movement of the object person 200, and the facility system 10 controls the display device 112 based on the result information indicating the evaluation result of the object person 200.
  • the display device 112 presents the evaluation result to the target person 200 in real time (immediately) by displaying the evaluation result in the screen area 301.
• For example, the facility system 10 presents the evaluation result by changing the color of the marker 304 when the difference (magnitude of deviation) between the posture of the target person 200 and the reference posture specified by the reference data exceeds the allowable range.
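This real-time feedback reduces to a threshold check, as in the sketch below; the color names and the allowable range are illustrative assumptions.

```python
def marker_color(deviation, allowable=10.0):
    """Return the display color of marker 304: the color changes when the
    deviation from the reference posture exceeds the allowable range."""
    return "red" if abs(deviation) > allowable else "green"

print(marker_color(12.5), marker_color(3.0))
```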
• However, the mode of presentation by the facility system 10 is not limited to this; for example, display of a message for the target person 200, voice output (including a warning sound), printout (printing), or data transmission to a terminal such as a smartphone may be used.
• Note that the facility system 10 does not have to perform the above-described pre-menu and instruction-menu selection processes each time, and may simply execute the process of presenting the instruction menu. For example, if it is found at the time of the screening, from information such as the name of the target person 200, that the instruction menu has already been selected for the target person 200, the facility system 10 determines that confirmation of physical ability is unnecessary and skips the processes of selecting the pre-menu and the instruction menu. As a result, the target person 200 can skip the implementation of the pre-menu shown in FIG. 7B and proceed to the implementation of the instruction menu shown in FIG. 7C.
• When the target person 200 executes the instruction menu as described above, the information on the position of the target person 200 in the detection space and the information on the posture of the target person 200 detected by the sensor device 111 are given to the first input unit 11 as physical information.
• Here, it is preferable that the facility system 10 performs the evaluation after excluding a specific evaluation item among the plurality of evaluation items from the evaluation targets. That is, for the subject 200 who has a disorder in the left knee joint, for example, it is preferable that the facility system 10 excludes evaluation items related to the motion of the left knee joint from the evaluation targets. In this case, the facility system 10 may not perform the evaluation on the evaluation items related to the motion of the left knee joint at all, or may, for example, lower the threshold for those evaluation items and still evaluate them.
• This input example differs from the first input example in that a plurality of (here, two) target persons 200 are assumed to perform the screening, the implementation of the pre-menu, and the implementation of the instruction menu simultaneously. Therefore, in the following description, descriptions of points common to the first input example are omitted. However, the facility system 10 may have the plurality of target persons 200 perform at least the instruction menu simultaneously, and at least one of the screening and the pre-menu may be performed individually for each target person 200. In this input example, when the plurality of (two) target persons 200 are distinguished, they are referred to as "target person 200A" and "target person 200B".
• Next, the facility system 10 determines the necessity of confirming physical ability based on the result of the screening. For example, when it is determined from the screening that neither target person 200A nor 200B has a problem with physical ability in daily life, confirmation of physical ability is determined to be unnecessary. On the other hand, when it is determined from the screening that at least one of the target persons 200A and 200B needs confirmation of physical ability, the facility system 10 executes the pre-menu selection process.
  • the facility system 10 controls the display device 112 based on pre-information representing the selected pre-menu.
  • the display device 112 displays, on the screen area 301, the pre-menu to be performed by the subject 200 and the support information to support the execution of the pre-menu by the subject 200.
• In this example, the menu "one foot standing" is selected as the pre-menu, and the reverse videos 302A and 302B of the target persons 200 and the sample videos 303A and 303B are displayed in the screen area 301 as support information.
  • the sensor device 111 detects the movement of the target person 200, and the facility system 10 executes an instruction menu selection process. Then, the facility system 10 evaluates the operation of the target person 200 based on the first information and the second information, and further, the third information acquired in the screening.
• Although the evaluation in the facility system 10 is performed individually for each target person 200, the same instruction menu may be selected for the plurality of target persons 200A and 200B by comprehensively judging the evaluation results of the target persons 200A and 200B. In the present input example, the instruction menu is selected based on the average value of the evaluation results for the plurality of target persons 200A and 200B. That is, a common instruction menu is selected for the plurality of target persons 200A and 200B.
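Selecting one common instruction menu from the averaged individual scores might look like the following sketch; the score bands and menu names are invented for illustration.

```python
def select_common_instruction_menu(scores):
    """Pick one instruction menu for all subjects from the average of their
    individual evaluation scores (0-100). Bands are hypothetical."""
    average = sum(scores) / len(scores)
    if average < 50:
        return "one-foot standing (assisted)"
    return "foot raising in the lateral direction"

print(select_common_instruction_menu([70, 80]))   # e.g. subjects 200A and 200B
```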
• Note that, in the selection process of the instruction menu, the evaluation result for each target person 200 may be used to adjust the magnitude of the load applied to each target person 200 when the instruction menu is performed. In this case, the magnitude of the load can be adjusted for each target person 200 from the start of the instruction menu.
  • the facility system 10 controls the display device 112 based on the instruction information representing the selected instruction menu.
  • the display device 112 displays, on the screen area 301, an instruction menu to be executed by the subject 200 and support information for supporting the execution of the instruction menu by the subject 200.
• In this example, the menu "foot raising in the lateral direction" is selected as the instruction menu, and the reverse videos 302A and 302B and the markers 304A and 304B are displayed in the screen area 301 as support information.
• When the markers 304A and 304B for the target person 200A and the target person 200B are not particularly distinguished, they are simply referred to as "markers 304".
• In this case, the display device 112 arranges a plurality of similar instruction menus and pieces of support information side by side in the horizontal direction of the screen area 301 so that the plurality of target persons 200A and 200B can execute the instruction menu simultaneously. That is, the screen area 301 is divided in the left-right direction into a first area 311 and a second area 312. Then, the display device 112 displays the instruction menu and the support information (reverse video 302A and marker 304A) for the target person 200A in the first area 311 on the left side when the screen area 301 is viewed from the front, and displays the instruction menu and the support information (reverse video 302B and marker 304B) for the target person 200B in the second area 312 on the right side when the screen area 301 is viewed from the front.
• Thereby, the plurality of target persons 200A and 200B can execute the instruction menu simultaneously while standing side by side in the left-right direction in front of the screen area 301.
• As in the first input example, the facility system 10 controls the display device 112 based not only on the instruction information but also on the effect information indicating the effect that the target person 200 is expected to obtain by executing the instruction menu represented by the instruction information. Therefore, in addition to the instruction menu and the support information, the display device 112 displays, in the screen area 301, effect information indicating the effect expected from executing the instruction menu.
  • the sensor device 111 detects the movement of the target person 200, and the facility system 10 evaluates the degree of achievement of the instruction menu for each target person 200.
• In this evaluation, the facility system 10 acquires, from the sensor device 111, first information (motion information) on the motion of the target person 200 operating according to the instruction menu.
  • the first information includes information such as a heartbeat measured by a wearable sensor terminal worn by each of the target persons 200A and 200B.
  • the “sensor terminal” mentioned here includes sensors such as a gyro sensor, an acceleration sensor, an activity meter, and a heart rate meter, and can measure, for example, the heartbeat of the subject 200.
  • the facility system 10 acquires, from the first storage unit 14 as second information, reference data defining at least an exemplary movement in the instruction menu.
• Then, the facility system 10 evaluates the degree of achievement of the instruction menu for each target person 200 based on the first information and the second information, and further on the third information acquired in the screening.
  • the facility system 10 digitizes the difference (magnitude of the deviation) between the posture of the target person 200 and the reference posture specified in the reference data, and evaluates the achievement degree of the instruction menu by this difference.
• For example, when the menu "foot raising in the lateral direction" is selected as the instruction menu as shown in FIG. 8C, the time taken for the target person 200 to raise the foot to the height of the marker 304 (reaction time) is also added to the evaluation of the motion of the target person 200.
• Then, the facility system 10 adjusts, for each target person 200, the magnitude of the load applied when performing the instruction menu according to the evaluation result of the achievement level. For example, when the target person 200 exercises according to the instruction menu, the facility system 10 increases the load by increasing the amount of movement of a specific part of the body or by speeding up the movement, so that the physical load on the target person 200 becomes larger.
• In the present input example, the facility system 10 adjusts the magnitude of the load by, for example, adjusting the height (position) of the marker 304 displayed in the screen area 301 for each target person 200. In the example of FIG. 8C, the marker 304A for the target person 200A is displayed at a higher position than the marker 304B for the target person 200B so that the load on the target person 200A is larger than that on the target person 200B.
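The per-subject load adjustment can be sketched as a mapping from the achievement score to the displayed marker height; the scaling factor below is an invented example, not a value from the disclosure.

```python
def marker_height_cm(base_height_cm, achievement_score):
    """Scale the marker position with achievement (0-100): a higher score
    raises the marker, increasing the physical load, up to +50%."""
    return base_height_cm * (1.0 + 0.5 * achievement_score / 100.0)

print(marker_height_cm(30.0, 80))   # e.g. target person 200A (higher marker)
print(marker_height_cm(30.0, 40))   # e.g. target person 200B (lower marker)
```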
  • the facility system 10 presents an evaluation result for each target person 200.
  • the facility system 10 controls the display device 112 based on the result information indicating the evaluation result of the target person 200.
  • the display device 112 presents the evaluation result to the target person 200 in real time (immediately) by displaying the evaluation result in the screen area 301.
  • the facility system 10 determines, for example, whether the instruction menu has ended. If the instruction menu has not ended, the facility system 10 repeats the processing after presenting the instruction menu. If the instruction menu has ended, the facility system 10 ends a series of processing.
• In this manner, as the plurality of target persons 200 execute the instruction menu, the information on the position of each target person 200 in the detection space and the information on the posture of each target person 200 detected by the sensor device 111 are given to the first input unit 11 as physical information. At this time, the physical information is given to the first input unit 11 for each target person 200.
• According to this input example, each target person 200 can work on the instruction menu together with another target person 200 rather than alone.
• Therefore, each target person 200 can easily communicate with the other target persons 200, and can maintain higher motivation than when working alone.
  • the embodiment described above is only one of the various embodiments of the present disclosure.
• The above-described embodiment can be variously modified according to design and the like as long as the object of the present disclosure can be achieved.
• The same functions as those of the activity support method may be embodied by a (computer) program, a non-transitory recording medium recording the program, or the like.
  • a (computer) program according to one aspect is a program for causing one or more processors to execute the above-described activity support method.
  • the activity support system 100 in the present disclosure includes a computer system.
  • a computer system mainly includes one or more processors and memory as hardware.
  • the function as the activity support system 100 in the present disclosure is realized by one or more processors executing a program recorded in the memory of the computer system.
• The program may be pre-recorded in the memory of the computer system, may be provided through a telecommunication line, or may be provided recorded in a non-transitory recording medium readable by the computer system, such as a memory card, an optical disc, or a hard disk drive.
  • Each of the one or more processors of the computer system is configured with one or more electronic circuits including a semiconductor integrated circuit (IC) or a large scale integrated circuit (LSI).
• An integrated circuit such as an IC or an LSI mentioned here is called differently depending on the degree of integration, and includes integrated circuits called system LSI, VLSI (very large scale integration), or ULSI (ultra large scale integration).
• A field-programmable gate array (FPGA) programmed after the LSI is manufactured, or a logic device capable of reconfiguring junction relations inside the LSI or reconfiguring circuit sections inside the LSI, can also be used as a processor.
  • the plurality of electronic circuits may be integrated into one chip or may be distributed to a plurality of chips.
  • the plurality of chips may be integrated into one device or may be distributed to a plurality of devices.
  • a computer system as referred to herein includes a microcontroller having one or more processors and one or more memories. Therefore, the microcontroller is also configured with one or more electronic circuits including a semiconductor integrated circuit or a large scale integrated circuit.
• Integrating the third communication unit (acquisition unit) 31 and the third processing unit (presentation unit) 32 in one case is not an essential configuration for the activity support system 100; these may be provided separately in a plurality of housings. Also, at least part of the functions of the activity support system 100 may be realized by, for example, a server or a cloud (cloud computing).
• Although the exercise menu M11 and the cooking menu M12 are mentioned as examples of the activity menu M1, they are not meant to limit the activity menu M1. For example, the activity menu M1 may be a menu that proposes participation in a circle activity for the purpose of recovery from a psychiatric disorder.
  • the one or more processors generate the activity menu M1 based on the physical information input at the facility 1, but the present invention is not limited thereto.
  • the one or more processors may generate the activity menu M1 based on the physical information and the supplementary information obtained by the therapist examining the subject 200.
  • the supplementary information is preferably input by the therapist at the same time as the physical information is input.
• A plurality of activity menus M1 may be generated by the first processing unit (generation unit) 12 in the generation step S2.
• In this case, it is preferable that the facility system 10 includes, for example, a device such as a touch panel display that displays the plurality of activity menus M1 and receives an operation of selecting one of the plurality of activity menus M1.
  • the selected activity menu M1 is uploaded to the server 2.
  • the request of the target person 200 may be further input.
• In this case, the one or more processors can generate the activity menu M1 after excluding in advance any activity menu M1 that does not suit the request of the target person 200.
• The server 2 may acquire not only the activity menu M1 but also the physical information of the target person 200 from the facility system 10, and store the acquired activity menu M1 and physical information in the second storage unit 23.
  • an evaluation of the physical information of the subject 200 may be presented in the presentation step S5. This evaluation may be performed by the facility system 10 or may be performed by the server 2.
  • effect information representing the effect that the target person 200 is expected to obtain by executing the activity menu M1 may be presented.
  • the second place is not limited to the home 4 of the target person 200.
  • the second place may be an office of a company where the target person 200 works, a public facility such as a public hall, or a park. That is, the second place may be a place different from the facility 1 that generates the activity menu M1, in particular, a place where the target person 200 visits in daily life.
  • the first place and the second place may be different areas in the same facility.
  • for example, when the facility including the first place and the second place is a welfare facility for the elderly, the first place may be a common area on the first floor of the facility, and the second place may be a living area on the second floor of the facility.
  • although the activity menu M1 is updated based on the activity result of the target person 200 in the update step S9, the update is not limited to this. For example, if the target person 200 measures physical information again at the facility 1 before the update step S9 is executed, the activity menu M1 may be updated based on both the activity result of the target person 200 and the latest physical information of the target person 200.
  • although the physical information is the physical information of the subject 200 to whom the activity menu M1 is shown, it is not limited to this.
  • for example, the physical information may be the physical information of a person whom the subject 200 aims to emulate, or of a patient having the same disease as the subject 200. That is, as long as an activity menu M1 capable of supporting the activity of the subject 200 can be presented to the subject 200, the activity menu M1 may be generated based on the physical information of a person other than the subject 200.
  • although the facility 1 is a medical institution that performs rehabilitation, such as a rehabilitation center, it is not limited to this.
  • the facility 1 may be another medical care facility such as a pharmacy.
  • the facility 1 may be a fitness facility or a commercial facility such as a shopping mall.
  • the facility system 10 may be provided in any facility.
  • the sensor device 111 is not limited to a configuration having a camera and a depth sensor; instead of, or in addition to, these sensors, it may have, for example, a load sensor, an infrared sensor, a thermography camera, or a radio wave (microwave) sensor.
  • the sensor device 111 may include a sensor worn by the target person 200, such as a gyro sensor, an acceleration sensor, an activity meter, or a heart rate sensor. In this case, the sensor device 111 makes it possible to measure motor abilities of the subject 200 other than the ability to maintain a posture, and to input the measured abilities as physical information.
  • the first communication unit 13 may be configured to communicate with the server 2 or the operation terminal 3 via, for example, a relay such as a router and the network N1.
  • the second communication unit 21 and the third communication unit 31 may be configured to communicate via the relay device and the network N1.
  • all of the first communication unit 13, the second communication unit 21, and the third communication unit 31 may be connected to the network N1 via a mobile phone network (carrier network) provided by a communication carrier.
  • the mobile telephone network includes, for example, a 3G (third generation) line, an LTE (Long Term Evolution) line, and the like.
  • the third processing unit (presentation unit) 32 may display on the screen not only the activity menu M1 but also specific instructions for causing the target person 200 to execute the activity menu M1.
  • for example, the third processing unit 32 may display, on the display unit 34, support information indicating how to move the body, the posture, the rhythm of walking, and other points necessary for a correct walking motion, as specific instructions.
  • the operation terminal 3 may cooperate with an exercise device owned by the subject person 200.
  • the “exercise device” in the present disclosure is, for example, a device that exerts a force on at least a part of the body of the subject 200 and passively exercises at least a part of the body of the subject 200.
  • in this case, the activity menu M1 may be downloaded from the operation terminal 3 to the exercise device, displayed on a display device of the exercise device, or announced by voice guidance.
  • if the exercise device has a function of measuring the exercise of the subject 200, the measurement result can be uploaded from the exercise device to the server 2 through the operation terminal 3 as the activity result of the subject 200.
  • although the activity menu M1 is temporarily stored in the server 2 and then transmitted to the operation terminal 3 in response to a request from the target person 200, the present invention is not limited thereto.
  • for example, the activity menu M1 may be transmitted from the facility system 10 to the operation terminal 3 via the network N1, without going through the server 2, in response to a request from the target person 200.
  • the activity menu M1 generated by the first processing unit 12 is stored in the first storage unit 14 of the facility system 10. That is, in this case, the first storage unit 14 corresponds to the second storage unit 23 of the server 2.
  • the activity support method includes the generation step (S2), the acquisition step (S4), and the presentation step (S5).
  • the generation step (S2) is a step of generating the activity menu (M1) of the subject (200) by one or more processors based on the input physical information.
  • the acquisition step (S4) is a step of acquiring the activity menu (M1) generated in the generation step (S2) via the network.
  • the presenting step (S5) is a step of presenting the activity menu (M1) acquired in the acquiring step (S4).
  • in the acquisition step (S4), the activity menu (M1) is acquired from a storage unit (second storage unit 23) that stores the activity menu (M1) generated in the generation step (S2).
  • the activity menu (M1) can be easily presented to the subject (200) at a timing desired by the subject (200), such as when the subject (200) is at home.
  • the activity support method further includes a result acquisition step (S6) in any of the first to third aspects.
  • the result acquisition step (S6) is a step of acquiring the activity result of the object person (200) based on the activity menu (M1) generated in the generation step (S2).
  • the activity support method further includes an evaluation step (S8) in the fourth aspect.
  • the evaluation step (S8) is a step of evaluating, by one or more processors, the activity of the target person (200) based on the activity menu (M1) generated in the generation step (S2) and the activity result acquired in the result acquisition step (S6).
  • the activity support method further includes a storing step (S7) in the fourth or fifth aspect.
  • the storing step (S7) is a step of storing the activity result acquired in the result acquiring step (S6) in the storage unit (23) storing the activity menu (M1) generated in the generating step (S2).
  • the activity support method further includes an updating step (S9) in any of the fourth to sixth aspects.
  • the updating step (S9) is a step of updating the activity menu (M1) by one or more processors based on the activity result acquired in the result acquiring step (S6).
  • the physical information input in the generation step (S2) includes the following information. That is, the physical information includes at least one of information on the position of the subject (200), detected while the subject (200) executes the instruction menu in the detection space, and information on the posture of the subject (200).
  • the instruction menu is presented to the subject (200) by the following method. That is, this method acquires the first information, acquires the second information, and outputs the instruction information.
  • the first information is information on the operation of the target person (200).
  • the second information is information for proposing, as an instruction menu, any menu selected from among a plurality of rehabilitation menus.
  • the instruction information is information representing an instruction menu selected based on at least the first information and the second information.
  • the instruction menu is selected based on at least the first information and the second information. Then, since instruction information representing the selected instruction menu is output, the instruction menu can be presented to the target person (200). In this manner, in this aspect, it is possible to automatically determine, based on the motion of each subject (200), which rehabilitation menu that subject (200) is to perform. That is, a rehabilitation menu suitable for the subject (200) can be proposed automatically, without intervention by a therapist or the like who assists the rehabilitation of the subject (200). Therefore, this rehabilitation support method has the advantage of reducing the burden on the therapist or the like who assists the rehabilitation of the subject (200).
  • a plurality of subjects (200) may be included, in which case the method includes the following. That is, this method outputs the instruction information so that the instruction menu is implemented simultaneously for the plurality of subjects (200).
  • in this method, the degree of achievement of the instruction menu is evaluated for each target person (200) based on motion information on the motion of each of the plurality of target persons (200) operating according to the instruction menu. Furthermore, this method adjusts, for each subject (200), the magnitude of the load applied when the instruction menu is implemented, according to the evaluation result of the achievement level.
  • in this aspect, the instruction menu can be implemented simultaneously for a plurality of subjects (200).
  • since the degree of achievement of the instruction menu is evaluated for each target person (200) based on the motion information of each of the plurality of target persons (200), there is no need for one therapist or the like per target person (200). Therefore, even when the number of subjects (200) increases, an increase in the burden on the therapist or the like is suppressed.
  • moreover, the magnitude of the load applied when the instruction menu is implemented is adjusted for each target person (200). Therefore, even without one therapist or the like per subject (200), effective rehabilitation can be practiced for each subject (200). Accordingly, this aspect has the advantage that the burden on the therapist or the like can be reduced when a plurality of subjects (200) perform rehabilitation simultaneously.
  • a program according to an eleventh aspect is a program for causing one or more processors to execute the activity support method according to any one of the first to tenth aspects.
  • the activity support system (100) includes a generation unit (first processing unit) (12), an acquisition unit (third communication unit) (31), and a presentation unit (32).
  • the generation unit (12) generates an activity menu (M1) of the subject (200) by the one or more processors based on the input physical information.
  • the acquisition unit (31) acquires the activity menu (M1) generated by the generation unit (12) via the network (N1).
  • the presentation unit (32) presents the activity menu (M1) acquired by the acquisition unit (31).
  • the methods according to the second to tenth aspects are not essential to the activity support method, and can be omitted as appropriate.
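The first-aspect flow above (generation step S2, acquisition step S4, presentation step S5) can be sketched as a minimal pipeline. This is an illustrative sketch only: the class and function names (`PhysicalInfo`, `generate_menu`, `MenuStore`, `present`), the menu rules, and the thresholds are all hypothetical and not part of the disclosed system.

```python
# Hypothetical sketch of the S2 / S4 / S5 flow of the activity support method.
# All names, rules, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PhysicalInfo:
    age: int
    grip_strength_kg: float
    one_leg_stand_sec: float  # time the subject can hold a one-leg stance

def generate_menu(info: PhysicalInfo) -> list[str]:
    """Generation step S2: derive an activity menu from the physical info."""
    menu = []
    if info.one_leg_stand_sec < 15:
        menu.append("one-leg standing practice")
    if info.grip_strength_kg < 26:
        menu.append("grip training")
    menu.append("walking practice")
    return menu

class MenuStore:
    """Stands in for the server 2 / second storage unit 23."""
    def __init__(self):
        self._menus = {}
    def upload(self, subject_id: str, menu: list[str]):
        self._menus[subject_id] = menu
    def download(self, subject_id: str) -> list[str]:
        """Acquisition step S4 (network transfer omitted for brevity)."""
        return self._menus[subject_id]

def present(menu: list[str]) -> str:
    """Presentation step S5: render the menu for the operation terminal 3."""
    return "Today's menu: " + ", ".join(menu)

store = MenuStore()
store.upload("subject-200", generate_menu(PhysicalInfo(78, 22.0, 8.0)))
print(present(store.download("subject-200")))
```

The point of the split is visible in the sketch: `generate_menu` runs at the first place (facility 1), the store decouples the two places, and `present` runs later at the second place.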

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Marketing (AREA)
  • Pain & Pain Management (AREA)
  • Veterinary Medicine (AREA)
  • Child & Adolescent Psychology (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Rehabilitation Therapy (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The purpose of the present disclosure is to facilitate presenting, to a subject, an appropriate activity to be performed. An activity assistant method has a generation step (S2), an acquisition step (S4), and a presentation step (S5). The generation step (S2) is a step in which an activity menu for the subject is generated by one or more processors on the basis of input physical information. The acquisition step (S4) is a step in which the activity menu generated in the generation step (S2) is acquired via a network. The presentation step (S5) is a step in which the activity menu acquired in the acquisition step (S4) is presented.

Description

Activity support method, program, and activity support system
The present disclosure generally relates to an activity support method, a program, and an activity support system, and more particularly to an activity support method, a program, and an activity support system for supporting the activity of a target person.
Conventionally, exercise support systems that support a user's exercise are known; one is disclosed, for example, in Patent Document 1. The exercise support system described in Patent Document 1 includes a wrist device and a chest device, an imaging device, a network server, and a user terminal. The wrist device and the chest device acquire sensor data during the user's running motion. The imaging device acquires running video in synchronization with the sensor data and the like. The network server processes and analyzes the sensor data, the running video, and the like to create advice data including a comparison video of the user and an elite runner, indices superimposed on the comparison video for each coaching item, and advice text corresponding to each coaching item. The user terminal displays the advice data in a predetermined display form via the network.
The exercise support system described in Patent Document 1 provides the user (target person) with advice on a running motion (activity) that the target person performs spontaneously, but it is unknown whether that activity is appropriate for the target person. For this reason, this exercise support system has the problem that it is difficult to present the target person with an appropriate activity to be performed.
Patent Document 1: JP 2015-119833 A
The present disclosure has been made in view of the above, and an object of the present disclosure is to provide an activity support method, a program, and an activity support system with which an appropriate activity to be performed can easily be presented to a target person.
An activity support method according to one aspect of the present disclosure includes a generation step, an acquisition step, and a presentation step. The generation step is a step of generating, by one or more processors, an activity menu for a target person based on input physical information. The acquisition step is a step of acquiring, via a network, the activity menu generated in the generation step. The presentation step is a step of presenting the activity menu acquired in the acquisition step.
A program according to one aspect of the present disclosure is a program for causing one or more processors to execute the above activity support method.
An activity support system according to one aspect of the present disclosure includes a generation unit, an acquisition unit, and a presentation unit. The generation unit generates, by one or more processors, an activity menu for a target person based on input physical information. The acquisition unit acquires, via a network, the activity menu generated by the generation unit. The presentation unit presents the activity menu acquired by the acquisition unit.
FIG. 1 is a flowchart illustrating an example of the operation (activity support method) of an activity support system according to an embodiment of the present disclosure.
FIG. 2 is a conceptual diagram showing an example of the operation of the activity support system at a facility.
FIG. 3 is a conceptual diagram showing an example of the operation of the activity support system at the user's home.
FIG. 4 is a block diagram showing the configuration of the activity support system.
FIGS. 5A to 5C are conceptual diagrams each showing an example of an activity menu presented by the presentation unit of the activity support system.
FIG. 6 is a flowchart showing another example of the operation (activity support method) of the activity support system.
FIGS. 7A to 7C are conceptual diagrams showing a target person performing rehabilitation using the facility system of the activity support system.
FIGS. 8A to 8C are conceptual diagrams showing a plurality of target persons performing rehabilitation using the facility system of the activity support system.
(1) Overview
As shown in FIGS. 1 to 3, the activity support method according to the present embodiment is a method for supporting the activity of a target person 200. An "activity" in the present disclosure means the general behavior of the target person 200 in daily life. That is, an "activity" includes not only exercise by the target person 200, such as rehabilitation and training, but also behavior by which the target person 200 takes in nutrition, such as meals, and mental activities such as circle (club) activities by the target person 200. "Rehabilitation" in the present disclosure means physical or psychological training performed for a person whose physical ability, cognitive function, and the like have declined due to, for example, aging, illness, or injury, with the aim of enabling that person to lead an independent daily life.
In the present embodiment, as an example, a case will be described in which the target person 200 is an elderly person whose physical ability has declined to a degree called "frailty" and who aims to lead an independent daily life. That is, the activity support method described below is a method for supporting the rehabilitation of the target person 200.
As shown in FIG. 1, the activity support method includes a generation step S2, an acquisition step S4, and a presentation step S5. The generation step S2 is a step of generating, by one or more processors, an activity menu M1 for the target person 200 (see FIGS. 5A to 5C) based on input physical information. "Physical information" in the present disclosure includes, for example, the age, sex, height, weight, BMI (Body Mass Index), and motor ability of the target person 200, and the presence or absence of a physical or mental disease. "Motor ability" is the ability of the target person 200 to move the body (hands, feet, neck, waist, and the like), and includes, for example, grip strength and the ability to maintain a posture (the time for which a one-leg stance with eyes open can be held). The "activity menu" in the present disclosure is a menu presented to the target person 200 in order to instruct the target person 200 to perform a specific activity or the like.
In the present embodiment, the physical information is input at a facility 1 such as a rehabilitation center (hereinafter also referred to as the "first place"). As an example, the physical information is input by measuring the motion of the target person 200, as shown in FIG. 2. The physical information may also be input in question-and-answer form using an input device such as a keyboard or a microphone. In the present embodiment, the target person 200 inputs the physical information without the assistance of a therapist such as a physical therapist, an occupational therapist, or a speech-language pathologist, but the target person 200 may do so with a therapist's assistance. In the case of question-and-answer input, a proxy for the target person 200, such as a therapist or a family member of the target person 200, may perform the input instead of the target person 200.
In the generation step S2, based on the physical information input at the facility 1, the one or more processors generate a rehabilitation activity menu M1 that the target person 200 can execute at home 4 (hereinafter also referred to as the "second place"). The activity menu M1 includes, for example, menus for training various motions that the target person 200 needs in daily life, such as walking, one-leg standing, getting up, standing up and sitting down, and stepping up and down a platform. The "getting-up motion" in the present disclosure means the motion of the target person 200 rising from a lying position, and the "standing-and-sitting motion" means the motion of the target person 200 standing up from and/or sitting down on a chair. The activity menu M1 also includes, for example, cooking recipes for taking in the nutrients needed to restore or maintain the health of the target person 200.
"Generation" in the present disclosure includes generating a new activity menu M1 based on the physical information and changing part of an existing activity menu M1 based on the physical information. "Generation" also includes selecting, based on the physical information, an activity menu M1 suitable for the target person 200 from a plurality of existing activity menus M1. In this way, in the generation step S2, which activity menu M1 each target person 200 is to perform is determined automatically based on the physical information. The activity menu M1 generated in the generation step S2 is uploaded to the server 2 via the network N1 and stored in the server 2 (see FIG. 4).
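As noted here, "generation" can also mean selecting from existing menus. The following is a minimal sketch of that selection style, where menus unsuitable for the subject are filtered out; the catalog contents, field names, and filter rules are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical sketch: "generation" as selection from existing activity menus.
# The catalog, its fields, and the filtering rules are illustrative only.

CATALOG = [
    {"name": "stair stepping",      "min_leg_strength": 3, "type": "exercise"},
    {"name": "walking practice",    "min_leg_strength": 1, "type": "exercise"},
    {"name": "protein-rich recipe", "min_leg_strength": 0, "type": "cooking"},
]

def select_menus(leg_strength: int, excluded_types: set[str]) -> list[str]:
    """Pick menus the subject can perform, excluding in advance any menu
    type the subject asked to avoid (cf. the request of the target person 200)."""
    return [
        m["name"] for m in CATALOG
        if leg_strength >= m["min_leg_strength"]
        and m["type"] not in excluded_types
    ]

# Subject with moderate leg strength who requested no cooking menus:
print(select_menus(leg_strength=2, excluded_types={"cooking"}))
# → ['walking practice']
```

The same filter also covers the variant in which the target person's request is input in advance: unsuitable menus are removed before anything is presented.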
The acquisition step S4 is a step of acquiring, via the network N1, the activity menu M1 generated in the generation step S2. In the present embodiment, the acquisition step S4 is performed at the home 4 of the target person 200, as shown in FIG. 3. Specifically, the acquisition step S4 is executed when the target person 200 operates an operation terminal 3 owned by the target person 200 and downloads, via the network N1, the activity menu M1 stored in the server 2. The operation terminal 3 is, for example, a portable information terminal such as a tablet or a smartphone, a personal computer (including a laptop), a television receiver, a wearable terminal such as a watch-type terminal, or a dedicated device.
The presentation step S5 is a step of presenting the activity menu M1 acquired in the acquisition step S4. In the present embodiment, the presentation step S5, like the acquisition step S4, is performed at the home 4 of the target person 200. Specifically, the presentation step S5 is executed by outputting the activity menu M1 downloaded to the operation terminal 3 as, for example, sound from the operation terminal 3, or by displaying it as an image (still image or moving image) on the display unit 34 of the operation terminal 3 (see FIGS. 5A to 5C).
As described above, the generation step S2 is performed at the first place, and the acquisition step S4 and the presentation step S5 are performed at a second place away from the first place. Thus, in the present embodiment, the activity menu M1 generated when the target person 200 inputs physical information at the facility 1 can be presented at the home 4 of the target person 200. The present embodiment therefore has the advantage that an appropriate activity menu M1, that is, an activity to be performed, can easily be presented to the target person 200.
(2) Details
The configuration of an activity support system 100, which is a system for implementing the activity support method according to the present embodiment, will be described below with reference to FIGS. 2 to 4. As shown in FIG. 4, the activity support system 100 according to the present embodiment includes a facility system 10 provided in the facility 1, a server 2, and the operation terminal 3. The facility system 10, the server 2, and the operation terminal 3 are connected to one another via the network N1. In the present embodiment, the server 2 is described as not being a component of the activity support system 100, but the server 2 may be included among the components of the activity support system 100. The server 2 is not an essential component and may be omitted as appropriate.
The facility system 10 includes a first input unit 11, a first processing unit 12, a first communication unit 13, and a first storage unit 14. The facility system 10 is implemented mainly as a computer system having one or more processors and memory. The functions of the first input unit 11, the first processing unit 12, and the first communication unit 13 are realized by the one or more processors executing appropriate programs. A program may be pre-recorded in the memory, or may be provided through a telecommunication line such as the Internet or recorded on a non-transitory recording medium such as a memory card.
The first input unit 11 is an input device with which the target person 200 inputs physical information. The physical information input to the first input unit 11 is provided to the first processing unit 12. In the present embodiment, as shown in FIG. 2, a sensor device 111 and a display device 112 are connected to the facility system 10. The communication method between the facility system 10 and each of the sensor device 111 and the display device 112 is, for example, bidirectional wired communication via a network such as a LAN (Local Area Network). The communication between the facility system 10 and the sensor device 111 or the display device 112 is not limited to wired communication and may be wireless.
The sensor device 111 is a device that detects the position of the target person 200 in a detection space and the posture of the target person 200. The "detection space" in the present disclosure is a space of an appropriate size defined by the sensor device 111; when physical information is input to the first input unit 11, the target person 200 is assumed to be inside this detection space. The sensor device 111 is installed, for example, on a wall surface 300 in a room. Since an image is projected onto the wall surface 300 by the display device 112, as described later, the target person 200 basically faces the wall surface 300 (the sensor device 111 side). The sensor device 111 has a plurality of sensors, such as a camera (image sensor) and a depth sensor, and further has a processor or the like that performs appropriate signal processing on the outputs of these sensors.
 In the present embodiment, the sensor device 111 detects a captured image of the subject 200, the position of the subject 200 including the lateral direction and the depth direction (the front-rear direction of the subject 200), and the posture of the subject 200. That is, the sensor device 111 detects the position of the subject 200 in the horizontal plane (including the position of the center of gravity). Furthermore, regarding the posture of the subject 200, the sensor device 111 detects, for example, whether the subject leans forward or backward, and which parts of the body, such as the back, waist, or knees, are bent, in which direction, and by how many degrees. In the present embodiment, the information on the position of the subject 200 in the detection space and the information on the posture of the subject 200 detected by the sensor device 111 are provided to the first input unit 11 of the facility system 10 as the physical information of the subject 200. In this way, by having the subject 200 take a specific posture (here, standing on one foot) and checking, for example, whether the subject 200 can maintain that posture for a predetermined time, it is possible to measure whether the subject 200 has sufficient muscle strength, sufficient joint flexibility, and so on.
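 The measurement described above can be sketched as a check over a stream of posture samples: the system measures how long the detected joint angles stay within a tolerance of the reference posture. This is a minimal illustration only; the joint names, tolerance, and sample rate are assumptions, not part of the disclosed system.

```python
def longest_hold_seconds(posture_samples, reference, tolerance_deg=10.0, sample_rate_hz=10):
    """Return the longest continuous time (in seconds) during which every
    sampled joint angle stayed within `tolerance_deg` of the reference posture.

    posture_samples: list of dicts mapping joint name -> angle in degrees.
    reference: dict mapping joint name -> reference angle in degrees.
    """
    best = run = 0
    for sample in posture_samples:
        deviation = max(abs(sample[j] - reference[j]) for j in reference)
        if deviation <= tolerance_deg:
            run += 1
            best = max(best, run)
        else:
            run = 0  # posture broken: the hold restarts
    return best / sample_rate_hz


# Example: a one-foot stand where the knee wobbles out of tolerance mid-way.
reference = {"knee": 170.0, "hip": 175.0}
samples = (
    [{"knee": 168.0, "hip": 176.0}] * 25    # 2.5 s within tolerance
    + [{"knee": 140.0, "hip": 176.0}] * 5   # wobble: knee deviates by 30 degrees
    + [{"knee": 171.0, "hip": 174.0}] * 40  # 4.0 s within tolerance
)
print(longest_hold_seconds(samples, reference))  # 4.0
```

The longest uninterrupted in-tolerance run, rather than the total, is used here, matching the idea of "maintaining the posture for a predetermined time".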
 The display device 112 is, for example, a projector device that projects an image onto a part of the indoor wall surface 300 (a screen area 301). The display device 112 is attached, for example, to the ceiling of the room. The display device 112 projects an arbitrary full-color image onto the screen area 301, which is set below the sensor device 111 on the wall surface 300. The display device 112 may project an image not only onto the wall surface 300 but also onto a floor surface, a ceiling surface, a dedicated screen, or the like. The display device 112 is not limited to displaying two-dimensional video and may display three-dimensional video using a technique such as 3D (three-dimensional) projection mapping. In particular, when the display device 112 generates a virtual reality (VR) space, the subject 200 viewing the displayed video can be given a sense of immersion.
 In the example shown in FIG. 2, a mirrored video 302 of the subject 200 and a sample video 303 are displayed in the screen area 301. The mirrored video 302 is obtained by horizontally flipping a video of the whole body of the subject 200 captured from the front by the camera of the sensor device 111. The mirrored video 302 is displayed substantially in real time, and its size and display position within the screen area 301 are adjusted so that the subject 200 perceives it like his or her own reflection in a mirror (a mirror image). The sample video 303 is a video defining a model movement (posture, etc.) for the input of the physical information. In the example shown in FIG. 2, a stick figure showing the correct posture for "standing on one foot" is displayed as the sample video 303, superimposed on the mirrored video 302.
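 The video 302 amounts to a per-frame horizontal flip of the camera image, which is what makes it read like a reflection in a mirror. As a minimal sketch (with a frame modeled as a row-major grid of pixel values; a real implementation would operate on actual camera frames):

```python
def mirror_frame(frame):
    """Horizontally flip one video frame so it reads like a mirror image.

    frame: list of rows, each row a list of pixel values ordered left to right.
    """
    return [list(reversed(row)) for row in frame]


# A 2x3 "frame": the leftmost column of the scene becomes the rightmost.
frame = [[1, 2, 3],
         [4, 5, 6]]
print(mirror_frame(frame))  # [[3, 2, 1], [6, 5, 4]]
```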
 The first processing unit 12 (hereinafter also referred to as the "generation unit 12") has the function of executing the above-described generation step S2. That is, the generation unit 12 generates, with one or more processors, the activity menu M1 for the subject 200 based on the physical information of the subject 200 input to the first input unit 11. In the present embodiment, the first processing unit 12 evaluates the motor ability of the subject 200 from, for example, the difference (the magnitude of deviation) between the posture of the subject 200 input as the physical information and the reference posture defined by reference data, and the length of time the subject 200 could maintain the one-foot standing posture. Then, in accordance with the evaluation of the motor ability of the subject 200, the first processing unit 12 generates the activity menu M1 for the subject 200 by selecting an activity menu M1 from the plurality of activity menus M1 stored in the first storage unit 14.
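 The selection just described can be illustrated as scoring the measured posture against the reference and mapping the score to one of several stored menus. The scoring formula, tier thresholds, and menu contents below are hypothetical stand-ins for the reference data and the menus held in the first storage unit 14.

```python
def select_activity_menu(posture_deviation_deg, hold_time_s, stored_menus):
    """Pick a menu tier from an assumed motor-ability score.

    posture_deviation_deg: deviation from the reference posture, in degrees.
    hold_time_s: how long the one-foot stand was maintained, in seconds.
    stored_menus: dict mapping tier name -> list of exercises.
    """
    # Illustrative score: long holds and small deviations score higher.
    score = max(0.0, hold_time_s - posture_deviation_deg / 10.0)
    if score >= 20.0:
        tier = "advanced"
    elif score >= 5.0:
        tier = "standard"
    else:
        tier = "gentle"
    return stored_menus[tier]


menus = {
    "advanced": ["squats x20", "one-foot stand 60s"],
    "standard": ["squats x10", "one-foot stand 30s"],
    "gentle": ["seated leg raises x10"],
}
print(select_activity_menu(posture_deviation_deg=8.0, hold_time_s=12.0,
                           stored_menus=menus))  # ['squats x10', 'one-foot stand 30s']
```

Choosing among pre-stored menus, rather than synthesizing a menu from scratch, mirrors the described design in which the first storage unit 14 holds the candidate menus.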
 The first processing unit 12 also has the function of executing an update step S9 (see FIG. 6) of updating the activity menu M1 to be generated. The update step S9 is a step of updating, with one or more processors, the activity menu M1 based on the activity result (described later) of the subject 200 acquired in a result acquisition step S6 (described later). The "activity result" in the present disclosure is the result of the subject 200 carrying out the presented activity menu M1. The update step S9 is described in detail in "(3.2) Evaluation of the subject's activity result and update of the activity menu" below.
 The first communication unit 13 is a communication interface for communicating with the server 2 or the operation terminal 3 via the network N1. In the present embodiment, the communication method between the first communication unit 13 and the server 2 or the operation terminal 3 is bidirectional wireless communication. The communication method between the first communication unit 13 and the server 2 or the operation terminal 3 is not limited to wireless communication and may be wired communication. Under the control of the first processing unit 12, the first communication unit 13 transmits the activity menu M1 generated by the first processing unit 12 to the server 2 via the network N1.
 The first storage unit 14 includes a rewritable nonvolatile memory such as an EEPROM (Electrically Erasable Programmable Read-Only Memory). The first storage unit 14 stores a plurality of activity menus M1 from which the first processing unit 12 can select when generating the activity menu M1.
 The server 2 includes a second communication unit 21, a second processing unit 22, and a second storage unit 23. The server 2 is implemented mainly as a computer system having one or more processors and a memory. The functions of the second communication unit 21 and the second processing unit 22 are realized by the one or more processors executing appropriate programs. The programs may be recorded in the memory in advance, or may be provided through a telecommunication line such as the Internet or recorded on a non-transitory recording medium such as a memory card.
 The second communication unit 21 is a communication interface for communicating with the facility system 10 or the operation terminal 3 via the network N1. In the present embodiment, the communication method between the second communication unit 21 and the facility system 10 or the operation terminal 3 is bidirectional wireless communication. The communication method between the second communication unit 21 and the facility system 10 or the operation terminal 3 is not limited to wireless communication and may be wired communication. The second communication unit 21 receives the activity menu M1 generated by the first processing unit 12 by communicating with the first communication unit 13 of the facility system 10 via the network N1. Under the control of the second processing unit 22, the second communication unit 21 transmits the activity result of the subject 200 (described later) and the evaluation of that activity result to the facility system 10 via the network N1. Also under the control of the second processing unit 22, the second communication unit 21 transmits the activity menu M1 for the subject 200 stored in the second storage unit 23 (that is, the activity menu M1 generated by the first processing unit 12) to the operation terminal 3 via the network N1. Furthermore, the second communication unit 21 receives the activity result input to the operation terminal 3 by communicating with the third communication unit 31 via the network N1.
 The second processing unit 22 has the function of executing an evaluation step S8 (see FIG. 6) of evaluating the activity result of the subject 200. The evaluation step S8 is a step of evaluating, with one or more processors, the activity of the subject 200 based on the activity menu M1 generated in the generation step S2 and the activity result of the subject 200 acquired in the result acquisition step S6 (described later). The evaluation step S8 is described in detail in "(3.2) Evaluation of the subject's activity result and update of the activity menu" below.
 The second storage unit 23 includes an auxiliary storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive). The second storage unit 23 stores the activity menu M1 received by the second communication unit 21 in association with identification information (a user ID) for identifying the subject 200. The second storage unit 23 also stores the activity result received by the second communication unit 21 in association with the identification information of the subject 200.
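 The role of the second storage unit 23, keeping menus and activity results keyed to the subject's identification information, can be sketched as a small keyed store. The in-memory dicts here merely stand in for the HDD/SSD-backed storage, and the class and method names are illustrative, not taken from the disclosure.

```python
class SubjectRecordStore:
    """Associates activity menus and activity results with a user ID."""

    def __init__(self):
        self._menus = {}    # user_id -> latest activity menu
        self._results = {}  # user_id -> list of activity results

    def store_menu(self, user_id, menu):
        self._menus[user_id] = menu

    def store_result(self, user_id, result):
        self._results.setdefault(user_id, []).append(result)

    def menu_for(self, user_id):
        return self._menus.get(user_id)

    def results_for(self, user_id):
        return self._results.get(user_id, [])


store = SubjectRecordStore()
store.store_menu("subject-200", ["squats x10"])
store.store_result("subject-200", {"menu": "squats x10", "done": True})
print(store.menu_for("subject-200"))          # ['squats x10']
print(len(store.results_for("subject-200")))  # 1
```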
 The operation terminal 3 includes a third communication unit 31, a third processing unit 32, a second input unit 33, and a display unit 34. The operation terminal 3 is implemented mainly as a computer system having one or more processors and a memory, and is, as an example, a general-purpose tablet terminal. Dedicated application software is installed on the operation terminal 3, and when this application software is launched, the functions of the third communication unit 31, the third processing unit 32, the second input unit 33, and the display unit 34 are realized.
 The operation terminal 3 is equipped with a touch panel display, which realizes both the function of accepting operations by the subject 200 and the function of displaying information to the subject 200. The touch panel display is, for example, a liquid crystal display or an organic EL (Electro Luminescence) display. The operation terminal 3 determines that an object such as a button has been operated by detecting an operation (tap, swipe, drag, etc.) on that object on the screen displayed on the touch panel display. In this way, in addition to providing various displays, the touch panel display functions as a user interface that accepts operation input from the subject 200. That is, in the present embodiment, the touch panel display of the operation terminal 3 realizes the functions of the second input unit 33 and the display unit 34.
 The third communication unit 31 is a communication interface for communicating with the facility system 10 or the server 2 via the network N1. In the present embodiment, the communication method between the third communication unit 31 and the facility system 10 or the server 2 is bidirectional wireless communication. The third communication unit 31 receives the activity menu M1 for the subject 200 stored in the server 2 by communicating with the second communication unit 21 of the server 2 via the network N1. In other words, the third communication unit 31 (hereinafter also referred to as the "acquisition unit 31") has the function of executing the above-described acquisition step S4. That is, the acquisition unit 31 acquires the activity menu M1 generated by the first processing unit (generation unit) 12 via the network N1.
 For example, when the second input unit 33 accepts a request operation by the subject 200, the third processing unit 32 requests the server 2 to transmit the activity menu M1 associated with the subject 200, whereby the third communication unit 31 receives the activity menu M1 from the server 2 via the network N1. Alternatively, for example, the subject 200 reads a QR code (registered trademark) distributed at the facility 1 using the built-in camera of the operation terminal 3 and accesses a URL (Uniform Resource Locator) contained in the code, whereby the third communication unit 31 receives the activity menu M1 from the server 2 via the network N1.
 Here, the activity menu M1 transmitted from the server 2 is stored in the second storage unit 23 of the server 2. That is, in the present embodiment, in the acquisition step S4, the activity menu M1 is acquired from the second storage unit 23 (storage unit) that stores the activity menu M1 generated in the generation step S2. In other words, in the present embodiment, in the acquisition step S4, the activity menu M1 generated in the generation step S2 is not acquired directly from the facility system 10; rather, the activity menu M1 temporarily stored on the server 2 is acquired from the server 2.
 Therefore, in the present embodiment, the activity menu M1 need not be presented to the subject 200 at the moment it is generated at the facility 1; it can be presented to the subject 200 after a predetermined time has elapsed from that moment. In addition, if the history of activity menus M1 presented to the subject 200 in the past is stored in the second storage unit 23, the subject 200 can refer to the history of the activity menus M1 by making a request to the server 2.
 The third processing unit 32 (hereinafter also referred to as the "presentation unit 32") has the function of executing the above-described presentation step S5. That is, the presentation unit 32 presents the activity menu M1 acquired by the third communication unit (acquisition unit) 31. The activity menu M1 is presented to the subject 200 by being displayed on the display unit 34 of the operation terminal 3, as shown in FIGS. 5A to 5C, for example.
 In the example shown in FIG. 5A, an exercise menu M11 and a cooking menu M12 are displayed on the display unit 34 as the activity menu M1. The exercise menu M11 is a menu of exercises, such as push-ups, squats, or standing on one foot with eyes open, for training one or more specific parts of the body of the subject 200 that should be strengthened. The display unit 34 displays, as the exercise menu M11, text and figures (including photographs) explaining the exercise method the subject 200 should perform. The cooking menu M12 is a recipe for a dish, such as a salad, suited to supplementing nutrients in which the subject 200 is deficient. The display unit 34 displays, as the cooking menu M12, text and figures (including photographs) explaining the dish the subject should prepare.
 In the example shown in FIG. 5B, the display unit 34 displays, as the exercise menu M11, a video explaining the exercise the subject 200 should perform. This video is displayed when the subject 200 performs a specific operation on the operation terminal 3, for example touching the exercise menu M11 while the image shown in FIG. 5A is displayed on the display unit 34. Similarly, in the example shown in FIG. 5C, the display unit 34 displays, as the cooking menu M12, a video explaining the dish the subject 200 should prepare. This video is displayed when the subject 200 performs a specific operation on the operation terminal 3, for example touching the cooking menu M12 while the image shown in FIG. 5A is displayed on the display unit 34.
 (3) Operation
 (3.1) Presentation of the activity menu to the subject
 Hereinafter, the operation of the activity support system 100 for presenting the activity menu M1 to the subject 200 (that is, the method of presenting the activity menu M1 by the activity support method) is described with reference to FIG. 1. First, at the facility 1, the subject 200 or a proxy for the subject 200 inputs the physical information of the subject 200 (step S1). In the present embodiment, as already described, the physical information of the subject 200 is input to the first input unit 11 of the facility system 10 by detecting the position and posture of the subject 200 using the sensor device 111 and the display device 112. Next, the one or more processors of the first processing unit 12 generate the activity menu M1 based on the physical information of the subject 200 input to the first input unit 11 (generation step S2). The generated activity menu M1 is transmitted to the server 2 via the first communication unit 13 and the network N1. When the second communication unit 21 receives the activity menu M1, the second processing unit 22 of the server 2 stores the received activity menu M1 in the second storage unit 23 in association with the subject 200 (step S3).
 Thereafter, the subject 200 operates the operation terminal 3 at home 4 and downloads the activity menu M1 from the server 2, thereby acquiring it (acquisition step S4). The activity menu M1 acquired from the server 2 is stored in the memory of the operation terminal 3. Then, the subject 200 operates the operation terminal 3 at home 4 to display the activity menu M1 on the display unit 34, whereby the activity menu M1 is presented to the subject 200 (presentation step S5). Therefore, the subject 200 can carry out the activity menu M1 at home 4 while viewing the activity menu M1 displayed on the display unit 34.
 As described above, in the present embodiment, the activity menu M1 generated when the subject 200 inputs physical information at the facility 1 can be presented at the home 4 of the subject 200. The present embodiment therefore has the advantage that it is easy to present the subject 200 with an appropriate activity menu M1, that is, with the activities the subject should carry out. The present embodiment also has the advantage that the subject 200 only has to input physical information at the facility 1 and does not have to devise the activity menu M1 on his or her own.
 Furthermore, the present embodiment has the advantage that the subject 200 can easily carry out the activity menu M1 at a place other than the facility 1 (here, the home 4 of the subject 200). For example, even if the subject 200 carries out the activity menu M1 at the facility 1 with the assistance of a therapist, the subject 200 may fail to master the activity menu M1 or may forget an activity menu M1 once mastered. In such cases, it is difficult for the subject 200 to carry out the same activity menu M1 at home 4. In addition, if, for example, the facility 1 lacks the space necessary for carrying out the activity menu M1, or the facility 1 is a place readily visible to the public, the subject 200 cannot carry out the activity menu M1 at the facility 1. In such cases, the subject 200 has no opportunity to master the activity menu M1 at the facility 1 in the first place.
 In contrast, in the present embodiment, even in the above cases, the subject 200 can carry out the activity menu M1 at home 4. The present embodiment also has the advantage that the therapist does not have to instruct the subject 200 on the activity menu M1, which reduces the burden on the therapist.
 (3.2) Evaluation of the subject's activity result and update of the activity menu
 Hereinafter, the operation of the activity support system 100 for evaluating the activity result of the subject 200 and the operation for updating the activity menu M1 (that is, the method of evaluating the activity result and the method of updating the activity menu M1 by the activity support method) are described with reference to FIG. 6. These operations are executed after the operations of "(3.1) Presentation of the activity menu to the subject" have been performed, that is, after the activity menu M1 has been presented to the subject 200.
 First, the subject 200 carries out the activity menu M1 while viewing the activity menu M1 displayed on the display unit 34. Then, the subject 200 operates the operation terminal 3 to input the result of carrying out the activity menu M1 (that is, the activity result). For example, if the activity menu M1 is the exercise menu M11, the subject 200 inputs the activity result to the operation terminal 3 by inputting a moving image of himself or herself performing the exercise menu M11. If, for example, the activity menu M1 is the cooking menu M12, the subject 200 inputs the activity result to the operation terminal 3 by inputting an image of the dish prepared according to the cooking menu M12. Alternatively, the subject 200 may input the activity result to the operation terminal 3 by entering the fact that the activity menu M1 was carried out, the time period in which it was carried out, or the like.
 The operation terminal 3 thereby acquires the activity result (step S6). That is, step S6 (hereinafter also referred to as the "result acquisition step S6") is a step of acquiring the activity result of the subject 200 based on the activity menu M1 generated in the generation step S2.
 Next, when the activity result is input, the third processing unit 32 of the operation terminal 3 controls the third communication unit 31 to transmit the activity result to the server 2 via the network N1. When the second communication unit 21 receives the activity result, the second processing unit 22 of the server 2 stores the received activity result in the second storage unit 23 in association with the subject 200 (step S7). That is, step S7 (hereinafter also referred to as the "storage step S7") is a step of storing, in the second storage unit (storage unit) 23, the activity result acquired in the result acquisition step S6.
 Thereafter, the second processing unit 22 of the server 2 evaluates, with one or more processors, the activity of the subject 200 based on the acquired activity result of the subject 200 (evaluation step S8). As an example, consider the case where the activity result is a moving image of the subject 200 performing the exercise menu M11. In this case, the second processing unit 22 evaluates the accuracy with which the subject 200 performed the exercise menu M11 by comparing, using image analysis techniques or the like, the video of the subject 200 performing the exercise menu M11 with a reference video of a trainer performing the exercise menu M11. Having evaluated the activity of the subject 200, the second processing unit 22 stores the evaluation result in the second storage unit 23 in association with the subject 200. The second processing unit 22 also controls the second communication unit 21 to transmit the evaluation result to the facility system 10 via the network N1.
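 One way to realize the comparison in the evaluation step S8 is to compare per-frame joint angles extracted from the subject's video with those extracted from the trainer's reference video and turn the mean deviation into a score. The pose-extraction step is omitted here, and the scoring formula is an illustrative assumption, not the disclosed evaluation method.

```python
def implementation_accuracy(subject_angles, reference_angles, max_dev_deg=45.0):
    """Score 0.0-1.0: how closely the subject's joint-angle sequence
    follows the trainer's reference sequence (aligned frame counts assumed).
    """
    if len(subject_angles) != len(reference_angles):
        raise ValueError("sequences must be aligned to the same frame count")
    total_dev = sum(abs(s - r) for s, r in zip(subject_angles, reference_angles))
    mean_dev = total_dev / len(reference_angles)
    # Clamp: a mean deviation at or beyond max_dev_deg scores 0.
    return max(0.0, 1.0 - mean_dev / max_dev_deg)


# Knee angle per frame during a squat: subject vs. trainer reference.
reference = [170, 150, 120, 100, 120, 150, 170]
subject = [168, 148, 125, 110, 123, 149, 171]
print(round(implementation_accuracy(subject, reference), 3))  # 0.924
```

In practice the two videos would first need temporal alignment (e.g. resampling both repetitions to the same number of frames) before a frame-by-frame comparison like this is meaningful.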
 When the first communication unit 13 receives the evaluation result, the first processing unit 12 of the facility system 10 updates, with one or more processors, the activity menu M1 based on the received evaluation result (in other words, the activity result of the subject 200) (update step S9). That is, even if the same physical information is input at the facility 1, the activity menu M1 generated before the update differs from the one generated after it. Of course, an update may also leave the activity menu M1 unchanged.
 The first processing unit 12 updates the activity menu M1 by, for example, machine learning using the evaluation results (the activity results of the subjects 200). Suppose, for example, that a plurality of subjects 200 use the facility 1. In this case, the first processing unit 12 may refer to the evaluation results of the plurality of subjects 200 and preferentially present the activity menu M1 that is being presented to the majority of the subjects 200. Also, when the first processing unit 12 obtains evaluation results indicating that many subjects 200 find a certain exercise included in the activity menu M1 difficult to perform, it may update the activity menu M1 by, for example, removing that exercise or adding a lower-difficulty exercise in its place.
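The patent does not fix a particular learning method for this update. A minimal stand-in for the behavior described above, replacing or removing an exercise that many subjects 200 were evaluated as unable to perform, might look as follows; the exercise names and the easier-variant table are invented for illustration:

```python
# Hedged sketch of update step S9. "difficulty_reports" counts, per exercise,
# how many subjects 200 were evaluated as unable to perform it; exercises
# failed by more than "threshold" of subjects are swapped for an assumed
# easier alternative, or dropped if none is known.
EASIER_VARIANT = {"squat": "chair squat", "lunge": "supported lunge"}  # assumed table

def update_menu(menu, difficulty_reports, n_subjects, threshold=0.5):
    updated = []
    for exercise in menu:
        fail_rate = difficulty_reports.get(exercise, 0) / n_subjects
        if fail_rate > threshold:
            easier = EASIER_VARIANT.get(exercise)
            if easier is not None:
                updated.append(easier)  # replace with lower-difficulty exercise
            # otherwise drop the exercise entirely
        else:
            updated.append(exercise)
    return updated
```

A real implementation would presumably learn these substitutions from accumulated evaluation data rather than a fixed table.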
 As described above, the present embodiment evaluates the degree to which the subject 200 has implemented the activity menu M1. The present embodiment therefore has the advantage that presenting the evaluation result to the subject 200, for example by displaying it on the display unit 34 of the operation terminal 3 at the subject 200's request, can help improve the subject 200's motivation. The present embodiment also has the advantage that feeding the evaluation result (the activity result of the subject 200) back to the facility system 10 to update the activity menu M1 makes it easier to present a more suitable activity menu M1 to the subject 200.
 (4) Input Example of Physical Information
 Hereinafter, input examples of the physical information in step S1 above will be described. In both the first and second input examples described below, the physical information is detected when the subject 200 executes, at the facility 1, an instruction menu that the facility system 10 presents to the subject 200. That is, when the subject 200 executes the instruction menu, information on the position of the subject 200 within the detection space and information on the posture of the subject 200, both detected by the sensor device 111, are given to the first input unit 11 as the physical information. The "instruction menu" in the present disclosure is a menu selected from among a plurality of rehabilitation menus and is presented to the subject 200 in order to instruct the subject 200 to perform specific training or the like.
 In the following description, the plurality of rehabilitation menus are classified into a plurality of first-stage menus and a plurality of second-stage menus. Each of the first-stage menus is a training menu for a motion itself that the subject 200 performs in daily life, such as a walking motion. Each of the second-stage menus is a training menu for an element motion required for a motion that the subject 200 performs in daily life. An "element motion" in the present disclosure is one of the individual motions obtained by decomposing a motion performed by the subject 200 in daily life into a plurality of elements. For example, the walking motion is decomposed into a plurality of element motions including flexing the hip joint, extending the hip joint, flexing the knee joint, extending the knee joint, flexing the ankle, extending the ankle, swinging the arm forward, and swinging the arm backward. In this way, the plurality of element motions are defined per body part and per movement of each part.
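The decomposition described above maps naturally onto a simple data structure. The sketch below is illustrative only; the way second-stage menu names are derived from element motions is an assumption, not something the patent specifies:

```python
# Illustrative data structure: a daily-life motion maps to its element motions,
# each keyed by body part and movement, matching the walking example above.
ELEMENT_MOTIONS = {
    "walking": [
        ("hip joint", "flex"), ("hip joint", "extend"),
        ("knee joint", "flex"), ("knee joint", "extend"),
        ("ankle", "flex"), ("ankle", "extend"),
        ("arm", "swing forward"), ("arm", "swing backward"),
    ],
}

def second_stage_menus(motion):
    """One hypothetical second-stage training menu per element motion."""
    return [f"{part}: {movement} training"
            for part, movement in ELEMENT_MOTIONS[motion]]
```

With this structure, the first-stage menu "walking" expands into eight second-stage candidates, one per element motion.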
 (4.1) First Input Example
 In the first input example, as shown in FIG. 7A, the facility system 10 first performs screening to ask the subject 200 questions. At this time, the facility system 10 causes the screen area 301 to display the questions for the subject 200 and acquires the answers that the subject 200 inputs into the operation terminal 5. The operation terminal 5 is, for example, a tablet terminal or a smartphone, and has a function of communicating with the facility system 10, a function of receiving operations by the subject 200, and a function of presenting information to the subject 200 (by display and/or audio output).
 Specific examples of the questions for the subject 200 include, in addition to matters concerning attributes of the subject 200 such as the subject 200's name, age, and gender, matters for judging the physical ability of the subject 200, such as whether the subject has fallen recently and whether the subject stumbles on flat floors. In the screening, therefore, the facility system 10 acquires data for judging the physical ability of the subject 200 in addition to information on the attributes of the subject 200 (hereinafter referred to as "third information").
 The facility system 10 judges, based on the result of the screening, whether confirmation of physical ability is necessary. For example, when the screening result indicates no problem at all with the physical ability needed for daily life, confirmation of physical ability is judged to be unnecessary.
 On the other hand, when the screening result indicates that confirmation of physical ability is necessary, the facility system 10 executes a pre-menu selection process. A "pre-menu" in the present disclosure is a menu that the subject 200 is made to perform prior to the instruction menu in order to select (determine) the instruction menu. At this time, the facility system 10 selects, as the pre-menu, whichever of the plurality of first-stage menus stored in the first storage unit 14 is suited to the subject 200, based on the result of the screening. For example, the first storage unit 14 stores, in addition to the plurality of first-stage menus, a conditional expression for selecting a pre-menu from the screening result, and the facility system 10 selects the pre-menu in accordance with this conditional expression.
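The conditional expression itself is not disclosed. A hypothetical example of such a rule, with invented question keys and menu names, could be:

```python
# Hypothetical conditional expression of the kind the patent says is stored in
# the first storage unit 14. Question keys, thresholds, and menu names are all
# assumptions made for illustration.
def select_pre_menu(answers):
    if answers.get("recent_fall"):
        return "seated leg raise"      # lowest balance demand
    if answers.get("stumbles_on_flat_floor"):
        return "one-foot standing"     # balance check, as in FIG. 7B
    return "walking in place"          # default first-stage menu
```

The point is only that the screening answers deterministically pick one first-stage menu as the pre-menu.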
 The facility system 10 controls the display device 112 based on pre-information representing the selected pre-menu. As a result, as shown in FIG. 7B, the display device 112 displays, in the screen area 301, the pre-menu to be performed by the subject 200 and support information for supporting the subject 200 in performing the pre-menu. The example of FIG. 7B assumes that a menu called "one-foot standing" has been selected as the pre-menu, and a mirrored image 302 of the subject 200 and a sample image 303 are displayed in the screen area 301 as the support information. The sample image 303 is, as one example, generated from reference data stored in the first storage unit 14 in association with the rehabilitation menu (first-stage menu) selected as the pre-menu, and is an image defining the exemplary movement (posture and the like) in the pre-menu. In the example of FIG. 7B, a stick figure showing the correct one-foot-standing posture is displayed as the sample image 303, superimposed on the mirrored image 302.
 In this state, the sensor device 111 detects the movement of the subject 200, and the facility system 10 executes an instruction menu selection process.
 In the instruction menu selection process, the facility system 10 first acquires, from the sensor device 111, information on the motion of the subject 200 moving in accordance with the pre-menu (hereinafter referred to as "first information"). The facility system 10 further acquires information for proposing, as the instruction menu, one of the plurality of rehabilitation menus (hereinafter referred to as "second information"). In this input example, the facility system 10 acquires, from the first storage unit 14, second information for selecting one of the plurality of second-stage menus as the instruction menu. At this time, the facility system 10 acquires, as part of the second information, at least reference data defining the exemplary movement in the pre-menu.
 The facility system 10 then evaluates the motion of the subject 200 based on the first information and the second information, and further on the third information acquired in the screening. As one example, the facility system 10 quantifies the difference (magnitude of deviation) between the posture of the subject 200 and the reference posture defined by the reference data, and evaluates the motion of the subject 200 by this difference. Furthermore, when the "one-foot standing" menu has been selected as the pre-menu as in FIG. 7B, the length of time for which the subject 200 could hold the one-foot-standing posture, for example, is also factored into the evaluation of the subject 200's motion. The facility system 10 determines (selects) the instruction menu based on the evaluation result for the subject 200 moving in accordance with the pre-menu.
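One way the deviation quantification and the hold-time factor described above might be combined is sketched below; the joint-angle names, the 0-to-100 score scale, and the 30-second target hold are assumptions, since the patent only states that the deviation is quantified and the hold time is factored in:

```python
# Sketch of the "one-foot standing" evaluation: a per-joint angular deviation
# from the reference posture is averaged, and the hold time scales the score.
def posture_deviation(pose, reference):
    """Mean absolute difference of corresponding joint angles, in degrees."""
    return sum(abs(pose[j] - reference[j]) for j in reference) / len(reference)

def pre_menu_score(pose, reference, hold_seconds, target_seconds=30.0):
    deviation = posture_deviation(pose, reference)
    hold_ratio = min(hold_seconds / target_seconds, 1.0)
    return max(0.0, 100.0 - deviation) * hold_ratio  # higher is better
```

A subject who matches the reference posture and holds it for the full target time would score 100; a larger deviation or a shorter hold lowers the score, and the instruction menu is chosen from that score.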
 When the instruction menu selection process is complete, the facility system 10 controls the display device 112 based on instruction information representing the selected instruction menu. As a result, as shown in FIG. 7C, the display device 112 displays, in the screen area 301, the instruction menu to be performed by the subject 200 and support information for supporting the subject 200 in performing the instruction menu. The example of FIG. 7C assumes that a menu called "lateral leg raise" has been selected as the instruction menu, and the mirrored image 302 and a marker 304 are displayed in the screen area 301 as the support information. The marker 304 is, as one example, an image imitating a ball, and is displayed near the feet of the subject 200 in the mirrored image 302. In this state, the facility system 10 can prompt the subject 200 to raise a leg laterally by, for example, instructing the subject 200 to kick the ball represented by the marker 304.
 Furthermore, in this input example, the facility system 10 controls the display device 112 based not only on the instruction information but also on effect information representing the effect that the subject 200 can be expected to obtain by executing the instruction menu represented by the instruction information. The display device 112 therefore displays, in the screen area 301, effect information representing the effect expected from executing the instruction menu, in addition to the instruction menu and the support information. In the example of FIG. 7C, effect information such as "helps you keep your body straight" is displayed as an effect the subject 200 can expect to obtain by executing the "lateral leg raise" instruction menu.
 In this state, the sensor device 111 detects the movement of the subject 200, and the facility system 10 controls the display device 112 based on result information representing the evaluation result for the subject 200. The display device 112 thereby presents the evaluation result to the subject 200 in real time (immediately) by displaying it in the screen area 301. As one example, when the difference (magnitude of deviation) between the posture of the subject 200 and the reference posture defined by the reference data exceeds an allowable range, the facility system 10 presents the evaluation result in a manner such as changing the color of the marker 304. However, the manner of presentation by the facility system 10 is not limited to this; it may be, for example, the display of a message for the subject 200, audio output (including a warning sound), a printout, or data transmission to a terminal such as a smartphone.
 Here, when the subject 200 uses the facility system 10 at a pace of, for example, one to several times a week, it is effective for the subject 200 to repeat motions following the same instruction menu over a period of one to several months. The facility system 10 therefore need not perform the above-described pre-menu and instruction menu selection processes every time, and may simply execute the process of presenting the instruction menu. For example, if information such as the subject 200's name reveals at the time of screening that an instruction menu has already been selected for this subject 200, the facility system 10 judges that confirmation of physical ability is unnecessary and skips the processes of selecting the pre-menu and the instruction menu. As a result, the subject 200 can skip performing the pre-menu as shown in FIG. 7B and proceed to performing the instruction menu as shown in FIG. 7C.
 In this input example, when the subject 200 executes the instruction menu as described above, information on the position of the subject 200 within the detection space and information on the posture of the subject 200, both detected by the sensor device 111, are given to the first input unit 11 as the physical information.
 Incidentally, when the screening reveals, for example, that the subject 200 has a disorder in the left knee joint, the facility system 10 preferably performs the evaluation with specific evaluation items among the plurality of evaluation items excluded from the evaluation targets. That is, for a subject 200 with a disorder in the left knee joint, for example, the facility system 10 preferably excludes evaluation items related to the motion of the left knee joint from the evaluation targets. In this case, the facility system 10 may omit evaluation of the items related to the motion of the left knee joint entirely, or may, for example, evaluate those items with lowered thresholds.
 (4.2) Second Input Example
 The second input example differs from the first input example in that the facility system 10 is assumed to have a plurality of (here, two) subjects 200 perform the screening, the pre-menu, and the instruction menu all simultaneously. In the following description, therefore, the points in common with the first input example are not described again. However, it suffices for the facility system 10 to have the plurality of subjects 200 simultaneously perform at least the instruction menu; at least one of the screening and the pre-menu may be performed individually for each subject 200. In this input example, when the plurality of (here, two) subjects 200 need to be distinguished, the individual subjects are referred to as "subject 200A" and "subject 200B".
 In this input example, first, as shown in FIG. 8A, screening is performed to ask the plurality of subjects 200A and 200B questions. The facility system 10 judges, based on the result of the screening, whether confirmation of physical ability is necessary. For example, when the screening result indicates that neither subject 200A nor subject 200B has any problem with the physical ability needed for daily life, confirmation of physical ability is judged to be unnecessary. On the other hand, when the screening result indicates that confirmation of physical ability is necessary for at least one of the subjects 200A and 200B, the facility system 10 executes the pre-menu selection process.
 The facility system 10 controls the display device 112 based on pre-information representing the selected pre-menu. As a result, as shown in FIG. 8B, the display device 112 displays, in the screen area 301, the pre-menu to be performed by the subjects 200 and support information for supporting the subjects 200 in performing the pre-menu. The example of FIG. 8B assumes that the "one-foot standing" menu has been selected as the pre-menu, and mirrored images 302A and 302B of the subjects 200 and sample images 303A and 303B are displayed in the screen area 301 as the support information.
 In this state, the sensor device 111 detects the movements of the subjects 200, and the facility system 10 executes the instruction menu selection process. The facility system 10 then evaluates the motion of each subject 200 based on the first information and the second information, and further on the third information acquired in the screening.
 Here, although the evaluation in the facility system 10 is performed individually for each subject 200, the evaluation results for the plurality of subjects 200A and 200B are judged comprehensively so that the same instruction menu is selected for the plurality of subjects 200A and 200B. As one example, the instruction menu is selected based on the average of the evaluation results for the plurality of subjects 200A and 200B. That is, an instruction menu common to the plurality of subjects 200A and 200B is selected. However, the evaluation result for each subject 200 may be used, in the instruction menu selection process, to adjust the magnitude of the load placed on that subject 200 when the instruction menu is performed. In this case, the magnitude of the load can be adjusted for each subject 200 at the start of the instruction menu.
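The comprehensive judgment described above might be sketched as follows. The 0-to-100 score scale, the band threshold, and the menu names are assumptions; the patent states only that a common menu is chosen from the averaged results while per-subject results may set individual loads:

```python
# Sketch: individual scores are averaged to pick one shared instruction menu,
# and each subject's own score survives as an individual load setting.
def select_common_menu(scores_by_subject):
    average = sum(scores_by_subject.values()) / len(scores_by_subject)
    if average >= 70.0:
        menu = "lateral leg raise"   # as in FIG. 8C
    else:
        menu = "seated marching"     # assumed easier alternative
    # per-subject load: subjects above the group average get a higher load
    loads = {subject: ("high" if score >= average else "low")
             for subject, score in scores_by_subject.items()}
    return menu, loads
```

Both subjects thus receive the same menu, while the marker height (and hence the load) can still differ per subject.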
 When the instruction menu selection process is complete, the facility system 10 controls the display device 112 based on instruction information representing the selected instruction menu. As a result, as shown in FIG. 8C, the display device 112 displays, in the screen area 301, the instruction menu to be performed by the subjects 200 and support information for supporting the subjects 200 in performing the instruction menu. The example of FIG. 8C assumes that the "lateral leg raise" menu has been selected as the instruction menu, and the mirrored images 302A and 302B and markers 304A and 304B are displayed in the screen area 301 as the support information. When the markers 304A and 304B need not be distinguished as being for subject 200A or for subject 200B, each is simply referred to as a "marker 304".
 In this input example, the display device 112 displays multiple instances of the same instruction menu and support information side by side in the left-right direction of the screen area 301 so that the plurality of subjects 200A and 200B can perform the instruction menu simultaneously. That is, the screen area 301 is divided in the left-right direction into a first area 311 and a second area 312. The display device 112 displays the instruction menu and support information for subject 200A (the mirrored image 302A and the marker 304A) in the first area 311, which is on the left when the screen area 301 is viewed from the front. The display device 112 displays the instruction menu and support information for subject 200B (the mirrored image 302B and the marker 304B) in the second area 312, which is on the right when the screen area 301 is viewed from the front. The plurality of subjects 200A and 200B can thereby perform the instruction menu simultaneously while standing side by side in the left-right direction in front of the screen area 301.
 Furthermore, in this input example, as in the first input example, the facility system 10 controls the display device 112 based not only on the instruction information but also on effect information representing the effect that the subjects 200 can be expected to obtain by executing the instruction menu represented by the instruction information. The display device 112 therefore displays, in the screen area 301, effect information representing the effect expected from executing the instruction menu, in addition to the instruction menu and the support information.
 In this state, the sensor device 111 detects the movements of the subjects 200, and the facility system 10 evaluates the degree of achievement of the instruction menu for each subject 200. At this time, the facility system 10 acquires, from the sensor device 111, first information (motion information) on the motion of each subject 200 moving in accordance with the instruction menu. Here, the first information preferably includes information such as heart rate measured by a wearable sensor terminal worn by each of the subjects 200A and 200B. The "sensor terminal" here includes sensors such as a gyro sensor, an acceleration sensor, an activity meter, and a heart rate monitor, and can measure, for example, the heart rate of the subject 200. The facility system 10 further acquires, from the first storage unit 14 as the second information, at least reference data defining the exemplary movement in the instruction menu.
 The facility system 10 then evaluates, for each subject 200, the degree of achievement of the instruction menu based on the first information and the second information, and further on the third information acquired in the screening. As one example, the facility system 10 quantifies the difference (magnitude of deviation) between the posture of the subject 200 and the reference posture defined by the reference data, and evaluates the degree of achievement of the instruction menu by this difference. Furthermore, when the "lateral leg raise" menu has been selected as the instruction menu as in FIG. 8C, the length of time (reaction time) the subject 200 needed to raise a leg to the height of the marker 304, for example, is also factored into the evaluation of the subject 200's motion.
 Thereafter, the facility system 10 adjusts, for each subject 200, the magnitude of the load involved in performing the instruction menu according to the evaluation result of the degree of achievement. As one example, when the subject 200 exercises in accordance with the instruction menu, the facility system 10 increases the load by enlarging the amount of movement of a specific body part or by speeding up the movement, so that the physical burden placed on the subject 200 becomes greater. When the "lateral leg raise" menu has been selected as the instruction menu as in FIG. 8C, the facility system 10 adjusts the magnitude of the load for each subject 200 by, for example, adjusting the height (position) of the marker 304 displayed in the screen area 301. In the example of FIG. 8C, the marker 304A for subject 200A is displayed at a higher position than the marker 304B for subject 200B so that a greater load is placed on subject 200A than on subject 200B.
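The marker-height load adjustment described above could be sketched as follows. The height range, step size, and achievement thresholds are assumptions; the patent states only that the marker position is raised or lowered per subject according to the achievement evaluation:

```python
# Sketch of the per-subject load adjustment for the "lateral leg raise" menu:
# the marker 304 is raised for high achievement (more load) and lowered for
# low achievement (less load), clamped to an assumed displayable range.
def adjust_marker_height(current_height, achievement,
                         step=0.05, low=0.2, high=0.8):
    """achievement in [0, 1]; heights in metres within the screen area."""
    if achievement >= 0.8:
        current_height += step   # increase load, as for subject 200A
    elif achievement < 0.4:
        current_height -= step   # decrease load, as for subject 200B
    return min(high, max(low, current_height))
```

Called once per evaluation cycle, this keeps each subject's marker at a height matched to that subject's current ability.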
 The facility system 10 further presents the evaluation result for each subject 200. At this time, the facility system 10 controls the display device 112 based on result information representing the evaluation result for the subject 200. The display device 112 thereby presents the evaluation result to the subject 200 in real time (immediately) by displaying it in the screen area 301.
 Thereafter, the facility system 10 judges, for example, whether the instruction menu has ended. If the instruction menu has not ended, the facility system 10 repeats the processing from the presentation of the instruction menu onward. If the instruction menu has ended, the facility system 10 ends the series of processing.
 In this input example, as the plurality of subjects 200 each execute the instruction menu, information on the position of each subject 200 in the detection space and information on the posture of each subject 200, detected by the sensor device 111, are given to the first input unit 11 as physical information. At this time, physical information is given to the first input unit 11 for each subject 200.
 In this input example, since the instruction menu is performed by a plurality of subjects 200 simultaneously, each subject 200 can work on the instruction menu together with other subjects 200 rather than alone. As a result, each subject 200 can more easily communicate with the other subjects 200 and can maintain higher motivation than when working alone. Furthermore, having a plurality of subjects 200 perform the instruction menu simultaneously can also shorten the required time.
 (5) Modifications
 The embodiment described above is only one of various embodiments of the present disclosure. The above embodiment can be modified in various ways according to design and the like, as long as the object of the present disclosure can be achieved. The same functions as the activity support method may also be embodied as a (computer) program, a non-transitory recording medium on which the program is recorded, or the like. A (computer) program according to one aspect is a program for causing one or more processors to execute the activity support method described above.
 Modifications of the above embodiment are listed below. The modifications described below can be applied in combination as appropriate.
 The activity support system 100 in the present disclosure includes a computer system. The computer system mainly includes, as hardware, one or more processors and a memory. The functions of the activity support system 100 in the present disclosure are realized by the one or more processors executing a program recorded in the memory of the computer system. The program may be recorded in advance in the memory of the computer system, may be provided through a telecommunication line, or may be provided recorded in a non-transitory recording medium readable by the computer system, such as a memory card, an optical disc, or a hard disk drive. Each of the one or more processors of the computer system is composed of one or more electronic circuits including a semiconductor integrated circuit (IC) or a large-scale integrated circuit (LSI). An integrated circuit such as an IC or an LSI is called differently depending on the degree of integration, and includes integrated circuits called system LSI, VLSI (Very Large Scale Integration), or ULSI (Ultra Large Scale Integration). Furthermore, an FPGA (Field-Programmable Gate Array) programmed after manufacture of the LSI, or a logic device in which the junction relations or the circuit sections inside the LSI can be reconfigured, can also be adopted as a processor. The plurality of electronic circuits may be integrated into one chip or distributed over a plurality of chips. The plurality of chips may be integrated into one device or distributed over a plurality of devices. The computer system referred to here includes a microcontroller having one or more processors and one or more memories. Therefore, the microcontroller is also composed of one or more electronic circuits including a semiconductor integrated circuit or a large-scale integrated circuit.
 In the activity support system 100, housing the third communication unit (acquisition unit) 31 and the third processing unit (presentation unit) 32 in a single case is not an essential configuration; these units may be distributed over a plurality of housings. At least some of the functions of the activity support system 100 may also be realized by, for example, a server or a cloud (cloud computing).
 In the above embodiment, the exercise menu M11 and the cooking menu M12 are given as examples of the activity menu M1, but this is not intended to limit the activity menu M1. For example, when the subject 200 suffers from a mental illness (such as depression, alcoholism, or drug addiction), the activity menu M1 may be a menu that proposes participation in group (circle) activities aimed at recovery from the mental illness.
 In the above embodiment, in the generation step S2, the one or more processors generate the activity menu M1 based on the physical information input at the facility 1, but this is not a limitation. For example, in the generation step S2, the one or more processors may generate the activity menu M1 based on the physical information and on supplementary information obtained by a therapist examining the subject 200. The supplementary information is preferably input by the therapist at the same time as the physical information.
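The generation step S2 with optional therapist input could look like the following sketch. The decision rule (a grip-strength threshold) and the menu strings are invented purely for illustration; the embodiment only requires that the menu be derived from the combined inputs.

```python
def generate_activity_menu(physical_info, supplementary_info=None):
    """Generation step S2: derive an activity menu M1 from the physical
    information, optionally refined by the therapist's supplementary findings.

    The threshold (20 kg) and menu names are hypothetical examples.
    """
    info = dict(physical_info)
    if supplementary_info:
        # Therapist-entered findings override or extend the measured values.
        info.update(supplementary_info)
    if info.get("grip_strength_kg", 0) < 20:
        return "exercise menu M11: light resistance training"
    return "cooking menu M12"
```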
 In the above embodiment, the first processing unit (generation unit) 12 may generate a plurality of activity menus M1 in the generation step S2. In this case, the facility system 10 preferably includes a device that displays the plurality of activity menus M1, such as a touch-panel display, and a device that accepts an operation for selecting one of the plurality of activity menus M1. In this aspect, when the subject 200, or the therapist in charge of the subject 200, selects one of the activity menus M1, the selected activity menu M1 is uploaded to the server 2.
 In the above embodiment, when the physical information is input at the facility 1, a request of the subject 200 may additionally be input. In this aspect, in the generation step S2, the one or more processors can generate the activity menu M1 after excluding in advance any activity menu M1 that does not suit the request of the subject 200.
 In the above embodiment, in the presentation step S5, the physical information of the subject 200 may be presented in addition to the activity menu M1. In this aspect, the physical information of the subject 200 is additionally acquired in the acquisition step S4. That is, in this aspect, the server 2 acquires not only the activity menu M1 but also the physical information of the subject 200 from the facility system 10, and stores the acquired activity menu M1 and physical information in the second storage unit 23. In the presentation step S5, an evaluation of the physical information of the subject 200 may also be presented in addition to the activity menu M1. This evaluation may be performed by the facility system 10 or by the server 2. Further, in the presentation step S5, effect information representing the effect that the subject 200 can expect to obtain by executing the activity menu M1 may be presented in addition to the activity menu M1.
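One way to realize this aspect is to store, per subject, a single record holding the menu together with the optional physical information and effect information, so that the presentation step S5 can fetch everything at once. The class below is a minimal stand-in for the second storage unit 23, with hypothetical method names.

```python
class MenuStore:
    """Minimal stand-in for the second storage unit 23 of the server 2,
    holding, per subject, the activity menu M1 and optionally the physical
    information and effect information to be presented together with it.
    """

    def __init__(self):
        self._records = {}

    def upload(self, subject_id, menu, physical_info=None, effect_info=None):
        self._records[subject_id] = {
            "menu": menu,
            "physical_info": physical_info,
            "effect_info": effect_info,
        }

    def fetch_for_presentation(self, subject_id):
        # Returned as one record so the presentation step S5 can show the
        # menu together with the physical information and expected effect.
        return self._records[subject_id]
```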
 In the above embodiment, the acquisition step S4 and the presentation step S5 are both executed at the home 4 of the subject 200, but this is not a limitation. In other words, the second place is not limited to the home 4 of the subject 200. For example, the second place may be the office of the company where the subject 200 works, a public facility such as a community center, or a park. That is, the second place may be any place different from the facility 1 where the activity menu M1 is generated, in particular a place that the subject 200 visits in daily life.
 The first place and the second place may also be different areas within the same facility. For example, if the facility including the first place and the second place is a welfare facility for the elderly, the first place may be a common area on the first floor of the facility, and the second place may be a living area on the second floor of the facility.
 In the above embodiment, in the update step S9, the activity menu M1 is updated based on the activity result of the subject 200, but this is not a limitation. For example, if the subject 200 has had the physical information measured again at the facility 1 before the update step S9 is executed, the activity menu M1 may be updated based on the activity result of the subject 200 and the latest physical information of the subject 200.
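An update step S9 that considers both the activity result and, when available, the newest measurement could be sketched as below. The concrete adjustment rule (repetitions changed by 5, a "pain reported" flag capping the load) is invented for illustration and is not part of the disclosure.

```python
def update_activity_menu(menu, activity_result, latest_physical_info=None):
    """Update step S9: revise the activity menu based on the activity result
    and, when available, the latest physical information measured at the
    facility 1. All thresholds and field names are hypothetical.
    """
    reps = menu["repetitions"]
    if activity_result["completion_rate"] >= 0.9:
        reps += 5                       # subject kept up well -> harder menu
    elif activity_result["completion_rate"] < 0.5:
        reps = max(5, reps - 5)         # menu too hard -> easier menu
    if latest_physical_info and latest_physical_info.get("pain_reported"):
        # Never increase the load beyond the current menu if pain was found
        # in the latest examination.
        reps = min(reps, menu["repetitions"])
    return {**menu, "repetitions": reps}
```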
 In the above embodiment, the physical information is the physical information of the subject 200 to whom the activity menu M1 is presented, but this is not a limitation. For example, the physical information may be the physical information of a person whom the subject 200 regards as a role model, or the physical information of a patient who has the same disease as the subject 200. That is, as long as an activity menu M1 capable of supporting the activity of the subject 200 can be presented to the subject 200, the activity menu M1 may be generated based on the physical information of a person other than the subject 200.
 In the above embodiment, the facility 1 is a medical facility that performs rehabilitation, such as a rehabilitation center, but this is not a limitation. For example, the facility 1 may be another healthcare facility such as a pharmacy. The facility 1 may also be a fitness facility, or a commercial facility such as a shopping mall. Any facility will do as long as it is equipped with the facility system 10.
 In the above embodiment, the sensor device 111 is not limited to a configuration having a camera and a depth sensor; instead of or in addition to these sensors, it may have, for example, a load sensor, an infrared sensor, a thermography device, or a radio-wave (microwave) sensor. The sensor device 111 may also include sensors worn by the subject 200, such as a gyro sensor, an acceleration sensor, an activity meter, or a heart-rate monitor. In this case, by using the sensor device 111, it is possible to measure motor abilities of the subject 200 other than the ability to maintain a posture, and to input the measured motor abilities as physical information.
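Since the sensor set can vary, one convenient representation is a record type in which each possible measurement is optional. The sketch below assumes hypothetical field names; absent sensors simply leave their fields unset.

```python
from dataclasses import dataclass


@dataclass
class PhysicalInfo:
    """One physical-information sample for a subject 200, assembled from
    whichever sensors the sensor device 111 includes (camera, depth sensor,
    load sensor, worn gyro/accelerometer, heart-rate monitor, ...).
    Field names are illustrative only.
    """
    subject_id: str
    position_xyz: tuple = None      # from camera + depth sensor
    posture_angles: tuple = None    # from depth sensor or a worn gyro
    load_kg: float = None           # from a floor load sensor
    heart_rate_bpm: int = None      # from a worn heart-rate monitor

    def available_measurements(self):
        """Names of the measurements this sample actually carries."""
        return [name for name, value in vars(self).items()
                if name != "subject_id" and value is not None]
```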
 In the above embodiment, the first communication unit 13 may be configured to communicate with the server 2 or the operation terminal 3 via a relay such as a router and the network N1. Similarly, the second communication unit 21 and the third communication unit 31 may be configured to communicate via a relay and the network N1. The first communication unit 13, the second communication unit 21, and the third communication unit 31 may all be connected to the network N1 via a mobile phone network (carrier network) provided by a telecommunications carrier. Mobile phone networks include, for example, 3G (third-generation) lines and LTE (Long Term Evolution) lines.
 In the above embodiment, the third processing unit (presentation unit) 32 may cause the display unit 34 to display not only the activity menu M1 itself but also specific instructions for having the subject 200 carry out the activity menu M1. For example, in the case of an activity menu M1 for training walking motion, the third processing unit 32 may cause the display unit 34 to display, as specific instructions, support information indicating how to move the body, the posture, the walking rhythm, and the like necessary for a correct walking motion.
 In the above embodiment, the operation terminal 3 may cooperate with exercise equipment owned by the subject 200. "Exercise equipment" in the present disclosure is, for example, equipment that applies force to at least a part of the body of the subject 200 to passively exercise at least that part of the body. In this aspect, for example, when the subject 200 uses the exercise equipment, if the activity menu M1 is downloaded from the operation terminal 3 to the exercise equipment, the activity menu M1 can be displayed on a display device of the exercise equipment or announced by voice guidance. Further, in this aspect, for example, when the exercise equipment has a function of measuring the exercise of the subject 200, the measurement result can be uploaded as the activity result of the subject 200 from the exercise equipment to the server 2 via the operation terminal 3.
 In the above embodiment, the activity menu M1 is temporarily stored in the server 2 and then transmitted to the operation terminal 3 in response to a request from the subject 200, but this is not a limitation. For example, the activity menu M1 may be transmitted, in response to a request from the subject 200, from the facility system 10 to the operation terminal 3 via the network N1 without going through the server 2. In this case, the activity menu M1 generated by the first processing unit 12 is stored in the first storage unit 14 of the facility system 10. That is, in this case, the first storage unit 14 corresponds to the second storage unit 23 of the server 2.
 (Summary)
 As described above, the activity support method according to the first aspect includes a generation step (S2), an acquisition step (S4), and a presentation step (S5). The generation step (S2) is a step of generating, by one or more processors, an activity menu (M1) for a subject (200) based on input physical information. The acquisition step (S4) is a step of acquiring, via a network, the activity menu (M1) generated in the generation step (S2). The presentation step (S5) is a step of presenting the activity menu (M1) acquired in the acquisition step (S4).
 This aspect has the advantage that an appropriate activity to be performed can easily be presented to the subject (200).
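The three steps of the first aspect can be sketched as a simple pipeline. The callbacks below are hypothetical stand-ins for the facility system 10 (generation), the network N1 (acquisition), and the operation terminal 3 (presentation); the function names are not from the disclosure.

```python
def activity_support(physical_info, generate, network_fetch, present):
    """Run the three steps of the first aspect in order:
    generation step S2, acquisition step S4, presentation step S5.
    """
    menu = generate(physical_info)    # S2: generate the activity menu M1
    fetched = network_fetch(menu)     # S4: acquire it via the network
    return present(fetched)           # S5: present it to the subject
```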
 In the activity support method according to the second aspect, in the first aspect, the activity menu (M1) is acquired in the acquisition step (S4) from a storage unit (second storage unit) (23) that stores the activity menu (M1) generated in the generation step (S2).
 This aspect has the advantage that the activity menu (M1) can easily be presented to the subject (200) at a timing desired by the subject (200), such as when the subject (200) is at home.
 In the activity support method according to the third aspect, in the first or second aspect, the physical information is further acquired in the acquisition step (S4).
 This aspect has the advantage that, as needed, the physical information can be presented to the subject (200) in addition to the activity menu (M1).
 The activity support method according to the fourth aspect, in any one of the first to third aspects, further includes a result acquisition step (S6). The result acquisition step (S6) is a step of acquiring an activity result of the subject (200) based on the activity menu (M1) generated in the generation step (S2).
 This aspect has the advantage that, by using the acquired activity result, it is possible to determine whether the subject (200) has correctly executed the activity menu (M1).
 The activity support method according to the fifth aspect, in the fourth aspect, further includes an evaluation step (S8). The evaluation step (S8) is a step of evaluating, by one or more processors, the activity of the subject (200) based on the activity menu (M1) generated in the generation step (S2) and the activity result acquired in the result acquisition step (S6).
 This aspect has the advantage that presenting the evaluation to the subject (200) makes it possible to improve the motivation of the subject (200).
 The activity support method according to the sixth aspect, in the fourth or fifth aspect, further includes a storage step (S7). The storage step (S7) is a step of storing the activity result acquired in the result acquisition step (S6) in the storage unit (23) that stores the activity menu (M1) generated in the generation step (S2).
 This aspect has the advantage that, for example, when the activity result of the subject (200) is fed back to the facility (1), the activity result can easily be fed back to the facility (1) at any timing.
 The activity support method according to the seventh aspect, in any one of the fourth to sixth aspects, further includes an update step (S9). The update step (S9) is a step of updating, by one or more processors, the activity menu (M1) based on the activity result acquired in the result acquisition step (S6).
 This aspect has the advantage that, by feeding back the activity result of the subject (200) and updating the activity menu (M1), a more suitable activity menu (M1) can easily be presented to the subject (200).
 In the activity support method according to the eighth aspect, in any one of the first to seventh aspects, the physical information input in the generation step (S2) includes the following. That is, the physical information includes at least one of information on the position of the subject (200) and information on the posture of the subject (200), detected as the subject (200) executes an instruction menu in a detection space.
 This aspect has the advantage that the physical information required in the generation step (S2) is input simply by the subject (200) executing the instruction menu.
 In the activity support method according to the ninth aspect, in the eighth aspect, the instruction menu is presented to the subject (200) by the following method. That is, this method acquires first information, acquires second information, and outputs instruction information. The first information is information on a motion of the subject (200). The second information is information for proposing, as the instruction menu, one menu selected from among a plurality of rehabilitation menus. The instruction information is information representing the instruction menu selected based on at least the first information and the second information.
 According to this aspect, the instruction menu is selected based on the first information on the motion of the subject (200) and the second information for proposing, as the instruction menu, one menu selected from among the plurality of rehabilitation menus. Since instruction information representing the selected instruction menu is then output, the instruction menu can be presented to the subject (200). In this way, which rehabilitation menu each subject (200) should perform can be determined automatically based on the motion of the subject (200). That is, a rehabilitation menu suited to the subject (200) can be proposed automatically without the intervention of a therapist or the like who assists the rehabilitation of the subject (200). Therefore, this rehabilitation support method has the advantage that the burden on the therapist or the like who assists the rehabilitation of the subject (200) can be reduced.
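A menu selection driven by the first and second information could be sketched as below. The scoring rule (pick the proposed menu whose difficulty is closest to the observed motion score) and all field names are invented for illustration; the aspect only requires that the selection use at least both pieces of information.

```python
def select_instruction_menu(first_info, second_info):
    """Choose the instruction menu from the subject's motion (first
    information) and the candidate rehabilitation menus proposed by the
    second information. The matching rule is a hypothetical example.
    """
    motion_score = first_info["motion_score"]
    # Closest-difficulty match between observed ability and proposed menus.
    return min(second_info["candidate_menus"],
               key=lambda menu: abs(menu["difficulty"] - motion_score))
```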
 In the activity support method according to the tenth aspect, in the ninth aspect, there are a plurality of subjects (200), and the method includes the following. That is, this method outputs the instruction information so that the instruction menu is performed by the plurality of subjects (200) simultaneously. This method evaluates the achievement level of the instruction menu for each subject (200) based on motion information on the motion of each of the plurality of subjects (200) operating according to the instruction menu. Furthermore, this method adjusts, for each subject (200), the magnitude of the load imposed when performing the instruction menu, according to the evaluation result of the achievement level.
 According to this aspect, the instruction menu can be performed by a plurality of subjects (200) simultaneously. Moreover, since the achievement level of the instruction menu is evaluated for each subject (200) based on motion information on the motion of each of the plurality of subjects (200) operating according to the instruction menu, there is no need to assign one therapist or the like to each subject (200). Therefore, an increase in the burden on therapists or the like is suppressed when the number of subjects (200) increases. Furthermore, the magnitude of the load imposed when performing the instruction menu is adjusted for each subject (200) according to the evaluation result of the achievement level. Therefore, even without one therapist or the like per subject (200), rehabilitation that is effective for each individual subject (200) can be practiced. This aspect therefore has the advantage that the burden on therapists or the like can be reduced when a plurality of subjects (200) perform rehabilitation simultaneously.
 A program according to the eleventh aspect is a program for causing one or more processors to execute the activity support method according to any one of the first to tenth aspects.
 This aspect has the advantage that an appropriate activity to be performed can easily be presented to the subject (200).
 The activity support system (100) according to the twelfth aspect includes a generation unit (first processing unit) (12), an acquisition unit (third communication unit) (31), and a presentation unit (32). The generation unit (12) generates, by one or more processors, an activity menu (M1) for a subject (200) based on input physical information. The acquisition unit (31) acquires the activity menu (M1) generated by the generation unit (12) via the network (N1). The presentation unit (32) presents the activity menu (M1) acquired by the acquisition unit (31).
 This aspect has the advantage that an appropriate activity to be performed can easily be presented to the subject (200).
 The methods according to the second to tenth aspects are not essential to the activity support method and can be omitted as appropriate.
 100 activity support system
 12 first processing unit (generation unit)
 23 second storage unit (storage unit)
 31 third communication unit (acquisition unit)
 32 third processing unit (presentation unit)
 200, 200A, 200B subject
 N1 network
 S2 generation step
 S4 acquisition step
 S5 presentation step
 S6 result acquisition step
 S7 storage step
 S8 evaluation step
 S9 update step

Claims (12)

  1.  An activity support method comprising:
     a generation step of generating, by one or more processors, an activity menu for a subject based on input physical information;
     an acquisition step of acquiring, via a network, the activity menu generated in the generation step; and
     a presentation step of presenting the activity menu acquired in the acquisition step.
  2.  The activity support method according to claim 1, wherein, in the acquisition step, the activity menu is acquired from a storage unit that stores the activity menu generated in the generation step.
  3.  The activity support method according to claim 1 or 2, wherein the physical information is further acquired in the acquisition step.
  4.  The activity support method according to any one of claims 1 to 3, further comprising a result acquisition step of acquiring an activity result of the subject based on the activity menu generated in the generation step.
  5.  The activity support method according to claim 4, further comprising an evaluation step of evaluating, by one or more processors, an activity of the subject based on the activity menu generated in the generation step and the activity result acquired in the result acquisition step.
  6.  The activity support method according to claim 4 or 5, further comprising a storage step of storing the activity result acquired in the result acquisition step in a storage unit that stores the activity menu generated in the generation step.
  7.  The activity support method according to any one of claims 4 to 6, further comprising an update step of updating, by one or more processors, the activity menu based on the activity result acquired in the result acquisition step.
  8.  The activity support method according to any one of claims 1 to 7, wherein the physical information input in the generation step includes at least one of information on a position of the subject and information on a posture of the subject, detected when the subject executes an instruction menu within a detection space.
  9.  The activity support method according to claim 8, wherein the instruction menu is presented to the subject by:
      acquiring first information on a motion of the subject;
      acquiring second information for proposing, as the instruction menu, one menu selected from among a plurality of rehabilitation menus; and
      outputting instruction information representing the instruction menu selected based on at least the first information and the second information.
  10.  The activity support method according to claim 9, wherein the subject comprises a plurality of subjects, and the method further comprises:
       outputting the instruction information so that the plurality of subjects carry out the instruction menu simultaneously;
       evaluating, for each subject, a degree of achievement of the instruction menu based on motion information on a motion of each of the plurality of subjects operating in accordance with the instruction menu; and
       adjusting, for each subject, a magnitude of a load applied in carrying out the instruction menu according to an evaluation result of the degree of achievement.
  11.  A program for causing one or more processors to execute the activity support method according to any one of claims 1 to 10.
  12.  An activity support system comprising:
       a generation unit configured to generate, by one or more processors, an activity menu for a subject based on input physical information;
       an acquisition unit configured to acquire, via a network, the activity menu generated by the generation unit; and
       a presentation unit configured to present the activity menu acquired by the acquisition unit.
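The per-subject evaluation and load adjustment of claim 10 can be sketched as follows. This is a hypothetical illustration only: the achievement metric (completed/target repetitions), the thresholds, and the adjustment factors are assumptions made for the example, not values from the application.

```python
# Hypothetical sketch of claim 10: the same instruction menu is issued to
# several subjects simultaneously, each subject's degree of achievement is
# evaluated from their motion information, and the load is adjusted per
# subject. Metric, thresholds, and factors are assumptions.

def achievement(target_reps: int, completed_reps: int) -> float:
    """Degree of achievement as a completed/target ratio, capped at 1.0."""
    return min(completed_reps / target_reps, 1.0)

def adjust_load(current_reps: int, score: float) -> int:
    # High achievement -> increase the load; low achievement -> reduce it.
    if score >= 0.9:
        return int(current_reps * 1.2)
    if score < 0.5:
        return max(1, int(current_reps * 0.8))
    return current_reps

# Completed repetitions per subject (e.g. subjects 200A and 200B) against
# a shared instruction menu of 10 repetitions.
completed = {"200A": 10, "200B": 4}
target = 10
new_loads = {sid: adjust_load(target, achievement(target, done))
             for sid, done in completed.items()}
print(new_loads)  # 200A's load is raised, 200B's is lowered
```

Adjusting the load individually, while keeping the instruction menu shared, matches the claimed combination of simultaneous group execution with per-subject tuning.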
PCT/JP2018/027797 2017-07-25 2018-07-25 Activity assistant method, program, and activity assistant system WO2019022102A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2017-144008 2017-07-25
JP2017-144007 2017-07-25
JP2017144008A JP2019024580A (en) 2017-07-25 2017-07-25 Rehabilitation support system, rehabilitation support method, and program
JP2017144007A JP2019024579A (en) 2017-07-25 2017-07-25 Rehabilitation support system, rehabilitation support method, and program
JP2017-184111 2017-09-25
JP2017184111A JP2019058285A (en) 2017-09-25 2017-09-25 Activity support method, program, and activity support system

Publications (1)

Publication Number Publication Date
WO2019022102A1 true WO2019022102A1 (en) 2019-01-31

Family

ID=65040712

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/027797 WO2019022102A1 (en) 2017-07-25 2018-07-25 Activity assistant method, program, and activity assistant system

Country Status (2)

Country Link
TW (1) TW201909058A (en)
WO (1) WO2019022102A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021246423A1 (en) * 2020-06-01 2021-12-09 株式会社Arblet Information processing system, server, information processing method, and program
JP2021190129A (en) * 2020-06-01 2021-12-13 株式会社Arblet Information processing system, server, information processing method, and program
WO2022269930A1 (en) * 2021-06-25 2022-12-29 株式会社Cureapp Information processing device, information processing method, and information processing program
WO2023127870A1 (en) * 2021-12-28 2023-07-06 株式会社Sportip Care support device, care support program, and care support method
JP7344622B1 (en) * 2022-05-09 2023-09-14 株式会社Utヘルステック A telemedicine support system to assist in the selection and implementation of rehabilitation programs for orthopedic patients.
WO2023219056A1 (en) * 2022-05-09 2023-11-16 株式会社Utヘルステック Telemedicine assisstance system for assisting selection and execution of rehabilitation program for orthopedic patients

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005018653A (en) * 2003-06-27 2005-01-20 Nissan Motor Co Ltd Rehabilitation menu presentation device and nursing service support system using the same
JP2006302122A (en) * 2005-04-22 2006-11-02 Nippon Telegr & Teleph Corp <Ntt> Exercise support system, user terminal therefor and exercise support program
WO2012168999A1 (en) * 2011-06-06 2012-12-13 システム・インスツルメンツ株式会社 Training device
JP2014104139A (en) * 2012-11-27 2014-06-09 Toshiba Corp Rehabilitation information processing system, information processor, and information management device
WO2015019477A1 (en) * 2013-08-08 2015-02-12 株式会社日立製作所 Rehab system and control method therefor
JP2017060572A (en) * 2015-09-24 2017-03-30 パナソニックIpマネジメント株式会社 Function training device


Also Published As

Publication number Publication date
TW201909058A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
WO2019022102A1 (en) Activity assistant method, program, and activity assistant system
AU2022201300B2 (en) Augmented reality therapeutic movement display and gesture analyzer
US20210272376A1 (en) Virtual or augmented reality rehabilitation
JP6871379B2 (en) Treatment and / or Exercise Guidance Process Management Systems, Programs, Computer Devices, and Methods for Treatment and / or Exercise Guidance Process Management
JP4594157B2 (en) Exercise support system, user terminal device thereof, and exercise support program
JP2019058285A (en) Activity support method, program, and activity support system
JP7373788B2 (en) Rehabilitation support device, rehabilitation support system, and rehabilitation support method
KR20190113265A (en) Augmented reality display apparatus for health care and health care system using the same
JP2021049208A (en) Exercise evaluation system
JP2021049319A (en) Rehabilitation operation evaluation method and rehabilitation operation evaluation device
WO2022034771A1 (en) Program, method, and information processing device
JP2019024579A (en) Rehabilitation support system, rehabilitation support method, and program
JP2020081413A (en) Motion detection device, motion detection system, motion detection method and program
JP4840509B2 (en) Passive motion system
JPWO2019003429A1 (en) Human body model display system, human body model display method, communication terminal device, and computer program
JP2005050122A (en) Renovation simulation system
JP2019024580A (en) Rehabilitation support system, rehabilitation support method, and program
Jiménez et al. Monitoring of motor function in the rehabilitation room
JP6320702B2 (en) Medical information processing apparatus, program and system
JP7150387B1 (en) Programs, methods and electronics
JP7382581B2 (en) Daily life activity status determination system, daily life activity status determination method, program, daily life activity status determination device, and daily life activity status determination device
JP6995737B2 (en) Support device
JP7397282B2 (en) Stationary determination system and computer program
CA3155745A1 (en) Exercise support device, exercise support system, exercise support method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18838986

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18838986

Country of ref document: EP

Kind code of ref document: A1