WO2023007930A1 - Determination method, determination device, and determination system - Google Patents
Determination method, determination device, and determination system
- Publication number
- WO2023007930A1 (PCT/JP2022/021370)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subject
- dimensional
- unit
- skeletal
- determination
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H1/00—Apparatus for passive exercising; Vibrating apparatus; Chiropractic devices, e.g. body impacting devices, external devices for briefly extending or aligning unbroken bones
- A61H1/02—Stretching or bending or torsioning apparatus for exercising
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Definitions
- The present invention relates to a determination method, a determination device, and a determination system.
- Nursing facilities have traditionally provided training (so-called rehabilitation) services for the elderly so that they can live independently.
- In rehabilitation, nursing facility staff who are qualified to create a training plan visit the elderly person's home, determine the state of the person's physical function and activities of daily living (ADL), and create a training plan according to that condition.
- Rehabilitation is carried out according to the prepared training plan.
- Patent Document 1 discloses a motion information processing device that, in the evaluation of rehabilitation, acquires motion information of a subject performing a predetermined motion, analyzes the acquired motion information, and displays information based on analysis values regarding the motion of a specified part.
- With the technique of Patent Document 1, when creating a rehabilitation training plan, the subject's rehabilitation cannot be accurately evaluated unless the subject's state of activities of daily living is accurately determined.
- In view of this, the present invention provides a determination method, a determination device, and a determination system that can easily and accurately determine the state of a subject's activities of daily living.
- A determination method according to one aspect of the present invention is a determination method executed by a computer, and includes an instruction step of instructing a subject to perform a specific action; an imaging step of capturing an image including the subject performing the specific action; an estimation step of estimating a skeletal model of the subject in the image based on the image; a setting step of setting a plurality of three-dimensional regions around the skeletal model based on the positions of a plurality of skeletal points in the estimated skeletal model; a specifying step of specifying, among the plurality of set three-dimensional regions, the three-dimensional region that includes the skeletal point of the subject's wrist during the specific motion; and a determination step of determining the state of the subject's daily living activities based on the specified three-dimensional region.
- A determination device according to one aspect of the present invention includes an instruction unit that instructs a subject to perform a specific action; a camera that captures an image including the subject performing the specific action; an estimation unit that estimates a skeletal model of the subject in the image based on the image; a setting unit that sets a plurality of three-dimensional regions around the skeletal model based on the positions of a plurality of skeletal points in the estimated skeletal model; a specifying unit that specifies, among the plurality of set three-dimensional regions, the three-dimensional region that includes the skeletal point of the subject's wrist during the specific motion; and a determination unit that determines the state of the subject's daily living activities based on the specified three-dimensional region.
- A determination system according to one aspect of the present invention is a system comprising an information terminal and a server device connected to the information terminal via communication. The information terminal includes a communication unit that communicates with the server device; an instruction unit that instructs a subject to perform a specific action; and a camera that captures an image including the subject performing the specific action. The server device includes an estimation unit that estimates a skeletal model of the subject in the image based on the image captured by the camera; a setting unit that sets a plurality of three-dimensional regions around the skeletal model based on the positions of a plurality of skeletal points in the estimated skeletal model; a specifying unit that specifies, among the plurality of set three-dimensional regions, the three-dimensional region that includes the skeletal point of the subject's wrist during the specific motion; and a determination unit that determines the state of the subject's daily living activities based on the specified three-dimensional region.
- According to the present invention, a determination method, a determination device, and a determination system that can easily and accurately determine the state of a subject's daily living activities are realized.
- FIG. 1 is a block diagram showing an example of a functional configuration of a determination system according to an embodiment.
- FIG. 2 is a flow chart showing a first example of operation of the determination system according to the embodiment.
- FIG. 3 is a flow chart showing a second example of the operation of the determination system according to the embodiment.
- FIG. 4 is a diagram conceptually showing estimation of a two-dimensional skeleton model of a subject.
- FIG. 5 is a diagram conceptually showing estimation of a three-dimensional skeleton model.
- FIG. 6 is a diagram conceptually showing setting of a plurality of three-dimensional regions.
- FIG. 7 is a diagram conceptually showing identification of a three-dimensional region in which the wrist is located.
- FIG. 8 is a diagram showing an example of a database.
- FIG. 9 is a diagram showing an example of presentation information.
- FIG. 10 is a diagram showing an example of presentation information.
- FIG. 11 is a diagram showing an example of presentation information.
- FIG. 12 is a diagram showing another example of presentation information.
- FIG. 13 is a diagram showing another example of presentation information.
- Each figure is a schematic diagram and is not necessarily illustrated strictly. Moreover, in each figure, substantially the same configurations are given the same reference signs, and redundant description may be omitted or simplified.
- FIG. 1 is a block diagram showing an example of the functional configuration of the determination system according to the embodiment.
- The determination system 10 sets a plurality of three-dimensional regions around a skeletal model estimated based on an image of a subject performing a specific action, specifies the three-dimensional region that includes the skeletal point of the subject's wrist during the specific action among the set three-dimensional regions, and determines the state of the subject's daily living activities based on the specified three-dimensional region. The determination method will be described later.
- The target person is a person whose physical function, that is, the ability to move the body, has decreased due to disease, injury, aging, or disability.
- The user is, for example, a physical therapist, an occupational therapist, a nurse, or another rehabilitation professional.
- The determination system 10 includes, for example, a camera 20, an information terminal 30, and a server device 40.
- The camera 20 captures an image (for example, a moving image composed of a plurality of images) including the target person performing a specific action as a subject.
- The camera 20 may be a camera using a CMOS (Complementary Metal Oxide Semiconductor) image sensor, or a camera using a CCD (Charge Coupled Device) image sensor.
- The information terminal 30 instructs the subject to perform a specific action, acquires the image (more specifically, image data) of the subject captured by the camera 20, and transmits the acquired image to the server device 40.
- The information terminal 30 is, for example, a portable computer device such as a smartphone or a tablet terminal used by the user, but may be a stationary computer device such as a personal computer.
- Specifically, the information terminal 30 includes a first communication unit 31a, a second communication unit 31b, a control unit 32, a storage unit 33, a reception unit 34, a presentation unit 35, and an instruction unit 36.
- The first communication unit 31a is a communication circuit (in other words, a communication module) for the information terminal 30 to communicate with the camera 20 via the local communication network.
- The first communication unit 31a is, for example, a wireless communication circuit that performs wireless communication, but may be a wired communication circuit that performs wired communication.
- The communication standard for communication performed by the first communication unit 31a is not particularly limited.
- The first communication unit 31a may communicate with the camera 20 via a router (not shown) using Wi-Fi (registered trademark) or the like, or may communicate directly with the camera 20 using Bluetooth (registered trademark).
- The second communication unit 31b is a communication circuit (in other words, a communication module) for the information terminal 30 to communicate with the server device 40 via a wide area communication network 5 such as the Internet.
- The second communication unit 31b is, for example, a wireless communication circuit that performs wireless communication, but may be a wired communication circuit that performs wired communication. Note that the communication standard for communication performed by the second communication unit 31b is not particularly limited.
- The control unit 32 performs various information processing related to the information terminal 30 based on operation inputs received by the reception unit 34.
- The control unit 32 is implemented by, for example, a microcomputer, but may be implemented by a processor.
- The storage unit 33 is a storage device that stores the dedicated application program and the like executed by the control unit 32.
- The storage unit 33 is implemented by, for example, a semiconductor memory.
- The reception unit 34 is an input interface that receives operation inputs from the user of the information terminal 30 (for example, a rehabilitation professional). For example, the reception unit 34 accepts the user's input operations for transmitting to the server device 40 conditions such as a weighting condition for the determination by the determination unit 42e, an extraction condition for the determination results, or a presentation method for the presentation unit 35, and for instructing the start or end of measurement.
- The reception unit 34 is specifically realized by a touch panel display or the like. When the reception unit 34 is a touch panel display, the touch panel display functions as both the presentation unit 35 and the reception unit 34.
- The reception unit 34 is not limited to a touch panel display, and may be, for example, a keyboard, a pointing device (for example, a touch pen or a mouse), or hardware buttons. Further, the reception unit 34 may be a microphone when receiving input by voice, or a camera when receiving input by gesture.
- The presentation unit 35 presents the determination result of the state of daily living activities to the user.
- The presentation unit 35 also presents to the user information regarding the state of the subject's daily living activities extracted based on the user's instruction.
- The presentation unit 35 is, for example, at least one of a display panel such as a liquid crystal panel or an organic EL (Electro Luminescence) panel, a speaker, and earphones.
- The presentation unit 35 may be a display panel together with a speaker or earphones, or a display panel, a speaker, and earphones.
- The instruction unit 36 instructs the subject to perform a specific action.
- The instruction unit 36 may instruct the subject using at least one of voice, text, and video.
- The instruction unit 36 is, for example, at least one of a display panel such as a liquid crystal panel or an organic EL panel, a speaker, and earphones.
- The instruction unit 36 may be a display panel together with a speaker or earphones, or a display panel, a speaker, and earphones.
- The instruction unit 36 may function as the presentation unit 35 depending on the mode of instruction, and vice versa; that is, the instruction unit 36 may be integrated with the presentation unit 35.
- The server device 40 acquires the image transmitted from the information terminal 30, estimates a skeletal model in the acquired image, and determines the state of the subject's daily living activities based on the estimated skeletal model.
- The server device 40 includes a communication unit 41, an information processing unit 42, and a storage unit 43.
- The communication unit 41 is a communication circuit (in other words, a communication module) for the server device 40 to communicate with the information terminal 30.
- The communication unit 41 may include a communication circuit (communication module) for communicating via the wide area communication network 5 and a communication circuit (communication module) for communicating via the local communication network.
- The communication unit 41 is, for example, a wireless communication circuit that performs wireless communication. Note that the communication standard for communication performed by the communication unit 41 is not particularly limited.
- The information processing unit 42 performs various types of information processing regarding the server device 40.
- The information processing unit 42 is implemented by, for example, a microcomputer, but may be implemented by a processor.
- The functions of the information processing unit 42 are realized by, for example, a microcomputer, processor, or the like constituting the information processing unit 42 executing a computer program stored in the storage unit 43.
- The information processing unit 42 includes an acquisition unit 42a, an estimation unit 42b, a setting unit 42c, a specifying unit 42d, a determination unit 42e, and an output unit 42f.
- The acquisition unit 42a acquires the image (for example, a moving image composed of a plurality of images) transmitted from the information terminal 30 and the user's operation input received by the reception unit 34.
- The estimation unit 42b estimates the skeletal model of the subject in the image based on the image acquired by the acquisition unit 42a. More specifically, based on a moving image including a plurality of images, the estimation unit 42b estimates a skeletal model for each of the plurality of images forming the moving image. For example, the estimation unit 42b estimates a two-dimensional skeleton model of the subject based on the image and, using the learned model 44, a trained machine learning model, estimates a three-dimensional skeletal model of the subject based on the estimated two-dimensional skeleton model.
- The setting unit 42c sets a plurality of three-dimensional regions around the skeletal model based on the positions of the skeletal points in the skeletal model estimated by the estimation unit 42b. More specifically, the setting unit 42c sets the plurality of three-dimensional regions based on the three-dimensional skeleton model, for example with one of the skeletal points of the skeletal model as a base point. Details of estimating the two-dimensional and three-dimensional skeleton models and of setting the plurality of three-dimensional regions are described in [First example] of [2. Operation], and are omitted here.
- The specifying unit 42d specifies the three-dimensional region in which the skeletal point of the subject's wrist is located during the specific motion, among the plurality of three-dimensional regions set by the setting unit 42c.
- The determination unit 42e determines the state of the subject's daily living activities based on the three-dimensional region specified by the specifying unit 42d. For example, the determination unit 42e makes the determination by checking whether the three-dimensional region specified by the specifying unit 42d matches a three-dimensional region stored in the database 45, in which each specific action, the three-dimensional region in which the wrist is positioned during that action, and the daily living activity corresponding to that action are stored in association with one another.
- The output unit 42f outputs, for example, at least one of the determination result of the state of the subject's daily living activities and information on the state of the subject's daily living activities. Furthermore, the output unit 42f may output the three-dimensional skeleton model in the moving image of the subject, the feature amounts used for the determination (for example, physical function data such as the range of motion of a joint), the determination result of the subject's physical function, a rehabilitation training plan, or the like.
- The storage unit 43 is a storage device that stores the image data acquired by the acquisition unit 42a.
- The storage unit 43 also stores the computer programs executed by the information processing unit 42.
- The storage unit 43 further stores the database 45, in which each specific action, the three-dimensional region in which the wrist is positioned during that action, and the daily living activity corresponding to that action are stored in association with one another, as well as the trained machine learning model (learned model 44).
- The storage unit 43 is realized by a semiconductor memory, an HDD (Hard Disk Drive), or the like.
- Although the determination system 10 is composed of a plurality of devices in the example of FIG. 1, it may be realized as a single device.
- FIG. 2 is a flow chart showing a first example of operation of the determination system 10 according to the embodiment.
- FIG. 4 is a diagram conceptually showing estimation of a two-dimensional skeleton model of the subject.
- FIG. 5 is a diagram conceptually showing estimation of a three-dimensional skeleton model.
- FIG. 6 is a diagram conceptually showing setting of a plurality of three-dimensional regions.
- The determination system 10 acquires an image captured by the camera 20 and identifies the subject in the acquired image.
- A known image analysis technique is used to identify the subject in the image.
- The instruction unit 36 instructs the subject to perform a specific action (S11).
- The camera 20 captures an image including the subject performing the specific action (S12), and transmits the captured image (hereinafter also referred to as image data) to the information terminal 30 (not shown).
- The camera 20 may capture a moving image composed of a plurality of images.
- The information terminal 30 acquires the image data transmitted from the camera 20 via the first communication unit 31a, and transmits the acquired data to the server device 40 via the second communication unit 31b (both steps not shown). At this time, the information terminal 30 may anonymize the image data before transmitting it to the server device 40, which protects the subject's privacy.
- The estimation unit 42b of the server device 40 estimates the skeletal model of the subject in the image based on the image (image data) acquired by the acquisition unit 42a (S13). Note that when the acquisition unit 42a acquires a moving image composed of a plurality of images, the estimation unit 42b may estimate a skeletal model for each of the plurality of images constituting the moving image.
- For example, the estimation unit 42b estimates a two-dimensional skeleton model of the subject based on the image and, based on the estimated two-dimensional skeleton model, may use the learned model 44, a trained machine learning model, to estimate the subject's three-dimensional coordinate data (the so-called three-dimensional skeleton model).
- FIG. 4 is a diagram conceptually showing the estimation of the subject's two-dimensional skeleton model.
- The two-dimensional skeleton model is a model in which the positions of the joints 100 of the subject 1 appearing in the image (the circles in the drawing) are connected by links (the lines in the drawing).
- Existing pose and skeleton estimation algorithms are used to estimate the two-dimensional skeleton model.
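To make the structure concrete, the following sketch represents such a two-dimensional skeleton model as joint coordinates connected by links. The joint names and link pairs are illustrative assumptions; the patent does not specify a particular keypoint set, and an existing pose estimation algorithm would supply the coordinates in practice.

```python
# Minimal sketch of a 2D skeleton model: joints (x, y) connected by links.
# The joint set and link topology below are illustrative assumptions,
# not the specific layout used in the patent.

JOINTS = ["head", "neck", "r_shoulder", "r_elbow", "r_wrist",
          "l_shoulder", "l_elbow", "l_wrist", "waist"]

# Each link connects two joints by name, mirroring the "circles joined
# by lines" depiction in FIG. 4.
LINKS = [("head", "neck"), ("neck", "r_shoulder"), ("r_shoulder", "r_elbow"),
         ("r_elbow", "r_wrist"), ("neck", "l_shoulder"),
         ("l_shoulder", "l_elbow"), ("l_elbow", "l_wrist"), ("neck", "waist")]

def make_skeleton_2d(coords):
    """Build a 2D skeleton model from a {joint_name: (x, y)} mapping."""
    missing = [j for j in JOINTS if j not in coords]
    if missing:
        raise ValueError(f"missing joints: {missing}")
    return {"joints": dict(coords), "links": list(LINKS)}

# Example: dummy joint coordinates standing in for a pose estimator's output.
pose = {name: (float(i), float(i % 3)) for i, name in enumerate(JOINTS)}
model = make_skeleton_2d(pose)
```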
- FIG. 5 is a diagram conceptually showing estimation of a three-dimensional skeleton model.
- The learned model 44 (the learning model in the figure) is a discriminator constructed in advance by machine learning, using two-dimensional skeleton models whose joints have known three-dimensional coordinate data as learning data and the three-dimensional coordinate data as teacher data.
- The trained model 44 thus receives a two-dimensional skeleton model as input and outputs three-dimensional coordinate data, that is, a three-dimensional skeleton model.
- Alternatively, the estimation unit 42b may estimate the three-dimensional coordinate data (three-dimensional skeleton model) directly from the image acquired by the acquisition unit 42a.
- In that case, a trained model indicating the relationship between the subject's image and the three-dimensional coordinate data may be used.
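The lifting step described above, in which a trained model takes a two-dimensional skeleton as input and outputs three-dimensional coordinate data, can be sketched as a small fully connected network. The layer sizes, the number of skeletal points, and the untrained random weights below are illustrative assumptions; the patent does not disclose the architecture of the learned model 44.

```python
import numpy as np

# Sketch of a 2D -> 3D "lifting" network: flattens J 2D keypoints and
# regresses J 3D keypoints, as the learned model 44 is described to do.
# Weights are random (untrained); this illustrates shapes, not accuracy.

J = 17  # assumed number of skeletal points

rng = np.random.default_rng(0)
W1 = rng.normal(size=(J * 2, 128)) * 0.01  # input -> hidden
b1 = np.zeros(128)
W2 = rng.normal(size=(128, J * 3)) * 0.01  # hidden -> output
b2 = np.zeros(J * 3)

def lift_2d_to_3d(keypoints_2d):
    """keypoints_2d: (J, 2) array -> (J, 3) array of 3D coordinates."""
    x = keypoints_2d.reshape(-1)        # flatten to (J*2,)
    h = np.maximum(x @ W1 + b1, 0.0)    # hidden layer with ReLU
    y = h @ W2 + b2                     # linear output
    return y.reshape(J, 3)

skeleton_2d = rng.uniform(0.0, 1.0, size=(J, 2))
skeleton_3d = lift_2d_to_3d(skeleton_2d)
```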
- Next, the setting unit 42c sets a plurality of three-dimensional regions around the skeletal model based on the positions of the skeletal points in the skeletal model estimated by the estimation unit 42b in step S13. More specifically, the setting unit 42c sets the plurality of three-dimensional regions based on the three-dimensional skeleton model, for example with one of the skeletal points of the skeletal model as a base point. The setting of the plurality of three-dimensional regions is specifically described below.
- FIG. 6 is a diagram conceptually showing setting of a plurality of three-dimensional areas.
- As shown in (b), (d), and (f) of FIG. 6, the plurality of three-dimensional regions include a back region A3 on the subject's back side (see (f) of FIG. 6) and a front region A2 on the subject's front side (see (d) of FIG. 6), provided adjacent to each other across a first reference axis Z1, a longitudinal axis passing through the base point, and a frontmost region A1 (see (b) of FIG. 6) provided adjacent to the front region A2 and further toward the subject's front side.
- In addition, each of these regions includes a left region B2 and a right region B1 of the subject, provided adjacent to each other across a second reference axis Z2, a vertical axis passing through the base point, and each of the left region B2 and the right region B1 includes a predetermined number of regions divided vertically from the subject's head to the legs.
- For example, each of the left region B2 and the right region B1 of the frontmost region A1 includes three regions divided along the vertical direction from the subject's head to the legs.
- The predetermined number is the same for the right region B1 and the left region B2, but may differ among the back region A3, the front region A2, and the frontmost region A1.
- For example, the first reference axis Z1 may be set with the skeletal points of the subject's neck and waist as base points, and the second reference axis Z2 may be set with the skeletal points of the subject's neck and elbows as base points.
- For example, as shown in (b), (d), and (f) of FIG. 6, the setting unit 42c sets a first distance L1, which is the distance from the top of the hand to the tip of the hand, as the width W1 of each of the back region A3, the front region A2, and the frontmost region A1.
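The region layout described above can be sketched as labeled axis-aligned boxes built around a base skeletal point: three front-to-back layers (A3, A2, A1) of depth L1, each split into left and right sides about the second reference axis and divided into a fixed number of vertical bands. The coordinate conventions, the box extents, and the label format are illustrative assumptions.

```python
# Sketch: build a grid of labeled 3D boxes around a base skeletal point.
# Axis conventions (assumed): x = left/right, y = vertical, z = front/back.

def build_regions(base, l1, height, n_bands=3):
    """Return {label: (min_corner, max_corner)} boxes around `base`.

    base   : (x, y, z) of the base skeletal point (e.g. the neck)
    l1     : depth of each front/back layer (the first distance L1)
    height : total vertical extent covered, head to legs
    """
    bx, by, bz = base
    band_h = height / n_bands
    # Front-back layers: A3 (back), A2 (front), A1 (frontmost).
    layers = {"A3": (bz - l1, bz), "A2": (bz, bz + l1),
              "A1": (bz + l1, bz + 2 * l1)}
    # Left/right split about the second reference axis (extent assumed = l1).
    sides = {"B1": (bx - l1, bx), "B2": (bx, bx + l1)}
    regions = {}
    for lname, (z0, z1) in layers.items():
        for sname, (x0, x1) in sides.items():
            for band in range(n_bands):
                y1 = by - band * band_h   # bands counted downward from the head
                y0 = y1 - band_h
                regions[f"{lname}-{sname}-{band + 1}"] = ((x0, y0, z0), (x1, y1, z1))
    return regions

regions = build_regions(base=(0.0, 1.6, 0.0), l1=0.25, height=1.6)
```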
- FIG. 7 is a diagram conceptually showing identification of the three-dimensional region in which the wrist is located. Based on the three-dimensional coordinate data of the subject in the image (the so-called three-dimensional skeleton model), the specifying unit 42d specifies in which of the plurality of three-dimensional regions the coordinates of the skeletal point of the subject's wrist are located (in other words, which region includes them) (S15). The specified three-dimensional region is the shaded region shown in FIG. 7.
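Given labeled boxes of this kind, the specifying step reduces to a point-in-box test on the wrist's three-dimensional coordinates. A minimal sketch, with two hypothetical regions whose names echo the D2-2/G2-2 labels but whose extents are assumed:

```python
def contains(box, point):
    """True if `point` lies inside the axis-aligned `box` ((min), (max))."""
    (x0, y0, z0), (x1, y1, z1) = box
    x, y, z = point
    return x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def locate(regions, wrist):
    """Return the label of the region containing the wrist, or None."""
    for label, box in regions.items():
        if contains(box, wrist):
            return label
    return None

# Two hypothetical regions near head height (extents are assumptions).
regions = {
    "D2-2": ((0.0, 1.4, 0.0), (0.3, 1.8, 0.3)),    # right-wrist region
    "G2-2": ((-0.3, 1.4, 0.0), (0.0, 1.8, 0.3)),   # left-wrist region
}
right_wrist = (0.15, 1.6, 0.1)
region = locate(regions, right_wrist)  # -> "D2-2"
```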
- Next, the determination unit 42e determines the state of the subject's daily living activities based on the three-dimensional region specified by the specifying unit 42d in step S15 (S16). For example, the determination unit 42e may make this determination by checking whether the three-dimensional region specified by the specifying unit 42d matches the three-dimensional region stored in the database 45 in association with the specific motion; the database 45 stores each specific motion, the three-dimensional region in which the wrist is positioned during that motion, and the daily living activity corresponding to that motion in association with one another.
- FIG. 8 is a diagram showing an example of the database 45.
- The database 45 stores each specific motion, the three-dimensional region in which the subject's wrist is positioned during that motion, and the corresponding activities of daily living (ADL) in association with one another.
- For example, when the specific action is a banzai (raising both arms) and the three-dimensional regions in which the subject's wrists are positioned during the action are D2-2 (the region in which the right wrist is positioned) and G2-2 (the region in which the left wrist is positioned), it is determined that daily living activities such as eating, grooming (face washing, shaving, makeup), and washing are possible.
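The determination against the database 45 can be sketched as a table lookup: each specific action maps to the expected wrist regions and to the daily living activities judged possible when the specified regions match. The single entry below paraphrases the banzai example; its structure, not its contents, is the point.

```python
# Sketch of the database 45: specific action -> (expected wrist regions,
# activities of daily living judged possible when the regions match).
DATABASE = {
    "banzai": {
        "regions": {"right": "D2-2", "left": "G2-2"},
        "adl": ["eating", "grooming", "washing"],
    },
}

def determine_adl(action, right_region, left_region):
    """Return the ADLs judged possible, or [] if the regions do not match."""
    entry = DATABASE.get(action)
    if entry is None:
        return []
    expected = entry["regions"]
    if right_region == expected["right"] and left_region == expected["left"]:
        return list(entry["adl"])
    return []

print(determine_adl("banzai", "D2-2", "G2-2"))  # -> ['eating', 'grooming', 'washing']
```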
- Note that the determination system 10 may perform steps S11 to S16 as one loop each time the subject performs one of a plurality of specific actions, or may perform steps S11 and S12 multiple times and, after the subject completes all the specific actions, perform steps S13 to S16 for each of the plurality of specific actions.
- As described above, the determination system 10 estimates a skeletal model in an image including the subject performing a specific action, sets a plurality of three-dimensional regions around the estimated skeletal model, and, by specifying in which of the plurality of three-dimensional regions the subject's wrist is located, can determine the state of the subject's daily living activities simply and accurately.
- The subject may also be instructed to stand up from a sitting posture.
- The determination system 10 may determine whether or not the subject can stand up based on the image of the subject captured by the camera 20, or based on the user's instruction.
- The determination may be made by the determination unit 42e.
- An image-based determination may be made, for example, by estimating a skeletal model of the subject in the image.
- The user's instruction may be, for example, a gesture, a voice, or an input made by operating a touch panel or a remote control button.
- The gesture indicating that the action is not possible may be shaking one hand from side to side, shaking the head from side to side, or crossing both arms in the shape of a cross; the gesture indicating that the action is possible may be nodding, a thumbs-up, or making a circle with both hands.
- As the voice input, the user may utter a short word such as "no" or "yes".
- In this way, the state of the subject's daily living activities can be determined efficiently and accurately.
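The user-input variants above amount to mapping a recognized gesture or short utterance to a yes-or-no judgment. A minimal sketch; the label strings are assumptions about what a recognizer might emit:

```python
# Map recognized gestures / short utterances to "action possible?".
# The label strings are illustrative assumptions, not a defined vocabulary.
YES = {"nod", "thumbs_up", "circle_both_hands", "yes"}
NO = {"shake_hand", "shake_head", "cross_arms", "no"}

def action_possible(label):
    """Return True/False for a recognized input, or None if unrecognized."""
    if label in YES:
        return True
    if label in NO:
        return False
    return None

print(action_possible("thumbs_up"))  # -> True
```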
- In the first example described above, the determination system 10 sets a plurality of three-dimensional regions based on the three-dimensional skeleton model of the subject performing a specific action, and determines the state of the subject's activities of daily living by identifying the three-dimensional region in which the subject's wrist is located.
- In Modification 2 of the first example, the determination system 10 further determines the state of the subject's activities of daily living by determining whether or not the subject can perform finger movements (for example, opening and closing the hand, or a finger-opposition motion such as the OK sign).
- the control unit 32 causes the instruction unit 36 to instruct the subject to perform an action involving finger movement.
- when the information terminal 30 acquires an image captured by the camera 20 that includes the subject performing the action involving finger movement, it transmits the instruction received by the reception unit 34 and the captured image (specifically, the image data) to the server device 40.
- the determination unit 42e of the server device 40 may use, for example, another trained model (not shown) different from the trained model 44, and determine that the subject can open and close the hand when both a closed fist and an open palm are identified in the image. In addition, the determination unit 42e may use the other trained model to identify whether the tips of the index finger and thumb touch in the image, as well as the shape and size of the space between the index finger and thumb, and thereby determine whether finger opposition is possible.
- the determination unit 42e derives the positions of two non-joint parts 101 of the subject 1 that are connected via a predetermined joint 100, and derives the joint angle (not shown) formed at the joint 100 based on the straight lines connecting the joint 100 to each of the two non-joint parts 101.
- for example, the joint angle associated with flexion of the elbow joint is derived from the three-dimensional coordinate data (the three-dimensional skeletal model) estimated from the two-dimensional skeletal model.
- the determination unit 42e may determine the subject's physical function based on a database (not shown) in which ranges of the elbow-flexion joint angle in a specific motion are stored in association with physical-function determination results.
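The joint-angle derivation described above can be sketched as follows. This is a generic three-point angle computation in Python, not the patent's actual implementation, and the coordinates are illustrative:

```python
import math

def joint_angle(a, joint, b):
    """Angle (degrees) at `joint` between segments joint->a and joint->b,
    e.g. a=shoulder, joint=elbow, b=wrist for elbow flexion."""
    v1 = [a[i] - joint[i] for i in range(3)]
    v2 = [b[i] - joint[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for safety
    return math.degrees(math.acos(cos_t))

# Fully extended arm: shoulder, elbow, wrist collinear -> 180 degrees.
print(joint_angle((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0)))  # prints 180.0
```

Checking whether the derived angle falls in a stored range then becomes a simple interval test against the database entry for the specific motion.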
- in the database, not only the joint angles but also the following feature quantities may similarly be stored in association with physical-function determination results.
- for example, the determination unit 42e may derive the distance between the predetermined joint 100 and an end part in a specific motion, the fluctuation range of the position of the predetermined joint 100, and so on, and determine whether these values are equal to or greater than a threshold, or whether they fall within a predetermined range.
- further, the determination unit 42e may derive the fluctuation and the fluctuation range of the position of the predetermined joint 100 or an end part (for example, the tip of the hand), and thereby determine the presence or absence of tremor.
- in Modification 3 of the first example, a feature quantity indicating the movement characteristics of the subject's skeleton in a specific motion is derived, and the subject's physical function is determined based on the derived feature quantity.
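One such feature quantity, the fluctuation range of a joint position over frames, can be sketched as below. This is a hedged illustration, not the patent's implementation; the tracked positions and the 0.05 threshold are invented values:

```python
def fluctuation_range(positions):
    """Per-axis peak-to-peak range of a joint's 3D position over frames."""
    return tuple(max(p[i] for p in positions) - min(p[i] for p in positions)
                 for i in range(3))

def within_range(value, low, high):
    """Interval test used when comparing a feature against stored ranges."""
    return low <= value <= high

# Wrist positions over a few frames (illustrative values).
track = [(0.10, 0.00, 1.50), (0.12, 0.01, 1.52), (0.09, -0.01, 1.49)]
rng = fluctuation_range(track)

# A large range might be read as instability; the threshold is an assumption.
print(all(r < 0.05 for r in rng))  # prints True
```

A real system would apply such per-feature thresholds using the ranges stored in the database for each specific motion.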
- FIG. 3 is a flow chart showing a second example of the operation of the determination system 10 according to the embodiment.
- in the second example, an example will be described in which information about the state of activities of daily living, extracted based on a user's instruction from the states determined in the first example, is presented.
- the determination unit 42e outputs the state of the subject's daily living activities (hereinafter also referred to as determination results) to the output unit 42f.
- the output unit 42f outputs the acquired determination result to the information terminal 30 via the communication unit 41.
- the output determination result may be anonymized by the information processing unit 42.
- the presentation unit 35 presents the acquired determination result of the state of activities of daily living (S21).
- in step S21, when the subject performs a plurality of specific actions, the determination results of the states of daily living activities linked to each of the specific actions may be presented, or only the poor determination results may be presented. These determination results may be presented in order from the worst result.
- the reception unit 34 receives an instruction from the user (S22).
- the user's instruction may be a specification of an extraction condition for extracting desired information from the determination result under a predetermined condition, a specification of a presentation method of the determination result, or a specification of the extraction condition and the presentation method.
- the desired information may be, for example, a three-dimensional skeletal model in an image including the subject performing a specific action, a three-dimensional skeletal model in a model image, or the state of bodily functions.
- the presentation method may be, for example, presentation of image information including text only, or presentation of both image information and audio information.
- the information terminal 30 transmits the user's instruction received by the receiving unit 34 in step S22 to the server device 40 (not shown).
- the determination unit 42e of the server device 40 extracts information regarding the state of daily living activities based on the user's instruction (S23). For example, if the user's instruction specifies an extraction condition that weights the daily living activities related to transfer, the determination result of the state of the daily living activities related to transfer is preferentially extracted from among the daily living activities corresponding to the plurality of specific motions.
- the output unit 42f of the server device 40 outputs the information (hereinafter also referred to as the extracted information or the extraction result) regarding the daily life activities extracted by the determination unit 42e in step S23 to the information terminal 30 (not shown).
- the information about the state of activities in daily life includes, for example, at least one of a three-dimensional skeleton model of the subject who performs a specific action, a determination result of the subject's physical function, and training content proposed to the subject.
- the information about the state of activities of daily living includes the subject's physical function, and the subject's physical function is determined based on the state of at least one of, for example, the subject's hand opening-and-closing motion and finger-opposition motion (OK sign).
- the presentation unit 35 presents the information on the daily life activities extracted in step S23 to the user (S24).
- the determination system 10 may notify the user that the determination has been completed before presenting the determination result. Accordingly, information desired by the user can be extracted from the determination result and presented to the user.
- FIG. 9 is a diagram showing an example of determination of the state of the daily living activity in the action of touching the back.
- FIGS. 9 to 11 are diagrams showing examples of presentation information. In the following, descriptions of the contents already described with reference to FIGS. 2 and 3 are omitted or simplified.
- in Modification 1 of the second example, when the determination result of the state of activities of daily living is presented to the user, additional presentation information is presented to the user.
- the information terminal 30 transmits the instruction to the server device 40 .
- when the server device 40 acquires the instruction, the information processing unit 42 outputs presentation information to be presented by the presentation unit 35 to the information terminal 30.
- the presentation unit 35 then presents the presentation information.
- the instruction unit 36 instructs the target person to perform a specific action (step S11 in FIG. 2).
- the instruction may be given, for example, by outputting a voice such as "Raise your hands up with your hands folded behind your back.”
- the camera 20 captures an image (here, a moving image) including the subject performing a specific action (S12 in FIG. 2), and the estimation unit 42b estimates a skeletal model based on the captured moving image (S13).
- the process of step S23 in FIG. 3 is performed in parallel with the process of step S13.
- the presentation unit 35 presents these skeleton models.
- the setting unit 42c sets a plurality of three-dimensional regions around the skeletal model based on the positions of the plurality of skeletal points (circles in the drawing) in the estimated skeletal model (S14 in FIG. 2).
- the process of step S23 in FIG. 3 is performed in parallel with the process of step S14. For example, when the plurality of three-dimensional regions are set in step S14, an image in which the plurality of three-dimensional regions are displayed around the three-dimensional skeletal model, as shown in part (b) of the figures, is presented by the presentation unit 35.
- the specifying unit 42d specifies, from among the plurality of three-dimensional regions set by the setting unit 42c, the three-dimensional region in which the skeletal point of the subject's wrist is positioned in the specific motion (S15 in FIG. 2), and the determination unit 42e determines the state of the subject's activities of daily living based on the three-dimensional region specified in step S15 (S16 in FIG. 2).
- the processes of steps S21 and S23 of FIG. 3 are performed in parallel with the processes of steps S15 and S16.
- in step S15, an image in which the three-dimensional region where the subject's wrist is positioned among the plurality of three-dimensional regions is marked up, as shown in the figures, is presented by the presentation unit 35.
- as shown in FIGS. 10B and 11B, the three-dimensional regions through which the wrist passes may also be marked up so that the movement trajectory of the position of the subject's wrist can be seen.
- in FIGS. 9 to 11, from the viewpoint of visibility, only the three-dimensional region where one wrist is positioned is marked up, but the three-dimensional regions where both wrists are positioned may be marked up.
- when the state of the subject's activities of daily living is determined in step S16, the activities of daily living (ADL) associated with the specific actions and the determination results of their states are presented, as shown in part (c) of the figures.
- in FIG. 9 and FIG. 10, the specific motion is a back touch, and the subject's wrist is not positioned in the regions where the wrist should be positioned in that motion (three-dimensional regions E3-1 and H3-1; see FIG. 6 and FIG. 8). It is therefore determined that daily living activities related to clothing, such as taking off a jacket, are not possible, and the determination result is displayed by the presentation unit 35.
- the presentation unit 35 may output the determination result of the presentation information by voice.
- FIGS. 12 and 13 are diagrams showing other examples of presentation information. In FIGS. 12 and 13, (a) shows a two-dimensional skeletal model in an image (here, a moving image) captured by the camera 20, and (b) shows a three-dimensional skeletal model and a plurality of three-dimensional regions.
- in FIG. 12, the specific action is a banzai (raising both arms), and the subject's wrist is positioned in the regions where the wrist should be positioned in that action (three-dimensional regions D2-2 and G2-2; see FIGS. 6 and 8). It is therefore determined that daily living activities such as eating, grooming (face washing, shaving, makeup), and washing are possible, and the determination result is presented to the user by voice.
- in FIG. 13, the specific motion is a back-of-the-head touch, and the subject's wrist is positioned in the regions where the wrist should be positioned in that motion (three-dimensional regions D3 and G3; see FIGS. 6 and 8). It is therefore determined that daily living activities such as shampooing are possible, and the determination result is presented to the user by voice.
- the region above the skeletal point of the neck (that is, toward the head) is set according to the orientation of the face and the inclination of the neck.
- this makes it possible to determine the state of activities of daily living, including the presence or absence of compensatory movements.
- in Modification 2 of the second example, in addition to the determination result of the state of activities of daily living and the information on activities of daily living, a rehabilitation training plan is generated and presented to the user. Specifically, the information processing unit 42 of the server device 40 creates the rehabilitation training plan based on the determination result of the state of the subject's activities of daily living. At this time, for example, the information processing unit 42 may create the plan based on the determination result of the subject's physical function in addition to the determination result of the state of activities of daily living.
- for example, the information processing unit 42 may generate a training plan for enabling a daily living activity determined to be impossible among the determination results based on the plurality of specific actions. Further, even when all of the determination results based on the plurality of specific actions are determined to be possible, the information processing unit 42 may select the daily living activity with the poorest result and develop a training plan to improve or maintain physical function so that the activity can be carried out more smoothly. Further, for example, the information processing unit 42 may add, based on the determination result of the subject's physical function, training for enhancing or maintaining the physical function for grasping an object.
- as described above, the determination method is a determination method executed by a computer, and includes: an instruction step (S11 in FIG. 2) of instructing a subject to perform a specific action; a photographing step (S12) of photographing an image including the subject performing the specific action; an estimation step (S13) of estimating a skeletal model of the subject in the image based on the photographed image; a setting step (S14) of setting a plurality of three-dimensional regions around the skeletal model based on the positions of a plurality of skeletal points in the estimated skeletal model; a specifying step (S15) of specifying, from among the set three-dimensional regions, the three-dimensional region in which the skeletal point of the subject's wrist is positioned in the specific motion; and a determination step (S16) of determining the state of the subject's activities of daily living based on the specified three-dimensional region.
- such a determination method identifies, from among the plurality of three-dimensional regions set around the skeletal model, the three-dimensional region in which the skeletal point of the subject's wrist is positioned in the specific motion, so that the state of the subject's activities of daily living can be determined simply and accurately.
- further, in the determination step, the state of the subject's activities of daily living is determined based on a database in which a specific action, the three-dimensional region where the wrist is positioned in that action, and the daily living activity corresponding to that action are stored in association with one another.
- such a determination method can easily determine the state of the subject's activities of daily living by determining whether the three-dimensional region in which the skeletal point of the wrist of the subject performing the specific action is positioned matches the three-dimensional region stored in the database 45 in association with that action.
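A minimal sketch of how such a database lookup could work is shown below. The dict layout and ADL labels are assumptions made for illustration; the region labels echo those used in the description:

```python
# Hedged sketch of the database lookup: each specific action is associated
# with the 3D region(s) the wrist should reach and the corresponding ADL.
ADL_DB = {
    "back_touch": {"regions": {"E3-1", "H3-1"},
                   "adl": "dressing (taking off a jacket)"},
    "banzai": {"regions": {"D2-2", "G2-2"},
               "adl": "eating / grooming / washing"},
}

def determine_adl(action, wrist_region):
    """Return (ADL name, possible?) by matching the identified wrist region
    against the region(s) stored for the action."""
    entry = ADL_DB[action]
    return entry["adl"], wrist_region in entry["regions"]

# Wrist never reached the back region during the back-touch motion:
print(determine_adl("back_touch", "A1-2"))
# prints ('dressing (taking off a jacket)', False)
```

The match/no-match result maps directly to the "possible"/"not possible" determination presented to the user.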
- further, in the photographing step (S12), a moving image composed of a plurality of images is photographed, and in the estimation step (S13), the skeletal model is estimated in each of the plurality of images constituting the moving image, based on the moving image.
- such a determination method can estimate skeletal models that follow the movement of the subject performing the specific action, based on the moving image including the subject, so that the plurality of three-dimensional regions can be set according to the subject's movement.
- further, in the estimation step, a two-dimensional skeletal model of the subject is estimated based on the image, and a three-dimensional skeletal model of the subject is estimated from the estimated two-dimensional skeletal model using a trained model, which is a trained machine-learning model; in the setting step, the plurality of three-dimensional regions are set based on the three-dimensional skeletal model.
- such a determination method can estimate the three-dimensional skeletal model using a trained model that takes the two-dimensional skeletal model in the image as input, so that the state of the subject's activities of daily living can be determined from a captured image.
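The data flow of this 2D-to-3D estimation step can be sketched as follows. The stand-in function below is not a trained model; it only illustrates the input/output shapes (a list of 2D keypoints in, a list of 3D keypoints out) under an assumed constant depth:

```python
# Sketch of the 2D-to-3D lifting step. A real system would replace this
# stand-in with a trained machine-learning model (the "trained model 44").
def lift_to_3d(keypoints_2d, depth=1.0):
    """Stand-in for the trained model: (x, y) pixels -> (x, y, z)."""
    return [(x, y, depth) for (x, y) in keypoints_2d]

# Illustrative 2D skeletal points, e.g. head, neck, shoulder.
skeleton_2d = [(120, 80), (120, 140), (90, 140)]
skeleton_3d = lift_to_3d(skeleton_2d)
print(skeleton_3d[0])  # prints (120, 80, 1.0)
```

The 3D regions of the setting step are then placed relative to the lifted 3D skeletal points rather than the raw 2D pixels.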
- further, in the setting step, the plurality of three-dimensional regions are set around the skeletal model with one of the plurality of skeletal points in the skeletal model as a base point. Each of the plurality of three-dimensional regions is included in one of: a back region A3 on the subject's back side and a front region A2 on the subject's front side, provided adjacent to each other across a first reference axis Z1, which is the vertical axis from the subject's head toward the legs passing through the base point in a side view of the subject; and a forward region A1 provided on the subject's forward side adjacent to the front region. Each of the back region A3, the front region A2, and the forward region A1 includes a left region B2 and a right region B1 of the subject, provided adjacent to each other across a second reference axis Z2, which is the vertical axis passing through the base point in a front view of the subject. Each of the left region B2 and the right region B1 includes a predetermined number of regions divided, from the subject's head to the legs, in the horizontal direction orthogonal to the vertical direction.
- according to such a determination method, the size and position of the three-dimensional region where the subject's wrist is positioned in a specific motion are set according to the subject's daily living motions, so that the state of the subject's activities of daily living can be determined more accurately.
- further, the first reference axis Z1 has the subject's neck and waist skeletal points as its base points, and the second reference axis Z2 has the subject's neck and elbow skeletal points as its base points.
- in the setting step, the first distance L1, which is the distance from the subject's elbow skeletal point to the tip of the hand in a side view of the subject, is set as the width W1 of each of the back region A3, the front region A2, and the forward region A1, and twice the second distance L2, which is the distance from the subject's neck skeletal point to the shoulder skeletal point in a front view of the subject, is set as the width W2 of each of the left region B2 and the right region B1.
- according to such a determination method, since the widths of the plurality of three-dimensional regions are set based on the positions of skeletal points, the regions can be set to suit each subject's skeleton, even for subjects of the same height.
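The width-setting rule above can be sketched numerically. The coordinates below are illustrative 2D projections (a side view for L1, a front view for L2), not patent data:

```python
import math

def dist(p, q):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def region_widths(elbow, hand_tip, neck, shoulder):
    """W1 = first distance L1 (elbow to hand tip, side view);
    W2 = 2 x second distance L2 (neck to shoulder, front view)."""
    w1 = dist(elbow, hand_tip)       # width of back/front/forward regions
    w2 = 2.0 * dist(neck, shoulder)  # width of left/right regions
    return w1, w2

# Illustrative skeletal-point projections (units arbitrary).
w1, w2 = region_widths((0.0, 1.1), (0.0, 0.8), (0.0, 1.5), (0.2, 1.5))
print(round(w1, 3), round(w2, 3))
```

Because W1 and W2 are derived from the subject's own skeletal points, the region grid automatically scales to each subject's build.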
- further, the determination method includes a presentation step (S21 in FIG. 3) of presenting the determined state of the subject's activities of daily living to the user, and a reception step (S22 in FIG. 3) of receiving an instruction regarding the user's operation. In the determination step (S16 in FIG. 2), information on the state of the subject's activities of daily living is extracted (S23) based on the user's instruction received in the reception step (S22), and in the presentation step (S21), the extracted information is presented to the user (S24).
- the information about the state of activities of daily living includes at least one of a three-dimensional skeletal model of the subject performing the specific action, the determination result of the subject's physical function, and training content proposed to the subject.
- Such a determination method can extract necessary information for the user from among the information regarding the state of the subject's daily living activities based on the user's instructions and present it to the user.
- further, the information on the state of activities of daily living includes the subject's physical function, and the subject's physical function is determined based on the state of at least one of the subject's hand opening-and-closing motion and finger-opposition motion.
- such a determination method can determine, for example, whether the subject can grasp an object by determining whether the subject can perform actions involving finger movement, so that the state of the subject's activities of daily living can be determined more accurately.
- further, the determination device includes: an instruction unit 36 that instructs the subject to perform a specific action; a camera 20 that captures an image including the subject performing the specific action; an estimation unit 42b that estimates a skeletal model of the subject in the image based on the captured image; a setting unit 42c that sets a plurality of three-dimensional regions around the skeletal model based on the positions of a plurality of skeletal points in the estimated skeletal model; a specifying unit 42d that specifies, from among the set three-dimensional regions, the three-dimensional region that contains the skeletal point of the subject's wrist in the specific motion; and a determination unit 42e that determines the state of the subject's activities of daily living based on the specified three-dimensional region.
- such a determination device identifies, from among the plurality of three-dimensional regions set around the skeletal model, the three-dimensional region in which the skeletal point of the subject's wrist is positioned in the specific motion, so that the state of the subject's activities of daily living can be determined simply and accurately.
- further, the determination system 10 is a system including an information terminal 30 and a server device 40 connected to the information terminal 30 via communication. The information terminal 30 includes a communication unit that communicates with the server device 40, an instruction unit 36 that instructs the subject to perform a specific action, and a camera 20 that captures an image including the subject performing the specific action. The server device 40 includes: an estimation unit 42b that estimates a skeletal model of the subject in the image based on the image captured by the camera; a setting unit 42c that sets a plurality of three-dimensional regions around the skeletal model based on the positions of a plurality of skeletal points in the estimated skeletal model; a specifying unit 42d that specifies, from among the set three-dimensional regions, the three-dimensional region that contains the skeletal point of the subject's wrist in the specific motion; and a determination unit 42e that determines the state of the subject's activities of daily living based on the specified three-dimensional region.
- such a determination system 10 identifies, from among the plurality of three-dimensional regions set around the skeletal model, the three-dimensional region in which the skeletal point of the subject's wrist is positioned in the specific motion, so that the state of the subject's activities of daily living can be determined simply and accurately.
- processing executed by a specific processing unit may be executed by another processing unit.
- order of multiple processes may be changed, and multiple processes may be executed in parallel.
- each component may be realized by executing a software program suitable for each component.
- Each component may be realized by reading and executing a software program recorded in a recording medium such as a hard disk or a semiconductor memory by a program execution unit such as a CPU or processor.
- each component may be realized by hardware.
- Each component may be a circuit (or integrated circuit). These circuits may form one circuit as a whole, or may be separate circuits. These circuits may be general-purpose circuits or dedicated circuits.
- further, the present invention may be implemented as the determination method, as a program for causing a computer to execute the determination method, or as a computer-readable non-transitory recording medium on which such a program is recorded.
- in the above embodiment, the determination system includes a camera, an information terminal, and a server device, but the determination system may be realized as a single device such as an information terminal, or by a plurality of devices.
- the determination system may be implemented as a client-server system.
- the components provided in the determination system described in the above embodiment may be distributed to the plurality of devices in any way.
Abstract
Description
[1. Configuration]
First, the configuration of the determination system according to the embodiment will be described. FIG. 1 is a block diagram showing an example of the functional configuration of the determination system according to the embodiment.
The camera 20 captures an image (for example, a moving image composed of a plurality of images) including the subject performing a specific action. The camera 20 may be a camera using a CMOS (Complementary Metal Oxide Semiconductor) image sensor or a camera using a CCD (Charge Coupled Device) image sensor. In the example of FIG. 1, the camera 20 is connected to the information terminal 30 via communication, but it may be an external camera attached to the information terminal 30 or a camera built into the information terminal 30.
The information terminal 30 instructs the subject to perform a specific action, acquires the image of the subject captured by the camera 20 (more specifically, the image data or image information), and transmits the acquired image to the server device 40. The information terminal 30 is, for example, a portable computer device such as a smartphone or a tablet used by the user, but may be a stationary computer device such as a personal computer. Specifically, the information terminal 30 includes a first communication unit 31a, a second communication unit 31b, a control unit 32, a storage unit 33, a reception unit 34, a presentation unit 35, and an instruction unit 36.
The server device 40 acquires the image transmitted from the information terminal 30, estimates a skeletal model in the acquired image, and determines the state of the subject's activities of daily living based on the estimated skeletal model. The server device 40 includes a communication unit 41, an information processing unit 42, and a storage unit 43.
Next, the operation of the determination system 10 will be described in detail with reference to the drawings.
First, a first example of the operation will be described with reference to FIG. 2. FIG. 2 is a flowchart showing the first example of the operation of the determination system 10 according to the embodiment. FIG. 4 is a diagram conceptually showing the setting of the plurality of three-dimensional regions. FIG. 5 is a diagram conceptually showing the estimation of the subject's two-dimensional skeletal model. FIG. 6 is a diagram conceptually showing the estimation of the three-dimensional skeletal model.
In the first example, the specific action is not selected according to the subject's physical function when instructing the subject; however, in a modification of the first example, the action to be performed may be selected according to the subject's physical function before the instruction is given.
In the first example and Modification 1 of the first example, the determination system 10 sets a plurality of three-dimensional regions based on the three-dimensional skeletal model of the subject performing a specific action, and determines the state of the subject's activities of daily living by identifying the three-dimensional region in which the subject's wrist is positioned. In Modification 2 of the first example, the state of activities of daily living is further determined by judging whether the subject can perform actions involving finger movement (for example, opening and closing the hand, or finger opposition such as an OK sign).
In Modification 3 of the first example, a feature quantity indicating the movement characteristics of the subject's skeleton in a specific motion is derived based on the skeletal model estimated by the estimation unit 42b, and the physical function, which is the ability to perform bodily movements, is determined based on the feature quantity.
Next, a second example of the operation will be described with reference to FIG. 3. FIG. 3 is a flowchart showing the second example of the operation of the determination system 10 according to the embodiment. In the second example, information on the state of activities of daily living, extracted based on a user's instruction from the states determined in the first example, is presented.
Next, Modification 1 of the second example will be described with reference to FIGS. 9, 10, and 11. FIG. 9 is a diagram showing an example of determining the state of activities of daily living in the back-touch motion. FIGS. 9 to 11 are diagrams showing examples of presentation information. In the following, descriptions of the contents described with reference to FIGS. 2 and 3 are omitted or simplified.
In Modification 2 of the second example, in addition to the determination result of the state of activities of daily living and the information on activities of daily living, a rehabilitation training plan is generated and presented to the user. Specifically, the information processing unit 42 of the server device 40 creates the rehabilitation training plan based on the determination result of the state of the subject's activities of daily living. At this time, for example, the information processing unit 42 may create the plan based on the determination result of the subject's physical function in addition to the determination result of the state of activities of daily living.
As described above, the determination method is a determination method executed by a computer, and includes: an instruction step (S11 in FIG. 2) of instructing a subject to perform a specific action; a photographing step (S12) of photographing an image including the subject performing the specific action; an estimation step (S13) of estimating a skeletal model of the subject in the image based on the photographed image; a setting step (S14) of setting a plurality of three-dimensional regions around the skeletal model based on the positions of a plurality of skeletal points in the estimated skeletal model; a specifying step (S15) of specifying, from among the set three-dimensional regions, the three-dimensional region in which the skeletal point of the subject's wrist is positioned in the specific motion; and a determination step (S16) of determining the state of the subject's activities of daily living based on the specified three-dimensional region.
Although the embodiment has been described above, the present invention is not limited to the above embodiment.
10 determination system
20 camera
30 information terminal
31b second communication unit
34 reception unit
35 presentation unit
36 instruction unit
40 server device
42b estimation unit
42c setting unit
42d specifying unit
42e determination unit
43 storage unit
44 trained model
45 database
Z1 first reference axis
Z2 second reference axis
A1 forward region
A2 front region
A3 back region
B1 right region
B2 left region
L1 first distance
L2 second distance
W1 width
W2 width
Claims (11)
- A determination method executed by a computer, comprising: an instruction step of instructing a subject to perform a specific action; a photographing step of photographing an image including the subject performing the specific action; an estimation step of estimating a skeletal model of the subject in the image based on the photographed image; a setting step of setting a plurality of three-dimensional regions around the skeletal model based on positions of a plurality of skeletal points in the estimated skeletal model; a specifying step of specifying, from among the set plurality of three-dimensional regions, a three-dimensional region in which a skeletal point of the subject's wrist is positioned in the specific motion; and a determination step of determining a state of the subject's activities of daily living based on the specified three-dimensional region.
- The determination method according to claim 1, wherein, in the determination step, the state of the subject's activities of daily living is determined by determining, based on a database in which a specific action, the three-dimensional region in which the wrist is positioned in the specific action, and the daily living activity corresponding to the specific action are stored in association with one another, whether the three-dimensional region specified in the specifying step matches the three-dimensional region stored in the database.
- The determination method according to claim 1 or 2, wherein, in the photographing step, a moving image composed of a plurality of the images is photographed, and in the estimation step, the skeletal model is estimated in each of the plurality of images constituting the moving image, based on the moving image.
- The determination method according to any one of claims 1 to 3, wherein, in the estimation step, a two-dimensional skeletal model of the subject is estimated based on the image, and a three-dimensional skeletal model of the subject is estimated based on the estimated two-dimensional skeletal model using a trained model, which is a trained machine-learning model, and in the setting step, the plurality of three-dimensional regions are set based on the three-dimensional skeletal model.
- The determination method according to any one of claims 1 to 4, wherein, in the setting step, the plurality of three-dimensional regions are set around the skeletal model with one of the plurality of skeletal points in the skeletal model as a base point; each of the plurality of three-dimensional regions is included in one of a back region on the subject's back side and a front region on the subject's front side, provided adjacent to each other across a first reference axis that is the vertical axis from the subject's head toward the legs and passes through the base point in a side view of the subject, and a forward region provided on the subject's forward side adjacent to the front region; each of the back region, the front region, and the forward region includes a left region and a right region of the subject, provided adjacent to each other across a second reference axis that is the vertical axis passing through the base point in a front view of the subject; and each of the left region and the right region includes a predetermined number of regions divided, from the subject's head to the legs, in the horizontal direction orthogonal to the vertical direction.
- The determination method according to claim 5, wherein the first reference axis has the subject's neck skeletal point and waist skeletal point as its base points, the second reference axis has the subject's neck skeletal point and elbow skeletal point as its base points, and in the setting step, a first distance, which is the distance from the subject's elbow skeletal point to the tip of the hand in a side view of the subject, is set as the width of each of the back region, the front region, and the forward region, and twice a second distance, which is the distance from the subject's neck skeletal point to the shoulder skeletal point in a front view of the subject, is set as the width of each of the left region and the right region.
- The determination method according to any one of claims 1 to 6, further comprising: a presentation step of presenting the determined state of the subject's activities of daily living to a user; and a reception step of receiving an instruction regarding the user's operation, wherein, in the determination step, information on the state of the subject's activities of daily living is extracted based on the user's instruction received in the reception step, and in the presentation step, the extracted information is presented to the user.
- The determination method according to claim 7, wherein the information on the state of activities of daily living includes at least one of a three-dimensional skeletal model of the subject performing the specific action, a determination result of the subject's physical function, and training content proposed to the subject.
- The determination method according to claim 8, wherein the information on the state of activities of daily living includes the subject's physical function, and the subject's physical function is determined based on the state of at least one of the subject's hand opening-and-closing motion and finger-opposition motion.
- A determination device comprising: an instruction unit that instructs a subject to perform a specific action; a camera that captures an image including the subject performing the specific action; an estimation unit that estimates a skeletal model of the subject in the image based on the captured image; a setting unit that sets a plurality of three-dimensional regions around the skeletal model based on positions of a plurality of skeletal points in the estimated skeletal model; a specifying unit that specifies, from among the set plurality of three-dimensional regions, a three-dimensional region that contains a skeletal point of the subject's wrist in the specific motion; and a determination unit that determines a state of the subject's activities of daily living based on the specified three-dimensional region.
- A determination system comprising an information terminal and a server device connected to the information terminal via communication, wherein the information terminal includes a communication unit that communicates with the server device, an instruction unit that instructs a subject to perform a specific action, and a camera that captures an image including the subject performing the specific action, and the server device includes an estimation unit that estimates a skeletal model of the subject in the image based on the image captured by the camera, a setting unit that sets a plurality of three-dimensional regions around the skeletal model based on positions of a plurality of skeletal points in the estimated skeletal model, a specifying unit that specifies, from among the set plurality of three-dimensional regions, a three-dimensional region that contains a skeletal point of the subject's wrist in the specific motion, and a determination unit that determines a state of the subject's activities of daily living based on the specified three-dimensional region.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023538301A JPWO2023007930A1 (ja) | 2021-07-28 | 2022-05-25 | |
CN202280051042.0A CN117769726A (zh) | 2021-07-28 | 2022-05-25 | 判定方法、判定装置以及判定系统 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021122906 | 2021-07-28 | ||
JP2021-122906 | 2021-07-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023007930A1 true WO2023007930A1 (ja) | 2023-02-02 |
Family
ID=85086533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/021370 WO2023007930A1 (ja) | 2021-07-28 | 2022-05-25 | 判定方法、判定装置、及び、判定システム |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2023007930A1 (ja) |
CN (1) | CN117769726A (ja) |
WO (1) | WO2023007930A1 (ja) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004322244A (ja) * | 2003-04-23 | 2004-11-18 | Toyota Motor Corp | ロボット動作規制方法とその装置およびそれを備えたロボット |
JP2015159935A (ja) * | 2014-02-27 | 2015-09-07 | 株式会社東芝 | リハビリテーション支援装置 |
US20210029305A1 (en) * | 2018-11-29 | 2021-01-28 | Beijing Bytedance Network Technology Co., Ltd. | Method and apparatus for adding a video special effect, terminal device and storage medium |
-
2022
- 2022-05-25 CN CN202280051042.0A patent/CN117769726A/zh active Pending
- 2022-05-25 JP JP2023538301A patent/JPWO2023007930A1/ja active Pending
- 2022-05-25 WO PCT/JP2022/021370 patent/WO2023007930A1/ja active Application Filing
Non-Patent Citations (1)
Title |
---|
SONODA, HARUKA; NAGAI, TAKAYUKI: "Motion analysis of presenter and its application to self-reflection tool for rehearsal video", IPSJ SIG Technical Report (CLE), Information Processing Society of Japan, vol. 2020-CLE-30, no. 5, 1 March 2020, XP093028659, ISSN: 2188-8620 *
Also Published As
Publication number | Publication date |
---|---|
CN117769726A (zh) | 2024-03-26 |
JPWO2023007930A1 (ja) | 2023-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104274183B (zh) | Motion information processing device | |
CN102981603B (zh) | Image processing device and image processing method | |
WO2017181717A1 (zh) | Electronic coach implementation method and system | |
JP2019198638A (ja) | Measurement information providing system, measurement information providing method, server device, communication terminal, and computer program | |
US20150320343A1 (en) | Motion information processing apparatus and method | |
JP7008342B2 (ja) | Exercise evaluation system | |
JP2016101229A (ja) | Gait analysis system and gait analysis program | |
Fieraru et al. | Learning complex 3D human self-contact | |
JP2020174910A (ja) | Exercise support system | |
Li et al. | An automatic rehabilitation assessment system for hand function based on leap motion and ensemble learning | |
JP2024016153A (ja) | Program, device, and method | |
US11386615B2 | Creating a custom three-dimensional body shape model | |
WO2023007930A1 (ja) | Determination method, determination device, and determination system | |
JP6439106B2 (ja) | Body distortion checker, body distortion checking method, and program | |
JP6577150B2 (ja) | Human body model display system, human body model display method, communication terminal device, and computer program | |
WO2014104357A1 (ja) | Motion information processing system, motion information processing device, and medical image diagnostic device | |
WO2023100679A1 (ja) | Determination method, determination device, and determination system | |
JP7382581B2 (ja) | Activities-of-daily-living state determination system, determination method, program, determination device, and determination instrument | |
WO2022081745A1 (en) | Real-time rendering of 3d wearable articles on human bodies for camera-supported computing devices | |
WO2023157500A1 (ja) | Video editing method, video editing device, and display system | |
WO2023127870A1 (ja) | Care support device, care support program, and care support method | |
WO2024176465A1 (ja) | Information processing program and information processing device | |
WO2024069944A1 (ja) | Information processing device, information processing method, and program | |
US20240112367A1 (en) | Real-time pose estimation through bipartite matching of heatmaps of joints and persons and display of visualizations based on the same | |
WO2023188217A1 (ja) | Information processing program, information processing method, and information processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22849009; Country of ref document: EP; Kind code of ref document: A1
| WWE | Wipo information: entry into national phase | Ref document number: 2023538301; Country of ref document: JP
| WWE | Wipo information: entry into national phase | Ref document number: 202280051042.0; Country of ref document: CN
| WWE | Wipo information: entry into national phase | Ref document number: 18290767; Country of ref document: US
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22849009; Country of ref document: EP; Kind code of ref document: A1