WO2014104357A1 - Motion information processing system, motion information processing device and medical image diagnosis device - Google Patents
- Publication number: WO2014104357A1 (PCT/JP2013/085244)
- Authority: WIPO (PCT)
- Prior art keywords: unit, information, joint, motion information, motion
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Description
- Embodiments of the present invention relate to a motion information processing system, a motion information processing device, and a medical image diagnostic device.
- Conventionally, each patient's medical record (electronic medical record) is retrieved at a doctor terminal provided in each clinic, and various information is input to the retrieved electronic medical record.
- For example, doctors use a doctor terminal to input information such as chief complaints, findings, schema diagrams, and order contents into the patient's electronic medical record according to the patient's condition. This is done by individually searching for and selecting predefined contents and entering them. If the patient is a returning patient, the patient's past chart information may be retrieved, copied as is, and entered into the current electronic medical chart.
- Motion capture technology that digitally records the movement of a person or an object has advanced.
- As methods of motion capture technology, for example, optical, mechanical, magnetic, and camera types are known.
- a camera system is known in which a marker is attached to a person, the marker is detected by a tracker such as a camera, and the movement of the person is digitally recorded by processing the detected marker.
- Also known is a recording method in which an infrared sensor is used to measure the distance from the sensor to a person and to digitally detect the person's movement by detecting the size of the person and various movements of the skeleton; Kinect (registered trademark), for example, uses this method.
- the problem to be solved by the present invention is to provide a motion information processing system, a motion information processing device, and a medical image diagnostic device that make it easy to input information in an electronic medical record.
- the motion information processing system of the embodiment includes an acquisition unit, an extraction unit, a selection unit, and a display control unit.
- the acquisition unit acquires motion information including position information of a joint of a subject who is a target of motion acquisition.
- the extraction unit extracts the affected part based on the joint position information in the motion information of the subject acquired by the acquisition unit.
- the selection unit selects related information related to the affected part extracted by the extraction unit.
- the display control unit controls the display unit to display related information selected by the selection unit.
- FIG. 1 is a diagram illustrating an example of a configuration of a motion information processing system according to the first embodiment.
- FIG. 2A is a diagram for explaining an example of a display screen of the electronic medical record according to the first embodiment.
- FIG. 2B is a diagram for explaining an example of inputting chart information according to the first embodiment.
- FIG. 3 is a diagram illustrating an example of the configuration of the doctor terminal according to the first embodiment.
- FIG. 4A is a diagram for explaining processing of the motion information generation unit according to the first embodiment.
- FIG. 4B is a diagram for explaining processing of the motion information generation unit according to the first embodiment.
- FIG. 4C is a diagram for explaining processing of the motion information generation unit according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of skeleton information generated by the motion information generation unit according to the first embodiment.
- FIG. 6 is a diagram illustrating an example of a detailed configuration of the doctor terminal according to the first embodiment.
- FIG. 7 is a diagram for explaining an example of processing by the extraction unit according to the first embodiment.
- FIG. 8A is a diagram for explaining an example of processing by the selection unit according to the first embodiment.
- FIG. 8B is a diagram for explaining an example of processing performed by the selection unit according to the first embodiment.
- FIG. 8C is a diagram for explaining an example of processing performed by the selection unit according to the first embodiment.
- FIG. 9A is a diagram illustrating an example of processing performed by the mark assigning unit according to the first embodiment.
- FIG. 9B is a diagram illustrating an example of processing performed by the mark assigning unit according to the first embodiment.
- FIG. 9C is a diagram illustrating an example of processing performed by the mark assigning unit according to the first embodiment.
- FIG. 10 is a diagram illustrating an example of schema selection and mark assignment according to the first embodiment.
- FIG. 11 is a diagram for explaining an example of display control by the display control unit according to the first embodiment.
- FIG. 12 is a diagram for explaining an example of processing by the chart information storage unit according to the first embodiment.
- FIG. 13 is a flowchart illustrating a processing procedure performed by the doctor terminal according to the first embodiment.
- FIG. 14 is a diagram for explaining an example of processing by the selection unit according to the second embodiment.
- FIG. 15 is a diagram for explaining motion information collection condition changing processing by the doctor terminal according to the third embodiment.
- a motion information processing system including a doctor terminal as a motion information processing apparatus will be described as an example.
- FIG. 1 is a diagram illustrating an example of a configuration of a motion information processing system 1 according to the first embodiment.
- the motion information processing system 1 according to the first embodiment includes a doctor terminal 100, a reception terminal 200, and a server device 300.
- the doctor terminal 100, the reception terminal 200, and the server device 300 are in a state where they can communicate with each other directly or indirectly, for example, via a hospital LAN (Local Area Network) installed in the hospital.
- Note that the motion information processing system 1 may be, for example, a system to which a PACS (Picture Archiving and Communication System), an HIS (Hospital Information System), or an RIS (Radiology Information System) is applied.
- the motion information processing system 1 also includes functions as an electronic medical record system, a receipt computer processing system, an ordering system, a reception (individual / qualification authentication) system, a medical assistance system, and the like.
- The reception terminal 200 executes reception registration when a patient visits, creates the patient's electronic medical record, or retrieves the patient's electronic medical record from those managed by the server device 300. Then, the reception terminal 200 creates a queue of reception information and electronic medical record information for each patient according to the reception time or the reservation time, and transmits the queue to the server device 300.
- the server apparatus 300 manages the electronic medical records of registered patients.
- the server apparatus 300 manages a queue of reception information and electronic medical record information received from the reception terminal 200.
- the server device 300 manages a queue of received reception information and electronic medical record information for each department.
- the doctor terminal 100 is a terminal installed in each examination room, for example, and the medical chart information of the electronic medical record is input by the doctor.
- the chart information includes, for example, symptoms and doctor's findings.
- the doctor operates the doctor terminal 100 to read reception information and electronic medical record information from the server device 300 in the queue order. Then, the doctor examines the corresponding patient and inputs medical chart information to the read electronic medical chart.
- FIG. 2A is a diagram for explaining an example of a display screen of the electronic medical record according to the first embodiment.
- The doctor terminal 100 displays an electronic medical chart display screen having a chart area R1, which is an area in which the patient's chart information is input, and an operation area R2, in which operation buttons for inputting chart information into the chart area R1 are arranged.
- The chart area R1 includes, for example, an area R3 in which patient data such as name, date of birth, and gender is displayed, an area R4 in which the current chart information is input, and an area R5 in which previous chart information is displayed.
- the operation area R2 includes an area R6 that is an area for selecting a schema to be used when using a schema for medical chart information, an area R7 in which operation buttons to which various functions are assigned, and the like.
- FIG. 2B is a diagram for explaining an example of inputting chart information according to the first embodiment.
- FIG. 2B shows a schema selection window displayed by operating a button arranged in the region R6 shown in FIG. 2A when using a schema as chart information.
- For example, when a doctor inputs a head schema as chart information, the doctor operates a button arranged in the region R6 shown in FIG. 2A to display a selection window with a plurality of head schemas, as shown in the right region of FIG. 2B. The doctor then selects a desired schema from the displayed head schemas, whereby the selected schema is displayed in the left region of the window, as shown in FIG. 2B. Furthermore, by operating a button arranged in the left region of FIG. 2B, the doctor gives a mark to the position corresponding to the affected part on the schema, as shown in FIG. 2B.
- the doctor inputs the schema to which the mark is added into the electronic medical record as medical record information.
- the doctor inputs a schema having a mark added to the region R4 in FIG. 2A.
- the doctor terminal 100 transmits the electronic medical record in which the medical chart information is input to the server device 300 based on the operation of the doctor.
- the server device 300 manages the received electronic medical record for each patient.
- Here, the input of information into the electronic medical record can become complicated and time-consuming.
- For example, as shown in FIG. 2B, when inputting a schema, the doctor first obtains information on the affected part from the patient, who indicates it orally or by pointing with a hand, determines which part of the body is involved, and selects which schema to use for that part. Furthermore, the doctor gives a mark on the schema according to the range of the affected part. Therefore, when inputting chart information into the electronic medical chart, selection operations may be repeated, and information input may be complicated and troublesome.
- FIG. 3 is a diagram illustrating an example of the configuration of the doctor terminal 100 according to the first embodiment. As shown in FIG. 3, in the first embodiment, the doctor terminal 100 is connected to the motion information collection unit 10.
- The motion information collection unit 10 detects the motion of a person or an object in the space where the examination is performed, and collects motion information representing that motion. The motion information will be described in detail together with the processing of the motion information generation unit 14 described later. As the motion information collection unit 10, for example, Kinect (registered trademark) is used.
- The motion information collection unit 10 includes, for example, a color image collection unit 11, a distance image collection unit 12, a voice recognition unit 13, and a motion information generation unit 14. Note that the configuration of the motion information collection unit 10 illustrated in FIG. 3 is merely an example, and the embodiment is not limited thereto.
- the color image collection unit 11 shoots a subject such as a person or an object in a space where a medical examination is performed, and collects color image information. For example, the color image collection unit 11 detects light reflected from the subject surface with a light receiving element, and converts visible light into an electrical signal. Then, the color image collection unit 11 converts the electrical signal into digital data, thereby generating one frame of color image information corresponding to the shooting range.
- the color image information for one frame includes, for example, shooting time information and information in which each pixel included in the one frame is associated with an RGB (Red Green Blue) value.
- the color image collection unit 11 shoots a moving image of the shooting range by generating color image information of a plurality of continuous frames from visible light detected one after another.
- the color image information generated by the color image collection unit 11 may be output as a color image in which the RGB values of each pixel are arranged in a bitmap.
- the color image collection unit 11 includes, for example, a CMOS (Complementary Metal Oxide Semiconductor) or a CCD (Charge Coupled Device) as a light receiving element.
- The distance image collection unit 12 photographs a subject such as a person or an object in the space where the medical examination is performed, and collects distance image information. For example, the distance image collection unit 12 irradiates the surroundings with infrared light and detects, with a light receiving element, the reflected wave produced when the irradiation wave reflects off the surface of the subject. Then, the distance image collection unit 12 obtains the distance between the subject and the distance image collection unit 12 based on the phase difference between the irradiation wave and the reflected wave and on the time from irradiation to detection, and generates distance image information for one frame corresponding to the shooting range.
- The distance image information for one frame includes, for example, shooting time information and information in which each pixel included in the shooting range is associated with the distance between the subject corresponding to that pixel and the distance image collection unit 12.
- the distance image collection unit 12 captures a moving image of the shooting range by generating distance image information of a plurality of continuous frames from reflected waves detected one after another.
- the distance image information generated by the distance image collection unit 12 may be output as a distance image in which color shades corresponding to the distance of each pixel are arranged in a bitmap.
- the distance image collection unit 12 includes, for example, a CMOS or a CCD as a light receiving element. This light receiving element may be shared with the light receiving element used in the color image collection unit 11.
- the unit of the distance calculated by the distance image collection unit 12 is, for example, meters [m].
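- The distance measurement described above follows the usual time-of-flight relationship; a minimal sketch is shown below (an illustrative assumption — the patent does not state the device's exact formula):

```python
# Illustrative time-of-flight depth calculation (an assumption for illustration;
# the patent does not give the device's exact formula). The measured distance is
# half the round-trip path of the infrared wave.
C = 299_792_458.0  # speed of light [m/s]

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance from the sensor to the subject, given the time from irradiation to detection."""
    return C * round_trip_time_s / 2.0

# Example: a reflection detected 20 ns after irradiation is about 3 m away.
print(tof_distance_m(20e-9))  # ~2.998 [m]
```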
- the voice recognition unit 13 collects surrounding voices, identifies the direction of the sound source, and performs voice recognition.
- the voice recognition unit 13 has a microphone array including a plurality of microphones, and performs beam forming. Beam forming is a technique for selectively collecting sound from a specific direction. For example, the voice recognition unit 13 specifies the direction of the sound source by beam forming using a microphone array.
- the voice recognition unit 13 recognizes a word from the collected voice using a known voice recognition technique. That is, the speech recognition unit 13 generates, as a speech recognition result, for example, information associated with a word recognized by the speech recognition technology, a direction in which the word is emitted, and a time at which the word is recognized.
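- As an illustration of the beamforming mentioned above, the sketch below shows a standard delay-and-sum approach for a linear microphone array, where the steering angle that maximizes output energy is taken as the sound-source direction (the patent does not specify the actual algorithm, so the method and parameters here are assumptions):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # [m/s]

def estimate_direction(signals: np.ndarray, mic_x: np.ndarray, fs: float) -> float:
    """Delay-and-sum sketch. signals: (n_mics, n_samples); mic_x: mic positions on a line [m]."""
    angles = np.deg2rad(np.arange(-90, 91, 2))
    energies = []
    for theta in angles:
        # Arrival delay of a plane wave from angle theta at each microphone.
        delays_s = mic_x * np.sin(theta) / SPEED_OF_SOUND
        shifts = np.round(delays_s * fs).astype(int)
        # Align each channel by its steering delay, then sum (np.roll wraps at
        # the edges, which is acceptable for a sketch).
        aligned = [np.roll(sig, -s) for sig, s in zip(signals, shifts)]
        beam = np.sum(aligned, axis=0)
        energies.append(np.sum(beam ** 2))
    return float(np.rad2deg(angles[int(np.argmax(energies))]))
```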
- The motion information generation unit 14 generates motion information representing the motion of a person or an object. This motion information is generated, for example, by capturing a human motion (gesture) as a series of postures (poses). In brief, the motion information generation unit 14 first obtains the coordinates of each joint forming the skeleton of the human body from the distance image information generated by the distance image collection unit 12, by pattern matching using a human body pattern. The coordinates of each joint obtained from the distance image information are values represented in the coordinate system of the distance image (hereinafter referred to as the "distance image coordinate system").
- Next, the motion information generation unit 14 converts the coordinates of each joint in the distance image coordinate system into values represented in the coordinate system of the three-dimensional space in which the medical examination is performed (hereinafter referred to as the "world coordinate system"). The coordinates of each joint represented in this world coordinate system constitute the skeleton information for one frame, and skeleton information for a plurality of frames constitutes the motion information.
- processing of the motion information generation unit 14 according to the first embodiment will be specifically described.
- FIGS. 4A to 4C are diagrams for explaining the processing of the motion information generation unit 14 according to the first embodiment.
- FIG. 4A shows an example of a distance image generated by the distance image collection unit 12.
- For convenience of explanation, an image expressed by a line drawing is shown; an actual distance image is expressed by shading of colors according to the distance.
- In the distance image, each pixel has a three-dimensional value in which a "pixel position X" in the left-right direction of the distance image, a "pixel position Y" in the up-down direction of the distance image, and the "distance Z" between the subject corresponding to the pixel and the distance image collection unit 12 are associated.
- the coordinate value of the distance image coordinate system is expressed by the three-dimensional value (X, Y, Z).
- the motion information generation unit 14 stores in advance human body patterns corresponding to various postures, for example, by learning. Each time the distance image collection unit 12 generates distance image information, the motion information generation unit 14 acquires the generated distance image information of each frame. Then, the motion information generation unit 14 performs pattern matching using a human body pattern on the acquired distance image information of each frame.
- FIG. 4B shows an example of a human body pattern.
- Since the human body pattern is a pattern used for pattern matching with the distance image information, it is expressed in the distance image coordinate system and, like the person depicted in the distance image, has information on the surface of the human body (hereinafter referred to as the "human body surface").
- the human body surface corresponds to the skin or clothing surface of the person.
- the human body pattern includes information on each joint forming the skeleton of the human body. That is, in the human body pattern, the relative positional relationship between the human body surface and each joint is known.
- the human body pattern includes information on 20 joints from joint 2a to joint 2t.
- the joint 2a corresponds to the head
- the joint 2b corresponds to the center of both shoulders
- the joint 2c corresponds to the waist
- the joint 2d corresponds to the center of the buttocks.
- the joint 2e corresponds to the right shoulder
- the joint 2f corresponds to the right elbow
- the joint 2g corresponds to the right wrist
- the joint 2h corresponds to the right hand.
- the joint 2i corresponds to the left shoulder
- the joint 2j corresponds to the left elbow
- the joint 2k corresponds to the left wrist
- the joint 2l corresponds to the left hand.
- the joint 2m corresponds to the right hip
- the joint 2n corresponds to the right knee
- the joint 2o corresponds to the right ankle
- the joint 2p corresponds to the right foot
- the joint 2q corresponds to the left hip
- the joint 2r corresponds to the left knee
- the joint 2s corresponds to the left ankle
- the joint 2t corresponds to the left foot.
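- For reference, the joint correspondence listed above can be held as a simple lookup table (a sketch; identifiers and names follow FIG. 4B):

```python
# The 20-joint correspondence of FIG. 4B as a lookup table.
JOINTS = {
    "2a": "head",            "2b": "center of both shoulders",
    "2c": "waist",           "2d": "center of the buttocks",
    "2e": "right shoulder",  "2f": "right elbow",
    "2g": "right wrist",     "2h": "right hand",
    "2i": "left shoulder",   "2j": "left elbow",
    "2k": "left wrist",      "2l": "left hand",
    "2m": "right hip",       "2n": "right knee",
    "2o": "right ankle",     "2p": "right foot",
    "2q": "left hip",        "2r": "left knee",
    "2s": "left ankle",      "2t": "left foot",
}
```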
- In FIG. 4B, the case where the human body pattern has information on 20 joints has been described; however, the embodiment is not limited to this, and the operator may arbitrarily set the positions and number of joints.
- For example, information on the joint 2b and the joint 2c among the joints 2a to 2d need not be acquired.
- Further, when capturing changes in the movement of the right hand in detail, not only the joint 2h but also the finger joints of the right hand may be newly set.
- Note that the joint 2a, the joint 2h, the joint 2l, the joint 2p, and the joint 2t in FIG. 4B differ from joints in the strict sense because they are end portions of bones, but they are important points representing the position and orientation of the bones and are therefore described here as joints for convenience.
- The motion information generation unit 14 performs pattern matching with the distance image information of each frame using such a human body pattern. For example, the motion information generation unit 14 extracts a person in a certain posture from the distance image information by pattern matching the human body surface of the human body pattern shown in FIG. 4B against the distance image shown in FIG. 4A. In this way, the motion information generation unit 14 obtains the coordinates of the human body surface depicted in the distance image. Further, as described above, in the human body pattern, the relative positional relationship between the human body surface and each joint is known. Therefore, the motion information generation unit 14 calculates the coordinates of each joint of the person from the coordinates of the human body surface depicted in the distance image. Thus, as illustrated in FIG. 4C, the motion information generation unit 14 acquires the coordinates of each joint forming the skeleton of the human body from the distance image information. Note that the coordinates of each joint obtained here are coordinates of the distance image coordinate system.
- the motion information generation unit 14 may use information representing the positional relationship of each joint as an auxiliary when performing pattern matching.
- Here, the information representing the positional relationship between joints includes, for example, the connection relationship between joints (for example, "the joint 2a and the joint 2b are connected") and the range of motion of each joint.
- a joint is a site that connects two or more bones.
- the angle between the bones changes according to the change in posture, and the range of motion differs depending on the joint.
- the range of motion is represented by the maximum and minimum values of the angles formed by the bones connected by each joint.
- the motion information generation unit 14 also learns the range of motion of each joint and stores it in association with each joint.
- the motion information generation unit 14 converts the coordinates of each joint in the distance image coordinate system into values represented in the world coordinate system.
- The world coordinate system is a coordinate system of the three-dimensional space in which the medical examination is performed.
- For example, the world coordinate system is a coordinate system in which the position of the motion information collection unit 10 is the origin, the horizontal direction is the x axis, the vertical direction is the y axis, and the direction orthogonal to the xy plane is the z axis.
- the coordinate value in the z-axis direction may be referred to as “depth”.
- the motion information generation unit 14 stores in advance a conversion formula for converting from the distance image coordinate system to the world coordinate system.
- this conversion formula receives the coordinates of the distance image coordinate system and the incident angle of the reflected light corresponding to the coordinates, and outputs the coordinates of the world coordinate system.
- For example, the motion information generation unit 14 inputs the coordinates (X1, Y1, Z1) of a certain joint and the incident angle of the reflected light corresponding to those coordinates into the conversion formula, thereby converting the coordinates (X1, Y1, Z1) into coordinates (x1, y1, z1) of the world coordinate system.
- Since the correspondence relationship between the coordinates of the distance image coordinate system and the incident angle of the reflected light is known, the motion information generation unit 14 can input the incident angle corresponding to the coordinates (X1, Y1, Z1) into the conversion formula. Although the case in which the motion information generation unit 14 converts coordinates of the distance image coordinate system into coordinates of the world coordinate system has been described here, it is also possible to convert coordinates of the world coordinate system into coordinates of the distance image coordinate system.
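- A minimal sketch of such a conversion is shown below. The patent only states that the stored formula takes the pixel coordinates and the incident angle of the reflected light; a pinhole-camera back-projection with assumed intrinsic parameters is used here instead, purely for illustration:

```python
# Illustrative back-projection from distance image coordinates (X, Y, Z) to
# world coordinates (x, y, z), assuming a pinhole-camera model (the actual
# conversion formula of the patent is not given; cx, cy, fx, fy are assumed
# intrinsic parameters of the sensor).
def distance_to_world(X: float, Y: float, Z: float,
                      cx: float, cy: float, fx: float, fy: float):
    """cx, cy: optical center [pixels]; fx, fy: focal lengths [pixels]."""
    x = (X - cx) * Z / fx   # horizontal offset scales linearly with depth
    y = -(Y - cy) * Z / fy  # image Y grows downward; world y grows upward
    z = Z                   # depth axis is taken as-is
    return (x, y, z)
```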
- FIG. 5 is a diagram illustrating an example of the skeleton information generated by the motion information generation unit 14.
- the skeleton information of each frame includes shooting time information of the frame and coordinates of each joint.
- the motion information generation unit 14 generates skeleton information in which joint identification information and coordinate information are associated with each other.
- the shooting time information is not shown.
- the joint identification information is identification information for identifying a joint and is set in advance.
- joint identification information “2a” corresponds to the head
- joint identification information “2b” corresponds to the center of both shoulders.
- each joint identification information indicates a corresponding joint.
- the coordinate information indicates the coordinates of each joint in each frame in the world coordinate system.
- For example, in the first row of FIG. 5, the joint identification information "2a" and the coordinate information "(x1, y1, z1)" are associated with each other. That is, the skeleton information in FIG. 5 indicates that the head is present at the coordinates (x1, y1, z1) in a certain frame. In the second row of FIG. 5, the joint identification information "2b" and the coordinate information "(x2, y2, z2)" are associated, indicating that the center of both shoulders is present at the coordinates (x2, y2, z2) in that frame. Similarly, for the other joints, the skeleton information indicates that each joint is present at the position represented by its coordinates in that frame.
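- The skeleton information of FIG. 5 can be pictured as the following per-frame structure (a sketch; field names and values are illustrative):

```python
# One frame of skeleton information: joint identification information
# associated with world coordinates, plus shooting time information.
frame_skeleton = {
    "shooting_time": "2013-12-25T10:15:30.033",  # illustrative timestamp
    "joints": {
        "2a": (0.02, 1.52, 2.10),   # head at (x1, y1, z1)
        "2b": (0.01, 1.28, 2.11),   # center of both shoulders at (x2, y2, z2)
        # ... remaining joints "2c" to "2t"
    },
}

# Motion information is then skeleton information over consecutive frames:
motion_information = [frame_skeleton]  # append one entry per frame
```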
- As described above, the motion information generation unit 14 performs pattern matching on the distance image information of each frame and converts the coordinates from the distance image coordinate system into the world coordinate system, thereby generating skeleton information for each frame. Then, the motion information generation unit 14 outputs the generated skeleton information of each frame to the doctor terminal 100 and stores it in the motion information storage unit 131 described later.
- Note that the processing of the motion information generation unit 14 is not limited to the method described above.
- the method in which the motion information generation unit 14 performs pattern matching using a human body pattern has been described, but the embodiment is not limited thereto.
- a pattern matching method using a pattern for each part may be used instead of the human body pattern or together with the human body pattern.
- the motion information generation unit 14 may obtain a coordinate of each joint using color image information together with distance image information.
- the motion information generation unit 14 performs pattern matching between the human body pattern expressed in the color image coordinate system and the color image information, and obtains the coordinates of the human body surface from the color image information.
- the coordinate system of this color image does not include the “distance Z” information referred to in the distance image coordinate system. Therefore, for example, the motion information generation unit 14 obtains the information of “distance Z” from the distance image information, and obtains the coordinates of the world coordinate system of each joint by calculation processing using these two pieces of information.
- The motion information generation unit 14 appropriately outputs the color image information generated by the color image collection unit 11, the distance image information generated by the distance image collection unit 12, and the voice recognition result output by the voice recognition unit 13 to the doctor terminal 100 as necessary, and stores them in the motion information storage unit 131 described later.
- the pixel position of the color image information and the pixel position of the distance image information can be associated in advance according to the positions of the color image collection unit 11 and the distance image collection unit 12 and the shooting direction. For this reason, the pixel position of the color image information and the pixel position of the distance image information can be associated with the world coordinate system calculated by the motion information generation unit 14.
- By using this correspondence and the distance image information, it is also possible to calculate the height and the length of each part of the body (for example, the length of an arm or the length of the abdomen) and to calculate the distance between two points (two pixels) specified on the color image.
- the shooting time information of the color image information and the shooting time information of the distance image information can be associated in advance.
- For example, the motion information generation unit 14 refers to the voice recognition result and the distance image information, and if the joint 2a (the head) is present near the direction from which a voice-recognized word was uttered at a certain time, the unit can output that word as a word uttered by the person including the joint 2a.
- the motion information generation unit 14 also appropriately outputs information representing the positional relationship between the joints to the doctor terminal 100 as necessary, and stores the information in the motion information storage unit 131 described later.
- The motion information collection unit 10 may detect the motions of a plurality of subjects. In such a case, the motion information generation unit 14 generates skeleton information of the plurality of persons from the distance image information of the same frame, and outputs information in which the generated pieces of skeleton information are associated with one another to the doctor terminal 100 as motion information.
- The configuration of the motion information collection unit 10 is not limited to the above configuration.
- For example, when motion information is generated by detecting a person's motion with another type of motion capture, such as an optical, mechanical, or magnetic type, the motion information collection unit 10 does not necessarily have to include the distance image collection unit 12.
- the motion information collection unit 10 includes, as motion sensors, a marker that is worn on the human body to detect the motion of the person, and a sensor that detects the marker. Then, the motion information collection unit 10 detects motion of a person using a motion sensor and generates motion information.
- At this time, the motion information collection unit 10 associates the pixel positions of the color image information with the coordinates of the motion information using the positions of the markers included in the image photographed by the color image collection unit 11, and outputs the result to the doctor terminal 100 as necessary.
- the motion information collection unit 10 may not include the speech recognition unit 13 when the speech recognition result is not output to the doctor terminal 100.
- the motion information collection unit 10 outputs the coordinates of the world coordinate system as the skeleton information, but the embodiment is not limited to this.
- the motion information collection unit 10 may output the coordinates of the distance image coordinate system before conversion, and the conversion from the distance image coordinate system to the world coordinate system may be performed on the doctor terminal 100 side as necessary.
- The doctor terminal 100 uses the motion information output from the motion information collection unit 10 to perform processing for assisting the input of electronic medical record information (chart information).
- the doctor terminal 100 is an information processing device such as a computer or a workstation, for example, and includes an output unit 110, an input unit 120, a storage unit 130, and a control unit 140, as shown in FIG.
- the doctor terminal 100 includes a communication unit (not shown) and performs communication with the reception terminal 200, the server device 300, and the like.
- the output unit 110 outputs various information for creating an electronic medical record.
- For example, the output unit 110 displays a GUI (Graphical User Interface) with which the operator (doctor) of the doctor terminal 100 inputs various requests using the input unit 120 to create an electronic medical record, and displays output images generated in the doctor terminal 100.
- the output unit 110 is a monitor, a speaker, or the like.
- the input unit 120 receives input of various information for creating an electronic medical record.
- the input unit 120 accepts input of various requests (for example, a request for reading an electronic medical record from the server device 300 or a request for inputting medical record information to the electronic medical record) from the operator (doctor) of the doctor terminal 100, The received various requests are transferred to the doctor terminal 100.
- the input unit 120 accepts information such as a patient ID (name number) of a patient as a reading request of the electronic medical record.
- the input unit 120 is, for example, a mouse, a keyboard, a touch command screen, a trackball, or the like.
- The storage unit 130 is, for example, a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk device or an optical disk device.
- the control unit 140 can be realized by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array) or a CPU (Central Processing Unit) executing a predetermined program.
- The doctor terminal 100 according to the first embodiment extracts the affected part by analyzing the motion information of the patient (the target of motion acquisition) collected by the motion information collection unit 10, and selects related information related to the extracted affected part, thereby making it easy to input information into the electronic medical record.
- In the following, a case where a schema is selected as an example of the related information related to an affected part will be described.
- the schema is a schematic diagram showing the positional relationship between the human body structure and the affected part.
- FIG. 6 is a diagram illustrating an example of a detailed configuration of the doctor terminal 100 according to the first embodiment.
- the storage unit 130 includes an operation information storage unit 131, an extraction condition storage unit 132, and a medical record information storage unit 133.
- The motion information storage unit 131 stores various types of information collected by the motion information collection unit 10. Specifically, the motion information storage unit 131 stores the motion information generated by the motion information generation unit 14. More specifically, the motion information storage unit 131 stores the skeleton information of each frame generated by the motion information generation unit 14. Here, the motion information storage unit 131 can further store the color image information, the distance image information, and the voice recognition result output by the motion information generation unit 14 in association with one another.
- The extraction condition storage unit 132 stores extraction conditions for extracting the affected part of a patient from the motion information. Specifically, the extraction condition storage unit 132 stores an extraction condition for extracting the affected part based on information on a hand joint in the motion information. For example, the extraction condition storage unit 132 stores an extraction condition under which, in the joint position information of the subject's motion information, when the distance from the position of a hand joint at a predetermined timing to a line segment connecting two joints other than the hand joint is equal to or less than a predetermined threshold, the part corresponding to the position on the shortest such line segment is taken as the affected part.
- Here, the extraction condition storage unit 132 stores, as the predetermined timing, a timing at which predetermined audio information is acquired or a timing at which the movement of the hand joint has stopped for a certain period of time. A detailed example of the extraction condition described above will be described later.
- Further, the extraction condition storage unit 132 stores an extraction condition under which, in the joint position information of the subject's motion information, the part corresponding to a predetermined movement range of the hand joint position is taken as the affected part.
- For example, the extraction condition storage unit 132 stores an extraction condition under which, when the hand joint moves back and forth within a certain area or stays within a certain area for a certain period of time, the part corresponding to that area is taken as the affected part.
- The chart information storage unit 133 stores the chart information of electronic charts acquired from the server device 300 via a communication unit (not shown). Specifically, the chart information storage unit 133 stores the patient's chart information acquired from the server device 300 by the communication unit in response to an electronic chart read request received from the operator (doctor) via the input unit 120. For example, the chart information storage unit 133 stores the patient's past chart information corresponding to the read request and the chart information stored under the control of the control unit 140 described later. In other words, the chart information storage unit 133 stores the electronic chart read by the doctor terminal 100 from the server device 300 during diagnosis. The chart information stored in the chart information storage unit 133 is transmitted to and stored in the server device 300 via the communication unit after the diagnosis is completed. A detailed example of the chart information stored in the chart information storage unit 133 will be described later.
- The control unit 140 includes an acquisition unit 141, an extraction unit 142, a selection unit 143, a mark assigning unit 144, a display control unit 145, and a chart information storage unit 146, and, using the various information stored in the storage unit 130, facilitates the input of electronic medical record information by assisting in the input of chart information.
- the acquisition unit 141 acquires motion information of a target person (patient) that is a target of motion acquisition. Specifically, the acquisition unit 141 acquires patient motion information collected by the motion information collection unit 10 and stored in the motion information storage unit 131. More specifically, the acquisition unit 141 acquires patient skeleton information stored for each frame by the motion information storage unit 131.
- the acquisition unit 141 acquires the skeleton information of the operation corresponding to the time point when the patient who visited the hospital made a chief complaint such as a symptom or an affected part during the examination. For example, the acquisition unit 141 acquires the skeleton information of the patient from the frame at the time when the medical inquiry by the doctor is started to the frame at the time when the medical examination is completed. The acquisition of the skeletal information by the acquisition unit 141 is performed each time a patient's examination is started.
- the extraction unit 142 extracts the affected part based on the joint position information in the motion information of the subject (patient) acquired by the acquisition unit 141. Specifically, the extraction unit 142 refers to the extraction condition stored by the extraction condition storage unit 132 and extracts the affected part from the skeleton information of the patient acquired by the acquisition unit 141.
- FIG. 7 is a diagram for explaining an example of processing by the extraction unit 142 according to the first embodiment.
- (A) of FIG. 7 shows the color image information and joint position information of the patient collected by the motion information collection unit 10.
- FIG. 7B shows a first example in which the affected part is extracted from the information shown in FIG. 7A.
- FIG. 7C shows a second example in which the affected part is extracted from the information shown in FIG. 7A.
- The first example is a case where, in the joint position information of the patient's motion information (skeleton information), when the distance from the position of a hand joint at a predetermined timing to a line segment connecting two joints other than the hand joint is equal to or less than a predetermined threshold, the region corresponding to the position on the shortest such line segment is taken as the affected part.
- the extraction unit 142 monitors patient skeleton information acquired in real time in a time-series order from the frame at the time when the diagnosis is started by the acquisition unit 141, and extracts the affected part from the position of the hand joint at a predetermined timing.
- As the predetermined timing, the extraction unit 142 can use, for example, the point in time when a word indicating a human body part, such as "my right arm hurts," or a pronoun, such as "it hurts here," is uttered by the patient. That is, the extraction unit 142 uses as the predetermined timing the time at which the corresponding word is recognized in the voice recognition result generated by the voice recognition unit 13.
- Further, as the predetermined timing, the extraction unit 142 can use, for example, the point in time when the hand joint has stopped for a certain period of time at a position within a predetermined distance of a bone. That is, the extraction unit 142 uses as the predetermined timing the time when a certain place on the surface of the body has been touched or pressed for a certain time. Whether the hand joint has stopped for a certain time at a position within the predetermined distance of a bone can be determined by analyzing changes in the coordinates of the hand joint position in the skeleton information acquired by the acquisition unit 141.
- the allowable amount of coordinate change when determining that the joint of the hand is stopped and the stop time used for the determination can be arbitrarily set. For example, it may be set such that the allowable amount of coordinate change and the stop time are changed according to the age of the patient.
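- A minimal sketch of this "hand stopped" check is shown below, assuming a fixed coordinate-change tolerance and stop time (both arbitrary, as noted above):

```python
import numpy as np

# Sketch of the "hand joint stopped for a certain time" timing check: the hand
# is considered stopped if its coordinate change stays within a tolerance over
# a window of frames (tolerance and duration are illustrative settings).
def hand_is_stopped(hand_positions: np.ndarray, fs: float,
                    tol_m: float = 0.03, stop_time_s: float = 1.0) -> bool:
    """hand_positions: (n_frames, 3) world coordinates of the hand joint."""
    window = int(stop_time_s * fs)
    if len(hand_positions) < window:
        return False
    recent = hand_positions[-window:]
    # Spread of the recent positions around their mean; small spread = stopped.
    spread = np.linalg.norm(recent - recent.mean(axis=0), axis=1)
    return bool(np.all(spread <= tol_m))
```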
- Note that the predetermined timing described above is merely an example, and other timings may also be applied arbitrarily.
- For example, in addition to words uttered by the patient, the predetermined timing may be the time when the doctor utters a word prompting the patient to touch the affected part.
- the extraction unit 142 extracts the affected part using the position information of the hand joint at the timing described above. For example, as illustrated in (B) of FIG. 7, the extraction unit 142 extracts the affected part using the position information of the joint identification information “2l” corresponding to the left hand and the information on the surrounding bone. That is, the extraction unit 142 first acquires coordinate information corresponding to the joint identification information “2l” of the left hand.
- Then, the extraction unit 142 calculates the coordinate information of the parts corresponding to bones from the coordinate information of the joint identification information of the other joints in the same frame. For example, the extraction unit 142 calculates the coordinate information of the bone between "2h" and "2g" shown in FIG. 7B, the bone between "2g" and "2f", the bone between "2f" and "2e", the bone between "2e" and "2b", the bone between "2b" and "2i", the bone between "2i" and "2j", and the bone between "2j" and "2k".
- the extraction unit 142 calculates the coordinate information of the bone not connected to the joint identification information “2l” corresponding to the left hand. That is, the extraction unit 142 calculates the coordinate information of the bone that can be easily touched by the hand. Then, the extraction unit 142 calculates the distance from the left hand to each bone using the coordinate information of the left hand and the calculated coordinate information of each bone.
- For example, the extraction unit 142 calculates the distances from the left hand "2l" to the bone between "2h" and "2g", the bone between "2g" and "2f", the bone between "2f" and "2e", the bone between "2e" and "2b", the bone between "2b" and "2i", the bone between "2i" and "2j", and the bone between "2j" and "2k".
- Then, the extraction unit 142 extracts the bone whose calculated distance is equal to or less than a predetermined threshold and is the shortest. For example, the extraction unit 142 extracts the bone between "2g" and "2f", whose distance from "2l" is the minimum and equal to or less than the predetermined threshold. The extraction unit 142 then projects the tip of the hand onto the extracted bone and calculates the position on the bone. For example, as shown in FIG. 7B, the extraction unit 142 drops a perpendicular (dotted line) from "2l" onto the bone between "2g" and "2f", and calculates the projected position (the intersection of the bone between "2g" and "2f" and the dotted line). After that, the extraction unit 142 calculates the ratio of distances along the bone by calculating the distance from each joint to the intersection.
- That is, the extraction unit 142 calculates, with respect to the distance from "2g" to "2f" shown in FIG. 7B, the ratio between the distance from "2g" to the intersection and the distance from the intersection to "2f". Then, the extraction unit 142 extracts the position at the calculated ratio along the bone from "2g" to "2f" as the affected part.
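- The first extraction example can be sketched as follows: project the hand joint onto each candidate bone segment, keep the nearest bone within the threshold, and express the affected position as a ratio along that bone (the threshold value and data layout are illustrative assumptions):

```python
import numpy as np

def project_onto_bone(hand, j1, j2):
    """Return (distance from hand to the segment j1-j2, ratio of the foot point from j1 toward j2)."""
    hand, j1, j2 = map(np.asarray, (hand, j1, j2))
    seg = j2 - j1
    t = np.dot(hand - j1, seg) / np.dot(seg, seg)
    t = np.clip(t, 0.0, 1.0)            # clamp the foot point to the segment (sketch choice)
    foot = j1 + t * seg                 # projected position on the bone
    return float(np.linalg.norm(hand - foot)), float(t)

def extract_affected_part(hand, bones, threshold=0.10):
    """bones: dict like {("2g", "2f"): (p1, p2), ...}; returns (bone, distance, ratio) of the nearest hit."""
    best = None
    for name, (p1, p2) in bones.items():
        d, ratio = project_onto_bone(hand, p1, p2)
        if d <= threshold and (best is None or d < best[1]):
            best = (name, d, ratio)
    return best  # e.g. (("2g", "2f"), 0.04, 0.35): 35% of the way from 2g to 2f
```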
- In the second example, the extraction unit 142 extracts the affected part from the movement range of the hand joint.
- For example, as shown in FIG. 7C, when the left hand "2l" moves in a reciprocating manner around the joint identification information "2j" corresponding to the left elbow, the extraction unit 142 calculates the coordinate information of the stop positions of the left hand "2l" at both ends of the reciprocating movement.
- The extraction unit 142 then takes the calculated coordinates of both ends as both ends of the affected area and extracts the region between them as the affected part. That is, the extraction unit 142 extracts the range indicated by the double-headed arrow in FIG. 7C as the affected part.
- the extraction unit 142 can be set to start calculating the distance between the hand joint and the bone at a predetermined timing.
- That is, the extraction unit 142 obtains the coordinate information of joints other than the hand joint at the predetermined timing (determined by voice recognition or by monitoring the movement of the hand joint), and calculates the distance between the hand and each bone.
- the extraction unit 142 extracts a bone whose distance from the joint of the hand is equal to or less than a predetermined threshold and is the shortest, and analyzes the movement state of the joint of the hand with respect to the extracted bone.
- In such a case, the extraction unit 142 extracts as the affected part only the range over which the hand moves while the distance between the hand and the bone remains at or below the predetermined threshold.
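- A minimal sketch of this second extraction example, taking the two turning points of the reciprocating hand motion as both ends of the affected area (the turning-point detection here is a simplification, not the patent's stated procedure):

```python
import numpy as np

def affected_range(hand_track: np.ndarray):
    """hand_track: (n_frames, 3) hand coordinates during the reciprocation; returns both ends."""
    center = hand_track.mean(axis=0)
    # Principal direction of the motion via SVD of the centered track.
    _, _, vt = np.linalg.svd(hand_track - center)
    axis = vt[0]
    # Signed position of each sample along the principal direction.
    s = (hand_track - center) @ axis
    end_a = hand_track[int(np.argmin(s))]   # one end of the reciprocation
    end_b = hand_track[int(np.argmax(s))]   # the other end
    return end_a, end_b                     # both ends of the affected area
```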
- the selection unit 143 selects related information related to the affected part extracted by the extraction unit 142.
- the selection unit 143 selects a schema for indicating the positional relationship between the human body structure and the affected part extracted by the extraction unit 142 as the related information.
- FIGS. 8A to 8C are diagrams for explaining examples of processing performed by the selection unit 143 according to the first embodiment.
- FIGS. 8A to 8C show cases where a schema is selected for three patterns in which a certain position has been extracted as the affected part by the extraction unit 142.
- the extraction of the affected area by the extraction unit 142 is the same as the method described above.
- For example, as shown in FIG. 8A, when the position indicated by the circle between the joint identification information "2f" corresponding to the right elbow and the joint identification information "2g" corresponding to the right wrist is extracted as the affected part, the selection unit 143 selects the schema of the front of the whole body including that position.
- Here, the selection unit 143 selects a schema by combining information on the medical department in which the patient is undergoing care with information on the doctor's specialty.
- That is, the selection unit 143 selects a schema taking into account the differences in the schemas used in each department and each doctor's preferred usage. Note that the information relating to schema selection is set in advance and stored in the storage unit 130.
- In other words, based on the information on the affected part extracted by the extraction unit 142, the information relating to schema selection, and information such as the medical department the patient is visiting and the doctor's specialty, the selection unit 143 selects the optimal schema to be entered in the medical chart.
- For example, the selection unit 143 first selects the schemas of the part including the affected part, and then selects, from the plurality of schemas corresponding to the selected part, the schema that best expresses the positional relationship of the affected part.
- For example, when the affected part is in the head, the selection unit 143 selects the plurality of schemas indicating the head (for example, see FIG. 2B). Then, the selection unit 143 determines from the positional relationship of the coordinates of the left hand joint "2l" with respect to the coordinates of the head "2a" that the affected part is on the front of the head, and selects the schema of the front of the head from among the plurality of head schemas.
- In this way, the selection unit 143 selects a schema based on information on the schemas corresponding to the affected part extracted by the extraction unit 142 and on information on the department the patient is visiting and the doctor's specialty.
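- A minimal sketch of this schema selection, assuming preset candidate tables and department/doctor preferences (all table contents and keys are illustrative; the patent stores such settings in the storage unit 130):

```python
# Candidate schemas per body part, and preset department/doctor preferences
# (illustrative tables; the real settings live in the storage unit 130).
SCHEMAS_BY_PART = {
    "forearm": ["whole_body_front", "right_arm_detail", "left_arm_detail"],
    "head":    ["head_front", "head_side", "head_top"],
}
PREFERENCE = {("orthopedics", "Dr. A"): {"forearm": "whole_body_front"}}

def select_schema(part: str, department: str, doctor: str) -> str:
    candidates = SCHEMAS_BY_PART[part]
    preferred = PREFERENCE.get((department, doctor), {}).get(part)
    # Use the department/doctor preference when it exists, else the default.
    return preferred if preferred in candidates else candidates[0]
```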
- the mark assigning unit 144 assigns information indicating the affected part position to the position corresponding to the affected part in the schema selected by the selecting unit 143.
- FIGS. 9A to 9C are diagrams illustrating examples of processing performed by the mark assigning unit 144 according to the first embodiment. FIGS. 9A to 9C show cases where information (marks) indicating the position of the affected part is given to the schemas selected in FIGS. 8A to 8C, respectively.
- the mark assigning unit 144 assigns the mark M1 to the right elbow portion of the schema on the front of the whole body selected by the selection unit 143.
- Here, the mark assigning unit 144 assigns the mark on the schema using the ratio information along the bone calculated by the extraction unit 142.
- Similarly, as shown in FIGS. 9B and 9C, the mark assigning unit 144 assigns a mark M2 and a mark M3 to the respective schemas using the ratio information along the bone calculated by the extraction unit 142.
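- A minimal sketch of this mark assignment, reusing the ratio along the bone computed in 3D to place the mark along the corresponding bone drawn on the 2D schema (the pixel coordinates are illustrative assumptions):

```python
def place_mark(schema_bone_px, ratio):
    """schema_bone_px: ((x1, y1), (x2, y2)) endpoints of the bone on the schema image."""
    (x1, y1), (x2, y2) = schema_bone_px
    # Interpolate along the drawn bone by the same ratio computed in 3D.
    return (x1 + ratio * (x2 - x1), y1 + ratio * (y2 - y1))

# Example: a mark 35% of the way along the forearm bone drawn on the schema.
mark_xy = place_mark(((120, 340), (160, 260)), 0.35)
```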
- FIG. 10 is a diagram illustrating an example of schema selection and mark assignment according to the first embodiment.
- FIG. 10 shows schema selection and mark assignment for the affected area extracted in FIG. 7C.
- As shown in FIG. 10, the selection unit 143 selects the front schema of the whole body based on the information on the affected area (the area indicated by the double-headed arrow), the information relating to schema selection, the information on the clinical department, and the information on the doctor's specialty.
- the selection of the schema by the selection unit 143 is the same as the processing described above.
- Then, the mark assigning unit 144 assigns a mark M4 indicating the affected part to the left arm region on the schema selected by the selection unit 143.
- That is, the mark assigning unit 144 determines the region on the schema to which the mark is to be applied based on the information on the affected region extracted by the extraction unit 142 (the coordinate information of the region indicated by the double-headed arrow) and the like, and assigns the mark to the determined region.
- the display control unit 145 causes the output unit 110 to display the related information selected by the selection unit 143. Specifically, the display control unit 145 causes the output unit 110 to display the schema added with the mark by the mark adding unit 144 on the schema selected by the selection unit 143.
- the display control unit 145 extracts, from the past schemas included in the electronic medical record of the patient whose affected part was extracted, a schema in which the position marked by the mark assigning unit 144 substantially matches the affected part, and displays the extracted schema on the output unit 110.
- FIG. 11 is a diagram for explaining an example of display control by the display control unit 145 according to the first embodiment.
- the display control unit 145 reads out, from the chart information in the patient's electronic medical record, a schema bearing a mark close in position to the mark added this time to the currently selected schema, and displays it on the output unit 110.
- the display control unit 145 acquires the chart information of the schemas stored in the same patient's electronic medical record and displays on the output unit 110 a previously stored schema in which a mark was placed at a position close to that of the mark added this time.
- the display control unit 145 can also read out chart information containing the same schema as the currently selected one and display it on the output unit 110. In addition, when a plurality of pieces of chart information contain the same schema as the currently selected one, or contain schemas marked at the same position as the current mark, the display control unit 145 can be controlled either to display only the latest schema or to display all of them.
- the chart information storage unit 146 stores in the storage unit at least one of the voice of the chief complaint of the patient whose affected part was extracted and an image showing the position of the patient's affected part, in association with the related information.
- the chart information storage unit 146 stores the voice information of the chief complaint uttered by the patient in the chart information storage unit 133 in association with the schema information.
- FIG. 12 is a diagram for explaining an example of processing by the chart information storage unit 146 according to the first embodiment.
- the acquisition unit 141, the extraction unit 142, the selection unit 143, and the mark assigning unit 144 perform the above-described processing, thereby adding a mark to the schema.
- the chart information storage unit 146 stores, in the chart information storage unit 133, a schema in which a link to the voice data of the chief complaint “the elbow is uncomfortable and hurts when bent” (contained in the voice recognition result generated by the voice recognition unit 13) is attached to the mark added by the mark assigning unit 144.
- the chart information storage unit 146 stores, in the chart information storage unit 133, chart information in which the voice file, schema, and mark ID are associated with the patient ID. That is, the chart information storage unit 146 stores the chart information “patient ID: 100033, audio file: 2012-07-02-0005.mp3, schema: whole body, mark ID: 00000021” in the chart information storage unit 133, and at that time a voice link is attached to the mark ID.
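The stored record above maps naturally onto a small data structure. A minimal sketch of the association, using the example values from the text; the class and field names are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChartRecord:
    patient_id: str
    schema: str                       # e.g. "whole body"
    mark_id: str                      # ID of the mark placed on the schema
    audio_file: Optional[str] = None  # chief-complaint voice file linked to the mark

record = ChartRecord(
    patient_id="100033",
    schema="whole body",
    mark_id="00000021",
    audio_file="2012-07-02-0005.mp3",  # the voice link attached to the mark ID
)
print(record)
```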
- the doctor terminal 100 can extract the affected part from the motion information collected when the patient is examined, select the optimum schema for the extracted affected part, and add the mark. Therefore, the schema selection and mark assignment work that the doctor conventionally performed can be omitted, and the doctor terminal 100 according to the first embodiment makes it easier to input information into the electronic medical record.
- FIG. 13 is a flowchart illustrating a processing procedure performed by the doctor terminal 100 according to the first embodiment.
- although FIG. 13 shows the process including the display of a past schema, the past schema need not be displayed.
- when the voice storage mode is ON (Yes at step S101), the motion information collection unit 10 acquires the voice of the patient's chief complaint (step S102). If the voice storage mode is OFF (No at step S101), the process proceeds to step S103 without executing step S102.
- next, information about the subject (patient) is acquired (step S103), and the extraction unit 142 extracts the affected part from the patient motion information collected by the motion information collection unit 10 (step S104). Then, the selection unit 143 selects a schema corresponding to the affected part extracted by the extraction unit 142 (step S105), and the mark assigning unit 144 adds either a normal mark or a mark linked to the voice of the chief complaint to the schema (step S106).
- the display control unit 145 displays the schema (step S107), further displays the past schema (step S108), and it is determined whether or not a save operation has been accepted (step S109).
- when the input unit 120 receives a save operation from the operator (doctor) (Yes at step S109), the chart information storage unit 146 determines whether or not the voice storage mode is set (step S110).
- if the voice storage mode is not set (No at step S110), the chart information storage unit 146 stores the chart information in the chart information storage unit 133 (step S111), and the process ends.
- if the voice storage mode is set (Yes at step S110), the chart information storage unit 146 stores the chart information associated with the voice data in the chart information storage unit 133 (step S112), and the process ends.
- the doctor terminal 100 continues displaying the schema until the save operation is accepted (No at step S109).
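The procedure of FIG. 13 can be summarized in pseudocode. This is a sketch of the control flow only; all the callables passed in are hypothetical stand-ins for the units described above.

```python
def doctor_terminal_flow(voice_storage_mode, get_voice, get_patient_info,
                         extract_affected, select_schema, add_mark,
                         display, display_past, save_requested, store_chart):
    """A sketch of steps S101-S112 of FIG. 13; every callable is a hypothetical stand-in."""
    voice = get_voice() if voice_storage_mode else None   # S101-S102
    patient = get_patient_info()                          # S103
    affected = extract_affected(patient)                  # S104
    schema = select_schema(affected)                      # S105
    marked = add_mark(schema, affected, voice)            # S106: normal or voice-linked mark
    display(marked)                                       # S107
    display_past(marked)                                  # S108
    while not save_requested():                           # S109: keep displaying until saved
        pass
    if voice_storage_mode:                                # S110
        store_chart(marked, voice=voice)                  # S112: chart info with voice data
    else:
        store_chart(marked)                               # S111: chart info only
```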
- as described above, the acquisition unit 141 acquires skeleton information including position information of the joints of the patient whose motion is to be acquired. Then, the extraction unit 142 extracts the affected part based on the joint position information in the patient skeleton information acquired by the acquisition unit 141. Then, the selection unit 143 selects a schema related to the affected part extracted by the extraction unit 142. Then, the display control unit 145 controls the output unit 110 to display the schema selected by the selection unit 143. Therefore, the doctor terminal 100 according to the first embodiment can omit the selection work involved in entering chart information into the electronic medical record, making information input easier.
- the extraction unit 142 extracts, as the affected part, the part corresponding to the position on a bone connecting two joints other than the hand joint that is nearest to the position of the hand joint at a predetermined timing, provided that distance is equal to or less than a predetermined threshold, using the joint position information in the patient's skeleton information. Therefore, the doctor terminal 100 according to the first embodiment can extract the affected part based on the action the patient takes during the examination, enabling accurate chart information selection processing.
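As a concrete illustration of this nearest-bone test, the sketch below computes the shortest distance from the hand joint to each candidate bone segment and keeps the nearest one if it falls within a threshold. The 5 cm threshold, the function names, and the coordinates are assumptions for illustration only.

```python
import numpy as np

def distance_point_to_segment(p, a, b):
    """Shortest distance from point p to the bone segment a-b, plus the ratio along it."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab = b - a
    t = float(np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab))), t

def extract_affected_part(hand_xyz, bones, threshold=0.05):
    """bones: {name: (joint_a_xyz, joint_b_xyz)} connecting joints other than the hand.
    Returns (bone_name, ratio_along_bone) for the nearest bone within `threshold` m."""
    best_name, best_dist, best_t = None, float("inf"), None
    for name, (a, b) in bones.items():
        d, t = distance_point_to_segment(hand_xyz, a, b)
        if d < best_dist:
            best_name, best_dist, best_t = name, d, t
    return (best_name, best_t) if best_dist <= threshold else None

# The right hand (2h) resting near the elbow end of the upper-arm bone 2e-2f.
bones = {"right_upper_arm": ((0.30, 1.40, 2.0), (0.35, 0.95, 2.0))}
print(extract_affected_part((0.35, 0.97, 2.0), bones))
```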
- the extraction unit 142 extracts, as the affected part, the part corresponding to a predetermined movement range of the hand joint position in the joint position information of the patient's skeleton information. Therefore, the doctor terminal 100 according to the first embodiment can also handle cases where the affected part covers a wide range.
- the selection unit 143 selects the schema of the part extracted by the extraction unit 142 as the related information. Therefore, the doctor terminal 100 according to the first embodiment can omit a selection operation that tends to be complicated when inputting information into the electronic medical record, making information input easier.
- the mark assigning unit 144 adds information indicating the affected part position at the position corresponding to the affected part in the schema selected by the selection unit 143. Then, the display control unit 145 controls the output unit 110 to display the schema to which the information indicating the affected part position has been added by the mark assigning unit 144. Therefore, the doctor terminal 100 according to the first embodiment can omit the work of adding a mark on the schema, further easing information input into the electronic medical record.
- the display control unit 145 extracts, from the past schemas included in the electronic medical record of the patient whose affected part was extracted, a schema in which the position marked with the affected part information by the mark assigning unit 144 substantially matches the affected part, and controls the output unit 110 to display the extracted schema. Therefore, the doctor terminal 100 according to the first embodiment can automatically read out past medical data that needs to be compared with the current consultation.
- the chart information storage unit 146 stores, in the chart information storage unit 133, at least one of the voice of the chief complaint of the patient whose affected part was extracted and an image showing the position of the patient's affected part, in association with the related information. Therefore, the doctor terminal 100 according to the first embodiment makes it possible to save information provided by the patient during the consultation as audio or video.
- the selection unit 143 selects, as the related information, items of the electronic medical record related to the affected part extracted by the extraction unit 142.
- FIG. 14 is a diagram for explaining an example of processing by the selection unit 143 according to the second embodiment. For example, as shown in FIG. 14, the selection unit 143 selects the heading “Auscultation Result” to be displayed in the findings column of the electronic medical record, based on the content of the doctor's examination of the patient (the act of placing a stethoscope on the chest).
- the selection unit 143 selects the auscultation result input screen for entering auscultation results, based on the content of the doctor's examination of the patient (the act of applying a stethoscope to the chest).
- the display control unit 145 controls the output unit 110 to display the electronic medical record item selected by the selection unit 143.
- when extracting a doctor's action on a patient, the doctor terminal 100 acquires both the patient's motion information and the doctor's motion information for each frame. That is, the acquisition unit 141 acquires the patient's skeleton information and the doctor's skeleton information for each frame collected by the motion information collection unit 10. Then, the extraction unit 142 extracts the doctor's action (examination content) toward the patient from the patient's skeleton information (coordinate information of each joint) and the doctor's skeleton information (coordinate information of each joint) acquired by the acquisition unit 141.
- the extraction unit 142 extracts the doctor's action (examination content) toward the patient from the positional relationship of the coordinates of the doctor's hand joints with respect to the coordinates of each joint of the patient.
- information for extracting the doctor's action (examination content) is set in advance and stored in the storage unit 130.
- for example, the storage unit 130 stores information indicating that a stethoscope is being applied to the chest when the doctor's hand joint moves around the patient's chest with temporary pauses.
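One way to realize such a stored rule is a dwell test: the doctor's hand joint staying near the patient's chest for a run of consecutive frames. A minimal sketch under assumed values; the chest position could, for instance, be approximated from the patient's shoulder-center joint 2b, and the radius and frame counts are illustrative.

```python
import numpy as np

def is_auscultation(doctor_hand_track, patient_chest_xyz,
                    near_radius=0.10, min_pause_frames=15):
    """True if the doctor's hand joint stays within `near_radius` meters of the
    patient's chest for at least `min_pause_frames` consecutive frames (a pause)."""
    chest = np.asarray(patient_chest_xyz, float)
    run = 0
    for hand in doctor_hand_track:  # one hand-joint coordinate per frame
        if np.linalg.norm(np.asarray(hand, float) - chest) <= near_radius:
            run += 1
            if run >= min_pause_frames:
                return True
        else:
            run = 0
    return False
```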
- it is also possible to determine that the doctor is performing auscultation by detecting the stethoscope itself by pattern matching.
- any method may be used to distinguish the patient from the doctor in the motion information.
- as described above, the selection unit 143 selects, as the related information, the items of the electronic medical record related to the affected part extracted by the extraction unit 142. Therefore, the doctor terminal 100 according to the second embodiment can omit the doctor's selection of the various items involved in creating the electronic medical record, making information input easier.
- the embodiment is not limited to this; an image may also be stored in association with the chart information. For example, the chart information storage unit 146 stores, as a still image or a moving image, an image of the patient touching the affected part taken from the color images collected by the color image collection unit 11 of the motion information collection unit 10.
- in the above embodiments, the motion information collection unit 10 collects motion information under fixed collection conditions; however, the embodiment is not limited to this, and the collection conditions may be changed according to the information on the affected part extracted by the extraction unit 142.
- FIG. 15 is a diagram for explaining the motion-information collection condition changing processing by the doctor terminal 100 according to the third embodiment.
- the doctor terminal 100 controls the motion information collection unit 10 to change the camera direction and zoom.
- the doctor terminal 100 changes the camera direction so that the affected part of the right arm is located at the center of the screen, and further controls the camera to enlarge the affected part so that it is captured at an appropriate size.
- the doctor terminal 100 can also perform control to cut out and save the image area extracted as the affected part from the captured color image.
- based on the extracted information on the affected part, the doctor terminal 100 can measure, for example, the area and color of the affected part in the color image and compare them with previously measured results.
- the doctor terminal 100 can also perform control to photograph the current affected part at the same magnification as in past imaging and display the two affected-part images side by side.
- each process may be executed by the server device 300 on the network.
- the server device 300 performs the same processing as the doctor terminal 100 and provides the results to the doctor terminal 100.
- the server device 300 includes an acquisition unit 141, an extraction unit 142, a selection unit 143, and a display control unit 145.
- the acquisition unit 141 acquires skeleton information including position information of a joint of a patient that is a target of motion acquisition.
- the extraction unit 142 extracts the affected part based on the joint position information in the patient skeleton information acquired by the acquisition unit 141.
- the selection unit 143 selects a schema related to the affected part extracted by the extraction unit 142.
- the display control unit 145 performs control so that the schema selected by the selection unit 143 is displayed on the output unit 110 of the doctor terminal 100.
- in the above embodiments, the case where the doctor terminal 100 extracts an affected part and selects and displays related information concerning the affected part (for example, a schema or items of the electronic medical record) has been described; however, the embodiment is not limited to this.
- a medical image diagnostic apparatus such as an ultrasonic diagnostic apparatus or an X-ray diagnostic apparatus may execute each process.
- when the processing is executed in an ultrasonic diagnostic apparatus, the ultrasonic diagnostic apparatus first acquires the coordinate information of the joints of the doctor's hand operating the ultrasonic probe, together with the patient's coordinate information. Then, the ultrasonic diagnostic apparatus extracts the affected part to which the ultrasonic probe is applied from the acquired coordinate information, and performs control to display a body mark corresponding to the extracted affected part on an output unit such as a monitor. At this time, the ultrasonic diagnostic apparatus can also be controlled to display the body mark together with the ultrasonic image.
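A sketch of how the body-mark selection step might look: the patient region nearest the probe-operating hand joint selects a body-mark ID. The region names, mark IDs, and the nearest-region heuristic are assumptions; the source only states that a body mark corresponding to the extracted affected part is displayed.

```python
import numpy as np

BODY_MARKS = {"abdomen": "body_mark_abdomen", "chest": "body_mark_chest"}  # hypothetical IDs

def select_body_mark(probe_hand_xyz, patient_regions):
    """patient_regions: {region_name: center_xyz}; return the body-mark ID of the
    region closest to the joint of the hand operating the ultrasonic probe."""
    hand = np.asarray(probe_hand_xyz, float)
    nearest = min(patient_regions,
                  key=lambda r: np.linalg.norm(np.asarray(patient_regions[r], float) - hand))
    return BODY_MARKS.get(nearest)

# A probe hand hovering over the chest region -> "body_mark_chest"
print(select_body_mark((0.0, 1.2, 1.8),
                       {"chest": (0.0, 1.25, 1.85), "abdomen": (0.0, 0.95, 1.85)}))
```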
- the functions of the acquisition unit 141, the extraction unit 142, the selection unit 143, and the display control unit 145 described in the first and second embodiments can also be realized by software, by causing a computer to execute a motion information processing program that defines the procedures described above as being performed by these units.
- the motion information processing program is stored in, for example, a hard disk or a semiconductor memory device, and is read and executed by a processor such as a CPU or MPU.
- the motion information processing program can also be recorded and distributed on a computer-readable recording medium such as a CD-ROM (Compact Disc Read-Only Memory), an MO (Magneto-Optical disk), or a DVD (Digital Versatile Disc).
- as described above, the motion information processing system, the motion information processing device, and the medical image diagnostic device of the present embodiments make it easier to input information into electronic medical records.
Abstract
A motion information processing system (1) of an embodiment is provided with an acquisition unit (141), an extraction unit (142), a selection unit (143), and a display control unit (145). The acquisition unit (141) acquires skeletal information including positional information of the joints of a patient whose motion is to be acquired. The extraction unit (142) extracts an affected part on the basis of the positional information of the joints in the patient's skeletal information acquired by the acquisition unit (141). The selection unit (143) then selects a schema related to the affected part extracted by the extraction unit (142). The display control unit (145) then performs control such that the schema selected by the selection unit (143) is displayed on an output unit.
Description
Embodiments of the present invention relate to a motion information processing system, a motion information processing device, and a medical image diagnostic device.
Conventionally, medical institutions have come to use electronic medical record systems to manage the medical information of visiting patients. In such an electronic medical record system, each patient's medical record (electronic medical record) is called up on a doctor terminal provided in each examination room, and various kinds of information are entered into the called-up record. For example, the doctor uses the doctor terminal to enter information such as the chief complaint, findings, schema diagrams, and order contents into the patient's electronic medical record according to the patient's condition, by individually searching for and selecting entries from predefined medical information definitions. If the patient is a returning patient, the patient's past chart information may also be called up, copied as-is, and entered into the current electronic medical record.
On the other hand, in recent years, the development of motion capture technology, which digitally records the movement of a person or an object, has advanced. Known motion capture methods include, for example, optical, mechanical, magnetic, and camera-based methods. As one example, a camera-based method is known in which markers are attached to a person, the markers are detected by a tracker such as a camera, and the person's movement is digitally recorded by processing the detected markers. As a method that uses neither markers nor trackers, a method is known in which an infrared sensor measures the distance from the sensor to a person and digitally records the person's movement by detecting the person's size and the various movements of the skeleton. Kinect (registered trademark) is known as a sensor using such a method.
The problem to be solved by the present invention is to provide a motion information processing system, a motion information processing device, and a medical image diagnostic device that make it easy to input information into an electronic medical record.
The motion information processing system of the embodiment includes an acquisition unit, an extraction unit, a selection unit, and a display control unit. The acquisition unit acquires motion information including position information of the joints of a subject whose motion is to be acquired. The extraction unit extracts an affected part based on the joint position information in the subject's motion information acquired by the acquisition unit. The selection unit selects related information concerning the affected part extracted by the extraction unit. The display control unit controls a display unit to display the related information selected by the selection unit.
Hereinafter, a motion information processing system, a motion information processing apparatus, and a medical image diagnostic apparatus according to embodiments will be described with reference to the drawings. In the following, a motion information processing system including a doctor terminal as the motion information processing apparatus will be described as an example.
(First Embodiment)
FIG. 1 is a diagram illustrating an example of the configuration of a motion information processing system 1 according to the first embodiment. As illustrated in FIG. 1, the motion information processing system 1 according to the first embodiment includes a doctor terminal 100, a reception terminal 200, and a server device 300. The doctor terminal 100, the reception terminal 200, and the server device 300 can communicate with each other directly or indirectly, for example, via an in-hospital LAN (Local Area Network). The motion information processing system 1 may be applied to, for example, a PACS (Picture Archiving and Communication System), an HIS (Hospital Information System), or an RIS (Radiology Information System). The motion information processing system 1 also provides functions such as an electronic medical record system, a receipt computing system, an ordering system, a reception (personal identification and qualification authentication) system, and a medical assistance system.
The reception terminal 200 performs reception registration when a patient visits the hospital, and creates the patient's electronic medical record or calls it up from the electronic medical records managed by the server device 300. The reception terminal 200 then creates a queue of reception information and electronic medical record information for each patient according to the reception time or appointment time, and transmits the queue to the server device 300.
The server device 300 manages the electronic medical records of registered patients. The server device 300 also manages the queue of reception information and electronic medical record information received from the reception terminal 200; for example, it manages the queue for each clinical department.
The doctor terminal 100 is, for example, a terminal installed in each examination room, through which a doctor enters chart information into an electronic medical record. Here, chart information includes, for example, symptoms and the doctor's findings. The doctor operates the doctor terminal 100 to read out reception information and electronic medical record information from the server device 300 in queue order. The doctor then examines the corresponding patient and enters chart information into the read-out electronic medical record.
FIG. 2A is a diagram for explaining an example of a display screen of the electronic medical record according to the first embodiment. For example, as illustrated in FIG. 2A, the doctor terminal 100 displays an electronic medical record screen that has a chart area R1, in which the patient's chart information is entered, and an operation area R2, in which operation buttons for entering chart information into the chart area R1 are arranged. As shown in FIG. 2A, the chart area R1 includes, for example, an area R3 in which patient data such as name, date of birth, and gender is displayed, an area R4 in which the current chart information is entered, and an area R5 in which the previous chart information is displayed. The operation area R2 includes, for example, an area R6 for selecting the schema to be used when a schema is included in the chart information, and an area R7 in which operation buttons assigned various functions are arranged.
For example, the doctor operates the buttons arranged in areas R6 and R7 to enter chart information into the electronic medical record. FIG. 2B is a diagram for explaining an example of chart information input according to the first embodiment. FIG. 2B shows the schema selection window displayed by operating a button arranged in area R6 of FIG. 2A when a schema is used as chart information.
For example, when entering a head schema as chart information, the doctor operates a button arranged in area R6 of FIG. 2A to display a selection window showing a plurality of head schemas, as in the right-hand region of FIG. 2B. The doctor then selects the desired schema from the displayed head schemas, whereupon the selected schema is displayed in the left-hand region of the window as shown in FIG. 2B. Furthermore, by operating the buttons arranged in the left-hand region of FIG. 2B, the doctor adds a mark at the position of the schema corresponding to the affected part.
The doctor then enters the marked schema into the electronic medical record as chart information. For example, the doctor enters the marked schema into area R4 of FIG. 2A. When the examination ends, the doctor terminal 100 transmits the electronic medical record containing the entered chart information to the server device 300 based on the doctor's operation. The server device 300 manages the received electronic medical record for each patient.
As described above, various kinds of chart information are entered into the electronic medical record at the doctor terminal 100. In the prior art, however, this information input can become complicated and time-consuming. For example, as shown in FIG. 2B, when entering a schema, the doctor first obtains information on the affected part from the patient, verbally or by having the patient point to it, then chooses which body part's schema to use and, among the plural schemas for that part, which particular schema to use. Furthermore, the doctor adds a mark on the schema according to the extent of the affected part. Entering chart information into the electronic medical record may therefore require repeated selection operations, making information input complicated and laborious.
Therefore, the motion information processing system 1 according to the first embodiment makes it easy to input information into the electronic medical record through the processing of the doctor terminal 100 described in detail below. FIG. 3 is a diagram illustrating an example of the configuration of the doctor terminal 100 according to the first embodiment. As shown in FIG. 1, in the first embodiment, the doctor terminal 100 is connected to a motion information collection unit 10.
The motion information collection unit 10 detects the motion of persons, objects, and the like in the space where the examination is performed, and collects motion information representing those motions. The motion information will be described in detail in the description of the processing of the motion information generation unit 14 below. As the motion information collection unit 10, for example, Kinect (registered trademark) is used.
As shown in FIG. 3, the motion information collection unit 10 includes, for example, a color image collection unit 11, a distance image collection unit 12, a voice recognition unit 13, and a motion information generation unit 14. Note that the configuration of the motion information collection unit 10 illustrated in FIG. 3 is merely an example, and the embodiment is not limited to it.
The color image collection unit 11 photographs subjects such as persons and objects in the space where the examination is performed, and collects color image information. For example, the color image collection unit 11 detects light reflected from the subject surface with a light-receiving element and converts the visible light into an electrical signal. The color image collection unit 11 then converts the electrical signal into digital data, thereby generating one frame of color image information corresponding to the imaging range. The color image information for one frame includes, for example, imaging time information and information in which each pixel in the frame is associated with an RGB (Red, Green, Blue) value. The color image collection unit 11 captures a moving image of the imaging range by generating color image information for successive frames from the visible light detected one after another. The color image information generated by the color image collection unit 11 may be output as a color image in which the RGB values of the pixels are arranged in a bitmap. The color image collection unit 11 has, for example, a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor as the light-receiving element.
The distance image collection unit 12 photographs subjects such as persons and objects in the space where the examination is performed, and collects distance image information. For example, the distance image collection unit 12 irradiates the surroundings with infrared light and detects, with a light-receiving element, the reflected wave produced when the irradiated wave is reflected by the subject surface. Based on the phase difference between the irradiated wave and the reflected wave, or on the time from irradiation to detection, the distance image collection unit 12 obtains the distance between the subject and the distance image collection unit 12, and generates one frame of distance image information corresponding to the imaging range. The distance image information for one frame includes, for example, imaging time information and information in which each pixel in the imaging range is associated with the distance between the distance image collection unit 12 and the subject corresponding to that pixel. The distance image collection unit 12 captures a moving image of the imaging range by generating distance image information for successive frames from the reflected waves detected one after another. The distance image information generated by the distance image collection unit 12 may be output as a distance image in which shades of color corresponding to the distance of each pixel are arranged in a bitmap. The distance image collection unit 12 has, for example, a CMOS or CCD sensor as the light-receiving element; this light-receiving element may be shared with that of the color image collection unit 11. The unit of distance calculated by the distance image collection unit 12 is, for example, meters [m].
The voice recognition unit 13 collects surrounding sound, identifies the direction of the sound source, and performs voice recognition. The voice recognition unit 13 has a microphone array with a plurality of microphones and performs beamforming. Beamforming is a technique for selectively collecting sound from a specific direction. For example, the voice recognition unit 13 identifies the direction of a sound source by beamforming with the microphone array. The voice recognition unit 13 also recognizes words from the collected sound using known speech recognition techniques. That is, the voice recognition unit 13 generates, as a voice recognition result, information associating, for example, a word recognized by speech recognition, the direction from which the word was uttered, and the time at which the word was recognized.
The motion information generation unit 14 generates motion information representing the motion of a person or object. The motion information is generated, for example, by capturing a person's motion (gesture) as a sequence of postures (poses). In outline, the motion information generation unit 14 first obtains the coordinates of each joint forming the human skeleton from the distance image information generated by the distance image collection unit 12, by pattern matching using a human body pattern. The joint coordinates obtained from the distance image information are values expressed in the coordinate system of the distance image (hereinafter, the "distance image coordinate system"). The motion information generation unit 14 therefore next converts the joint coordinates in the distance image coordinate system into values expressed in the coordinate system of the three-dimensional space in which the examination is performed (hereinafter, the "world coordinate system"). The joint coordinates expressed in the world coordinate system constitute one frame of skeleton information, and skeleton information for a plurality of frames constitutes the motion information. The processing of the motion information generation unit 14 according to the first embodiment is described concretely below.
FIGS. 4A to 4C are diagrams for explaining the processing of the motion information generation unit 14 according to the first embodiment. FIG. 4A shows an example of a distance image generated by the distance image collection unit 12. In FIG. 4A, for convenience of explanation, an image represented by a line drawing is shown; an actual distance image is, for example, an image represented by shades of color according to distance. In the distance image, each pixel has a three-dimensional value associating the "pixel position X" in the horizontal direction of the distance image, the "pixel position Y" in the vertical direction of the distance image, and the "distance Z" between the subject corresponding to the pixel and the distance image collection unit 12. In the following, coordinate values in the distance image coordinate system are written as this three-dimensional value (X, Y, Z).
In the first embodiment, the motion information generation unit 14 stores in advance human body patterns corresponding to various postures, obtained for example by learning. Each time the distance image collection unit 12 generates distance image information, the motion information generation unit 14 acquires the generated distance image information for each frame and performs pattern matching on it using the human body patterns.
Here, the human body pattern will be described. FIG. 4B shows an example of a human body pattern. In the first embodiment, since the human body pattern is used for pattern matching against the distance image information, it is expressed in the distance image coordinate system and, like the person depicted in a distance image, has information on the surface of the human body (hereinafter, the "human body surface"). For example, the human body surface corresponds to the person's skin or clothing surface. Furthermore, as shown in FIG. 4B, the human body pattern includes information on each joint forming the skeleton of the human body. That is, in the human body pattern, the relative positional relationship between the human body surface and each joint is known.
In the example shown in FIG. 4B, the human body pattern has information on 20 joints, from joint 2a to joint 2t. Of these, joint 2a corresponds to the head, joint 2b to the center of both shoulders, joint 2c to the waist, and joint 2d to the center of the buttocks. Joint 2e corresponds to the right shoulder, joint 2f to the right elbow, joint 2g to the right wrist, and joint 2h to the right hand. Joint 2i corresponds to the left shoulder, joint 2j to the left elbow, joint 2k to the left wrist, and joint 2l to the left hand. Joint 2m corresponds to the right hip, joint 2n to the right knee, joint 2o to the right ankle, and joint 2p to the tarsus of the right foot. Joint 2q corresponds to the left hip, joint 2r to the left knee, joint 2s to the left ankle, and joint 2t to the tarsus of the left foot.
Although FIG. 4B illustrates the case where the human body pattern has information on 20 joints, the embodiment is not limited to this, and the operator may set the positions and number of joints arbitrarily. For example, when capturing only changes in limb movement, information on joints 2b and 2c among joints 2a to 2d need not be acquired. When capturing changes in right-hand movement in detail, finger joints of the right hand may be newly set in addition to joint 2i. Note that joints 2a, 2h, 2l, 2p, and 2t in FIG. 4B differ from joints in the strict sense because they are the end portions of bones, but since they are important points representing the position and orientation of the bones, they are described here as joints for convenience.
The motion information generation unit 14 performs pattern matching against the distance image information of each frame using such human body patterns. For example, by pattern matching the human body surface of the human body pattern shown in FIG. 4B against the distance image shown in FIG. 4A, the motion information generation unit 14 extracts a person in a certain posture from the distance image information and obtains the coordinates of the person's body surface depicted in the distance image. As described above, in the human body pattern, the relative positional relationship between the human body surface and each joint is known. The motion information generation unit 14 therefore calculates the coordinates of each joint of the person from the coordinates of the person's body surface depicted in the distance image. In this way, as shown in FIG. 4C, the motion information generation unit 14 acquires from the distance image information the coordinates of each joint forming the human skeleton. The joint coordinates obtained here are coordinates in the distance image coordinate system.
When performing pattern matching, the motion information generation unit 14 may also use auxiliary information representing the positional relationships between joints. This information includes, for example, the connection relationships between joints (for example, "joint 2a and joint 2b are connected") and the range of motion of each joint. A joint is a site connecting two or more bones. The angle between bones changes as the posture changes, and the range of motion differs for each joint. For example, the range of motion is represented by the maximum and minimum angles formed by the bones connected at each joint. For example, when learning the human body patterns, the motion information generation unit 14 also learns the range of motion of each joint and stores it in association with that joint.
Subsequently, the motion information generation unit 14 converts the coordinates of each joint in the distance image coordinate system into values expressed in the world coordinate system. The world coordinate system is the coordinate system of the three-dimensional space in which the examination is performed; for example, it takes the position of the motion information collection unit 10 as the origin, the horizontal direction as the x axis, the vertical direction as the y axis, and the direction orthogonal to the xy plane as the z axis. The coordinate value in the z-axis direction is sometimes called the "depth".
Here, the process of converting from the distance image coordinate system to the world coordinate system will be described. In the first embodiment, the motion information generation unit 14 is assumed to store in advance a conversion formula for converting from the distance image coordinate system to the world coordinate system. For example, this conversion formula takes as input coordinates in the distance image coordinate system and the incident angle of the reflected light corresponding to those coordinates, and outputs coordinates in the world coordinate system. For example, the motion information generation unit 14 inputs the coordinates (X1, Y1, Z1) of a joint and the incident angle of the reflected light corresponding to those coordinates into the conversion formula, thereby converting the coordinates (X1, Y1, Z1) into the world coordinates (x1, y1, z1). Since the correspondence between coordinates in the distance image coordinate system and the incident angle of the reflected light is known, the motion information generation unit 14 can input the incident angle corresponding to the coordinates (X1, Y1, Z1) into the conversion formula. Although the case where the motion information generation unit 14 converts distance image coordinates into world coordinates has been described here, it is also possible to convert world coordinates into distance image coordinates.
The motion information generation unit 14 then generates skeleton information from the joint coordinates expressed in the world coordinate system. FIG. 5 is a diagram showing an example of the skeleton information generated by the motion information generation unit 14. The skeleton information for each frame includes the imaging time information of the frame and the coordinates of each joint. For example, as shown in FIG. 5, the motion information generation unit 14 generates skeleton information in which joint identification information and coordinate information are associated. In FIG. 5, the imaging time information is omitted. The joint identification information is preset identification information for identifying a joint. For example, joint identification information "2a" corresponds to the head, and joint identification information "2b" corresponds to the center of both shoulders; likewise, each other piece of joint identification information indicates its corresponding joint. The coordinate information indicates the coordinates of each joint in each frame in the world coordinate system.
The first row of FIG. 5 associates joint identification information "2a" with coordinate information "(x1, y1, z1)". That is, the skeleton information in FIG. 5 indicates that, in a certain frame, the head is at the position of coordinates (x1, y1, z1). The second row associates joint identification information "2b" with coordinate information "(x2, y2, z2)", indicating that the center of both shoulders is at the position of coordinates (x2, y2, z2) in that frame. Similarly, for each of the other joints, the skeleton information indicates that the joint is at the position represented by its coordinates in that frame.
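One frame of skeleton information as laid out in FIG. 5 can be represented as a simple mapping from joint identification information to world coordinates. A sketch only; the field names and placeholder coordinates are illustrative.

```python
# One frame of skeleton information, as laid out in FIG. 5: joint identification
# information mapped to world coordinates, plus the frame's imaging time.
frame_skeleton = {
    "time": "2012-07-02T10:15:30.033",  # imaging time (omitted from FIG. 5)
    "joints": {
        "2a": (1.0, 1.0, 1.0),  # head            (placeholder coordinates)
        "2b": (2.0, 2.0, 2.0),  # shoulder center (placeholder coordinates)
        # ... entries "2c" through "2t" for the remaining joints
    },
}

# Motion information is then simply the sequence of such frames:
motion_information = [frame_skeleton]  # one entry per captured frame
```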
In this way, each time the motion information generation unit 14 acquires the distance image information of a frame from the distance image collection unit 12, it performs pattern matching on that distance image information and converts from the distance image coordinate system to the world coordinate system, thereby generating the skeleton information of each frame. The motion information generation unit 14 then outputs the generated skeleton information of each frame to the doctor terminal 100 and stores it in the motion information storage unit 131 described later.
Note that the processing of the motion information generation unit 14 is not limited to the method described above. For example, although a method in which the motion information generation unit 14 performs pattern matching using a human body pattern has been described, the embodiment is not limited to this. For example, pattern matching may be performed using patterns of individual body parts instead of, or together with, the human body pattern.
Also, for example, although a method in which the motion information generation unit 14 obtains the coordinates of each joint from the distance image information has been described, the embodiment is not limited to this. For example, the motion information generation unit 14 may obtain the coordinates of each joint using color image information together with the distance image information. In this case, for example, the motion information generation unit 14 performs pattern matching between a human body pattern expressed in the color image coordinate system and the color image information, and obtains the coordinates of the human body surface from the color image information. The color image coordinate system does not include the "distance Z" information of the distance image coordinate system. Therefore, the motion information generation unit 14 obtains the "distance Z" information from the distance image information, for example, and obtains the world coordinates of each joint by calculation using these two pieces of information.
Further, the motion information generation unit 14 outputs the color image information generated by the color image collection unit 11, the distance image information generated by the distance image collection unit 12, and the speech recognition result output by the speech recognition unit 13 to the doctor terminal 100 as needed, and stores them in the motion information storage unit 131 described later. Note that the pixel positions of the color image information and the pixel positions of the distance image information can be associated with each other in advance according to the positions and imaging directions of the color image collection unit 11 and the distance image collection unit 12. The pixel positions of the color image information and of the distance image information can therefore also be associated with the world coordinate system calculated by the motion information generation unit 14. Furthermore, by using this association together with the distance [m] calculated by the distance image collection unit 12, it is also possible to calculate the height and the length of each part of the body (for example, the length of an arm or of the abdomen), as well as the distance between two points (two pixels) specified on the color image.
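As an illustration of the last point, once a pixel can be mapped to world coordinates, the distance between two points specified on the color image reduces to the Euclidean distance between the two mapped points. The sketch below reuses joint_world_coords from the previous sketch.

```python
def distance_between_pixels(p1, p2, depth_lookup) -> float:
    """Distance [m] between two points specified on the color image,
    computed via their world-coordinate positions."""
    w1 = joint_world_coords(*p1, depth_lookup)
    w2 = joint_world_coords(*p2, depth_lookup)
    return float(np.linalg.norm(w1 - w2))
```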
Similarly, the imaging time information of the color image information and that of the distance image information can also be associated with each other in advance. The motion information generation unit 14 can also refer to the speech recognition result and the distance image information, and, if a joint 2a is present near the direction from which a speech-recognized word was uttered at a certain time, output the word as having been uttered by the person that includes that joint 2a. Furthermore, the motion information generation unit 14 also outputs information representing the positional relationships between the joints to the doctor terminal 100 as needed, and stores it in the motion information storage unit 131 described later.
Although the case where the motion information collection unit 10 detects the motion of a single subject has been described here, the embodiment is not limited thereto. The motion information collection unit 10 may detect the motions of a plurality of subjects, provided that they are within its detection range. In such a case, the motion information generation unit 14 generates skeleton information for each of the persons from the distance image information of the same frame, and outputs information in which the pieces of generated skeleton information are associated with one another to the doctor terminal 100 as motion information.
The configuration of the motion information collection unit 10 is also not limited to the configuration described above. For example, when motion information is generated by detecting a person's motion with another type of motion capture, such as an optical, mechanical, or magnetic system, the motion information collection unit 10 does not necessarily have to include the distance image collection unit 12. In such a case, the motion information collection unit 10 includes, as motion sensors, markers worn on the human body in order to detect the person's motion, and a sensor that detects the markers. The motion information collection unit 10 then detects the person's motion using these motion sensors and generates motion information. Using the positions of the markers included in the images captured by the color image collection unit 11, the motion information collection unit 10 also associates the pixel positions of the color image information with the coordinates of the motion information, and outputs the result to the doctor terminal 100 as needed. Further, the motion information collection unit 10 does not have to include the speech recognition unit 13 when, for example, no speech recognition result is output to the doctor terminal 100.
Furthermore, although the motion information collection unit 10 outputs world-coordinate-system coordinates as the skeleton information in the embodiment described above, the embodiment is not limited thereto. For example, the motion information collection unit 10 may output coordinates in the distance image coordinate system before conversion, and the conversion from the distance image coordinate system to the world coordinate system may be performed on the doctor terminal 100 side as needed.
Returning to the description of FIG. 3, the doctor terminal 100 uses the motion information output by the motion information collection unit 10 to perform processing for assisting the input of electronic medical record information (chart information). The doctor terminal 100 is an information processing device such as a computer or a workstation and, as shown in FIG. 3, includes an output unit 110, an input unit 120, a storage unit 130, and a control unit 140. The doctor terminal 100 also includes a communication unit (not shown) and communicates with the reception terminal 200, the server device 300, and the like.
The output unit 110 outputs various kinds of information for creating an electronic medical record. For example, the output unit 110 displays a GUI (Graphical User Interface) with which an operator (doctor) operating the doctor terminal 100 inputs various requests using the input unit 120 to create an electronic medical record, and displays output images and the like generated in the doctor terminal 100. The output unit 110 is, for example, a monitor, a speaker, or the like.
The input unit 120 receives the input of various kinds of information for creating an electronic medical record. For example, the input unit 120 receives various requests (for example, a request to read an electronic medical record from the server device 300, or a request to input chart information into the electronic medical record) from the operator (doctor) of the doctor terminal 100, and transfers the received requests to the doctor terminal 100. As a request to read an electronic medical record, the input unit 120 receives information such as the patient's patient ID (name or number). The input unit 120 is, for example, a mouse, a keyboard, a touch command screen, a trackball, or the like.
The storage unit 130 is, for example, a semiconductor memory element such as a RAM (Random Access Memory) or a flash memory, or a storage device such as a hard disk device or an optical disk device. The control unit 140 can be realized by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array), or by a CPU (Central Processing Unit) executing a predetermined program.
The configuration of the doctor terminal 100 according to the first embodiment has been described above. With this configuration, the doctor terminal 100 according to the first embodiment extracts an affected part by analyzing the motion information of the patient (the subject of motion acquisition) collected by the motion information collection unit 10, and selects related information relating to the extracted affected part, thereby making it easy to input information into the electronic medical record. In the first embodiment, the case where a schema is selected as an example of the related information relating to the affected part is described. A schema is a schematic diagram showing the positional relationship between the human body structure and an affected part.
FIG. 6 is a diagram illustrating an example of the detailed configuration of the doctor terminal 100 according to the first embodiment. As shown in FIG. 6, in the doctor terminal 100, the storage unit 130 includes, for example, a motion information storage unit 131, an extraction condition storage unit 132, and a chart information storage unit 133.
The motion information storage unit 131 stores the various kinds of information collected by the motion information collection unit 10. Specifically, the motion information storage unit 131 stores the motion information generated by the motion information generation unit 14; more specifically, it stores the skeleton information generated by the motion information generation unit 14 for each frame. The motion information storage unit 131 can also store the color image information, the distance image information, and the speech recognition result output by the motion information generation unit 14 in further association with each frame.
The extraction condition storage unit 132 stores extraction conditions for extracting the patient's affected part from the motion information. Specifically, the extraction condition storage unit 132 stores extraction conditions for extracting the affected part based on information about the hand joints in the motion information. For example, the extraction condition storage unit 132 stores an extraction condition under which, in the joint position information of the subject's motion information, the part corresponding to the position on the line segment for which the distance from the position of a hand joint at a predetermined timing to a line segment connecting two joints other than that hand joint is equal to or less than a predetermined threshold and is the shortest is taken as the affected part. Here, the predetermined timing is, for example, the timing at which predetermined speech information is acquired, or the point at which the motion of the hand joint has stopped for a certain period of time. Detailed examples of the extraction conditions described above are given later.
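The geometric core of this extraction condition is a point-to-segment distance, with the hand joint as the point and each bone (a line segment connecting two joints) as a candidate. The following is a minimal Python sketch of that computation; the function names and the threshold value are assumptions introduced here.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to segment a-b, plus the parameter t in [0, 1]
    of the closest point (t = 0 at joint a, t = 1 at joint b)."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = float(np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0))
    closest = a + t * ab
    return float(np.linalg.norm(p - closest)), t

def extract_affected_bone(hand, bones, threshold=0.10):
    """Among candidate bones [(joint_id_a, joint_id_b, coord_a, coord_b), ...],
    return the one whose distance to the hand joint is at most `threshold` [m]
    and minimal, together with the position ratio t along that bone."""
    best = None
    for ja, jb, a, b in bones:
        d, t = point_to_segment(hand, a, b)
        if d <= threshold and (best is None or d < best[0]):
            best = (d, ja, jb, t)
    return best  # None if no bone is within the threshold
```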
Further, for example, the extraction condition storage unit 132 stores an extraction condition under which, in the joint position information of the subject's motion information, the part corresponding to a predetermined movement range of the position of a hand joint is taken as the affected part. In other words, when a hand joint has been moving within a predetermined region, the extraction condition storage unit 132 stores an extraction condition under which the part corresponding to that region is taken as the affected part. For example, the extraction condition storage unit 132 stores extraction conditions under which, when a hand joint moves back and forth over a certain region, or stays for a certain period of time in the region of a certain joint, the parts corresponding to those regions are taken as the affected part.
The chart information storage unit 133 stores the chart information of electronic medical records acquired from the server device 300 via the communication unit (not shown). Specifically, the chart information storage unit 133 stores the patient's chart information that the communication unit acquires from the server device 300 in response to an electronic-medical-record read request received from the operator (doctor) via the input unit 120. For example, the chart information storage unit 133 stores the patient's past chart information corresponding to the read request, and the chart information stored under the control of the control unit 140 described later. In other words, the chart information storage unit 133 is a storage unit that holds, during diagnosis, the electronic medical record that the doctor terminal 100 has read from the server device 300. That is, after the diagnosis is completed, the chart information stored in the chart information storage unit 133 is stored in the server device 300 via the communication unit. A detailed example of the chart information stored in the chart information storage unit 133 is described later.
Returning to the description of FIG. 6, in the doctor terminal 100, the control unit 140 includes, for example, an acquisition unit 141, an extraction unit 142, a selection unit 143, a mark assigning unit 144, a display control unit 145, and a chart information storing unit 146, and, using the various kinds of information stored in the storage unit 130, assists the input of chart information, thereby making it easy to input information into the electronic medical record.
The acquisition unit 141 acquires the motion information of the subject (patient) whose motion is to be acquired. Specifically, the acquisition unit 141 acquires the patient's motion information collected by the motion information collection unit 10 and stored in the motion information storage unit 131. More specifically, the acquisition unit 141 acquires the patient's skeleton information stored for each frame in the motion information storage unit 131.
For example, the acquisition unit 141 acquires the skeleton information of the motion corresponding to the point at which a visiting patient states a chief complaint, such as symptoms or the affected part, during the examination. As an example, the acquisition unit 141 acquires the patient's skeleton information from the frame at which the doctor's interview starts to the frame at which the examination ends. The acquisition of the skeleton information by the acquisition unit 141 is performed each time a patient's examination starts.
The extraction unit 142 extracts the affected part based on the joint position information in the motion information of the subject (patient) acquired by the acquisition unit 141. Specifically, the extraction unit 142 refers to the extraction conditions stored in the extraction condition storage unit 132 and extracts the affected part from the patient's skeleton information acquired by the acquisition unit 141. FIG. 7 is a diagram for explaining an example of processing by the extraction unit 142 according to the first embodiment. In FIG. 7, (A) shows the patient's color image information and joint position information collected by the motion information collection unit 10, (B) shows a first example of extracting the affected part from the information shown in (A), and (C) shows a second example of extracting the affected part from the information shown in (A).
First, the first example will be described. The first example is a case in which, in the joint position information of the patient's motion information (skeleton information), the part corresponding to the position on the line segment for which the distance from the position of a hand joint at a predetermined timing to a line segment connecting two joints other than that hand joint is equal to or less than a predetermined threshold and is the shortest is taken as the affected part.
For example, the extraction unit 142 monitors the patient's skeleton information acquired by the acquisition unit 141 in real time, in chronological order, from the frame at which the examination starts, and extracts the affected part from the position of a hand joint at a predetermined timing. As the predetermined timing, the extraction unit 142 can use, for example, the point at which the patient utters a word indicating a body part, such as “my right arm hurts”, or a demonstrative word such as “it hurts here”. That is, the extraction unit 142 uses, as the predetermined timing, the time at which the corresponding word is recognized in the speech recognition result generated by the speech recognition unit 13.
The extraction unit 142 can also use, as the predetermined timing, for example, the point at which a hand joint has stopped for a certain period of time at a position within a predetermined distance from a bone. That is, when the patient is touching or pressing a certain place on the surface of the body for a certain period of time, the extraction unit 142 uses that point as the predetermined timing. Whether a hand joint has stopped for a certain period of time at a position within a predetermined distance from a bone can be determined by analyzing the change in the coordinates of the hand joint position in the skeleton information acquired by the acquisition unit 141. The tolerance of coordinate change for determining that the hand joint has stopped, and the stop time used for the determination, can be set arbitrarily; for example, the tolerance and the stop time may be set to vary with the patient's age or the like.
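One straightforward realization of this stop determination checks whether the hand-joint coordinates stay within a tolerance over a window of recent frames. The sketch below assumes a fixed frame rate; the tolerance, stop time, and frame rate values are hypothetical.

```python
import numpy as np

def hand_is_stopped(hand_positions, tolerance=0.03, stop_time=1.0, fps=30):
    """hand_positions: chronological list of (x, y, z) hand-joint coordinates.
    Returns True if, over the last `stop_time` seconds, every position stays
    within `tolerance` [m] of the window's mean position."""
    window = int(stop_time * fps)
    if len(hand_positions) < window:
        return False
    recent = np.asarray(hand_positions[-window:])
    center = recent.mean(axis=0)
    return bool(np.all(np.linalg.norm(recent - center, axis=1) <= tolerance))
```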
The predetermined timing described above is merely an example, and timings other than these can be applied arbitrarily. For example, in the case of speech recognition, the timing may be the point at which the doctor utters a word prompting the patient to touch the affected part, rather than a word uttered by the patient.
The extraction unit 142 extracts the affected part using the position information of the hand joint at the timing described above. For example, as shown in (B) of FIG. 7, the extraction unit 142 extracts the affected part using the position information of the joint identification information “2l” corresponding to the left hand and information about the surrounding bones. That is, the extraction unit 142 first acquires the coordinate information corresponding to the joint identification information “2l” of the left hand.
The extraction unit 142 then calculates the coordinate information of the portions corresponding to bones from the coordinate information of the joint identification information corresponding to the other joints in the same frame. For example, the extraction unit 142 calculates the coordinate information of the bone between “2h” and “2g”, of the bone between “2g” and “2f”, of the bone between “2f” and “2e”, of the bone between “2e” and “2b”, of the bone between “2b” and “2i”, of the bone between “2i” and “2j”, and of the bone between “2j” and “2k” shown in (B) of FIG. 7.
Here, the extraction unit 142 calculates the coordinate information of bones that are not connected to the joint identification information “2l” corresponding to the left hand; that is, it calculates the coordinate information of the bones that can easily be touched by that hand. The extraction unit 142 then calculates the distance from the left hand to each bone using the coordinate information of the left hand and the calculated coordinate information of each bone. For example, the extraction unit 142 calculates the distances from the left hand “2l” to the bone between “2h” and “2g”, the bone between “2g” and “2f”, the bone between “2f” and “2e”, the bone between “2e” and “2b”, the bone between “2b” and “2i”, the bone between “2i” and “2j”, and the bone between “2j” and “2k”, respectively.
The extraction unit 142 then extracts the bone for which the calculated distance is equal to or less than a predetermined threshold and is the shortest. For example, the extraction unit 142 extracts the bone between “2g” and “2f”, whose distance from “2l” is equal to or less than the predetermined threshold and is the minimum. The extraction unit 142 then projects the tip of the hand onto the extracted bone and calculates its position on the bone. For example, as shown in (B) of FIG. 7, the extraction unit 142 projects “2l” onto the bone between “2g” and “2f” (dotted line) and calculates the projected position (the intersection of that bone with the dotted line). After that, the extraction unit 142 calculates the proportional position along the bone by, for example, calculating the distance from each joint to the intersection of the bone between “2g” and “2f” with the dotted line.
That is, the extraction unit 142 calculates the distances from the intersection of the bone between “2g” and “2f” with the dotted line, shown in (B) of FIG. 7, to “2g” and to “2f”, and calculates the ratios of the distance from “2g” to the intersection and of the distance from the intersection to “2f” with respect to the distance from “2g” to “2f”. The extraction unit 142 then extracts the position at the calculated ratio on the bone between “2g” and “2f” as the affected part.
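The parameter t returned by point_to_segment in the earlier sketch is exactly this ratio: the projected intersection divides the bone so that the distance from “2g” is t times the bone length. A short usage example with hypothetical coordinates:

```python
# Hypothetical coordinates for the left hand "2l" and the bone "2g"-"2f".
hand_2l = (0.05, 1.00, 2.00)
joint_2g = (0.20, 1.10, 2.00)   # right wrist
joint_2f = (0.30, 1.30, 2.00)   # right elbow

d, t = point_to_segment(hand_2l, joint_2g, joint_2f)
print(f"distance to bone: {d:.3f} m, position ratio from 2g: {t:.2f}")
# The affected part is at fraction t of the way from "2g" to "2f".
```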
Next, the second example will be described. The second example is a case in which, in the joint position information of the subject's (patient's) motion information, the part corresponding to a predetermined movement range of the position of a hand joint is taken as the affected part. For example, the extraction unit 142 extracts the affected part from the movement range of a hand joint. As an example, as shown in (C) of FIG. 7, when the left hand “2l” is moving back and forth with the joint identification information “2j” corresponding to the left elbow as a base point, the extraction unit 142 calculates the coordinate information of the stop positions of the left hand “2l” at both ends of the reciprocating movement. The extraction unit 142 then takes the calculated coordinate information of the two ends as the two ends of the affected region, and extracts the region between them as the affected part.
That is, the extraction unit 142 extracts the range indicated by the double-headed arrow in (C) of FIG. 7 as the affected part. As in the first example, the extraction unit 142 can be set to start calculating the distance between the hand joint and a bone at a predetermined timing. In that case, the extraction unit 142 acquires the coordinate information of the joints other than the hand joint at the predetermined timing described above (speech recognition, or monitoring the movement of the hand joint) and calculates the distance between the hand and the bones. The extraction unit 142 then extracts the bone whose distance from the hand joint is equal to or less than a predetermined threshold and is the shortest, and analyzes how the hand joint moves relative to the extracted bone. Here, the extraction unit 142 extracts, as the affected region, only the range over which the distance between the hand and the bone remains equal to or less than the predetermined threshold.
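A possible realization of this second condition tracks the ratio t along the nearest bone while the hand stays within the threshold and reports the covered interval. The sketch reuses point_to_segment from above; the function name and the threshold are assumptions.

```python
def affected_region_along_bone(hand_track, a, b, threshold=0.10):
    """hand_track: chronological hand-joint coordinates while rubbing a bone a-b.
    Returns (t_min, t_max), the interval along the bone covered by the hand
    while it stayed within `threshold` [m] of the bone, or None."""
    ts = [t for p in hand_track
          for d, t in [point_to_segment(p, a, b)] if d <= threshold]
    return (min(ts), max(ts)) if ts else None
```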
Returning to FIG. 6, the selection unit 143 selects related information relating to the affected part extracted by the extraction unit 142. For example, the selection unit 143 selects, as the related information, a schema for indicating the positional relationship between the human body structure and the affected part extracted by the extraction unit 142. FIGS. 8A to 8C are diagrams for explaining an example of processing by the selection unit 143 according to the first embodiment. FIGS. 8A to 8C show cases in which schemas are selected for three patterns extracted by the extraction unit 142 as positions of the affected part. The extraction of the affected part by the extraction unit 142 is the same as in the method described above.
For example, as shown in FIG. 8A, when the position indicated by the circle between the joint identification information “2f” corresponding to the right elbow and the joint identification information “2g” corresponding to the right wrist is extracted as the affected part, the selection unit 143 selects a whole-body front schema that includes the position extracted as the affected part (the position marked with the circle). In selecting a schema, the selection unit 143 also combines information about the clinical department in which the patient is being examined and information about the doctor's specialty. That is, the selection unit 143 selects a schema in consideration of the differences between the schemas used in each clinical department, the doctors' specialty-dependent preferences, and so on. The information used for this schema selection is set in advance and stored in the storage unit 130.
Based on the information about the affected part extracted by the extraction unit 142, the information for schema selection, and information about the clinical department the patient is visiting and the doctor's specialty, the selection unit 143 selects the schema best suited for entry in the medical record. Here, the selection unit 143 first selects the schemas of the part that includes the affected part, and then, from among the plurality of schemas corresponding to the selected part, selects the schema that can best express the positional relationship of the affected part.
For example, as shown in FIG. 8B, when the extraction unit 142 extracts an affected part on the head, the selection unit 143 selects the plurality of schemas showing the head (see, for example, FIG. 2B). The selection unit 143 then determines, from the positional relationship of the coordinates of the left hand joint “2l” with respect to the coordinates of the head “2a”, that the affected part is on the front of the head, and selects the front-of-head schema from among the plurality of schemas showing the head.
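The front/back determination could be realized, for example, by comparing the hand and head positions along the sensor's depth axis, since a hand touching the front of the head is nearer the sensor than the head center. This is one plausible realization, not stated in the disclosure:

```python
def head_side(hand, head):
    """Return 'front' if the hand is nearer the sensor than the head center
    (smaller Z in the convention used in the sketches above), else 'back'."""
    return "front" if hand[2] < head[2] else "back"

print(head_side(hand=(0.0, 1.6, 1.9), head=(0.0, 1.65, 2.1)))  # -> front
```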
Similarly, as shown in FIG. 8C, the selection unit 143 selects the schema corresponding to the affected part extracted by the extraction unit 142, based on the information for schema selection and on information about the clinical department the patient is visiting, the doctor's specialty, and so on.
Returning to FIG. 6, the mark assigning unit 144 assigns information indicating the position of the affected part to the position corresponding to the affected part in the schema selected by the selection unit 143. FIGS. 9A to 9C are diagrams illustrating an example of processing by the mark assigning unit 144 according to the first embodiment. FIGS. 9A to 9C show cases in which information (a mark) indicating the position of the affected part is assigned to each of the schemas selected in FIGS. 8A to 8C.
For example, as shown in FIG. 9A, the mark assigning unit 144 assigns a mark M1 to the right elbow portion of the whole-body front schema selected by the selection unit 143. Here, the mark assigning unit 144 places the mark on the schema using the information about the proportional position along the bone calculated by the extraction unit 142. Similarly, as shown in FIGS. 9B and 9C, the mark assigning unit 144 assigns a mark M2 and a mark M3 on the respective schemas using the proportional-position information calculated by the extraction unit 142.
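Placing the mark then amounts to interpolating between the schema's own landmark points for the two joints using the same ratio t. The 2D landmark coordinates in the sketch below are hypothetical schema coordinates.

```python
def mark_position_on_schema(landmark_a, landmark_b, t):
    """2D position of the mark on the schema: the point at fraction t of the
    way from the schema landmark of joint a to that of joint b."""
    ax, ay = landmark_a
    bx, by = landmark_b
    return (ax + t * (bx - ax), ay + t * (by - ay))

# Hypothetical schema landmarks for "2g" (wrist) and "2f" (elbow):
print(mark_position_on_schema((120, 310), (140, 250), t=0.4))
```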
The selection of a schema and the assignment of a mark when the extraction unit 142 extracts the affected part as a single position have been described above. Next, the case where the affected part is extracted as a region will be described with reference to FIG. 10. FIG. 10 is a diagram illustrating an example of schema selection and mark assignment according to the first embodiment. FIG. 10 shows schema selection and mark assignment for the affected part extracted in (C) of FIG. 7.
In such a case, as shown in FIG. 10, the selection unit 143 selects the whole-body front schema from the information about the affected region (the region indicated by the double-headed arrow), the information for schema selection, the information about the clinical department, and the information about the doctor's specialty. The selection of the schema by the selection unit 143 is the same as the processing described above.
Then, as shown in FIG. 10, the mark assigning unit 144 assigns a mark M4 indicating the affected part to the left-arm region on the schema selected by the selection unit 143. Here, the mark assigning unit 144 determines the region on the schema to which the mark is to be assigned based on the information about the affected region extracted by the extraction unit 142 (the coordinate information of the region indicated by the double-headed arrow) and the like, and assigns the mark to the determined region.
Returning to FIG. 6, the display control unit 145 causes the output unit 110 to display the related information selected by the selection unit 143. Specifically, the display control unit 145 causes the output unit 110 to display the schema selected by the selection unit 143 with the mark assigned to it by the mark assigning unit 144.
Here, the display control unit 145 extracts, from the past schemas included in the electronic medical record of the patient whose affected part was extracted, a schema in which the position marked by the mark assigning unit 144 substantially coincides with the affected part, and displays the extracted schema on the output unit 110. FIG. 11 is a diagram for explaining an example of display control by the display control unit 145 according to the first embodiment.
For example, as shown in FIG. 11, the display control unit 145 compares past schemas with the schema selected this time, reads out from the chart information of the patient's electronic medical record a schema bearing a mark at a position close to the mark assigned this time, and displays it on the output unit 110. As an example, the display control unit 145 acquires the schema chart information stored in the same patient's electronic medical record and displays on the output unit 110 the previously selected schema, among those acquired, on which a mark was placed at a position close to the mark assigned this time.
The display control unit 145 can also read out the chart information of the same schema as the schema selected this time and display it on the output unit 110. Further, when multiple pieces of chart information are stored for schemas bearing a mark at a position close to the mark assigned this time, or for the same schema as the one selected this time, the display control unit 145 can control the display so that only the most recent schema is displayed, or so that all of those schemas are displayed.
Returning to FIG. 6, the chart information storing unit 146 stores, in the storage unit, at least one of the voice of the chief complaint of the patient whose affected part was extracted and an image showing the position of the patient's affected part, in association with the related information. Specifically, the chart information storing unit 146 stores, in the chart information storage unit 133, the voice information of the chief complaint uttered by the patient and the like, in association with the schema information.
FIG. 12 is a diagram for explaining an example of processing by the chart information storing unit 146 according to the first embodiment. As shown in FIG. 12, when the patient states to the doctor the chief complaint “my elbow feels strange and hurts when I bend it” and points at the right elbow with the left hand, the acquisition unit 141, the extraction unit 142, the selection unit 143, and the mark assigning unit 144 first perform the processing described above to enter a mark on the schema. The chart information storing unit 146 then stores in the chart information the voice data of the chief complaint “my elbow feels strange and hurts when I bend it” included in the speech recognition result generated by the speech recognition unit 13, and stores in the chart information storage unit 133 the schema in which a link to the voice data is attached to the mark assigned by the mark assigning unit 144.
For example, as shown in FIG. 12, the chart information storing unit 146 stores in the chart information storage unit 133 chart information in which an audio file, a schema, and a mark ID are associated with a patient ID. That is, the chart information storing unit 146 stores the chart information “patient ID: 100033, audio file: 2012-07-02-0005.mp3, schema: whole body, mark ID: 00000021” in the chart information storage unit 133, attaching a voice link to the mark ID at that time.
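For illustration, such a chart record could be serialized as follows; the field names mirror the example of FIG. 12, but the record structure itself is a hypothetical sketch.

```python
import json

chart_record = {
    "patient_id": "100033",
    "audio_file": "2012-07-02-0005.mp3",       # chief-complaint voice data
    "schema": "whole body",
    "mark_id": "00000021",
    "mark_audio_link": "2012-07-02-0005.mp3",  # voice link attached to the mark
}
print(json.dumps(chart_record, indent=2))
```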
As described above, the doctor terminal 100 according to the first embodiment can extract the affected part from the motion information obtained when the patient is examined, select the schema best suited to the extracted affected part, and assign a mark to it. The work that the doctor conventionally performed by selecting a schema and assigning a mark by hand can therefore be omitted, and the doctor terminal 100 according to the first embodiment makes it easy to input information into the electronic medical record.
Next, the processing of the doctor terminal 100 according to the first embodiment will be described with reference to FIG. 13. FIG. 13 is a flowchart illustrating the procedure of processing by the doctor terminal 100 according to the first embodiment. Although FIG. 13 shows the processing for the case where past schemas are displayed, past schemas do not have to be displayed.
As shown in FIG. 13, in the doctor terminal 100 according to the first embodiment, when the voice saving mode is ON (Yes at step S101), the motion information collection unit 10 acquires the voice of the patient's chief complaint (step S102). When the voice saving mode is OFF (No at step S101), the processing proceeds to step S103 without executing step S102.
Next, information about the subject (patient) is acquired (step S103), and the extraction unit 142 extracts the affected part from the patient's motion information collected by the motion information collection unit 10 (step S104). The selection unit 143 then selects the schema corresponding to the affected part extracted by the extraction unit 142 (step S105), and the mark assigning unit 144 assigns to the schema either a normal mark or a mark linked to the voice of the chief complaint (step S106).
After that, the display control unit 145 displays the schema (step S107), further displays the past schemas (step S108), and determines whether a save operation has been received (step S109). When the input unit 120 receives a save operation from the operator (doctor) (Yes at step S109), the chart information storing unit 146 determines whether the voice saving mode is set (step S110).
When the voice saving mode is not set (No at step S110), the chart information storing unit 146 stores the chart information in the chart information storage unit 133 (step S111), and the processing ends. When the voice saving mode is set (Yes at step S110), the chart information storing unit 146 stores the chart information associated with the voice data in the chart information storage unit 133 (step S112), and the processing ends. The doctor terminal 100 continues to display the schema until it receives a save operation (No at step S109).
As described above, according to the first embodiment, the acquisition unit 141 acquires skeleton information including the joint position information of the patient whose motion is to be acquired. The extraction unit 142 then extracts the affected part based on the joint position information in the patient's skeleton information acquired by the acquisition unit 141. The selection unit 143 then selects a schema relating to the affected part extracted by the extraction unit 142, and the display control unit 145 controls the output unit 110 to display the schema selected by the selection unit 143. The doctor terminal 100 according to the first embodiment can therefore omit the selection work involved in the chart information of the electronic medical record, making it easy to input information into the electronic medical record.
Further, according to the first embodiment, the extraction unit 142 extracts, as the affected part, the part corresponding to the position on the bone for which, in the joint position information in the patient's skeleton information, the distance from the position of a hand joint at a predetermined timing to a bone connecting two joints other than that hand joint is equal to or less than a predetermined threshold and is the shortest. The doctor terminal 100 according to the first embodiment can therefore extract the affected part based on the actions the patient takes during the examination, enabling accurate selection of chart information.
Further, according to the first embodiment, the extraction unit 142 extracts, as the affected part, the part corresponding to a predetermined movement range of the position of a hand joint in the joint position information in the patient's skeleton information. The doctor terminal 100 according to the first embodiment can therefore also handle cases in which the affected part covers a wide area.
Further, according to the first embodiment, the selection unit 143 selects, as the related information, the schema of the part extracted by the extraction unit 142. The doctor terminal 100 according to the first embodiment can therefore omit the selection work that tends to be cumbersome when inputting information into the electronic medical record, making information input easier.
Further, according to the first embodiment, the mark assigning unit 144 assigns information indicating the position of the affected part to the position corresponding to the affected part in the schema selected by the selection unit 143, and the display control unit 145 controls the output unit 110 to display the schema to which the information indicating the affected-part position has been assigned by the mark assigning unit 144. The doctor terminal 100 according to the first embodiment can therefore also omit the work of placing a mark on the schema, making it even easier to input information into the electronic medical record.
Further, according to the first embodiment, the display control unit 145 extracts, from the past schemas included in the electronic medical record of the patient whose affected part was extracted, a schema in which the position to which the affected-part position information was assigned by the mark assigning unit 144 substantially coincides with the extracted affected part, and controls the output unit 110 to display the extracted schema. The doctor terminal 100 according to the first embodiment can therefore automatically read out the past medical data that needs to be compared with the current examination.
Further, according to the first embodiment, the chart information storing unit 146 stores, in the chart information storage unit 133, at least one of the voice of the chief complaint of the patient whose affected part was extracted and an image showing the position of the patient's affected part, in association with the related information. The doctor terminal 100 according to the first embodiment therefore makes it possible to save, as audio or video, the information given by the patient during the examination.
(Second Embodiment)
In the first embodiment described above, the case where a schema is selected as the related information has been described. In the second embodiment, a case will be described in which input items corresponding to the affected part are selected and displayed in the electronic medical record. The second embodiment differs from the first embodiment in the selection processing by the selection unit 143 and in the content of the display control by the display control unit 145, and the description below focuses on these points.
The selection unit 143 according to the second embodiment selects, as the related information, items of the electronic medical record relating to the affected part extracted by the extraction unit 142. FIG. 14 is a diagram for explaining an example of processing by the selection unit 143 according to the second embodiment. For example, as shown in FIG. 14, the selection unit 143 selects, from the content of the doctor's examination of the patient (the act of placing a stethoscope on the chest), the heading “auscultation result” to be displayed in the findings field of the electronic medical record.
Further, as shown in FIG. 14, the selection unit 143 selects, from the content of the doctor's examination of the patient (the act of placing a stethoscope on the chest), an auscultation result input screen for entering the auscultation result to be displayed. The display control unit 145 controls the output unit 110 to display the items of the electronic medical record selected by the selection unit 143.
Here, as shown in FIG. 14, when extracting the doctor's action on the patient, the doctor terminal 100 acquires both the patient's motion information and the doctor's motion information in each frame. That is, the acquisition unit 141 acquires the patient's skeleton information and the doctor's skeleton information for each frame collected by the motion information collection unit 10. The extraction unit 142 then extracts the doctor's action on the patient (the examination content) from the patient's skeleton information (the coordinate information of each joint) and the doctor's skeleton information (the coordinate information of each joint) acquired by the acquisition unit 141.
For example, the extraction unit 142 extracts the doctor's action on the patient (the examination content) from the positional relationship of the coordinates of the doctor's hand joints with respect to the coordinates of each of the patient's joints. The information for extracting the doctor's action (the examination content) is set in advance and stored in the storage unit 130. As an example, the storage unit 130 stores information indicating that, when a joint of the doctor's hand has moved around the patient's chest with temporary stops, a stethoscope is being placed on the chest.
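One plausible realization of this stored rule checks whether the doctor's hand joint is within a radius of the patient's chest and has temporarily stopped there, reusing hand_is_stopped from the earlier sketch. The chest reference point, radius, and stop time are assumptions introduced here.

```python
import numpy as np

def doctor_is_auscultating(doctor_hand_track, patient_chest, radius=0.25):
    """doctor_hand_track: chronological coordinates of the doctor's hand joint.
    True if the hand is currently within `radius` [m] of the patient's chest
    and has temporarily stopped there (using the earlier sketch)."""
    hand = np.asarray(doctor_hand_track[-1])
    near_chest = np.linalg.norm(hand - np.asarray(patient_chest)) <= radius
    return near_chest and hand_is_stopped(doctor_hand_track, stop_time=0.5)
```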
It is also possible to extract that the doctor is performing auscultation by detecting the stethoscope itself by pattern matching. Any method may be used to distinguish between the patient and the doctor in the motion information. For example, the position where the patient sits and the position where the doctor sits may be set in advance on the coordinate system, and the skeleton information acquired at those positions may be identified as the patient's skeleton information and the doctor's skeleton information, respectively.
As described above, according to the second embodiment, the selection unit 143 selects, as the related information, items of the electronic medical record relating to the affected part extracted by the extraction unit 142. The doctor terminal 100 according to the second embodiment can therefore omit the doctor's selection of the various items involved in creating an electronic medical record, making it easy to input information into the electronic medical record.
(Third embodiment)
The first and second embodiments have been described thus far, but the present invention may also be implemented in various forms other than the first and second embodiments described above.
In the first embodiment described above, the case where voice information including the patient's chief complaint is stored as medical record information was described. However, the embodiment is not limited to this; for example, an image may be stored in association with the medical record information. For example, the chart information storage unit 146 stores, as a still image or a moving image, an image of the patient touching the affected part taken from the color images collected by the color image collection unit 11 of the motion information collection unit 10.
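As a rough sketch of such storage, assuming OpenCV is available for image output and that a JSON sidecar file is an acceptable (hypothetical) way to link the saved image to the chart record:

```python
import json
import os
import time

import cv2  # assumes OpenCV is available for image I/O

def store_complaint_image(color_frame, patient_id, related_info,
                          out_dir="chart_media"):
    """Save the frame showing the patient touching the affected part and
    link it to the chart record via a JSON sidecar (illustrative format)."""
    os.makedirs(out_dir, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    image_path = os.path.join(out_dir, f"{patient_id}_{stamp}.png")
    cv2.imwrite(image_path, color_frame)  # still image; video would use VideoWriter
    with open(image_path + ".json", "w", encoding="utf-8") as f:
        json.dump({"patient_id": patient_id,
                   "related_info": related_info,
                   "image": image_path}, f, ensure_ascii=False)
    return image_path
```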
In the first to third embodiments described above, the case where the motion information collection unit 10 collects motion information under fixed collection conditions was described. However, the embodiment is not limited to this; for example, the collection conditions may be changed according to the information on the affected part extracted by the extraction unit 142.
FIG. 15 is a diagram for explaining the process of changing the motion information collection conditions performed by the doctor terminal 100 according to the third embodiment. For example, as shown in FIG. 15, when the extraction unit 142 extracts the patient's right arm as the affected part, the doctor terminal 100 controls the motion information collection unit 10 to change the direction and zoom of the camera. For example, as shown in FIG. 15, the doctor terminal 100 changes the camera direction so that the affected part of the right arm is positioned at the center of the screen, and further zooms in so that the affected part is captured at an appropriate size.
Further, as shown in FIG. 15, the doctor terminal 100 can also perform control to cut out and save the image region extracted as the affected part from the captured color image. In addition, based on the extracted affected-part information, the doctor terminal 100 can, for example, measure the area and color of the affected part in the color image and compare them with results measured in the past. When a past image of the affected part has been saved, the doctor terminal 100 can also perform control to capture the current affected part at the same magnification used for the past image and display the two side by side.
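A minimal sketch of the cut-out and measurement steps, assuming the affected part has already been projected to pixel coordinates; the fixed square margin is an assumption for illustration, not a value from the embodiment:

```python
import numpy as np

def crop_affected_region(color_image, center_xy, half_size=80):
    """Cut a square patch around the affected part's pixel coordinates,
    clamped to the image bounds; half_size is an assumed margin in pixels."""
    h, w = color_image.shape[:2]
    cx, cy = int(center_xy[0]), int(center_xy[1])
    x0, x1 = max(cx - half_size, 0), min(cx + half_size, w)
    y0, y1 = max(cy - half_size, 0), min(cy + half_size, h)
    return color_image[y0:y1, x0:x1]

def measure_region(patch):
    """Rough measurements for comparison with past results: pixel area of
    the patch and its mean color per channel."""
    area_px = patch.shape[0] * patch.shape[1]
    mean_color = patch.reshape(-1, patch.shape[2]).mean(axis=0)
    return area_px, mean_color
```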
In the first embodiment described above, the case where the doctor terminal 100 extracts the affected part and selects and displays related information about it (for example, a schema or electronic medical record items) was described. However, the embodiment is not limited to this; for example, each process may be executed by the server device 300 on the network.
That is, the server device 300 provides the doctor terminal 100 with the same processing as that performed by the doctor terminal 100. In this case, the server device 300 includes the acquisition unit 141, the extraction unit 142, the selection unit 143, and the display control unit 145. The acquisition unit 141 acquires skeleton information including position information of the joints of the patient whose motion is to be acquired. The extraction unit 142 extracts the affected part based on the joint position information in the patient's skeleton information acquired by the acquisition unit 141. The selection unit 143 selects a schema related to the affected part extracted by the extraction unit 142. The display control unit 145 then performs control so that the schema selected by the selection unit 143 is displayed on the output unit 110 of the doctor terminal 100.
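As an illustration only, the four units could be chained on the server roughly as follows; the class and function signatures are hypothetical sketches and do not come from the embodiment:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class MotionInfoPipeline:
    """Toy composition of the four units; names mirror the description only."""
    acquire: Callable[[], Dict[str, Any]]       # acquisition unit 141
    extract: Callable[[Dict[str, Any]], str]    # extraction unit 142
    select: Callable[[str], str]                # selection unit 143
    display: Callable[[str], None]              # display control unit 145

    def handle_request(self) -> None:
        skeleton = self.acquire()               # joint position information
        affected_part = self.extract(skeleton)  # e.g. "right_upper_arm"
        schema = self.select(affected_part)     # schema related to that part
        self.display(schema)                    # render on the doctor terminal
```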
In the first embodiment described above, the case where the doctor terminal 100 extracts the affected part and selects and displays related information about it (for example, a schema or electronic medical record items) was described. However, the embodiment is not limited to this; for example, a medical image diagnostic apparatus such as an ultrasonic diagnostic apparatus or an X-ray diagnostic apparatus may execute each process.
For example, when the processing is executed by an ultrasonic diagnostic apparatus, the apparatus first acquires the coordinate information of the joints of the hand of the doctor operating the ultrasound probe, relative to the patient's coordinate information. The apparatus then extracts, from the acquired coordinate information, the affected part to which the ultrasound probe is being applied, and performs control to display a body mark corresponding to the extracted part on an output unit such as a monitor. At this time, the ultrasonic diagnostic apparatus can also be controlled to display the body mark together with the ultrasound image.
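A sketch of how such a body-mark selection might be expressed, where the part names and image file names are purely illustrative placeholders; real body-mark assets and part identifiers would come from the ultrasound system's own configuration:

```python
# Illustrative mapping only; not the terminology of any actual system.
BODY_MARKS = {
    "chest": "bodymark_chest.png",
    "abdomen": "bodymark_abdomen.png",
    "right_arm": "bodymark_right_arm.png",
}

def select_body_mark(probe_part: str) -> str:
    """Map the part where the probe is applied to a body-mark image,
    falling back to a whole-body mark for parts not in the table."""
    return BODY_MARKS.get(probe_part, "bodymark_whole_body.png")
```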
The functions of the acquisition unit 141, the extraction unit 142, the selection unit 143, and the display control unit 145 described in the first and second embodiments can also be realized by software. For example, these functions are realized by causing a computer to execute a motion information processing program that defines the processing procedures described above as being performed by the acquisition unit 141, the extraction unit 142, the selection unit 143, and the display control unit 145. This motion information processing program is stored in, for example, a hard disk or a semiconductor memory device, and is read and executed by a processor such as a CPU or an MPU. The program can also be recorded on a computer-readable recording medium such as a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto-Optical disk), or a DVD (Digital Versatile Disc) and distributed.
As described above, according to the first to third embodiments, the motion information processing system, motion information processing device, and medical image diagnostic device of the present embodiments make it easier to enter information into an electronic medical record.
While several embodiments of the present invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and are likewise included in the invention described in the claims and its equivalents.
Claims (11)
- A motion information processing system comprising: an acquisition unit that acquires motion information including position information of the joints of a subject whose motion is to be acquired; an extraction unit that extracts an affected part based on the joint position information in the subject's motion information acquired by the acquisition unit; a selection unit that selects related information related to the affected part extracted by the extraction unit; and a display control unit that performs control so that the related information selected by the selection unit is displayed on a display unit.
- The motion information processing system according to claim 1, wherein, in the joint position information in the subject's motion information, the extraction unit extracts as the affected part a part corresponding to the position on a line segment connecting two joints other than a hand joint for which the distance from the position of the hand joint at a predetermined timing is equal to or less than a predetermined threshold and is shortest.
- The motion information processing system according to claim 1 or 2, wherein the extraction unit extracts as the affected part a part corresponding to a predetermined movement range of the position of a hand joint in the joint position information in the subject's motion information.
- The motion information processing system according to claim 1, wherein the selection unit selects, as the related information, a schematic diagram of the part extracted by the extraction unit.
- The motion information processing system according to claim 4, further comprising an assigning unit that adds information indicating the affected part's position at the position corresponding to the affected part in the schematic diagram selected by the selection unit, wherein the display control unit performs control so that the schematic diagram to which the information indicating the affected part's position has been added by the assigning unit is displayed on the display unit.
- The motion information processing system according to claim 5, wherein the display control unit extracts, from the past schematic diagrams included in the electronic medical record of the patient whose affected part was extracted, a schematic diagram in which the position to which the assigning unit added the affected-part position information substantially matches the extracted affected part, and displays the extracted schematic diagram on the display unit.
- The motion information processing system according to claim 1, further comprising a storing unit that stores in a storage unit, in association with the related information, at least one of the voice of the chief complaint of the patient whose affected part was extracted and an image showing the position of that patient's affected part.
- The motion information processing system according to claim 1, wherein the acquisition unit performs control to change at least one of the direction and the magnification of a camera for acquiring the subject's motion information, according to the position of the affected part extracted by the extraction unit.
- The motion information processing system according to claim 1, wherein the selection unit selects, as the related information, items of an electronic medical record related to the affected part extracted by the extraction unit.
- A motion information processing device comprising: an acquisition unit that acquires motion information including position information of the joints of a subject whose motion is to be acquired; an extraction unit that extracts an affected part based on the joint position information in the subject's motion information acquired by the acquisition unit; a selection unit that selects related information related to the affected part extracted by the extraction unit; and a display control unit that performs control so that the related information selected by the selection unit is displayed on a display unit.
- A medical image diagnostic apparatus comprising: a medical image generation unit that generates a medical image; an acquisition unit that acquires motion information including position information of the joints of a subject whose motion is to be acquired; an extraction unit that extracts an affected part based on the joint position information in the subject's motion information acquired by the acquisition unit; a selection unit that selects related information related to the affected part extracted by the extraction unit; and a display control unit that performs control so that the medical image generated by the medical image generation unit and the related information selected by the selection unit are displayed on a display unit.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
JP2012-288512 | 2012-12-28 | |
JP2012288512A (published as JP2014130519A) | 2012-12-28 | 2012-12-28 | Medical information processing system, medical information processing device, and medical image diagnostic device
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014104357A1 (en) | 2014-07-03 |
Family
ID=51021418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/JP2013/085244 (published as WO2014104357A1) | Motion information processing system, motion information processing device and medical image diagnosis device | 2012-12-28 | 2013-12-27
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2014130519A (en) |
WO (1) | WO2014104357A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6769859B2 (en) | 2016-12-19 | 2020-10-14 | 株式会社日立エルジーデータストレージ | Image processing device and image processing method |
WO2023053257A1 (en) * | 2021-09-29 | 2023-04-06 | 日本電気株式会社 | Information processing system, information processing device, information processing method, and non-transitory computer-readable medium having program stored therein |
JP7545172B1 (en) | 2023-05-30 | 2024-09-04 | ファストドクター株式会社 | Program, method, and information processing device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000207568A (en) * | 1999-01-20 | 2000-07-28 | Nippon Telegr & Teleph Corp <Ntt> | Attitude measuring instrument and recording medium recording attitude measuring program |
JP2002109061A (en) * | 2000-09-29 | 2002-04-12 | Kubota Corp | Life information acquisition system and method for preparing care plan |
JP2009504298A (en) * | 2005-08-19 | 2009-02-05 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | System and method for analyzing user movement |
JP2010079567A (en) * | 2008-09-25 | 2010-04-08 | Canon Inc | Image processing apparatus |
JP2010176213A (en) * | 2009-01-27 | 2010-08-12 | Canon Inc | Diagnostic support apparatus and method for controlling the same |
- 2012-12-28: filed in Japan as JP2012288512A (published as JP2014130519A), status: active, Pending
- 2013-12-27: filed as PCT application PCT/JP2013/085244 (published as WO2014104357A1), status: active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2014130519A (en) | 2014-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10170155B2 (en) | Motion information display apparatus and method | |
JP6675462B2 (en) | Motion information processing device | |
JP6878628B2 (en) | Systems, methods, and computer program products for physiological monitoring | |
JP6181373B2 (en) | Medical information processing apparatus and program | |
JP6381918B2 (en) | Motion information processing device | |
JP6334925B2 (en) | Motion information processing apparatus and method | |
JP6323451B2 (en) | Image processing apparatus and program | |
JP2015061579A (en) | Motion information processing apparatus | |
JP6598422B2 (en) | Medical information processing apparatus, system, and program | |
WO2014104357A1 (en) | Motion information processing system, motion information processing device and medical image diagnosis device | |
JP6266317B2 (en) | Diagnostic support device and diagnostic support method | |
JP6320702B2 (en) | Medical information processing apparatus, program and system | |
WO2024177072A1 (en) | Information processing device, information processing method, and information processing program | |
JP2015039615A (en) | Medical image processor and medical image display system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13868390; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13868390; Country of ref document: EP; Kind code of ref document: A1