WO2023079729A1 - Biological examination apparatus, biological examination method, and program - Google Patents

Biological examination apparatus, biological examination method, and program

Info

Publication number
WO2023079729A1
Authority
WO
WIPO (PCT)
Prior art keywords
graph
display
moving image
swallowing
point
Prior art date
Application number
PCT/JP2021/040940
Other languages
French (fr)
Japanese (ja)
Inventor
敬治 内田
蘭 橘
寛彦 水口
Original Assignee
Maxell, Ltd. (マクセル株式会社)
Priority date
Filing date
Publication date
Application filed by Maxell, Ltd. (マクセル株式会社)
Priority to PCT/JP2021/040940 priority Critical patent/WO2023079729A1/en
Publication of WO2023079729A1 publication Critical patent/WO2023079729A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb

Definitions

  • the present invention relates to a biological examination apparatus, a biological examination method, and a program for performing an examination related to swallowing of a living body.
  • Pneumonia is known to be one of the major causes of death, and aspiration pneumonia induced by dysphagia (a disorder related to swallowing) accounts for about 60% or more of such cases.
  • Stroke is the main cause of dysphagia, and it is known that 80% of stroke patients in the acute phase develop dysphagia. It is also known that the percentage of people with dysphagia increases with age even without a clear causative disease such as stroke, and the number of such people is expected to increase.
  • in VF (videofluoroscopic examination of swallowing), a bolus containing a contrast agent such as barium sulfate and an X-ray fluoroscope are used to monitor the movement of the bolus during swallowing and the behavior of the hyoid bone and larynx of the subject.
  • because the swallowing movement is a series of rapid movements, it is generally recorded and evaluated on video.
  • VF requires caution because it carries a risk of aspiration and suffocation, and because it requires a large X-ray fluoroscope, it is also subject to constraints on X-ray exposure and examination time.
  • VE (videoendoscopic examination of swallowing) is another known examination of swallowing.
  • Patent Literature 1 discloses a device that attaches a microphone to the neck, saves voice data corresponding to auscultation as digital data, and detects swallowing by waveform analysis.
  • Patent Document 2 discloses a biological examination apparatus in which, in addition to a microphone, a magnetic coil is attached to the neck; in addition to voice data, motion data of the thyroid cartilage during swallowing, which corresponds to palpation, is stored as digital data, an examination related to swallowing of a living body is performed, and the results are displayed.
  • this biological examination apparatus has a transmitting coil and a receiving coil arranged so as to sandwich the thyroid cartilage.
  • the lateral displacement of the cartilage is measured as distance information between the coils.
  • distance information and voice information corresponding to palpation and auscultation can be acquired noninvasively at the same time, so that the swallowing motion can be evaluated by combining the distance information and voice information.
  • the distance information and the voice information are displayed as a distance waveform and a voice waveform as time-series waveforms, and the swallowing state is evaluated based on these waveforms.
  • with these evaluation forms based only on waveforms, however, there are cases where it is difficult to grasp the actual timing of swallowing (start timing, end timing, etc.) from the waveforms, and the examination of swallowing could therefore take a long time.
  • the present invention has been made in view of the above circumstances, and aims to provide a biological examination apparatus, a biological examination method, and a program capable of efficiently performing an examination related to swallowing.
  • the biological examination apparatus of the present invention is a biological examination apparatus comprising an input device that receives operations by an operator, a display device, and an information processing device.
  • the information processing device simultaneously displays on the display device a graph showing the movement of the thyroid cartilage when the subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image of the subject during swallowing taken by a camera, and, in a state in which the graph and the moving image are displayed at the same time, based on an operation by the operator designating an arbitrary point on one of the graph and the moving image, causes the display device to display a display that makes it possible to grasp the point corresponding to the arbitrary point on the other of the graph and the moving image.
  • the biological examination method of the present invention is a biological examination method performed by a biological examination apparatus including an input device that receives operations by an operator, a display device, and an information processing device, and includes simultaneously displaying on the display device a graph showing the movement of the thyroid cartilage when the subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image of the subject during swallowing taken by a camera, and, in a state in which the graph and the moving image are displayed at the same time, displaying on the display device, based on an operation by the operator designating an arbitrary point on one of the graph and the moving image, a display that makes it possible to grasp the point corresponding to the arbitrary point on the other of the graph and the moving image.
  • the program of the present invention causes a computer to execute a process of simultaneously displaying on a display device a graph showing the movement of the thyroid cartilage when the subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image of the subject during swallowing taken by a camera, and a process of displaying on the display device, in a state in which the graph and the moving image are displayed at the same time, based on an operation by the operator designating an arbitrary point on one of the graph and the moving image, a display that makes it possible to grasp the point corresponding to the arbitrary point on the other of the graph and the moving image.
  • the graph showing the movement of the thyroid cartilage and the moving image of the subject during swallowing are displayed on the same screen.
  • accordingly, the graph (detection data) can be checked while looking at the moving image of the actual swallowing, and the examination related to swallowing can be performed efficiently.
  • based on the designating operation, at least one of the following can be displayed: a display that makes it possible to grasp the point in the moving image corresponding to an arbitrary point designated on the graph, and a display that makes it possible to grasp the point on the graph corresponding to an arbitrary point designated in the moving image.
  • the graph may have a first coordinate axis corresponding to the forward and backward movement of the thyroid cartilage and a second coordinate axis intersecting the first coordinate axis and corresponding to the vertical movement of the thyroid cartilage.
  • the graph may be a single graph in which the movement of the thyroid cartilage and the swallowing sound acquired by the microphone are temporally associated with each other.
  • in this way, the movement of the thyroid cartilage and the change in the swallowing sound can be integrated and visualized in one graph, so that swallowing dynamics such as the timing of the swallowing motion and the swallowing sound can be grasped at a glance, non-invasively.
  • by linking such graphs and moving images it becomes easier to understand the relationship between the measured data and the actual swallowing motion, and the examination of swallowing can be performed efficiently.
  • examinations related to swallowing can be performed efficiently.
  • the drawings referred to in the following description include: a schematic perspective view showing an example of a flexible holder that holds the laryngeal displacement detection unit of the biological examination apparatus; a functional block diagram showing an example of the configuration of the computer of the biological examination apparatus; flow charts showing examples of the processing of the motion analysis unit, the speech analysis unit, and the analysis unit of the processing unit of the computer; a diagram showing an example of a distance waveform based on typical distance information detected by the laryngeal displacement detection unit; a diagram showing an example of distance information based on detection data detected by the laryngeal displacement detection unit and a motion waveform (fitting waveform) fitted to that distance information; a diagram showing an example of component waveforms individually showing the temporal behavior trajectories of the thyroid cartilage in the vertical direction and the anteroposterior direction; a diagram showing an example of a swallowing sound waveform based on typical voice information detected by the swallowing sound detection unit; and a diagram showing an example of the trajectory graph.
  • FIG. 1 is a functional block diagram showing a configuration example of the biological examination apparatus 100 according to this embodiment.
  • the biological examination apparatus 100 includes, as a laryngeal displacement detection unit, a transmission coil 102 and a reception coil 103 that detect a change in the distance between two positions on the body part surrounding the larynx (thyroid cartilage) of a subject 101, the change being caused by the vertical and anteroposterior behavior of the thyroid cartilage when the subject 101 swallows, and, as a swallowing sound detection unit, a microphone 106 that detects the swallowing sound when the subject 101 swallows. These coils 102 and 103 and the microphone 106 are held by a flexible holder 113, which will be described later with reference to FIG. 2.
  • the transmission coil 102 and the reception coil 103 are arranged facing each other so as to sandwich the thyroid cartilage from both sides; the transmission coil 102 is connected to a transmitter 104, and the reception coil 103 is connected to a receiver 105. Further, the microphone 106 is placed near the thyroid cartilage of the subject 101, is electrically connected to a detection circuit 107 that detects the swallowing sounds captured by the microphone 106 during swallowing, and operates with power supplied from the detection circuit 107.
  • the microphone 106 is preferably a microphone using, for example, a piezo element (piezoelectric element) so as not to pick up ambient sounds other than swallowing sounds as much as possible, but may be a condenser type microphone or the like.
  • the biological examination apparatus 100 further has a control device 108, a computer 109, a display device 110, an external storage device 111, an input device 112, and a camera 114.
  • the control device 108 controls the operations of the transmitter 104, the receiver 105, the detection circuit 107, the computer 109, the camera 114, and the external storage device 111, and controls power supply, signal transmission/reception timing, and the like.
  • the computer 109 is an information processing device including a CPU, a memory, an internal storage device, etc., and performs various arithmetic processing. The control and calculations performed by the computer 109 are realized by the CPU executing a predetermined program.
  • a display device 110, an external storage device 111, and an input device 112 are electrically connected to the computer 109.
  • the camera 114 photographs the subject 101. Specifically, the camera 114 captures the state of the subject 101 while the swallowing examination (measurement) is being performed with the coils 102 and 103 and the microphone 106, and acquires a moving image 950 described later (see FIGS. 11 to 14). More specifically, the camera 114 captures the movement of the throat of the subject 101 during the examination. The movement of the mouth of the subject 101 during the examination may also be photographed by the camera 114.
  • the display device 110 is an interface that displays the measured waveform, analysis information by the computer 109, and the like.
  • the display device 110 may be, for example, a liquid crystal display, an EL display, a plasma display, a CRT display, a projector, or the like, but is not limited to these.
  • the display device 110 may be mounted on a tablet terminal, a head-mounted display, a wearable device, or the like. Note that a specific function may be notified by an LED, sound, or the like.
  • the external storage device 111 stores data used for the various arithmetic processing executed by the computer 109, data obtained by that processing, images acquired (captured) by the camera 114 (the moving image 950), and conditions, parameters, and the like that are input via the input device 112.
  • the input device 112 is an interface for an operator to input conditions and the like necessary for measurement and arithmetic processing performed in the present embodiment.
  • a high-frequency magnetic field is emitted from the transmission coil 102 by sending a high-frequency signal generated by the transmitter 104 to the transmission coil 102, and the signal induced in the reception coil 103 is received by the receiver 105. The signal received by the receiver 105 is sent to the computer 109 as a measured value of the output voltage between the coils.
  • the swallowing sound captured by the microphone 106 is detected by the detection circuit 107 and converted into a voltage signal, which is input from the detection circuit 107 to the computer 109 as an output voltage measurement value.
  • FIG. 2 shows a flexible holder 113 that holds the transmitting and receiving coils 102, 103 and the microphone 106.
  • the flexible holder 113 is made of a flexible material such as various resins, and includes a neck mounting member 202 that is attached to the neck of the subject 101 using its open end as shown in the figure, and a pair of arc-shaped sensor holding members 203a and 203b positioned inside the neck mounting member 202 along substantially the same arc.
  • one end of each of the pair of sensor holding members 203a and 203b is integrally connected to the neck mounting member 202 so as to be held on the inside thereof, while the other ends of the sensor holding members 203a and 203b are open and positioned near the larynx of the subject 101.
  • sensor portions 204a and 204b are arranged at the other ends of the pair of sensor holding members 203a and 203b, respectively; the sensor holding members 203a and 203b are positioned without contacting the neck of the subject 101, so that the sensor portions can follow the swallowing movement (the movement of the thyroid cartilage, etc.) independently of the neck mounting member 202.
  • the transmission coil 102 is fixedly arranged inside one of the sensor portions 204a and 204b, and the reception coil 103 is fixedly arranged inside the other. In particular, in this embodiment, the transmission coil 102 and the reception coil 103 are attached to the sensor portions 204a and 204b so as to be oriented in directions in which they readily face each other (close to the direction normal to the neck surface of the subject 101), thereby enabling detection with a high signal-to-noise (SN) ratio.
  • the microphone 106 and the transmission coil 102 or the reception coil 103 can thereby be arranged at positions substantially perpendicular to each other, so that magnetic field noise generated from the microphone 106 entering the transmission and/or reception coils 102 and 103 can be reduced.
  • the facing positions of the transmission coil 102 and the reception coil 103 and their positions orthogonal to the microphone 106 are not limited to the described arrangement, and may be any positions that can realize detection with a sufficiently high SN ratio.
  • pressing portions 205a and 205b to be applied to the neck of the subject 101 are provided at the opposing end portions forming the open end of the neck mounting member 202 (the portions of the neck mounting member 202 positioned on the back side of the neck of the subject 101), and are formed in a shape suitable for pressing, such as cylindrical or spherical.
  • by being held at four pressing points, namely the two pressing portions 205a and 205b and the two sensor portions 204a and 204b provided at the other ends of the sensor holding members 203a and 203b, the flexible holder 113 can be easily worn around the neck regardless of the neck size of the subject 101.
  • electrical wires 201a and 201b extending from the transmission/reception coils 102 and 103 incorporated in the sensor portions 204a and 204b and from the microphone 106 are electrically connected to the transmitter 104, the receiver 105, and the detection circuit 107 shown in FIG. 1.
  • the computer 109 includes a swallowing measurement unit 410, an image acquisition unit 415, a processing unit 420, and a display unit 430.
  • the swallowing measurement unit 410 measures the swallowing motion and the swallowing sound using the transmission coil 102, the reception coil 103, the transmitter 104, the receiver 105, the microphone 106, the detection circuit 107, and the control device 108 described with reference to FIG. 1 (laryngeal displacement detection step and swallowing sound detection step).
  • the image acquisition unit 415 acquires a moving image 950 of the subject 101 during swallowing motion and swallowing sound measurement using the camera 114 (moving image acquisition step).
  • the image acquisition unit 415 stores the acquired moving image 950 (moving image data) in the internal storage device of the computer 109 and/or the external storage device 111 (that is, memory).
  • the processing unit 420 includes a motion analysis unit 421 that analyzes distance information, a voice analysis unit 422 that analyzes swallowing sounds that are voice information, and an analysis unit 423 that analyzes a combination of the distance information and the swallowing sounds.
  • the data measured by the swallowing measurement unit 410 is processed by these units (processing step). Specifically, as will be described later, the processing unit 420 fits a model function that models the swallowing motion (in the present embodiment, equation (1) described later) to the distance information based on the detection data detected by the transmission/reception coils 102 and 103 (in the present embodiment, data indicating the change over time in the distance between the coils 102 and 103 arranged so as to sandwich the thyroid cartilage of the subject 101, namely the distance waveform 701 shown in FIG. 7 to be described later), thereby obtaining a fitting result (in the present embodiment, the fitted waveform 1103 shown in (a) of FIG. 8 to be described later). From this fitting result, the processing unit 420 extracts a front-back motion component associated with the anteroposterior movement of the thyroid cartilage (in the present embodiment, the front-back motion component waveform 1105 shown in (b) of FIG. 8, or the data values forming it) and a vertical motion component associated with the vertical movement of the thyroid cartilage (in the present embodiment, the vertical motion component waveform 1106 shown in (b) of FIG. 8, or the data values forming it), and, based on the extracted vertical motion component and front-back motion component, generates two-dimensional trajectory data showing the trajectory of the thyroid cartilage (in the present embodiment, the data for forming the trajectory graph 901 shown in FIG. 10, which will be described later).
  • the processing unit 420 also generates a swallowing sound waveform (in the present embodiment, the swallowing sound waveform 801 shown in FIG. 9, which will be described later) based on the detection data detected through the microphone 106.
  • the display unit 430 causes the display device 110 to display information (data) measured and processed by the swallowing measurement unit 410 and the processing unit 420 and moving images (moving image data) acquired by the image acquisition unit 415 (display step). Note that the swallowing measurement unit 410, the processing unit 420, and the display unit 430 operate independently.
  • FIG. 4 shows the processing flow of the motion analysis unit 421 of the processing unit 420 of the computer 109 of FIG.
  • the motion analysis unit 421 processes the detection data detected by the transmission/reception coils 102 and 103.
  • in step S501, the data measured by the swallowing measurement unit 410 is smoothed.
  • smoothing is performed by piecewise polynomial approximation using a Savitzky-Golay filter.
  • in this case, the smoothing is performed by setting the window length and the polynomial order to, for example, 51 points and 5, respectively.
  • the smoothing method may be, for example, a simple moving average, and the present invention is not limited by these.
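  • as a rough illustration of this smoothing step (step S501), the sketch below applies a Savitzky-Golay filter to a sampled distance signal; the 51-point window, the polynomial order of 5, the 100 Hz rate, and the use of scipy are illustrative assumptions, not the reference implementation of the apparatus.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_distance_signal(raw_distance: np.ndarray,
                           window_length: int = 51,
                           polyorder: int = 5) -> np.ndarray:
    """Smooth the coil-to-coil distance signal (step S501) by piecewise
    polynomial approximation with a Savitzky-Golay filter; a simple
    moving average would also work here."""
    return savgol_filter(raw_distance, window_length=window_length,
                         polyorder=polyorder)

# Example: 15 s of distance samples at 100 Hz (synthetic values)
t = np.arange(0, 15, 0.01)
raw = 30 + np.random.normal(0, 0.05, t.size)   # mm, with measurement noise
smoothed = smooth_distance_signal(raw)
```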
  • FIG. 7 shows a typical example of the distance waveform 701, which shows the change over time in the distance between the transmission and reception coils 102 and 103, that is, the distance between two positions on the larynx of the subject 101.
  • a measured distance waveform 701 is the result of one-dimensional (horizontal) observation of the two-dimensional movement (forward and backward movement and vertical movement) of the thyroid cartilage (hyoid bone). Therefore, it exhibits a W-shaped waveform as shown in the figure.
  • from the start point (time T0) 702 at which the subject 101 starts swallowing the bolus in the mouth, the thyroid cartilage is lifted as the bolus is sent toward the esophagus, so that the distance between the transmission and reception coils 102 and 103 narrows from D0 to D1 and the distance waveform 701 reaches a first trough (first lower peak value; time T1) 703.
  • next, the thyroid cartilage moves forward (in the direction in which the subject's face is facing) to open the esophagus, thereby increasing the distance between the transmission and reception coils 102 and 103 from D1 to D2, and the distance waveform 701 transitions from the first trough 703 to a peak (upper peak value; time T2) 704.
  • the thyroid cartilage then moves backward as the epiglottis moves upward, so that the distance between the transmission and reception coils 102 and 103 changes from D2 to D3 and the distance waveform 701 transitions from the peak 704 to a second trough (second lower peak value; time T3) 705.
  • the thyroid cartilage then descends so that the epiglottis and the thyroid cartilage return to their original positions, thereby increasing the distance between the transmission and reception coils 102 and 103 from D3 to D4 and causing the distance waveform 701 to transition from the second trough 705 to the end point (time T4) 706.
  • in the series of behaviors of the thyroid cartilage from ascending to descending, a downwardly convex waveform component is generated, and in the series of behaviors from moving forward to moving backward, an upwardly convex waveform component is generated. Therefore, in the present embodiment, the W-shaped distance waveform 701 is modeled as the superposition of a gentle downwardly convex waveform 710 (corresponding to the vertical motion component waveform 1106 shown in FIG. 8(b)) and a sharp upwardly convex waveform 720 (corresponding to the front-back motion component waveform 1105 shown in FIG. 8(b)), as shown in Equation (1).
  • in Equation (1), the measured distance waveform is expressed as y(t) = rAP(t) + rHF(t) + d(t) + e, where t is time, y(t) is the measured distance waveform, rAP(t) is the component in the front-back direction, rHF(t) is the component in the vertical direction, d(t) is a trend component representing the offset from the initial value caused by body movement, individual differences (such as thickness), and the like, and e denotes measurement noise.
  • the front-back and vertical components rAP and rHF are each modeled by a normal distribution, and the trend component d(t) is modeled by a linear equation; however, these models may be autoregressive models or nonlinear models, and the present invention is not limited by these.
  • each component is obtained by parameter fitting using a mathematical optimization technique.
  • parameter fitting is performed using the nonlinear least-squares method, but the present invention is not limited to this.
  • a constraint may be set such that the variance value of rAP is smaller than the variance value of rHF.
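  • a minimal sketch of the fitting in step S502, assuming the additive Gaussian-plus-trend form of Equation (1) given above and using scipy.optimize.curve_fit as a stand-in for the nonlinear least-squares routine; the parameter names, starting values, and the bounds comment are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mean, sigma):
    """One motion component: amp ~ size of the movement, mean ~ time of
    the movement, sigma ~ duration of the movement."""
    return amp * np.exp(-((t - mean) ** 2) / (2.0 * sigma ** 2))

def swallow_model(t, a_ap, m_ap, s_ap, a_hf, m_hf, s_hf, d0, d1):
    """y(t) = rAP(t) + rHF(t) + d(t): sharp upward-convex front-back
    component, gentle downward-convex vertical component, linear trend."""
    r_ap = gaussian(t, a_ap, m_ap, s_ap)    # upward convex (a_ap > 0)
    r_hf = gaussian(t, a_hf, m_hf, s_hf)    # downward convex (a_hf < 0)
    return r_ap + r_hf + d0 + d1 * t

def fit_swallow(t, y):
    p0 = [0.1, t.mean(), 0.2,     # rAP: amplitude, mean, sigma
          -0.3, t.mean(), 0.6,    # rHF: amplitude, mean, sigma
          y[0], 0.0]              # linear trend d(t)
    # a constraint such as var(rAP) < var(rHF) can be approximated with
    # per-parameter bounds on s_ap and s_hf if desired
    params, _ = curve_fit(swallow_model, t, y, p0=p0, maxfev=20000)
    return params
```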
  • in FIG. 8(a), the waveform 1102 formed by the data values represented by dots corresponds to the distance waveform 701 shown in FIG. 7, and the waveform 1103 is the motion waveform (fitting waveform) obtained by fitting the model function to it.
  • the horizontal axis is time, and the vertical axis is a normalized amplitude based on the distance between the coils.
  • parameters are extracted from the fitted model function in step S503.
  • in the present embodiment, the anteroposterior and vertical behaviors of the thyroid cartilage are modeled by independent normal distributions, and the amplitude, average value, and variance of each distribution are extracted as parameters. Here, the "amplitude" corresponds to the magnitude of the movement of the thyroid cartilage, the "average value" corresponds to the time at which the movement occurred, and the "variance" corresponds to the duration of the movement.
  • FIG. 8(b) shows the component waveforms of the front-back and vertical components of the thyroid cartilage that are individually extracted from the motion waveform (fitted waveform) 1103 shown in FIG. 8(a) (the upwardly convex front-back motion component waveform 1105 and the downwardly convex vertical motion component waveform 1106).
  • in this way, the processing unit 420 including the motion analysis unit 421 of the biological examination apparatus 100 of the present embodiment can generate two-dimensional trajectory data that individually indicates the temporal behavior trajectories of the thyroid cartilage in the vertical direction and the anteroposterior direction, based on the vertical motion component and the front-back motion component.
  • in step S504, the feature points of the W-shaped waveform, that is, the feature points corresponding to the points 702 to 706 of the distance waveform 701 shown in FIG. 7, are extracted.
  • specifically, T2 is taken as the average value of rAP, and T1 and T3 are taken as the times of the minimum values before and after T2, respectively.
  • T0 and T4 are obtained as the times at the points advanced from the average value of rHF by the variance value in the negative and positive directions, respectively.
  • D0 to D4 are obtained as values corresponding to times T0 to T4, respectively.
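  • the sketch below extracts the feature points T0 to T4 and D0 to D4 from the fitted parameters roughly as step S504 describes; treating the rHF variance as a width added to and subtracted from the rHF mean, and restricting the minima search to the T0-T2 and T2-T4 intervals, are simplifying assumptions for illustration.

```python
import numpy as np

def extract_feature_points(t, y_fit, m_ap, m_hf, s_hf):
    """Feature points of the W-shaped distance waveform (step S504).

    t, y_fit  : time axis and fitted motion waveform
    m_ap      : mean (peak time) of the front-back component rAP -> T2
    m_hf, s_hf: mean and width of the vertical component rHF -> T0, T4
    """
    T2 = m_ap
    T0 = m_hf - s_hf        # advanced by the width in the negative direction
    T4 = m_hf + s_hf        # advanced by the width in the positive direction

    # T1 and T3: minima of the fitted waveform before and after T2
    before = (t >= T0) & (t <= T2)
    after = (t >= T2) & (t <= T4)
    T1 = float(t[before][np.argmin(y_fit[before])])
    T3 = float(t[after][np.argmin(y_fit[after])])

    times = [T0, T1, T2, T3, T4]
    dists = [float(np.interp(x, t, y_fit)) for x in times]   # D0..D4
    return dict(zip(["T0", "T1", "T2", "T3", "T4"], times)), \
           dict(zip(["D0", "D1", "D2", "D3", "D4"], dists))
```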
  • in step S505, the waveforms, parameters, feature points, and the like calculated in steps S501 to S504 are stored in the internal storage device of the computer 109 and/or the external storage device 111. Note that each of steps S501 to S505 described above may be performed while the swallowing motion and swallowing sound are being measured by the swallowing measurement unit 410, or may be performed multiple times.
  • FIG. 5 shows the processing flow of the speech analysis unit 422 of the processing unit 420 of the computer 109 of FIG.
  • in step S601, rectification processing is performed on the audio information (generally, an audio signal including both positive and negative values) measured from the microphone 106 through the swallowing measurement unit 410.
  • the rectification process means a process of taking an absolute value and converting a negative value into a positive value.
  • FIG. 9 shows a swallowing sound waveform 801 obtained by rectifying typical voice information.
  • in step S602, the rectified signal obtained in step S601 is logarithmically transformed. This processing can reduce the influence of spike-like signals mixed into the swallowing sound.
  • in step S603, the logarithmically transformed signal obtained in step S602 is smoothed.
  • in the present embodiment, smoothing is performed using a moving average, and the window width of the moving average is set to 400 points.
  • the present invention is not limited by this smoothing technique.
  • in step S604, an exponential transformation is applied to the smoothed signal obtained in step S603.
  • this yields a waveform representing the envelope of the originally measured audio information. In FIG. 9, the envelope curve 802 obtained from such typical speech information (the swallowing sound waveform 801) is indicated by a dashed line.
  • in step S605, the envelope signal obtained in step S604 is resampled. Specifically, in this embodiment, since the sampling frequencies of the voice information and the distance information in the swallowing measurement unit 410 shown in FIG. 3 are 4000 Hz and 100 Hz, respectively, the envelope signal is resampled to 1/40 of its rate to match the sampling frequency of the distance information.
  • in step S606, the maximum value is obtained as a feature point of the resampled envelope signal obtained in step S605. This is because the section in which the maximum amplitude occurs in the swallowing sound signal (the swallowing sound waveform 801) is considered to indicate the flow of the ingested material, and is therefore an important feature of the swallowing sound. In this step S606, the time S2 corresponding to the peak point 803 indicating the maximum amplitude of the envelope curve 802 shown in FIG. 9 is obtained.
  • in step S607, the swallowing sound section of the resampled envelope signal obtained in step S605 is obtained. That is, in order to obtain the time interval Ts in which the swallowing sound occurs in the envelope 802, the times at both ends of the swallowing sound interval are obtained. Specifically, an amplitude threshold value 804, indicated by a dashed line in FIG. 9, is set, and the points at which the envelope crosses the threshold value 804 downward on either side of the maximum value (peak point 803) obtained in step S606, that is, the times S1 and S3 corresponding to the temporally earlier start point 805 and the temporally later end point 806, respectively, are acquired as feature points. In the present embodiment, a value obtained by adding the normalized median absolute deviation to the median is used as the threshold value 804. The method of setting the threshold 804 does not limit the present invention, and a value obtained by adding the standard deviation to the average value may be used.
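  • a compact sketch of the speech-analysis pipeline of steps S601 to S607 (rectify, log-transform, moving-average smoothing, exponential transform, resampling from 4000 Hz to 100 Hz, then thresholding with the median plus the normalized median absolute deviation); the numpy/scipy calls and the small epsilon added before the logarithm are assumptions for illustration.

```python
import numpy as np
from scipy.signal import resample_poly
from scipy.stats import median_abs_deviation

def swallow_sound_envelope(audio, fs_audio=4000, fs_dist=100, win=400):
    """Steps S601-S605: envelope of the swallowing sound, resampled to
    the sampling rate of the distance information."""
    rect = np.abs(audio)                                 # S601 rectification
    log_sig = np.log(rect + 1e-12)                       # S602 log transform
    smooth = np.convolve(log_sig, np.ones(win) / win,    # S603 moving average,
                         mode="same")                    #      400-point window
    env = np.exp(smooth)                                 # S604 exponential transform
    return resample_poly(env, fs_dist, fs_audio)         # S605 resample (factor 1/40)

def swallow_sound_features(env):
    """Steps S606-S607: peak index S2 and the swallowing-sound interval S1-S3."""
    peak = int(np.argmax(env))                           # S606: maximum amplitude
    threshold = np.median(env) + median_abs_deviation(env, scale="normal")
    above = env > threshold
    start, end = peak, peak
    while start > 0 and above[start - 1]:
        start -= 1                                       # S1 (start point 805)
    while end < env.size - 1 and above[end + 1]:
        end += 1                                         # S3 (end point 806)
    return peak, start, end, threshold
```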
  • in step S608, the waveforms and feature values calculated in steps S601 to S607 are stored in the internal storage device of the computer 109 and/or the external storage device 111. Note that each of steps S601 to S608 described above may be performed while the swallowing motion and swallowing sound are being measured by the swallowing measurement unit 410, or may be performed multiple times.
  • FIG. 6 shows the processing flow of the analysis unit 423 of the processing unit 420 of the computer 109 of FIG.
  • in step S1001, the maximum displacements (maximum values) in the front-back direction and the vertical direction of the motion waveform 1103 (or the distance waveform 701), which is the fitted waveform, are calculated.
  • in step S1002, the signed curvature at each point on the trajectory graph 901, described in detail below with reference to FIG. 10, is calculated.
  • this is done in order to extract the time progress direction (transition direction) of the trajectory graph 901 and to extract the point with the maximum displacement.
  • in step S1003, the sign of the signed curvature obtained in step S1002 is acquired. Specifically, since the amplitude of the curvature in the trajectory graph 901 is maximized at the point farthest from the coordinate origin, the sign of the curvature at that point is acquired.
  • the factor that determines whether the sign is positive or negative is the relative magnitude of the average values of the front-back component rAP and the vertical component rHF; the sign therefore shows whether the average value of the displacement in the front-back direction (that is, the time at which it takes its maximum value) is earlier than that in the vertical direction.
  • in step S1004, the geometric distance from the coordinate origin to the point at which the maximum signed curvature calculated in step S1002 is obtained is acquired.
  • since the amplitude of the curvature in the trajectory graph 901 is maximum at the point farthest from the coordinate origin, the geometric distance from the point of maximum curvature amplitude to the coordinate origin is calculated. This makes it possible to acquire the time point at which the displacement is largest when the vertical and front-back components of the thyroid cartilage are combined.
  • in step S1005, the time difference between the time at which the maximum value of the voice information is obtained and the time at which the maximum value of the front-back component of the distance information is obtained is acquired. This is because the time difference between the maximum values is a particularly important parameter in characterizing the swallowing state.
  • this parameter can not only be grasped visually, but can also be displayed as a quantitative value.
  • the present invention is not limited by these quantitative values, and for example, the area of the region surrounded by the trajectory graph may be displayed as a feature amount.
  • in step S1006, the ratio of the time difference obtained in step S1005 to the variance value (of the displacement in the front-back direction) is calculated.
  • in general, the swallowing sound is generated at the timing when the thyroid cartilage advances, so in step S1006 this ratio is calculated in order to display how much the timing of the swallowing sound deviates within the individual.
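  • the following sketch computes the signed curvature along the two-dimensional trajectory and the feature values of steps S1002 to S1005, assuming the trajectory is given as sampled rAP/rHF coordinate arrays; the gradient-based curvature formula is a standard discrete approximation and not necessarily the exact method of the apparatus.

```python
import numpy as np

def signed_curvature(x, y):
    """Signed curvature of the sampled 2-D trajectory (step S1002).
    x: front-back (rAP) values, y: vertical (rHF) values."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (np.power(dx ** 2 + dy ** 2, 1.5) + 1e-12)

def trajectory_features(t, x, y, t_sound_peak):
    """Steps S1003-S1005: sign at the point of maximum curvature amplitude,
    its geometric distance from the origin, and the sound-motion time difference."""
    kappa = signed_curvature(x, y)
    idx = int(np.argmax(np.abs(kappa)))          # farthest point from the origin
    sign = float(np.sign(kappa[idx]))            # S1003
    distance = float(np.hypot(x[idx], y[idx]))   # S1004: distance from the origin
    t_motion_peak = float(t[int(np.argmax(x))])  # time of maximum front-back displacement
    delta = t_sound_peak - t_motion_peak         # S1005: time difference
    return sign, distance, delta
```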
  • in step S1007, the waveforms and feature values calculated in steps S1001 to S1006 are saved in the internal storage device of the computer 109 and/or the external storage device 111. Note that each of the above steps S1001 to S1007 may be performed while the swallowing motion and swallowing sound are being measured by the swallowing measurement unit 410, or may be performed multiple times.
  • the processing unit 420 further generates two-dimensional trajectory data showing the behavior of the thyroid cartilage in the vertical direction and the front-back direction, based on the vertical motion component and the front-back motion component described above.
  • such two-dimensional trajectory data is generated as coordinate data on a coordinate plane defined by two mutually orthogonal coordinate axes, one coordinate axis corresponding to the trajectory data value of the front-back motion component and the other coordinate axis corresponding to the trajectory data value of the vertical motion component.
  • more specifically, as shown in FIG. 10, based on the signal fitting (step S502 in FIG. 4) and the component extraction (step S503 in FIG. 4), the data values on the vertical motion component waveform 1106 and the data values on the front-back motion component waveform 1105 are associated with each other in time, and each pair is plotted with the horizontal axis as the trajectory data value of the front-back motion component (displacement in the front-back direction; the normalized amplitude of the front-back motion component waveform 1105) and the vertical axis as the trajectory data value of the vertical motion component (vertical displacement; the normalized amplitude of the vertical motion component waveform 1106). That is, the horizontal axis indicates the values of the normal distribution having the parameters extracted for rAP of Equation (1) in step S503 of FIG. 4, and the vertical axis indicates the values of the normal distribution having the parameters extracted for rHF.
  • Such a trajectory graph 901 shown in FIG. 10 is displayed on the display device 110 via the display unit 430 of the computer 109.
  • the plot of each trajectory data value on the trajectory graph 901 is displayed with identification, for example color-coding, according to the magnitude of the amplitude of the swallowing sound.
  • the processing unit 420 generates a swallowing sound waveform 801 and an envelope curve 802 representing temporal changes in the amplitude of the swallowing sound based on detection data detected through the microphone 106 as described above.
  • the plot of each trajectory data value on the trajectory graph 901 is identified and displayed according to the magnitude of the amplitude of the swallowing sound.
  • in relation to such identification display, which in the present embodiment is a color-coded display, a reference band graph 909 showing, along its vertical axis, how the color changes according to the magnitude of the swallowing sound amplitude value is displayed adjacent to the trajectory graph 901.
  • the larger the amplitude of the swallowing sound, the more yellowish the plot becomes, and the smaller the amplitude, the more bluish it becomes.
  • an identification display form may also be used in which the display is divided into shades of black and white, with the color becoming lighter as the amplitude increases.
  • the identification display form is not limited to these; any display mode may be used as long as it allows trajectory data values with different swallowing sound amplitudes to be distinguished from one another, such as changing the size or shape of the plot (mark) of each trajectory data value according to the magnitude of the amplitude of the swallowing sound.
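  • one possible rendering of the trajectory graph 901 with the color-coded identification display described above, using matplotlib as an illustrative stand-in for the display unit of the apparatus; the axis labels and the colormap (dark blue for small amplitudes, yellow for large ones) are assumptions.

```python
import matplotlib.pyplot as plt

def plot_trajectory_graph(r_ap, r_hf, sound_env):
    """Trajectory graph 901: front-back (rAP) values on the horizontal axis,
    vertical (rHF) values on the vertical axis, each point colored by the
    swallowing-sound amplitude at the same instant."""
    fig, ax = plt.subplots()
    sc = ax.scatter(r_ap, r_hf, c=sound_env, cmap="viridis", s=15)
    ax.set_xlabel("front-back displacement (normalized amplitude)")
    ax.set_ylabel("vertical displacement (normalized amplitude)")
    # reference band (colorbar) showing how color maps to sound amplitude
    fig.colorbar(sc, ax=ax, label="swallowing sound amplitude")
    return fig, ax
```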
  • such a trajectory graph 901, in which the trajectory data values are plotted as a time-series scatter diagram, displays the behavior of the thyroid cartilage in the front-back direction and in the vertical direction separated on two coordinate axes, making both behaviors visible at a glance.
  • by integrating the features of the swallowing sound information into the single trajectory graph 901 in this way, it is possible to visually confirm at what point in the behavior of the thyroid cartilage the swallowing sound occurred; not only can the swallowing motion be grasped quantitatively, but also the deviation of the swallowing sound from the normal state and the power of the swallowing sound can be grasped at a glance.
  • the processing unit 420 also generates supplementary display data for superimposing on the trajectory graph 901 supplementary information including predetermined feature points associated with the motion waveform 1103 (or the distance waveform 701), predetermined feature points associated with the swallowing sound waveform 801 (or the envelope curve 802), and the times of occurrence of the trajectory data values plotted on the trajectory graph 901, as well as reference display data for displaying, together with the trajectory graph 901, reference information including the transition direction of the trajectory graph 901 and predetermined feature amounts calculated from the trajectory graph 901.
  • 902 in FIG. 10 is an arrow indicating in which direction the trajectory has progressed (transition direction of trajectory graph 901).
  • the trajectory starts from the coordinate origin, rotates counterclockwise, and then returns to the coordinate origin.
  • 903 denotes a feature amount calculated from the trajectory graph 901 .
  • in the present embodiment, the maximum displacement in the front-back direction, the maximum displacement in the vertical direction, the maximum displacement from the coordinate origin indicated by 904, the time difference between the maximum values of the motion information and the voice information, and the ratio of that time difference based on the variance of the displacement in the front-back direction (rAP) are shown as feature amounts.
  • instead of displaying the feature amounts above the coordinate area of the trajectory graph 901 as in the present embodiment, they may be displayed within the coordinate area of the trajectory graph 901 or in another drawing area, and the present invention is not limited by these.
  • 905 indicates the generation time of the trajectory data value plotted on the trajectory graph 901, which is displayed every 0.1 seconds in the present embodiment.
  • 906 indicates a peak point in the distance information obtained in step S504 of FIG. 4.
  • 907 indicates the point in time at which the maximum value of the voice information obtained in step S606 of FIG. 5 is reached. With this display, it is possible to confirm the time lag between the point at which the voice information shows its maximum value and the point at which the distance information shows the maximum value of the front-back component of the thyroid cartilage.
  • reference numeral 908 denotes the start point 805 and the end point 806 (see FIG. 9) of the audio information obtained in step S607 of FIG. 5.
  • as described above, the model function modeling the swallowing motion is fitted to the distance information based on the detection data detected by the transmission/reception coils 102 and 103 to obtain the fitting result; therefore, the movement of the thyroid cartilage (hyoid bone) can be reproduced two-dimensionally and non-invasively (modeling of the swallowing movement). At the same time, the behavioral components covering all of the movement directions of the thyroid cartilage during swallowing, that is, the vertical motion component corresponding to movement in the vertical direction and the front-back motion component corresponding to movement in the forward and backward directions, are extracted from the fitting result, and two-dimensional trajectory data showing the trajectories of the thyroid cartilage in the vertical and anteroposterior directions is generated based on these two components, so that the two-dimensional up-down and back-and-forth movement of the thyroid cartilage (hyoid bone) can be grasped at a glance as swallowing dynamics, without the need for comprehensive estimation of the swallowing behavior.
  • although the various processes (steps) related to the generation of the trajectory graph 901 are applied to the motion of the thyroid cartilage in the above description, they can also be applied to the motion of body parts other than the thyroid cartilage. That is, the various processes can be applied to the analysis of movements of body parts other than the laryngeal region as long as those parts move in the same or a similar manner to the thyroid cartilage (hyoid bone). Specifically, if the change in distance detected by a predetermined detection unit can be decomposed into motions in a plurality of directions and analyzed, trajectory data can be generated based on those motions and a trajectory graph 901 can be drawn. Further, the detection unit is not limited to one that detects the movement of a predetermined part (acquires data indicating the movement) using coils; for example, the laryngeal displacement detection unit is not limited to detecting the movement with the coils 102 and 103, as long as it can detect the movement of the larynx (thyroid cartilage) (acquire data indicating the movement).
  • the computer 109 can cause the display device 110 to display a graph showing the movement of the thyroid cartilage (larynx) and a graph relating to swallowing sounds. Further, the computer 109 (display unit 430) can cause the display device 110 to display the moving image 950 captured by the camera 114 (a moving image of the subject 101 during the measurement of the swallowing motion and swallowing sound, capturing the movement of the thyroid cartilage).
  • in the present embodiment, the distance waveform 701 and the trajectory graph 901 are displayed as graphs representing the movement of the thyroid cartilage, and the swallowing sound waveform 801 and the trajectory graph 901 are displayed as graphs relating to swallowing sounds.
  • the graph showing the movement of the thyroid cartilage and the graph relating to swallowing sounds are not limited to these.
  • "distance waveform 701" may be read as "motion waveform 1103", "front-back motion component waveform 1105", or "vertical motion component waveform 1106".
  • the “swallowing sound waveform 801” may be read as “envelope 802” or the like.
  • the distance waveform 701 is displayed as a graph in which the first coordinate axis (horizontal axis) corresponds to time and the second coordinate axis (vertical axis) intersecting the first coordinate axis corresponds to the distance between the coils 102 and 103, and it can therefore be said to be a display of a graph showing the movement of the thyroid cartilage.
  • the trajectory graph 901 is displayed as a graph in which the first coordinate axis (horizontal axis) corresponds to the trajectory data value of the anteroposterior motion component of the thyroid cartilage and the second coordinate axis (vertical axis) intersecting the first coordinate axis corresponds to the trajectory data value of the vertical motion component, and it can therefore also be said to be a display of a graph showing the movement of the thyroid cartilage.
  • the swallowing sound waveform 801 is displayed as a graph in which the first coordinate axis (horizontal axis) corresponds to time and the second coordinate axis (vertical axis) intersecting the first coordinate axis corresponds to the magnitude (amplitude) of the swallowing sound, and it can be said to be a display of a graph relating to swallowing sounds.
  • the trajectory graph 901 may be, for example, a three-dimensional graph or the like having a time axis as a third coordinate axis intersecting the first coordinate axis and the second coordinate axis.
  • the computer 109 can execute processing for displaying the distance waveform 701, the swallowing sound waveform 801, and the moving image 950 simultaneously (on the same screen) on the display device 110.
  • a screen on which the distance waveform 701, the swallowing sound waveform 801, and the moving image 950 are displayed is called a first display screen 2001.
  • the computer 109 can also execute processing for causing the display device 110 to display the trajectory graph 901, the distance waveform 701, the swallowing sound waveform 801, and the moving image 950 at the same time (on the same screen).
  • a screen on which the trajectory graph 901, the distance waveform 701, the swallowing sound waveform 801, and the moving image 950 are displayed is called a second display screen 2002. Note that some of the items described here as being displayed on the same screen may instead not be displayed on the same screen.
  • FIGS. 11 and 12 are diagrams showing display examples of the first display screen 2001.
  • FIG. 11 shows the distance waveform 701 and the swallowing sound waveform 801 without the markers 964 (964a, 964b), which will be described later.
  • FIG. 12 shows a state in which the markers 964, which will be described later, are displayed on the distance waveform 701 and the swallowing sound waveform 801, and the video frame of the moving image 950 at the time indicated by the markers 964 (6.39 seconds) is displayed.
  • FIGS. 13 and 14 are diagrams showing display examples of the second display screen 2002.
  • FIG. 13 shows the distance waveform 701 and the swallowing sound waveform 801 in a state in which the marker 964, which will be described later, is not displayed, and the entire trajectory graph 901 is plotted.
  • FIG. 14 shows a state in which the markers 964, which will be described later, are displayed on the distance waveform 701 and the swallowing sound waveform 801, and the video frame of the moving image 950 at the time indicated by the markers 964 (4.74 seconds) is displayed.
  • FIG. 14 also shows a state in which only a part of the trajectory graph 901 is plotted, namely a state in which the trajectory graph 901 is plotted up to the portion corresponding to that point in time (4.74 seconds).
  • the operator can use the input device 112 to designate an arbitrary point on the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901 displayed on the first display screen 2001 or the second display screen 2002. Specifically, for example, using a mouse as the input device 112, the point may be designated by aligning the cursor 960 with an arbitrary point on the displayed distance waveform 701, swallowing sound waveform 801, or trajectory graph 901 and clicking (see FIG. 12). Alternatively, for example, a touch panel integrated with the display device 110 may serve as the input device 112, and the point may be designated by touching an arbitrary point on the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901.
  • the computer 109 causes the display device 110 to display an image corresponding to the specified point in the video 950 displayed on the same screen based on the operation of specifying the point on the input device 112 .
  • the computer 109 displays a display that enables the user to grasp the point corresponding to the specified point in the moving image 950 displayed on the same screen based on the operation of specifying the point on the input device 112. Display on device 110 .
  • specifically, when a point on the distance waveform 701 or the swallowing sound waveform 801 is designated, the computer 109 extracts (searches for) from the moving image 950 (video data) the video frame captured at the same time as the acquisition of the data at the designated point, and displays the extracted frame on the display device 110.
  • likewise, when a point on the trajectory graph 901 is designated, the computer 109 extracts (searches for) from the moving image 950 (video data) the video frame captured at the same time as the generation time of the trajectory data at that point (the time associated with the trajectory data), and displays the extracted frame on the display device 110.
  • for example, when the data at 6.39 seconds (for example, 6.39 seconds after the start of measurement) on the distance waveform 701 is designated, the video frame of the moving image 950 at 6.39 seconds (for example, 6.39 seconds after the start of measurement) is displayed.
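  • a rough sketch of this graph-to-video lookup: given the time of the designated data point and the recorded frame timestamps, the nearest video frame is selected; the 30 fps frame rate and the function name are assumptions for illustration.

```python
import numpy as np

def frame_index_for_time(frame_times: np.ndarray, t_selected: float) -> int:
    """Find the video frame captured closest to the designated time.

    frame_times: acquisition time of each frame of the moving image 950
                 (seconds from the start of measurement)
    t_selected : time of the data point designated on the graph
    """
    return int(np.argmin(np.abs(frame_times - t_selected)))

# Example: a 15 s recording at an assumed 30 fps, point designated at 6.39 s
frame_times = np.arange(0, 15, 1 / 30)
idx = frame_index_for_time(frame_times, 6.39)
# the frame at index idx would then be shown in the moving-image area
```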
  • the computer 109 may also display a marker 964, which will be described later, at the designated point (the portion of the graph corresponding to the designated point).
  • the operator can use the input device 112 to specify an arbitrary time point (point) in the moving image 950 displayed on the first display screen 2001 or the second display screen 2002 .
  • a seek bar 962 indicating the playback position of moving image 950 is displayed on first display screen 2001 or second display screen 2002 .
  • the operator can use a mouse, a touch panel, or the like as the input device 112 to specify the playback position of the moving image 950 (an arbitrary time point (point) in the moving image 950) on the seek bar 962.
  • it should be noted that the designation of an arbitrary time point in the moving image 950 may also be performed by directly inputting the playback time (playback position) using, for example, a keyboard as the input device 112.
  • alternatively, the first display screen 2001 or the second display screen 2002 may be provided with a button for fast-forwarding or rewinding the moving image 950 by a predetermined number of frames (a predetermined time), and the designation may be performed by operating that button.
  • based on the operation of designating an arbitrary time point (point) of the moving image 950 on the input device 112, the computer 109 causes the display device 110 to display a display that makes it possible to grasp the point corresponding to that time point in the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901 displayed on the same screen.
  • in other words, the computer 109 causes the display device 110 to display a display indicating the point in the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901 that corresponds to the designated video frame.
  • for example, the computer 109 may display a predetermined marker 964 at the location corresponding to the designated time point in the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901.
  • the marker 964 may be, for example, an icon 964a or the like, or a straight line 964b or the like perpendicular to the time axis (predetermined coordinate axis) (see FIGS. 12 and 14).
  • the computer 109 may also display the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901 plotted only up to the designated time point, with nothing plotted beyond it (see the trajectory graph 901 in FIG. 14).
  • alternatively, the computer 109 may display the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901 with the plot (line) drawn in a different color, thickness, or the like before and after the designated time point.
  • in FIG. 14, the time point of 4.74 seconds (for example, 4.74 seconds after the start of measurement) is designated in the moving image 950, and a state is shown in which the markers 964 are displayed at the portions of the distance waveform 701 and the swallowing sound waveform 801 corresponding to 4.74 seconds (for example, 4.74 seconds after the start of measurement).
  • also in FIG. 14, with the time point of 4.74 seconds (for example, 4.74 seconds after the start of measurement) of the moving image 950 designated, a state is shown in which the trajectory graph 901 is plotted only up to 4.74 seconds (for example, up to 4.74 seconds after the start of measurement).
  • the marker 964 indicates the position corresponding to the current playback position (displayed video frame) of the moving image 950 and may move along with the playback (progress) of the moving image 950. Also, the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901 may be displayed plotted only up to the point in time corresponding to the current playback position (displayed video frame) of the moving image 950 (with the part beyond that point not plotted), and the plot may advance in accordance with the playback (progress) of the moving image 950.
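  • as a sketch of this playback-synchronized display, the helper below decides, for a given playback time, where the marker sits on the time-axis graphs and how much of the trajectory graph to plot; it illustrates the synchronization logic only and is not the actual rendering code of the apparatus.

```python
import numpy as np

def synced_display_state(playback_time, sample_times, r_ap, r_hf):
    """Display state of the graphs for the current video playback position.

    playback_time: current position of the moving image 950 (seconds)
    sample_times : acquisition times of the distance/sound samples (e.g. 100 Hz)
    r_ap, r_hf   : trajectory data values (front-back, vertical)
    """
    marker_x = playback_time                          # marker 964 on the time axis
    upto = int(np.searchsorted(sample_times, playback_time, side="right"))
    partial_trajectory = (r_ap[:upto], r_hf[:upto])   # plotted only up to this time
    return marker_x, partial_trajectory

# On each new video frame, call synced_display_state(current_time, ...) and
# redraw the marker and the partial trajectory so the plot advances with playback.
```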
  • alternatively, the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901 may be displayed such that the plot (line) up to the point in time corresponding to the current playback position (displayed video frame) of the moving image 950 is in a predetermined state (and the plot beyond that point is not in the predetermined state), and the portion in the predetermined state may be advanced in accordance with the playback (progress) of the moving image 950.
  • in this way, a display that makes it possible to grasp the point corresponding to the current video playback position in the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901 (a display in which the marker 964 moves, or such a plot) may be displayed on the display device 110.
  • based on the operation of designating an arbitrary point on the graph (the distance waveform 701, the swallowing sound waveform 801, or the trajectory graph 901), or on the operation of designating an arbitrary time point in the moving image 950, an image (video frame) at the corresponding point in time and a display (the marker 964 or the like) indicating the point on the graph corresponding to that image (video frame) are displayed on the same screen.
  • one video frame may be displayed on the same screen, or a plurality of video frames may be displayed. That is, for example, by performing an operation on the input device 112 to designate a plurality of points on the graph, video frames corresponding to the plurality of designated points, that is, a plurality of video frames, may be displayed.
  • in this case, each designated point is displayed so that its correspondence with each video frame can be understood (for example, by color coding, attaching an identification code, or aligning the positional relationship), that is, a plurality of markers 964 or the like are displayed.
  • the processing of displaying on the display device 110 an image of the moving image 950 at the time point corresponding to a point designated on the graph (that is, the point in the moving image 950 corresponding to the designated point), and the processing of displaying on the display device 110, based on the operation of designating a predetermined time point in the moving image 950, a display that makes it possible to grasp the point on the graph corresponding to the designated time point, are performed by associating the distance information, the audio information, and the moving image 950 with one another through their time information (for example, the acquisition time of the distance information, the acquisition time of the audio information, and the acquisition time of each frame of the moving image 950).
  • specifically, based on a measurement start operation on the input device 112 (for example, an operation of clicking (selecting) a measurement start button displayed on the display device 110, an operation of a predetermined physical button for instructing the start of measurement, or the like), the measurement of the swallowing motion (laryngeal displacement detection step), the measurement of the swallowing sound (swallowing sound detection step), and the shooting of the moving image 950 (moving image acquisition step) are started.
  • the distance information, the audio information, and the moving image 950 stored in the predetermined memory are associated with each other by their time information (distance information acquisition time, sound information acquisition time, and acquisition time of each frame of the moving image 950). It will be stored as it is.
  • storing in a state in which the time information is associated with each other means that distance information (the distance between the coils 102 and 103) and audio information (the amplitude of the swallowing sound) are stored for one piece of time information. ) and the moving image 950 (each frame image forming the moving image 950) are associated with each other.
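The following sketch illustrates one possible way to keep the distance information, the audio information, and the video frames associated through their time information, as described above. The SwallowSample record and the nearest-timestamp matching are illustrative assumptions, not the disclosed implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SwallowSample:
    """One time-stamped record linking the three data streams (illustrative only)."""
    time: float                 # acquisition time in seconds from the start of measurement
    coil_distance: float        # distance between the coils 102 and 103
    sound_amplitude: float      # amplitude of the swallowing sound
    frame_index: Optional[int]  # index of the frame of the moving image 950 nearest to this time

def _nearest(times, t):
    # index of the timestamp in `times` closest to t (times assumed sorted and non-empty)
    return min(range(len(times)), key=lambda i: abs(times[i] - t))

def associate_streams(distance_times, distances,
                      sound_times, sound_amps,
                      frame_times) -> List[SwallowSample]:
    """Build records in which distance, sound, and video frames share one time axis.

    Assumes the distance samples define the master time base and that the sound
    samples and video frames are matched to the nearest timestamp.
    """
    samples = []
    for t, d in zip(distance_times, distances):
        a = sound_amps[_nearest(sound_times, t)]
        f = _nearest(frame_times, t)
        samples.append(SwallowSample(time=t, coil_distance=d, sound_amplitude=a, frame_index=f))
    return samples
```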
On the first display screen 2001, the distance waveform 701 and the swallowing sound waveform 801 are displayed in parallel, and the display area of the moving image 950 is outside the display area of the distance waveform 701 and the display area of the swallowing sound waveform 801. Further, as shown in FIG. 13, on the second display screen 2002 the distance waveform 701 and the swallowing sound waveform 801 are displayed in an overlapping manner, in other words, on one graph. On the second display screen 2002, the display area of the moving image 950 is inside the display area of the distance waveform 701 and the display area of the swallowing sound waveform 801; that is, the moving image 950 is displayed superimposed on the graph of the distance waveform 701 and the graph of the swallowing sound waveform 801. By displaying the distance waveform 701 and the swallowing sound waveform 801 in an overlapping manner, it becomes possible to display the trajectory graph 901 and the moving image 950 in a large size. Likewise, by displaying the moving image 950 so as to overlap a predetermined graph, it becomes possible to display the predetermined graph and the other graphs in a larger size; the predetermined graph may be, for example, the trajectory graph 901.
In the present embodiment, the measured distance waveform 701 and swallowing sound waveform 801 can be displayed over the entire measurement time (a first range), and it is also possible to display a second range that is a partial range of the measurement time (the first range). In the present embodiment, the measurement (and the shooting of the moving image 950) is performed for 15 seconds. The measurement (and the shooting of the moving image 950) is started based on a measurement start operation on the input device 112, and may be ended based on a measurement end operation on the input device 112 (for example, an operation of clicking a measurement end button displayed on the display device 110, or an operation of a predetermined physical button for instructing the end of measurement). Alternatively, the measurement (and the shooting of the moving image 950) may be started based on a measurement start operation on the input device 112 and end after a predetermined time (for example, 15 seconds) has elapsed. In the present embodiment, it is possible to select (specify) for which time range of the measured time the distance waveform 701 and the swallowing sound waveform 801 are to be displayed. The selection may be made, for example, by specifying (dragging, etc.) an arbitrary range of the displayed distance waveform 701 or swallowing sound waveform 801 using a mouse, touch panel, or the like as the input device 112, or by inputting the start point and end point of the time range to be displayed using a keyboard as the input device 112. FIGS. 11 and 12 show the distance waveform 701 and the swallowing sound waveform 801 displayed over the entire measurement time, and FIGS. 13 and 14 show a part of the measurement time (3.94 seconds to 5.37 seconds), that is, a second range within the time range (first range) displayed in FIGS. 11 and 12.
In the state in which such a partial time range is displayed, when a playback start operation is performed on the input device 112 (for example, an operation of clicking a playback button displayed on the display device 110, or an operation of a predetermined physical button for instructing the start of playback), the moving image 950 is reproduced from the start point to the end point of that partial time range.
Note that the distance waveform 701 and the swallowing sound waveform 801 shown in FIGS. 11 and 12 may themselves be based on only part of the measured data.
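A minimal sketch of how a selected partial time range (for example, 3.94 s to 5.37 s) could be mapped to the corresponding frames of the moving image 950 for playback is shown below. The frame rate and the function name are assumptions made for illustration.

```python
def clip_frame_range(start_s, end_s, fps, total_frames):
    """Convert a selected time range (in seconds) into a range of video frame indices.

    start_s, end_s: start and end of the selected partial range (e.g. 3.94 s to 5.37 s).
    fps: frame rate of the moving image 950 (assumed value).
    total_frames: number of frames in the recording.
    """
    first = max(0, int(round(start_s * fps)))
    last = min(total_frames - 1, int(round(end_s * fps)))
    return first, last

# e.g. a 15-second recording at an assumed 30 fps: 450 frames in total
first, last = clip_frame_range(3.94, 5.37, fps=30, total_frames=450)
# playback would then run only from frame `first` to frame `last`
```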
In the present embodiment, the operation of selecting the time range for displaying the distance waveform 701 also serves as the operation of specifying the analysis range of the distance information (audio information), that is, an operation related to starting analysis. When the motion analysis unit 421 analyzes the distance information (and the voice analysis unit 422 analyzes the swallowing sound), in other words when the trajectory graph 901 is generated, the analysis result (for example, the trajectory graph 901) is displayed on the display device 110 based on the operation of specifying the time range to be analyzed, and the distance waveform 701 and the swallowing sound waveform 801 of the analyzed time range may also be displayed on the display device 110. The second display screen 2002 may be displayed based on this operation of designating the analysis range (the operation related to starting analysis).
As described above, the biopsy apparatus 100 of the present embodiment is a biopsy apparatus 100 including an input device 112 that receives operations by an operator, a display device 110, and an information processing device (computer 109). The information processing device 109 causes the display device 110 to display, on the same screen, a graph (the distance waveform 701, the trajectory graph 901, or the like) showing the movement of the thyroid cartilage when the subject swallows, based on the detection data acquired by the detection units 102 and 103 that detect the movement of the thyroid cartilage, and the moving image 950 of the subject during swallowing captured by the camera 114. In the state in which the graph and the moving image 950 are displayed on the same screen, the information processing device, based on the operator's operation of designating an arbitrary point on the graph, causes the display device 110 to display the image (video frame) at the time point in the moving image 950 corresponding to that arbitrary point (the designated point). This makes it possible to grasp the point (video frame) in the moving image 950 corresponding to the designated point. Also, in the state in which the graph and the moving image 950 are displayed on the same screen, the information processing device may, based on the operator's operation of designating an arbitrary time point (point) in the moving image 950, cause the display device 110 to display the point in the graph corresponding to that arbitrary time point (the designated time point) so that the point can be grasped. Here, it is sufficient that, based on an operation of specifying an arbitrary point on at least one of the graph and the moving image, the corresponding point on the other is displayed so that it can be grasped; it is not necessary to also provide a display that allows the corresponding point on the one to be grasped based on a specifying operation on the other.
According to the present embodiment, the graph showing the movement of the thyroid cartilage and the moving image of the subject during swallowing are displayed on the same screen, so the graph (detection data) can be confirmed while observing the swallowing movement in the moving image, and the examination of swallowing can be performed efficiently. In addition, at least one of the following is possible: based on the operation of designating an arbitrary point on the graph, displaying a display that makes it possible to grasp the point in the moving image corresponding to that arbitrary point; or, based on the operation of designating an arbitrary point in the moving image, displaying a display that makes it possible to grasp the point in the graph corresponding to that arbitrary point.
The processing by each device described in the present embodiment may be realized by software, by hardware, or by a combination of software and hardware. The programs that make up the software are stored in, for example, a non-transitory computer-readable medium. The programs may also be distributed via a network, for example.
  • 100 biopsy apparatus, 102 transmission coil (detection unit), 103 receiving coil (detection unit), 106 microphone, 109 computer (information processing device), 110 display device, 112 input device, 114 camera

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Dentistry (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Physiology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Provided are a biological examination apparatus capable of efficiently performing an examination concerning deglutition, a biological examination method, and a program. An information processing apparatus 109 in a biological examination apparatus 100 according to the present invention executes: a process of causing a display 110 to concurrently display a graph, based on detection data acquired by detection units 102, 103 for detecting movement of a thyroid cartilage, indicating the movement of the thyroid cartilage when a subject swallows, and a video image of the subject during swallowing captured by a camera 114; and a process of causing the display 110 to display, on the basis of an operation by an operator designating an arbitrary point in one of the graph or the video image while the graph and the video image are concurrently displayed, a display by which a point corresponding to the arbitrary point in the other of the graph or the video image can be grasped.

Description

生体検査装置、生体検査方法およびプログラムBiopsy apparatus, biopsy method and program
 本発明は、生体の嚥下に関する検査を行うための生体検査装置、生体検査方法およびプログラムに関する。 The present invention relates to a biopsy apparatus, a biopsy method, and a program for performing a test related to swallowing of a living body.
 主要な死因の一つに肺炎が知られている。その中で、嚥下(swallow)にまつわる障害を意味する嚥下障害(dysphagia)が誘発する誤嚥性肺炎は約6割以上を占めている。 Pneumonia is known to be one of the major causes of death. Among them, aspiration pneumonia induced by dysphagia, which means a disorder related to swallowing, accounts for about 60% or more.
 嚥下障害の主要な原因疾患は脳卒中であり、その急性期患者の8割に嚥下障害が生じることが知られている。また、脳卒中のような明らかな原因疾患がなくても、年齢が上がるにつれて嚥下障害を有する割合が増加することも知られており、高齢化社会においては、今後、誤嚥性肺炎および嚥下障害の増加が見込まれている。 Stroke is the main cause of dysphagia, and it is known that 80% of patients in the acute phase develop dysphagia. It is also known that the percentage of people with dysphagia increases with age, even without a clear causative disease such as stroke. expected to increase.
 そのため、従来から、嚥下障害を診断するための様々な検査が試みられている。例えば、嚥下障害を正確に評価および把握できる方法として、嚥下造影(Videofluoroscopic Examination of Swallowing: VF)が一般的に知られている。このVFでは、硫酸バリウムなどの造影剤を含む食塊とX線透視装置とを用いて、被検者における嚥下時の食塊の動きや舌骨・喉頭部の挙動がモニターされる。この場合、嚥下運動は、一連の早い動きであるため、一般にビデオに記録して評価される。しかしながら、VFは、潜在的に誤嚥や窒息などの可能性を有する検査であることから注意を要し、また、大型装置であるX線透視装置が必要であることから、被曝や時間的制約、高コストなどの問題も伴う。また、内視鏡を用いて嚥下障害を評価する嚥下内視鏡検査(Videoendoscopic Examination of Swallowing:VE)も知られているが、VFと同様の問題を伴う。このように、VFやVEのような臨床検査は、直接的に喉の動きを見るため、正確に診断できるが、侵襲性が高く、また、所定の設備が必要であるため、どこでも簡単に行なえるというものでもない。 Therefore, various tests have been attempted to diagnose dysphagia. For example, Videofluoroscopic Examination of Swallowing (VF) is generally known as a method for accurately evaluating and understanding dysphagia. In this VF, a bolus containing a contrast agent such as barium sulfate and an X-ray fluoroscope are used to monitor the movement of the bolus during swallowing and the behavior of the hyoid bone and larynx of the subject. In this case, the swallowing movement is a series of rapid movements and is generally recorded and evaluated on video. However, VF requires caution because it is an examination that has the potential for aspiration and suffocation, and because it requires a large-sized X-ray fluoroscope, exposure and time are limited. , and high cost. Also known is videoendoscopic examination of swallowing (VE), which uses an endoscope to evaluate dysphagia, but has the same problems as VF. In this way, clinical examinations such as VF and VE directly observe the movement of the throat, so they can be diagnosed accurately. It does not mean that
 これに対し、嚥下障害の簡便な検査法として、触診(反復唾液嚥下テスト(RSST:Repetitive Saliva Swallowing Test))、聴診(頸部聴診法)、観察(水飲みテストおよびフードテスト)、または、質問紙による主観評価などのスクリーニング検査も知られているが、日常的な検査として実施できるものの、定量的な評価が難しく、再現性および客観性に乏しいという問題がある。 On the other hand, as a simple examination method for dysphagia, palpation (Repetitive Saliva Swallowing Test (RSST)), auscultation (cervical auscultation), observation (water drinking test and food test), or questionnaire Screening tests such as subjective evaluations by MRI are also known, but although they can be performed as routine tests, there are problems in that quantitative evaluation is difficult and reproducibility and objectivity are poor.
 以上のような問題に鑑み、近年、嚥下状態を共有・記録する方法が幾つか提案されている。例えば、特許文献1は、頸部にマイクを装着し、聴診に相当する音声データをデジタルデータとして保存して、波形解析により嚥下を検出する装置を開示している。また、特許文献2は、マイクに加えて頸部に磁気コイルを装着し、音声データに加えて触診に相当する嚥下時の甲状軟骨の動作データをデジタルデータとして保存して、生体の嚥下に関する検査およびその結果表示を行なう生体検査装置を開示している。この生体検査装置は、具体的には、甲状軟骨を挟み込むように送信コイルと受信コイルとを配設することにより、嚥下時の舌骨の上下前後の2次元的な挙動に付随して生じる甲状軟骨部の左右方向の変位をコイル間の距離情報として計測している。このような検査形態によれば、触診および聴診に相当する距離情報および音声情報を同時に非侵襲的に取得でき、それにより、距離情報と音声情報とを組み合わせて嚥下動作を評価することができる。 In view of the above problems, several methods have been proposed in recent years to share and record the swallowing state. For example, Patent Literature 1 discloses a device that attaches a microphone to the neck, saves voice data corresponding to auscultation as digital data, and detects swallowing by waveform analysis. In Patent Document 2, in addition to a microphone, a magnetic coil is attached to the neck, and in addition to voice data, motion data of the thyroid cartilage during swallowing, which corresponds to palpation, is stored as digital data, and an examination related to swallowing of a living body is performed. and a biopsy apparatus that displays the results. Specifically, this biopsy apparatus has a transmitting coil and a receiving coil arranged so as to sandwich the thyroid cartilage. The lateral displacement of the cartilage is measured as distance information between the coils. According to such an examination form, distance information and voice information corresponding to palpation and auscultation can be acquired noninvasively at the same time, so that the swallowing motion can be evaluated by combining the distance information and voice information.
JP 2013-017694 A; JP 2009-213592 A
In the biopsy apparatus of Patent Document 2, the distance information and the voice information are displayed as a distance waveform and a voice waveform, that is, as time-series waveforms, and the swallowing state is evaluated based on these waveforms. However, in an evaluation based only on these waveforms, it may be difficult to grasp from the waveforms the timing at which swallowing was actually performed (the start timing, the end timing, and so on), and the examination of swallowing may therefore take a long time.
 本発明は、前記事情に鑑みてなされたものであり、嚥下に関する検査を効率よく行うことが可能な生体検査装置、生体検査方法およびプログラムを提供することを目的とする。 The present invention has been made in view of the above circumstances, and aims to provide a biopsy apparatus, a biopsy method, and a program capable of efficiently performing a swallowing test.
In order to solve the above problems, the biopsy apparatus of the present invention is a biopsy apparatus comprising an input device that receives operations by an operator, a display device, and an information processing device, wherein the information processing device executes: a process of causing the display device to simultaneously display a graph showing the movement of the thyroid cartilage when a subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image of the subject during swallowing captured by a camera; and a process of, in a state in which the graph and the moving image are displayed at the same time, causing the display device to display, based on an operation by the operator designating an arbitrary point on one of the graph or the moving image, a display that makes it possible to grasp the point corresponding to that arbitrary point on the other of the graph or the moving image.
The biopsy method of the present invention is a biopsy method executed by a biopsy apparatus comprising an input device that receives operations by an operator, a display device, and an information processing device, the method comprising: a step of causing the display device to simultaneously display a graph showing the movement of the thyroid cartilage when a subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image of the subject during swallowing captured by a camera; and a step of, in a state in which the graph and the moving image are displayed at the same time, causing the display device to display, based on an operation by the operator designating an arbitrary point on one of the graph or the moving image, a display that makes it possible to grasp the point corresponding to that arbitrary point on the other of the graph or the moving image.
The program of the present invention causes a computer to execute: a process of causing a display device to simultaneously display a graph showing the movement of the thyroid cartilage when a subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image of the subject during swallowing captured by a camera; and a process of, in a state in which the graph and the moving image are displayed at the same time, causing the display device to display, based on an operation by an operator designating an arbitrary point on one of the graph or the moving image, a display that makes it possible to grasp the point corresponding to that arbitrary point on the other of the graph or the moving image.
 本発明によれば、甲状軟骨の動きを示すグラフと、嚥下をする際の被検者を撮影した動画とが同一画面上に表示されるので、動画によって嚥下をする際の甲状軟骨の動きを見ながらグラフ(検出データ)を確認することができ、嚥下に関する検査を効率よく行うことができる。また、本発明によれば、グラフにおける任意のポイントを指定する操作に基づいて、動画における当該任意のポイントに対応するポイントを把握可能とする表示を表示させること、または、動画における任意のポイントを指定する操作に基づいて、グラフにおける当該任意のポイントに対応するポイントを把握可能とする表示を表示させること、の少なくとも一方が可能となる。すなわち、グラフ上の任意のポイントを指定することによって当該任意のポイントに対応する動画中の映像フレームを探し出すこと、または、動画中の任意の時点(ポイント)を指定することによって当該任意の時点に対応するグラフ上のポイントを探し出すこと、が可能となる。したがって、グラフにおける集中的に解析を行うべき範囲を抽出する作業等が行いやすくなり、嚥下に関する検査を効率よく行うことができる。また、グラフ上の各ポイントと動画に写っている実際の嚥下動作との関係を把握することが容易となり、嚥下に関する検査を効率よく行うことができる。 According to the present invention, the graph showing the movement of the thyroid cartilage and the moving image of the subject during swallowing are displayed on the same screen. The graph (detection data) can be checked while looking at it, and the swallowing test can be efficiently performed. Further, according to the present invention, based on the operation of designating an arbitrary point in the graph, a display is displayed that allows the point corresponding to the arbitrary point in the moving image to be grasped, or the arbitrary point in the moving image is displayed. At least one of displaying a display that makes it possible to grasp the point corresponding to the arbitrary point in the graph can be displayed based on the specified operation. That is, by specifying an arbitrary point on the graph, searching for a video frame in the video corresponding to that arbitrary point, or by specifying an arbitrary time (point) in the video, Finding the corresponding point on the graph becomes possible. Therefore, it becomes easy to perform work such as extracting a range in which analysis should be performed intensively in the graph, and it is possible to efficiently perform a swallowing examination. In addition, it becomes easy to grasp the relationship between each point on the graph and the actual swallowing motion shown in the moving image, and the examination of swallowing can be performed efficiently.
Further, in the above configuration of the present invention, the graph may have a first coordinate axis corresponding to the anteroposterior (forward and backward) movement of the thyroid cartilage and a second coordinate axis that corresponds to the vertical movement of the thyroid cartilage and intersects the first coordinate axis.
 このような構成によれば、グラフ自体からも、実際の嚥下動作がどのようになっているのか把握しやすくなる。また、このようなグラフと動画とを連携させることにより、計測されたデータと実際の嚥下動作との関係が一層把握しやすくなり、嚥下に関する検査を効率よく行うことができる。 With such a configuration, it is easier to grasp how the actual swallowing action is from the graph itself. In addition, by linking such graphs and moving images, it becomes easier to understand the relationship between the measured data and the actual swallowing motion, and the examination of swallowing can be performed efficiently.
 また、本発明の前記構成において、前記グラフは、甲状軟骨の動きとマイクロフォンにより取得された嚥下音とを時間的に対応付けて1つのグラフとして示したものであってもよい。 Further, in the configuration of the present invention, the graph may be a single graph in which the movement of the thyroid cartilage and the swallowing sound acquired by the microphone are temporally associated with each other.
 このような構成によれば、甲状軟骨の動きと嚥下音の変化とを1つのグラフに統合して可視化することができるため、嚥下動作と嚥下音とのタイミングなどの嚥下ダイナミクスが非侵襲的に一目で分かるようになる。また、このようなグラフと動画とを連携させることにより、計測されたデータと実際の嚥下動作との関係が一層把握しやすくなり、嚥下に関する検査を効率よく行うことができる。 According to such a configuration, the movement of the thyroid cartilage and the change in the swallowing sound can be integrated and visualized in one graph, so that the swallowing dynamics such as the timing of the swallowing motion and the swallowing sound can be visualized non-invasively. You can tell at a glance. In addition, by linking such graphs and moving images, it becomes easier to understand the relationship between the measured data and the actual swallowing motion, and the examination of swallowing can be performed efficiently.
 本発明によれば、嚥下に関する検査を効率よく行うことができる。 According to the present invention, examinations related to swallowing can be performed efficiently.
FIG. 1 is a functional block diagram showing an example of the configuration of a biopsy apparatus according to one embodiment of the present invention.
FIG. 2 is a schematic perspective view showing an example of a flexible holder that holds the laryngeal displacement detector of the biopsy apparatus.
FIG. 3 is a functional block diagram showing an example of the configuration of the computer of the biopsy apparatus.
FIG. 4 is a flowchart showing an example of the processing of the motion analysis unit of the processing unit of the computer.
FIG. 5 is a flowchart showing an example of the processing of the voice analysis unit of the processing unit of the computer.
FIG. 6 is a flowchart showing an example of the processing of the analysis unit of the processing unit of the computer.
FIG. 7 is a diagram showing an example of a distance waveform based on typical distance information detected by the laryngeal displacement detection unit of the biopsy apparatus.
FIG. 8(a) is a diagram showing an example of distance information based on detection data detected by the laryngeal displacement detection unit and a fitted motion waveform (fitting waveform) obtained from that distance information, and FIG. 8(b) is a diagram showing an example of component waveforms individually showing the temporal behavior trajectories of the thyroid cartilage in the vertical and anteroposterior directions.
FIG. 9 is a diagram showing an example of a swallowing sound waveform based on typical voice information detected by the swallowing sound detection unit of the biopsy apparatus.
FIG. 10 is a diagram showing an example of a trajectory graph displayed based on the two-dimensional trajectory data obtained by the processing unit of the biopsy apparatus.
FIGS. 11 to 14 are diagrams each showing an example of display by the display device.
 以下、図面を参照しながら本発明の実施の形態について説明する。本実施形態では、以下に示すような技術を提供することにより、高度な先進技術で医療の発展と健康社会の実現に貢献する。本検査装置、検査方法等の実現により、国連の提唱する持続可能な開発目標(SDGs:Sustainable Development Goals)の「9.産業と技術革新の基盤をつくろう」に貢献する。
 図1は、本実施の形態に係る生体検査装置100の構成例を示す機能ブロック図である。図示のように、生体検査装置100は、被検体(被検者)101の嚥下時の甲状軟骨(俗称:喉仏)の上下方向および前後方向の挙動に伴って生じる被検体101の喉頭部(甲状軟骨の周囲の生体部位)における2つの位置間の距離の変化を検出する喉頭部変位検出部としての送信コイル102および受信コイル103と、被検体101が嚥下する際の嚥下音を検出する嚥下音検出部としてのマイクロフォン106とを有し、これらのコイル102,103およびマイクロフォン106は図2に関連して後述する可撓性保持具113に保持される。
BEST MODE FOR CARRYING OUT THE INVENTION Hereinafter, embodiments of the present invention will be described with reference to the drawings. This embodiment contributes to the development of medical care and the realization of a healthy society with highly advanced technology by providing the technology described below. By realizing this inspection device and inspection method, we will contribute to "9. Build a foundation for industry and technological innovation" in the Sustainable Development Goals (SDGs) advocated by the United Nations.
FIG. 1 is a functional block diagram showing a configuration example of a biopsy apparatus 100 according to this embodiment. As shown in the figure, the biopsy apparatus 100 measures the larynx (thyroid gland) of a subject 101 caused by vertical and anteroposterior behavior of the thyroid cartilage (commonly known as the larynx) when the subject (subject) 101 swallows. A transmitter coil 102 and a receiver coil 103 as a laryngeal displacement detector that detects a change in the distance between two positions in the body part surrounding the cartilage), and a swallowing sound that detects the swallowing sound when the subject 101 swallows. and a microphone 106 as a detector, these coils 102, 103 and microphone 106 are held by a flexible holder 113 which will be described later with reference to FIG.
 送信コイル102および受信コイル103は、甲状軟骨を両側から挟み込むように互いに対向して配置され、送信コイル102は送信機104に接続されるとともに、受信コイル103は受信機105に接続される。また、マイクロフォン106は、被検体101の甲状軟骨近傍に配置され、嚥下時にマイクロフォン106により捕捉される嚥下音を検出する検出用回路107に電気的に接続されるとともに、検出用回路107から電源供給等を受けて動作する。なお、マイクロフォン106は、嚥下音以外の周囲音を極力拾わないように例えばピエゾ素子(圧電素子)を用いたマイクロフォンであることが好ましいが、コンデンサー型マイクロフォン等であってもよい。 The transmission coil 102 and the reception coil 103 are arranged facing each other so as to sandwich the thyroid cartilage from both sides, the transmission coil 102 is connected to the transmitter 104 and the reception coil 103 is connected to the receiver 105 . Further, the microphone 106 is placed near the thyroid cartilage of the subject 101 and is electrically connected to a detection circuit 107 for detecting swallowing sounds captured by the microphone 106 during swallowing, and power is supplied from the detection circuit 107. etc., and operates. The microphone 106 is preferably a microphone using, for example, a piezo element (piezoelectric element) so as not to pick up ambient sounds other than swallowing sounds as much as possible, but may be a condenser type microphone or the like.
 また、生体検査装置100は、制御装置108と、計算機109と、表示装置110と、外部記憶装置111と、入力装置112と、カメラ114と、を更に有する。制御装置108は、送信機104、受信機105、検出用回路107、計算機109、カメラ114、および、外部記憶装置111の動作を制御し、電源供給や信号の送受信タイミング等を制御する。また、計算機109は、CPU、メモリ、内部記憶装置などを備える情報処理装置であり、様々な演算処理を行なう。計算機109が行なう制御や演算は、CPUが所定のプログラムを実行することによって実現される。ただし、演算の一部は、ASIC(Application Specific Integrated Circuit)やFPGA(Field
Programable Gate Array)等のハードウェアにより実現することも可能である。なお、この計算機109には、表示装置110、外部記憶装置111、および、入力装置112が電気的に接続される。
Moreover, the biopsy apparatus 100 further has a control device 108 , a computer 109 , a display device 110 , an external storage device 111 , an input device 112 and a camera 114 . The control device 108 controls the operations of the transmitter 104, the receiver 105, the detection circuit 107, the computer 109, the camera 114, and the external storage device 111, and controls power supply, signal transmission/reception timing, and the like. Further, the computer 109 is an information processing device including a CPU, a memory, an internal storage device, etc., and performs various arithmetic processing. The control and calculations performed by the computer 109 are realized by the CPU executing a predetermined program. However, some of the calculations are based on ASICs (Application Specific Integrated Circuits) and FPGAs (Field
It can also be implemented by hardware such as a programmable gate array). A display device 110 , an external storage device 111 and an input device 112 are electrically connected to the computer 109 .
 また、カメラ114は、被検体101を撮影するものである。具体的には、カメラ114は、コイル102,103およびマイクロフォン106によって嚥下に係る検査(計測)がされている間の被検体101の様子を撮影し、後述する動画950を取得する(図11~図14参照)。より具体的には、カメラ114は、当該検査中の被検体101の喉の動きを撮影する。また、カメラ114によって、当該検査中の被検体101の口の動き等を撮影してもよい。 Also, the camera 114 photographs the subject 101 . Specifically, the camera 114 captures the state of the subject 101 while the swallowing test (measurement) is being performed by the coils 102 and 103 and the microphone 106, and acquires a moving image 950 described later (FIGS. 11 to 11). See Figure 14). More specifically, the camera 114 captures movement of the throat of the subject 101 during the examination. Also, the movement of the mouth of the subject 101 during the examination may be photographed by the camera 114 .
 また、表示装置110は、計測波形や計算機109による解析情報等を表示するインタフェースである。表示装置110は、例えば、液晶ディスプレイ、ELディスプレイ、プラズマディスプレイ、CRTディスプレイまたはプロジェクタ等であってもよいが、これらに限定されるものではない。また、表示装置110は、タブレット端末やヘッドマウントディスプレイやウェアラブルデバイス等に搭載されていてもよい。なお、特定の機能に関してはLEDや音声等により報知されてもよい。また、外部記憶装置111は、前記内部記憶装置とともに、計算機109が実行する各種の演算処理に用いられるデータ、演算処理により得られるデータ、カメラ114によって取得(撮影)された画像(動画950)、入力装置112を介して入力される条件、パラメータ等を保持する。また、入力装置112は、本実施の形態において実施される計測や演算処理に必要な条件等をオペレータ(操作者)が入力するためのインタフェースである。 In addition, the display device 110 is an interface that displays the measured waveform, analysis information by the computer 109, and the like. The display device 110 may be, for example, a liquid crystal display, an EL display, a plasma display, a CRT display, a projector, or the like, but is not limited to these. Moreover, the display device 110 may be mounted on a tablet terminal, a head-mounted display, a wearable device, or the like. Note that a specific function may be notified by an LED, sound, or the like. In addition to the internal storage device, the external storage device 111 stores data used for various arithmetic processing executed by the computer 109, data obtained by the arithmetic processing, images acquired (photographed) by the camera 114 (video 950), It holds conditions, parameters, and the like that are input via the input device 112 . The input device 112 is an interface for an operator to input conditions and the like necessary for measurement and arithmetic processing performed in the present embodiment.
 このような構成では、送信機104により生成される高周波信号が送信コイル102に送信されることによって送信コイル102から高周波磁場が照射され、それに伴って受信コイル103で受信される信号が受信機105で受けられるようになる。また、受信機105で受けられた信号は、コイル間電圧の出力電圧計測値として計算機109に送られる。一方、マイクロフォン106により捕捉された嚥下音は、検出用回路107で検出されて電圧信号に変換され、この検出用回路107から出力電圧計測値として計算機109に入力される。 In such a configuration, a high frequency magnetic field is emitted from the transmission coil 102 by transmitting a high frequency signal generated by the transmitter 104 to the transmission coil 102, and a signal received by the reception coil 103 is transmitted to the receiver 105. you will be able to receive it. Also, the signal received by the receiver 105 is sent to the computer 109 as an output voltage measurement value of the inter-coil voltage. On the other hand, the swallowing sound captured by the microphone 106 is detected by the detection circuit 107 and converted into a voltage signal, which is input from the detection circuit 107 to the computer 109 as an output voltage measurement value.
 図2には、送受信コイル102,103およびマイクロフォン106を保持する可撓性保持具113が示される。この可撓性保持具113は、各種樹脂などの任意の可撓性材料によって形成されており、図示のように、その開放端を利用して被検体101の首に装着されるようになっている略環状の首装着部材202と、この首装着部材202の内側で略同一の円弧に沿うように位置される一対の円弧状のセンサ保持部材203a,203bとから構成され、首装着部材202がその内側において一対のセンサ保持部材203a,203bの一端をそれぞれ両側で保持するように一体結合されるとともに、センサ保持部材203a,203bの他端間が開放されて被検体101の喉頭部付近に位置されるようなっている。そして、一対のセンサ保持部材203a,203bのそれぞれの他端にはセンサ部204a,204bが配置され、これらのセンサ部204a,204bは、被検体101の喉頭部に当接されるとともに、被検体101の首と接触することなく位置される各センサ保持部材203a,203bと共に首装着部材202とは独立に嚥下の動き(甲状軟骨等の動き)に追従できるようになっている。 FIG. 2 shows a flexible holder 113 that holds the transmitting and receiving coils 102, 103 and the microphone 106. FIG. The flexible holder 113 is made of any flexible material such as various resins, and is attached to the neck of the subject 101 using its open end as shown in the figure. and a pair of arc-shaped sensor holding members 203a and 203b positioned inside the neck-mounted member 202 along substantially the same arc. One ends of a pair of sensor holding members 203a and 203b are integrally connected to each other so as to hold one ends of the pair of sensor holding members 203a and 203b on the inside thereof, and the other ends of the sensor holding members 203a and 203b are opened and positioned near the larynx of the subject 101. It seems to be Sensor portions 204a and 204b are arranged at the other ends of the pair of sensor holding members 203a and 203b, respectively. Together with the sensor holding members 203a and 203b positioned without contacting the neck of the body 101, the movement of swallowing (the movement of the thyroid cartilage, etc.) can be followed independently of the neck mounting member 202.
 センサ部204a,204bの一方の内部には送信コイル102が、他方の内部には受信コイル103が固定状態で配設されるとともに、センサ部204a,204bのいずれかの内部にマイクロフォン106が固定状態で配設されている。特に本実施の形態において、送信コイル102および受信コイル103は、互いに対向し易い(被検体101の首表面の鉛直方向に近い)向きに配置されるようにセンサ部204a,204bに装着されており、それにより、信号対ノイズ(SN)比が高い検出を可能としている。そのため、マイクロフォン106と送信コイル102又は受信コイル103とを略直交する位置に配置することができ、マイクロフォン106から発生する磁場ノイズが送信及び/又は受信コイル102,103へ混入するのを低減することができる。ただし、送信コイル102および受信コイル103の対応の位置やマイクロフォンとの直交の位置については、記載の配置に限定されるものではなく、SN比が十分に高い検出を実現できる位置であればよい。 The transmission coil 102 is fixedly arranged inside one of the sensor portions 204a and 204b, and the reception coil 103 is fixedly arranged inside the other of the sensor portions 204a and 204b. It is arranged in Particularly in this embodiment, the transmitting coil 102 and the receiving coil 103 are attached to the sensor sections 204a and 204b so as to be arranged in directions that are likely to face each other (close to the vertical direction of the neck surface of the subject 101). , thereby enabling detection with a high signal-to-noise (SN) ratio. Therefore, the microphone 106 and the transmission coil 102 or the reception coil 103 can be arranged at positions substantially perpendicular to each other, and magnetic field noise generated from the microphone 106 can be reduced from entering the transmission and/or reception coils 102 and 103. can be done. However, the corresponding positions of the transmitting coil 102 and the receiving coil 103 and the positions orthogonal to the microphones are not limited to the described arrangement, and may be any position that can realize detection with a sufficiently high SN ratio.
 また、首装着部材202の開放端を形成する対向する末端部(被検体101の首の裏側に位置される首装着部材202の部位)には、被検体101の首に当て付けられる押さえ部205a,205bが円筒状または球状等の押圧に適した形状を成して形成されている。これらの2つの押さえ部205a,205bとセンサ保持部材203a,203bの他端に設けられる前述の2つのセンサ部204a,204bとから成る4箇所の押圧ポイントによって被検体101の首の大きさに関係なく可撓性保持具113を首に容易に装着できるようになっている。なお、センサ部204a,204bに内蔵される送受信コイル102,103およびマイクロフォン106から延びる電気配線201a,201bは、図1に示される送信機104、受信機105、および、検出用回路107にそれぞれ電気的に接続される。 In addition, a pressing portion 205 a to be applied to the neck of the subject 101 is provided at the opposing end portion (the portion of the neck attachment member 202 positioned on the back side of the neck of the subject 101 ) forming the open end of the neck attachment member 202 . , 205b are formed in a shape suitable for pressing, such as cylindrical or spherical. The neck size of the subject 101 is determined by four pressing points, which are the two pressing portions 205a and 205b and the two sensor portions 204a and 204b provided at the other ends of the sensor holding members 203a and 203b. This allows the flexible retainer 113 to be easily worn around the neck without any need. Electrical wires 201a and 201b extending from transmitting/receiving coils 102 and 103 incorporated in sensor units 204a and 204b and microphone 106 are electrically connected to transmitter 104, receiver 105 and detection circuit 107 shown in FIG. connected
 図3には計算機109の機能ブロック図が示される。図示のように、計算機109は、嚥下計測部410と、画像取得部415と、処理部420と、表示部430とを備える。嚥下計測部410は、図1に関連して説明した送信コイル102、受信コイル103、送信機104、受信機105、マイクロフォン106、検出用回路107、および、制御装置108を用いて嚥下動作および嚥下音を計測する(喉頭部変位検出ステップおよび嚥下音検出ステップ)。また、画像取得部415は、カメラ114を用いて嚥下動作および嚥下音計測中の被検体101を撮影した動画950を取得する(動画取得ステップ)。また、画像取得部415は、取得した動画950(動画データ)を、計算機109の内部記憶装置および/または外部記憶装置111(すなわちメモリ)に記憶させる。また、処理部420は、距離情報を解析する動作解析部421と、音声情報である嚥下音を解析する音声解析部422と、距離情報と嚥下音とを組み合わせて分析を行なう分析部423とを有し、これらによって嚥下計測部410で計測されたデータを処理する(処理ステップ)。具体的には、処理部420は、後述するように、嚥下動作をモデル化したモデル関数(本実施の形態では、後述する式(1))を送受信コイル102,103によって検出される検出データに基づく距離情報(本実施の形態では、被検体101の甲状軟骨を間に挟み込むように配置されるコイル102,103間の距離の経時的な変化を示すデータ(後述する図7に示される距離波形701))にフィッティングさせたフィッティング結果(本実施の形態では、後述する図8の(a)に示されるフィッティングされた波形1103)を得るとともに、このフィッティング結果から、甲状軟骨の前後動に伴う前後動成分(本実施の形態では、後述する図8の(b)に示される前後動成分波形1105またはそれを形成するデータ値)と、甲状軟骨の上下動に伴う上下動成分(本実施の形態では、後述する図8の(b)に示される上下動成分波形1106またはそれを形成するデータ値)とを抽出し、これらの抽出された上下動成分および前後動成分に基づいて甲状軟骨の上下方向および前後方向の挙動軌跡を示す2次元軌跡データ(本実施の形態では、後述する図10に示される軌跡グラフ901を形成するためのデ-タ)を生成する。また、処理部420は、後述するように、マイクロフォン106によって検出される検出データに基づいて嚥下音の振幅の経時的な変化を示す嚥下音波形(本実施の形態では、後述する図9に示される嚥下音波形801)を生成するとともに、嚥下音波形と軌跡グラフとを時間的に対応付けて軌跡グラフ上の各軌跡データ値のプロットを嚥下音の振幅の大きさに応じて識別表示するための識別表示データを生成する。また、表示部430は、嚥下計測部410および処理部420により計測および処理された情報(データ)、および、画像取得部415により取得された動画(動画データ)を表示装置110に表示させる(表示ステップ)。なお、嚥下計測部410、処理部420、および、表示部430は独立に動作する。 A functional block diagram of the computer 109 is shown in FIG. As illustrated, the calculator 109 includes a swallowing measurement unit 410 , an image acquisition unit 415 , a processing unit 420 and a display unit 430 . The swallowing measurement unit 410 uses the transmitting coil 102, the receiving coil 103, the transmitter 104, the receiver 105, the microphone 106, the detection circuit 107, and the control device 108 described with reference to FIG. Sounds are measured (larynx displacement detection step and swallowing sound detection step). In addition, the image acquisition unit 415 acquires a moving image 950 of the subject 101 during swallowing motion and swallowing sound measurement using the camera 114 (moving image acquisition step). Also, the image acquisition unit 415 stores the acquired moving image 950 (moving image data) in the internal storage device of the computer 109 and/or the external storage device 111 (that is, memory). Further, the processing unit 420 includes a motion analysis unit 421 that analyzes distance information, a voice analysis unit 422 that analyzes swallowing sounds that are voice information, and an analysis unit 423 that analyzes a combination of the distance information and the swallowing sounds. The data measured by the swallowing measurement unit 410 is processed by these (processing step). Specifically, as will be described later, processing unit 420 applies a model function (in the present embodiment, equation (1) described later) that models the swallowing motion to detection data detected by transmission/reception coils 102 and 103. Based on the distance information (in the present embodiment, data indicating changes over time in the distance between the coils 102 and 103 arranged so as to sandwich the thyroid cartilage of the subject 101 (distance waveform shown in FIG. 7 to be described later) 701)) is fitted to (in the present embodiment, a fitted waveform 1103 shown in (a) of FIG. 8 to be described later) is obtained, and from this fitting result, an anterior-posterior movement associated with the anteroposterior movement of the thyroid cartilage is obtained. 
A dynamic component (in this embodiment, a back-and-forth dynamic component waveform 1105 shown in (b) of FIG. 8 to be described later or data values forming it) and a vertical movement component associated with vertical movement of the thyroid cartilage (this embodiment Then, the vertical motion component waveform 1106 shown in (b) of FIG. 8 (to be described later) or the data values forming it) is extracted, and the vertical motion component of the thyroid cartilage is extracted based on these extracted vertical motion components and longitudinal motion components. Two-dimensional trajectory data (in the present embodiment, data for forming a trajectory graph 901 shown in FIG. 10, which will be described later) representing behavior trajectories in a direction and a longitudinal direction are generated. In addition, as will be described later, the processing unit 420 generates a swallowing sound waveform (in the present embodiment, shown in FIG. 9 to be described later) that indicates a temporal change in the amplitude of the swallowing sound based on detection data detected by the microphone 106. In addition to generating the swallowing sound waveform 801), the swallowing sound waveform is temporally associated with the trajectory graph, and the plot of each trajectory data value on the trajectory graph is identified and displayed according to the magnitude of the amplitude of the swallowing sound. to generate identification data for In addition, the display unit 430 causes the display device 110 to display information (data) measured and processed by the swallowing measurement unit 410 and the processing unit 420 and moving images (moving image data) acquired by the image acquisition unit 415 (display step). Note that the swallowing measurement unit 410, the processing unit 420, and the display unit 430 operate independently.
 図4には、図3の計算機109の処理部420の動作解析部421の処理の流れが示される。動作解析部421は、送受信コイル102,103によって検出される検出データを処理するものであり、具体的には、まず最初に、ステップS501において、嚥下計測部410で計測されたデータに対して平滑化を施す。特に本実施の形態においては、Savitzky-Golayフィルタによる区分的多項式近似を用いて平滑化を実施する。この場合の平滑化は、窓数および多項式の次数を例えばそれぞれ5,51などと設定することによって実施される。なお、平滑化の手法は、例えば、単純移動平均などであってもよく、これらによって本発明が限定されるものではない。 FIG. 4 shows the processing flow of the motion analysis unit 421 of the processing unit 420 of the computer 109 of FIG. The motion analysis unit 421 processes the detection data detected by the transmission/reception coils 102 and 103. Specifically, first, in step S501, the data measured by the swallowing measurement unit 410 is smoothed. make a change. In particular, in the present embodiment, smoothing is performed using piecewise polynomial approximation using a Savitzky-Golay filter. The smoothing in this case is performed by setting the number of windows and the degree of the polynomial to, for example, 5, 51, etc., respectively. Note that the smoothing method may be, for example, a simple moving average, and the present invention is not limited by these.
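For reference, smoothing with a Savitzky-Golay filter as described above could look like the following SciPy-based sketch. Since scipy.signal.savgol_filter requires the polynomial order to be smaller than the window length, the two quoted parameters (5 and 51) are interpreted here as a 5th-order polynomial over a 51-sample window, which is an assumption; the input file name is hypothetical.

```python
import numpy as np
from scipy.signal import savgol_filter

# 1-D array of coil-to-coil distance samples (100 Hz in this embodiment);
# the file name is hypothetical.
distance = np.loadtxt("distance_trace.csv")

# The text quotes the two parameters as 5 and 51; savgol_filter requires
# polyorder < window_length, so a 51-sample window with a 5th-order
# polynomial is assumed here.
smoothed = savgol_filter(distance, window_length=51, polyorder=5)
```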
 続いてステップS502では、ステップS501において平滑化された計測信号に対してフィッティングがなされる。これに関連して、図7には、被検体101の喉頭部における2つの位置間の距離である送受信コイル102,103間の距離の経時的な変化を示す距離波形701の典型例が示される。計測されるこのような距離波形701は、甲状軟骨(舌骨)の2次元動作(前後動および上下動)を1次元(左右)で観測した結果であり、甲状軟骨が錘状の形状をしていることから、図示のようにW型の波形形状を呈する。具体的には、被検体101が食塊を口に入れて飲み込む嚥下を開始する開始点(時間T0)702から食塊が食道へと送り込まれるにつれて甲状軟骨が挙上し、それにより、送受信コイル102,103間の距離がD0からD1へと狭まり、距離波形701が第1の谷部(第1の下限ピーク値;時間T1)703に達する。なお、この食塊送り込み過程では、被検体101の喉頭蓋が下方へ移動して鼻腔から気道への経路が塞がれる。その後、食塊が食道を通過する際に、食道を開放するべく甲状軟骨が前方(被検体の顔が向いている方向)へ移動し、それにより、送受信コイル102,103間の距離がD1からD2へと広がり、距離波形701が第1の谷部703から山部(上限ピーク値;時間T2)704へと移行する。そして、食塊が食道(喉頭蓋)を完全に通過して胃へと送り込まれると、喉頭蓋の上方への移動に伴って甲状軟骨も後方へ移動し、それにより、送受信コイル102,103間の距離がD2からD3へと狭まり、距離波形701が山部704から第2の谷部(第2の下限ピーク値;時間T3)705へと移行する。その後、喉頭蓋および甲状軟骨が元の位置へ戻るべく、甲状軟骨が下降し、それにより、送受信コイル102,103間の距離がD3からD4へと広がって、距離波形701が第2の谷部705から終了点(時間T4)706へと移行する。 Subsequently, in step S502, fitting is performed on the measurement signal smoothed in step S501. In this regard, FIG. 7 shows a typical example of a range waveform 701 showing the variation over time of the distance between transmit and receive coils 102, 103, which is the distance between two locations in the larynx of subject 101. . Such a measured distance waveform 701 is the result of one-dimensional (horizontal) observation of the two-dimensional movement (forward and backward movement and vertical movement) of the thyroid cartilage (hyoid bone). Therefore, it exhibits a W-shaped waveform as shown in the figure. Specifically, the thyroid cartilage is lifted as the bolus is sent into the esophagus from the start point (time T0) 702 where the subject 101 starts swallowing the bolus in the mouth, thereby causing the transmission/reception coil to The distance between 102 and 103 narrows from D0 to D1 and the distance waveform 701 reaches a first trough (first lower peak value; time T1) 703. FIG. In this bolus feeding process, the epiglottis of the subject 101 moves downward to block the passage from the nasal cavity to the respiratory tract. Thereafter, when the bolus passes through the esophagus, the thyroid cartilage moves forward (in the direction in which the subject's face is facing) to open the esophagus, thereby increasing the distance between the transmitting and receiving coils 102 and 103 from D1 to D1. D2, distance waveform 701 transitions from first valley 703 to peak (upper limit peak value; time T2) 704 . Then, when the bolus has completely passed through the esophagus (epiglottis) and is sent into the stomach, the thyroid cartilage moves backward as the epiglottis moves upward, thereby increasing the distance between the transmitting and receiving coils 102 and 103. narrows from D2 to D3, and the distance waveform 701 transitions from peak 704 to second valley (second lower peak value; time T3) 705 . The thyroid cartilage then descends so that the epiglottis and thyroid cartilage return to their original positions, thereby increasing the distance between the transmit and receive coils 102, 103 from D3 to D4 and causing the distance waveform 701 to enter a second trough 705. to the end point (time T4) 706.
 As can be seen from the above, in such a distance waveform 701, the series of movements of the thyroid cartilage from ascending to descending produces a downwardly convex waveform component, while the series of movements from advancing to retreating produces an upwardly convex waveform component. For this reason, in the present embodiment the W-shaped distance waveform 701 is regarded as the superposition of a gentle downwardly convex waveform 710 (corresponding to the vertical motion component waveform 1106 shown in FIG. 8(b)) and a sharp upwardly convex waveform 720 (corresponding to the longitudinal motion component waveform 1105 shown in FIG. 8(b)), distinguished in FIG. 7 by the short and long broken lines, and is modeled as Equation (1):

$$y(t) = r_{\mathrm{AP}}(t) + r_{\mathrm{HF}}(t) + d(t) + e \qquad (1)$$

 Here, t is time, y(t) is the measured distance waveform, r_AP(t) is the anteroposterior (front-back) component, r_HF(t) is the vertical component, d(t) is a trend component arising from body movement and the like (for example, an offset from the initial value caused by individual differences such as neck thickness), and e denotes measurement noise.
 また、本実施の形態では、前後方向および上下方向の成分rAP,rHFを正規分布で、トレンド成分d(t)を一次方程式でモデル化するが、これらのモデルは自己回帰モデルや非線形モデルでもよく、これらによって本発明が限定されるものではない。また、本実施の形態のこのようなモデリングでは、数理最適化手法を用いてパラメータフィッティングすることによって各成分を求める。なお、本実施の形態では、非線形最小二乗法を用いてパラメータフィッティングを実施するが、これによって本発明が限定されるものではない。また、パラメータフィッティングを行なう際に、例えば、rAPの分散値がrHFの分散値よりも小さくなる、といった制約を設けても構わない。 In the present embodiment, the longitudinal and vertical components rAP and rHF are modeled by a normal distribution, and the trend component d(t) is modeled by a linear equation, but these models may be autoregressive models or nonlinear models. , the present invention is not limited by these. Moreover, in such modeling of the present embodiment, each component is obtained by parameter fitting using a mathematical optimization technique. In this embodiment, parameter fitting is performed using the nonlinear least-squares method, but the present invention is not limited to this. Further, when performing parameter fitting, for example, a constraint may be set such that the variance value of rAP is smaller than the variance value of rHF.
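A minimal sketch of fitting the model of Equation (1) by nonlinear least squares, with the anteroposterior and vertical components modeled as Gaussian-shaped bumps and the trend as a linear function, is given below. The sign convention (upward bump for the anteroposterior component, downward bump for the vertical component), the initial guesses, and the input file name are assumptions made for illustration, not the disclosed implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mean, sigma):
    return amp * np.exp(-0.5 * ((t - mean) / sigma) ** 2)

def model(t, a_ap, mu_ap, s_ap, a_hf, mu_hf, s_hf, c0, c1):
    """Equation (1): y(t) = r_AP(t) + r_HF(t) + d(t); the noise e is left to the residual.

    Assumed sign convention: r_AP is a sharp upward bump and r_HF is a gentle
    downward bump in the W-shaped distance waveform; d(t) is a linear trend.
    """
    r_ap = gaussian(t, a_ap, mu_ap, s_ap)
    r_hf = -gaussian(t, a_hf, mu_hf, s_hf)
    d = c0 + c1 * t
    return r_ap + r_hf + d

# smoothed distance trace (100 Hz sampling assumed); file name is hypothetical
y = np.loadtxt("distance_trace_smoothed.csv")
t = np.arange(len(y)) / 100.0

p0 = [0.5, t.mean(), 0.2, 0.5, t.mean(), 0.8, y[0], 0.0]   # rough initial guesses
params, _ = curve_fit(model, t, y, p0=p0, maxfev=10000)
a_ap, mu_ap, s_ap, a_hf, mu_hf, s_hf, c0, c1 = params
```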
 このような正規分布を使用したモデル関数(式(1))をフィッティングした結果が図8の(a)に示される。図中、ドットで表わされるデータ値により形成される波形1102は、図7に示される距離波形701に対応し、また、実線で表わされる波形1103は、波形1102を形成する距離情報にモデル関数をフィッティングさせた動作波形(フィッティング波形)である。ここで、横軸は時間であり、縦軸は、図7に示されるコイル間の距離に基づく正規化された振幅である。 The result of fitting the model function (formula (1)) using such a normal distribution is shown in (a) of FIG. In the figure, a waveform 1102 formed by data values represented by dots corresponds to the distance waveform 701 shown in FIG. It is an operating waveform (fitting waveform) that has been fitted. Here, the horizontal axis is time and the vertical axis is normalized amplitude based on the distance between the coils shown in FIG.
 以上のような信号フィッティングステップS502が終了したら、今度は、ステップS503において、フィッティングしたモデル関数からパラメータを抽出する。本実施の形態では、甲状軟骨の前後方向および上下方向の挙動をそれぞれ独立した正規分布でモデル化したため、このステップS503では、これらの挙動のそれぞれの「振幅」、「平均値」、「分散」を抽出する。なお、「振幅」は甲状軟骨の動作の大きさ、「平均値」は動作が発生した時間、「分散」は動作の持続時間にそれぞれ対応する。 After completing the signal fitting step S502 as described above, parameters are extracted from the fitted model function in step S503. In the present embodiment, the anteroposterior and vertical behaviors of the thyroid cartilage are modeled by independent normal distributions. to extract The "amplitude" corresponds to the magnitude of the movement of the thyroid cartilage, the "average value" corresponds to the time when the movement occurred, and the "variance" corresponds to the duration of the movement.
 これに関連して、図8の(b)には、図8の(a)に示される動作波形(フィッティング波形)1103から甲状軟骨の前後方向および上下方向の成分のみを個別に抽出して表示させた波形(上に凸の前後動成分波形1105および下に凸の上下動成分波形1106)が示される。このように、本実施の形態の生体検査装置100の動作解析部421を含む処理部420は、上下動成分および前後動成分に基づいて、甲状軟骨の上下方向および前後方向のそれぞれの経時的な挙動軌跡を個別に示す2次元軌跡データを生成できるようになっている。 In relation to this, FIG. 8(b) shows only the longitudinal and vertical components of the thyroid cartilage that are individually extracted from the operating waveform (fitted waveform) 1103 shown in FIG. 8(a). waveforms (upwardly convex front-back motion component waveform 1105 and downwardly convex vertical motion component waveform 1106) are shown. As described above, the processing unit 420 including the motion analysis unit 421 of the biopsy apparatus 100 of the present embodiment performs temporal movement of the thyroid cartilage in the vertical direction and the anteroposterior direction based on the vertical motion component and the longitudinal motion component. It is possible to generate two-dimensional trajectory data that individually indicates behavior trajectories.
 このような成分抽出ステップS503が終了したら、今度は、ステップS504において、ステップS503にて抽出したパラメータを用いて再構築した波形から、W型波形の特徴点、すなわち、図7の距離波形701上のピーク点等702~706(D0~D4およびT0~T4のデータ値)に対応する特徴点を抽出する。具体的には、本実施の形態では、式(1)のように計測信号をモデル化して成分分離していることから、ノイズおよびトレンド成分を加味することなく簡便に特徴点の抽出を行なう。より具体的には、一例として、T2をrAPの平均値として取得し、T1およびT3をそれぞれT2の前後における最小値を示す時間として取得し、T0およびT4をそれぞれrHFの平均値から負および正の方向に分散値分だけ進んだ点の時間として取得する。そして、時間T0~T4に対応する値としてD0~D4をそれぞれ取得する。 After the component extraction step S503 is finished, in step S504, the feature points of the W-shaped waveform, that is, the distance waveform 701 in FIG. The feature points corresponding to the peak points 702 to 706 (data values of D0 to D4 and T0 to T4) are extracted. Specifically, in the present embodiment, since the measurement signal is modeled and separated into components as shown in Equation (1), feature points are easily extracted without considering noise and trend components. More specifically, as an example, T2 is taken as the average value of rAP, T1 and T3 are taken as the minimum values before and after T2, respectively, and T0 and T4 are the negative and positive values of the average rHF, respectively. Obtained as the time at the point advanced by the variance value in the direction of . D0 to D4 are obtained as values corresponding to times T0 to T4, respectively.
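The rule for deriving the feature times T0 to T4 from the fitted parameters could be expressed as in the following sketch, assuming t is the sample time axis and fitted is the fitted model evaluated at those times; this is an illustration of the rule as described, not the disclosed code.

```python
import numpy as np

def extract_feature_times(t, fitted, mu_ap, mu_hf, s_hf):
    """Feature times T0-T4 of the W-shaped waveform, following the rule described above.

    T2 is the mean of r_AP; T1 and T3 are the times of the minima of the fitted
    curve before and after T2; T0 and T4 are mu_HF minus/plus sigma_HF.
    """
    t2 = mu_ap
    before = t < t2
    after = t > t2
    t1 = t[before][np.argmin(fitted[before])]
    t3 = t[after][np.argmin(fitted[after])]
    t0 = mu_hf - s_hf
    t4 = mu_hf + s_hf
    return t0, t1, t2, t3, t4

# fitted = model(t, *params) from the previous sketch; D0-D4 are then the values
# of the fitted curve (or the measured waveform) at the times T0-T4.
```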
 このようなピーク値検出ステップS504が終了したら、今度は、ステップS505において、前述の各ステップS501~S504で算出された波形、パラメータ、特徴点等が計算機109の内部記憶装置および/または外部記憶装置111に保存される。なお、以上の各ステップS501~S505は、嚥下計測部410による嚥下動作および嚥下音の計測中に実施されてもよく、また、複数回実施されてもよい。 After such peak value detection step S504 is completed, in step S505, the waveforms, parameters, characteristic points, etc. calculated in steps S501 to S504 are stored in the internal storage device and/or external storage device of computer 109. 111. Note that each of steps S501 to S505 described above may be performed while the swallowing motion and swallowing sound are being measured by the swallowing measurement unit 410, or may be performed multiple times.
 図5には、図3の計算機109の処理部420の音声解析部422の処理の流れが示される。図示のように、ステップS601では、マイクロフォン106から嚥下計測部410を通じて計測された音声情報(一般に正負両方の値を含む音声信号)に対して整流処理が施される。ここで、整流処理とは、絶対値を取って負の値を正の値へ変換する処理を示す。図9には、典型的な音声情報を整流処理して成る嚥下音波形801が示される。 FIG. 5 shows the processing flow of the speech analysis unit 422 of the processing unit 420 of the computer 109 of FIG. As shown in the figure, in step S601, rectification processing is performed on audio information (generally, an audio signal including both positive and negative values) measured through the swallowing measurement unit 410 from the microphone 106 . Here, the rectification process means a process of taking an absolute value and converting a negative value into a positive value. FIG. 9 shows a swallowing sound waveform 801 obtained by rectifying typical voice information.
 ステップS602では、ステップS601で得られる整流処理された信号が対数変換される。この処理によって、嚥下音に混入するスパイク状の信号の影響を低減することができる。 In step S602, the rectified signal obtained in step S601 is logarithmically transformed. This processing can reduce the influence of spike-like signals mixed in the swallowing sound.
 ステップS603では、ステップS602で得られる対数変換された信号に対して平滑化が施される。特に本実施の形態では、移動平均を用いて平滑化処理が行なわれ、移動平均の窓幅が400点に設定される。なお、この平滑化手法によって本発明が限定されるものではない。 In step S603, the logarithmically transformed signal obtained in step S602 is smoothed. Especially in the present embodiment, smoothing processing is performed using a moving average, and the window width of the moving average is set to 400 points. The present invention is not limited by this smoothing technique.
 ステップS604では、ステップS603で得られた平滑化信号に対して指数変換が施される。これにより、当初の計測した音声情報の包絡線を示す波形を得ることができる。図9には、そのような典型的な音声情報(嚥下音波形801)から得られる包絡線802が破線で示されている。 In step S604, exponential transformation is applied to the smoothed signal obtained in step S603. As a result, it is possible to obtain a waveform representing the envelope of the initially measured audio information. In FIG. 9, an envelope curve 802 obtained from such typical speech information (swallowing sound waveform 801) is indicated by a dashed line.
 ステップS605では、ステップS604で得られた包絡線信号がリサンプリングされる。具体的には、本実施の形態では、図3に示される嚥下計測部410での音声情報および距離情報のサンプリング周波数がそれぞれ4000Hzおよび100Hzであるため、包絡線信号を1/40にリサンプリングして距離情報のサンプリング周波数と一致させる処理を行なう。 In step S605, the envelope signal obtained in step S604 is resampled. Specifically, in this embodiment, since the sampling frequencies of the voice information and the distance information in the swallowing measurement unit 410 shown in FIG. 3 are 4000 Hz and 100 Hz, respectively, the envelope signal is resampled to 1/40 is used to match the sampling frequency of the distance information.
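Steps S601 to S605 (rectification, log transform, moving-average smoothing, exponential back-transform, and resampling from 4000 Hz to 100 Hz) could be sketched as follows; the small constant added before the logarithm is an assumption to avoid log(0) and is not mentioned in the text.

```python
import numpy as np
from scipy.signal import resample_poly

def swallow_sound_envelope(sound, fs_sound=4000, fs_distance=100, win=400):
    """Envelope extraction along the lines of steps S601-S605 (illustrative sketch).

    sound: raw swallowing-sound signal sampled at fs_sound (4000 Hz in this embodiment).
    Returns an envelope resampled to the distance-information rate (100 Hz).
    """
    rectified = np.abs(sound)                             # S601: rectification
    log_sig = np.log(rectified + 1e-12)                   # S602: log transform (epsilon is an added assumption)
    kernel = np.ones(win) / win
    smoothed = np.convolve(log_sig, kernel, mode="same")  # S603: 400-point moving average
    envelope = np.exp(smoothed)                           # S604: exponential back-transform
    return resample_poly(envelope, up=fs_distance, down=fs_sound)  # S605: 4000 Hz -> 100 Hz
```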
 In step S606, the maximum value of the resampled envelope signal obtained in step S605 is obtained as a feature point. The interval in which the swallowing sound signal (swallowing sound waveform 801) reaches its maximum amplitude is considered to indicate the flow of the ingested material, making it an important feature of the swallowing sound. Accordingly, in step S606 the time S2 corresponding to the peak point 803 at which the envelope 802 in FIG. 9 shows its maximum amplitude is obtained.
 In step S607, the swallowing sound interval of the resampled envelope signal obtained in step S605 is obtained. That is, to obtain the time interval Ts over which the swallowing sound occurs in the envelope 802, the times at both ends of the swallowing sound interval are acquired. Specifically, the amplitude threshold 804 indicated by the dash-dot line in FIG. 9 is set, and the points where the envelope first crosses below the threshold 804 on either side of the maximum value (peak point 803) obtained in step S606, namely the temporally earlier start point 805 and the temporally later end point 806, are found; the corresponding times S1 and S3 are acquired as feature points. In this embodiment, the value obtained by adding the normalized median absolute deviation to the median is used as the threshold 804. The method of setting the threshold 804 does not limit the present invention; a value obtained by adding the standard deviation to the mean may be used instead.
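The peak and interval detection of steps S606 and S607 can be sketched as follows, using the median plus the normalized median absolute deviation as the threshold, as described above; the names and the 100 Hz sampling rate argument are illustrative assumptions.

```python
import numpy as np

def swallow_sound_interval(envelope, fs=100.0):
    """Return (S1, S2, S3): start, peak and end times of the swallowing sound."""
    peak_idx = int(np.argmax(envelope))                  # S606: envelope maximum
    S2 = peak_idx / fs
    med = np.median(envelope)
    nmad = 1.4826 * np.median(np.abs(envelope - med))    # normalized median absolute deviation
    below = envelope < (med + nmad)                      # threshold 804
    # S607: first samples below the threshold on either side of the peak
    start_idx, end_idx = 0, len(envelope) - 1
    for i in range(peak_idx, -1, -1):
        if below[i]:
            start_idx = i
            break
    for i in range(peak_idx, len(envelope)):
        if below[i]:
            end_idx = i
            break
    return start_idx / fs, S2, end_idx / fs
```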
 Finally, in step S608, the waveforms, feature values and the like calculated in steps S601 to S607 are stored in the internal storage device of the computer 109 and/or the external storage device 111. Steps S601 to S608 may be performed while the swallowing measurement unit 410 is measuring the swallowing motion and the swallowing sound, and may be performed multiple times.
 FIG. 6 shows the processing flow of the analysis unit 423 of the processing unit 420 of the computer 109 in FIG. 3. As shown, in step S1001 the maximum displacements (maximum values) in the front-rear direction and the up-down direction of the motion waveform 1103 (or the distance waveform 701), which is the fitted waveform, are calculated.
 In step S1002, the signed curvature at each point on the trajectory graph 901, described in detail below with reference to FIG. 10, is calculated. In this step, the signed curvature at each point on the trajectory graph 901 is computed in order to extract the time-progress direction (transition direction) of the trajectory graph 901 and the point of maximum displacement.
 In step S1003, the sign of the signed curvature obtained in step S1002 is acquired. Specifically, on the trajectory graph 901 the magnitude of the curvature is largest at the point farthest from the coordinate origin, so after the curvature at each point on the trajectory graph 901 has been calculated, the sign at the point of maximum curvature is taken. By fixing the sign convention of the coordinate system, for example counterclockwise positive and clockwise negative, the time-progress direction is determined uniquely. The factor that decides whether the sign is positive or negative is the relative magnitude of the mean values of the front-rear component rAP and the up-down component rHF; in the trajectory graph 901 of FIG. 10 described later, whose time-progress direction is counterclockwise, this indicates that the mean time of the front-rear displacement (that is, the time of its maximum) comes earlier than that of the up-down displacement.
 In step S1004, the geometric distance from the coordinate origin to the point at which the signed curvature calculated in step S1002 reaches its maximum is obtained. On the trajectory graph 901, the magnitude of the curvature is largest at the point farthest from the coordinate origin, so the geometric distance from the point of maximum curvature magnitude to the coordinate origin is computed. This yields the time at which the displacement is largest when the up-down and front-rear components of the thyroid cartilage are combined.
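One way to realize steps S1002 to S1004 on the discrete trajectory is the standard signed-curvature formula kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2); the sketch below also returns the geometric distance from the origin at the point of maximum curvature magnitude. The names are illustrative assumptions.

```python
import numpy as np

def trajectory_curvature_features(x, y):
    """Signed curvature along the trajectory and the farthest point's distance.

    x, y : front-rear and up-down trajectory data values (equal-length arrays)
    """
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = (dx**2 + dy**2) ** 1.5 + 1e-12
    kappa = (dx * ddy - dy * ddx) / denom     # signed curvature at each sample
    k_max = int(np.argmax(np.abs(kappa)))     # point of maximum curvature magnitude
    rotation_sign = np.sign(kappa[k_max])     # +1: counterclockwise, -1: clockwise
    distance = np.hypot(x[k_max], y[k_max])   # geometric distance from the origin
    return kappa, rotation_sign, distance, k_max
```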
 In step S1005, the time difference between the time at which the audio information reaches its maximum and the time at which the front-rear component of the distance information reaches its maximum is obtained. This time difference between the maxima is an important parameter for characterizing the swallowing state. In this embodiment, as will also be seen from the display form of the trajectory graph 901 described later, this parameter can be grasped visually and can also be displayed as a quantitative value. The present invention is not limited by these quantitative values; for example, the area of the region enclosed by the trajectory graph may be displayed as a feature amount instead.
 In step S1006, the ratio of the time difference obtained in step S1005 to the variance of the model representing the front-rear component of the distance information (the front-rear motion component waveform 1105 shown in FIG. 8(b)) is obtained, that is, the ratio of the time difference to the variance value. In the healthy-subject model the swallowing sound occurs at the timing when the thyroid cartilage moves forward, so in step S1006 this ratio is calculated in order to display how far the timing of the swallowing sound deviates within the individual.
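The feature amounts of steps S1001, S1005 and S1006 reduce to a few array operations; the sketch below assumes that the fitted component waveforms and their parameters (the rAP mean and variance) are available under the illustrative names shown.

```python
import numpy as np

def swallow_feature_amounts(ap, hf, t_sound_max, mu_ap, var_ap):
    """Feature amounts shown together with the trajectory graph.

    ap, hf      : front-rear and up-down component waveforms (normalized amplitudes)
    t_sound_max : time S2 at which the swallowing-sound envelope is maximal
    mu_ap       : time at which the front-rear component is maximal (mean of rAP)
    var_ap      : variance of the fitted front-rear component
    """
    max_ap = float(np.max(ap))        # S1001: maximum front-rear displacement
    max_hf = float(np.max(hf))        # S1001: maximum up-down displacement
    time_diff = t_sound_max - mu_ap   # S1005: sound maximum vs. front-rear maximum
    ratio = time_diff / var_ap        # S1006: ratio of the time difference to the variance
    return max_ap, max_hf, time_diff, ratio
```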
 Finally, in step S1007, the waveforms, feature values and the like calculated in steps S1001 to S1006 are stored in the internal storage device of the computer 109 and/or the external storage device 111. Steps S1001 to S1007 may be performed while the swallowing measurement unit 410 is measuring the swallowing motion and the swallowing sound, and may be performed multiple times.
 Based on the processing steps described above, the processing unit 420 further generates, from the up-down motion component and the front-rear motion component described above, two-dimensional trajectory data that shows the up-down and front-rear behavior of the thyroid cartilage simultaneously on a single trajectory graph 901 (see FIG. 10). Specifically, such two-dimensional trajectory data is generated as coordinate data on a coordinate plane defined by two mutually orthogonal coordinate axes, one axis corresponding to the trajectory data values of the front-rear motion component and the other to the trajectory data values of the up-down motion component. More specifically, as shown in FIG. 10, based on the signal fitting (step S502 in FIG. 4) and the component extraction (step S503 in FIG. 4) performed by the motion analysis unit 421 described above, the data values on the up-down motion component waveform 1106 and the data values on the front-rear motion component waveform 1105 are associated with each other in time and plotted with the horizontal axis as the trajectory data value of the front-rear motion component (front-rear displacement; the normalized amplitude of the front-rear motion component waveform 1105) and the vertical axis as the trajectory data value of the up-down motion component (up-down displacement; the normalized amplitude of the up-down motion component waveform 1106). That is, the horizontal axis represents the value of the normal distribution having the parameters extracted for rAP in Equation (1) in step S503 of FIG. 4, and the vertical axis represents the value of the normal distribution having the parameters extracted for rHF in Equation (1).
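A sketch of how the two-dimensional trajectory data could be formed, assuming (as the description of Equation (1) suggests) that each extracted component is evaluated as a normal-distribution-shaped pulse with the parameters obtained in step S503; the function and parameter names are illustrative.

```python
import numpy as np

def make_trajectory(t, mu_ap, sigma_ap, amp_ap, mu_hf, sigma_hf, amp_hf):
    """Pair the front-rear and up-down component values at each time instant.

    Returns (x, y): x is the front-rear trajectory data value (horizontal axis),
    y is the up-down trajectory data value (vertical axis).
    """
    def component(mu, sigma, amp):
        return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

    x = component(mu_ap, sigma_ap, amp_ap)   # value of the rAP normal distribution
    y = component(mu_hf, sigma_hf, amp_hf)   # value of the rHF normal distribution
    return x, y
```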
 The trajectory graph 901 shown in FIG. 10 is displayed on the display device 110 via the display unit 430 of the computer 109. In this embodiment in particular, the plotted points of the trajectory data values on the trajectory graph 901 are displayed distinguishably, for example color-coded, according to the amplitude of the swallowing sound. To realize this identification display, the processing unit 420 generates, based on the detection data detected through the microphone 106 as described above, the swallowing sound waveform 801 and the envelope 802 representing the temporal change in the amplitude of the swallowing sound, associates the swallowing sound waveform 801 or the envelope 802 with the trajectory graph 901 in time, and generates identification display data for displaying the plotted trajectory data values on the trajectory graph 901 distinguishably according to the amplitude of the swallowing sound. In connection with this identification display, in the present embodiment, which uses color coding, a reference band graph 909 showing how the color changes with the swallowing sound amplitude along its vertical axis is displayed adjacent to the trajectory graph 901. For example, here the larger the amplitude of the swallowing sound, the more yellow the point becomes, and the smaller the amplitude, the more blue. Alternatively, the identification display may use black and white, with the color becoming lighter as the amplitude increases. The identification display form is not limited to these; any display form that distinguishes trajectory data values with different swallowing sound amplitudes may be used, such as changing the size or shape of the plotted mark of each trajectory data value according to the amplitude of the swallowing sound.
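As one possible realization of this color-coded identification display (not necessarily the one used in the apparatus), a matplotlib scatter plot can color each trajectory point by the time-aligned swallowing-sound envelope; the viridis colormap happens to run from blue at small values to yellow at large values, matching the example above, and the colorbar plays the role of the reference band graph 909.

```python
import matplotlib.pyplot as plt

def plot_trajectory_colored(x, y, envelope_100hz):
    """Trajectory graph with points colored by swallowing-sound amplitude.

    x, y           : trajectory data values (front-rear, up-down) at 100 Hz
    envelope_100hz : swallowing-sound envelope resampled to the same 100 Hz grid
    """
    n = min(len(x), len(y), len(envelope_100hz))
    fig, ax = plt.subplots()
    sc = ax.scatter(x[:n], y[:n], c=envelope_100hz[:n], cmap="viridis", s=12)
    ax.set_xlabel("Front-rear displacement (normalized)")
    ax.set_ylabel("Up-down displacement (normalized)")
    fig.colorbar(sc, ax=ax, label="Swallowing-sound amplitude")
    plt.show()
```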
 The trajectory graph 901, in which the trajectory data values are plotted as a time-series scatter diagram, displays the front-rear and up-down behavior of the thyroid cartilage separated onto two coordinate axes, so that the behavior of the thyroid cartilage during swallowing can be grasped at a glance. Furthermore, by displaying the features of the swallowing sound information on the single trajectory graph 901 in addition to the behavior of the swallowing motion, it becomes possible to confirm visually at what point the swallowing sound occurred relative to the behavior of the thyroid cartilage; not only can the swallowing motion be grasped quantitatively, but the deviation of the swallowing sound from the normal state and the power of the swallowing sound can also be grasped at a glance.
 Various auxiliary information is also added to and displayed on the trajectory graph 901. For this purpose, in this embodiment the processing unit 420 generates supplementary display data for superimposing on the trajectory graph 901 supplementary information including predetermined feature points associated with the motion waveform 1103 (or the distance waveform 701), predetermined feature points associated with the swallowing sound waveform 801 (or the envelope 802), and the occurrence times of the trajectory data values plotted on the trajectory graph 901, and also generates reference display data for displaying, together with the trajectory graph 901, reference information including the transition direction of the trajectory graph 901 and predetermined feature amounts calculated from the trajectory graph 901.
 Specifically, regarding such auxiliary display, reference numeral 902 in FIG. 10 is an arrow indicating in which direction the trajectory has progressed (the transition direction of the trajectory graph 901). In this embodiment it indicates that the trajectory starts from the coordinate origin, rotates counterclockwise, and then returns to the coordinate origin. Reference numeral 903 indicates feature amounts calculated from the trajectory graph 901. Specifically, the maximum front-rear displacement, the maximum up-down displacement, the maximum displacement from the coordinate origin indicated by 904, the time difference (σ) between the times at which the motion information and the audio information take their respective maxima, and the ratio of that time difference to the variance of the front-rear displacement (rAP) are each shown as feature amounts. These pieces of information are obtained by the processing of the analysis unit 423 described above. The feature amounts need not be displayed above the coordinate area of the trajectory graph 901 as in this embodiment; they may be displayed within the coordinate area of the trajectory graph 901 or in a separate figure, and the present invention is not limited by these choices.
 Also in FIG. 10, reference numeral 905 indicates the occurrence times of the trajectory data values plotted on the trajectory graph 901, which in this embodiment are displayed every 0.1 seconds. Reference numeral 906 indicates the peak point in the distance information obtained in step S504 of FIG. 4, and 907 indicates the time at which the audio information reaches its maximum, obtained in step S606 of FIG. 5. With this display, the time lag between the time of the maximum of the audio information and the time of the maximum of the front-rear component of the thyroid cartilage in the distance information can be confirmed in the figure. Reference numeral 908 indicates the start point 805 and the end point 806 (see FIG. 9) of the audio information obtained in step S607 of FIG. 5.
 As described above, according to this embodiment, the model function modeling the swallowing motion is fitted to the distance information based on the detection data detected by the transmitting and receiving coils 102 and 103 to obtain a fitting result, so the motion of the thyroid cartilage (hyoid bone) can be reproduced two-dimensionally in a non-invasive manner (modeling of the swallowing motion). Furthermore, the behavior components related to all motion directions of the thyroid cartilage during swallowing, that is, the front-rear motion component and the up-down motion component corresponding to the front-rear and up-down movements, are extracted from the fitting result, and two-dimensional trajectory data showing the up-down and front-rear behavior trajectory of the thyroid cartilage is generated from these two components. The two-dimensional up-down and front-rear motion of the thyroid cartilage (hyoid bone) can therefore be grasped at a glance as swallowing dynamics, without having to infer the overall swallowing behavior.
 In the above, the various processes (steps) relating to the generation of the trajectory graph 901 are applied to the motion of the thyroid cartilage, but they can also be applied to the motion of body parts other than the thyroid cartilage. That is, these processes can be applied to the analysis of the motion of parts other than the laryngeal region, as long as the body part moves in the same or a similar manner to the thyroid cartilage (hyoid bone). Specifically, for any body part whose change in distance detected by a predetermined detection unit can be decomposed into motions in a plurality of directions and analyzed, trajectory data can be generated from that motion and a trajectory graph 901 can be drawn. The detection unit is also not limited to one that detects the motion of the predetermined part (acquires data indicating the motion) by means of coils. For example, the laryngeal displacement detection unit only needs to be able to detect the motion of the larynx (thyroid cartilage), that is, to acquire data indicating that motion, and is not limited to detecting it with the coils 102 and 103.
 Next, the display performed by the display device 110 will be described in detail.
 The computer 109 (display unit 430) can cause the display device 110 to display a graph showing the movement of the thyroid cartilage (larynx) and a graph relating to the swallowing sound. The computer 109 (display unit 430) can also cause the display device 110 to display a moving image 950 captured by the camera 114 (a moving image of the subject 101 taken during the measurement of the swallowing motion and the swallowing sound, showing the thyroid cartilage in motion).
 In the following, the case where the distance waveform 701 and the trajectory graph 901 are displayed as graphs showing the movement of the thyroid cartilage, and the swallowing sound waveform 801 and the trajectory graph 901 are displayed as graphs relating to the swallowing sound, will be described as an example. However, the graphs showing the movement of the thyroid cartilage and the graphs relating to the swallowing sound are not limited to these. In the following description of the display, "distance waveform 701" may be read as "motion waveform 1103", "front-rear motion component waveform 1105", "up-down motion component waveform 1106" or the like, and "swallowing sound waveform 801" may be read as "envelope 802" or the like.
 The display of the distance waveform 701 is a graph whose first coordinate axis (horizontal axis) corresponds to time and whose second coordinate axis (vertical axis), intersecting the first, corresponds to the distance between the coils 102 and 103; it is thus a display of a graph showing the movement of the thyroid cartilage. The display of the trajectory graph 901 is a graph whose first coordinate axis (horizontal axis) corresponds to the trajectory data values of the front-rear motion component of the thyroid cartilage and whose second coordinate axis (vertical axis), intersecting the first, corresponds to the trajectory data values of the up-down motion component; it too is a display of a graph showing the movement of the thyroid cartilage. The display of the swallowing sound waveform 801 is a graph whose first coordinate axis (horizontal axis) corresponds to time and whose second coordinate axis (vertical axis), intersecting the first, corresponds to the magnitude (amplitude) of the swallowing sound; it is a display of a graph relating to the swallowing sound. The trajectory graph 901 may also be, for example, a three-dimensional graph having a time axis as a third coordinate axis intersecting the first and second coordinate axes.
 As shown in FIGS. 11 and 12, the computer 109 can execute processing for displaying the distance waveform 701, the swallowing sound waveform 801 and the moving image 950 simultaneously (on the same screen) on the display device 110. The screen on which the distance waveform 701, the swallowing sound waveform 801 and the moving image 950 are displayed is called the first display screen 2001. As shown in FIGS. 13 and 14, the computer 109 can also execute processing for displaying the trajectory graph 901, the distance waveform 701, the swallowing sound waveform 801 and the moving image 950 simultaneously (on the same screen) on the display device 110. The screen on which the trajectory graph 901, the distance waveform 701, the swallowing sound waveform 801 and the moving image 950 are displayed is called the second display screen 2002.
 Note that some of the items described here as being displayed on the same screen may instead not be displayed on the same screen.
 FIGS. 11 and 12 are diagrams showing display examples of the first display screen 2001. FIG. 11 shows a state in which the markers 964 (964a, 964b) described later are not displayed for the distance waveform 701 and the swallowing sound waveform 801. FIG. 12 shows a state in which the marker 964 described later is displayed for the distance waveform 701 and the swallowing sound waveform 801, and in which the moving image 950 shows the video frame at the time indicated by the marker 964 (6.39 seconds). FIGS. 13 and 14 are diagrams showing display examples of the second display screen 2002. FIG. 13 shows a state in which the marker 964 is not displayed for the distance waveform 701 and the swallowing sound waveform 801, and in which the entire trajectory graph 901 is plotted. FIG. 14 shows a state in which the marker 964 is displayed for the distance waveform 701 and the swallowing sound waveform 801, in which the moving image 950 shows the video frame at the time indicated by the marker 964 (4.74 seconds), and in which only part of the trajectory graph 901 is plotted, namely up to the portion corresponding to that time (4.74 seconds).
 The operator can use the input device 112 to designate an arbitrary point on the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901 displayed on the first display screen 2001 or the second display screen 2002. Specifically, for example, a point may be designated by using a mouse as the input device 112 to place the cursor 960 on an arbitrary location on the displayed distance waveform 701, swallowing sound waveform 801 or trajectory graph 901 and clicking there (see FIG. 12). Alternatively, a touch panel integrated with the display device 110 may serve as the input device 112, and a point may be designated by touching an arbitrary location on the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901.
 Based on the operation of designating such a point on the input device 112, the computer 109 causes the display device 110 to display, from the moving image 950 shown on the same screen, the video at the time corresponding to the designated point. In other words, based on that operation, the computer 109 causes the display device 110 to show a display from which the point in the moving image 950 corresponding to the designated point can be grasped. Specifically, for example, when an arbitrary point on the distance waveform 701, that is, the data at a specific time on the distance waveform 701, is designated, the computer 109 extracts (finds) from the moving image 950 (the moving image data) the video frame captured at the same time as that data was acquired and causes the display device 110 to display it. Likewise, when an arbitrary point on the swallowing sound waveform 801, that is, the data at a specific time on the swallowing sound waveform 801, is designated, the computer 109 extracts from the moving image 950 the video frame captured at the same time as that data was acquired and causes the display device 110 to display it. Also, when an arbitrary point on the trajectory graph 901, that is, a specific trajectory data value on the trajectory graph 901, is designated, the computer 109 extracts from the moving image 950 the video frame captured at the occurrence time of that trajectory data value (the time associated with it) and causes the display device 110 to display it. For example, as shown in FIG. 12, when the data at the 6.39-second point in the distance waveform 701 (e.g., 6.39 seconds after the start of measurement) is designated, the video at the 6.39-second point in the moving image 950 (e.g., 6.39 seconds after the start of measurement) is displayed.
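Mapping a designated graph time to a video frame amounts to converting the time into a frame index; a minimal sketch, assuming that measurement and recording start from the same origin and that the frame rate is known (the 30 fps value here is purely illustrative):

```python
def frame_for_time(t_seconds, fps=30.0, n_frames=None):
    """Index of the video frame captured at (approximately) time t_seconds.

    Assumes the graph time axis and the video time axis share the same origin,
    i.e. measurement and recording were started simultaneously.
    """
    idx = int(round(t_seconds * fps))
    if n_frames is not None:
        idx = max(0, min(idx, n_frames - 1))   # clamp to the available frames
    return idx

# Example: the data point at 6.39 s on the distance waveform would map to
# frame_for_time(6.39, fps=30.0) == 192 under these assumptions.
```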
 Note that, so that the designated point on the graph (distance waveform 701, swallowing sound waveform 801 or trajectory graph 901) can be grasped, the computer 109 may display the marker 964, described later, at the designated location (the designated point) based on the operation of designating that point.
 The operator can also use the input device 112 to designate an arbitrary time (point) in the moving image 950 displayed on the first display screen 2001 or the second display screen 2002. Specifically, in this embodiment a seek bar 962 indicating the playback position of the moving image 950 is displayed on the first display screen 2001 or the second display screen 2002, and the operator can designate the playback position of the moving image 950 (an arbitrary time (point) in the moving image 950) on the seek bar 962 using a mouse, a touch panel or the like as the input device 112. An arbitrary time in the moving image 950 may also be designated, for example, by directly entering the playback time (playback position) with a keyboard as the input device 112, or by providing buttons on the first display screen 2001 or the second display screen 2002 that advance or rewind the moving image 950 by a predetermined number of frames (a predetermined time) and operating those buttons.
 Based on the operation of designating an arbitrary time (point) of the moving image 950 on the input device 112, the computer 109 causes the display device 110 to show a display from which the point in the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901 displayed on the same screen that corresponds to the designated time can be grasped. In other words, while a specific video frame of the moving image 950 is displayed, the computer 109 causes the display device 110 to show a display from which the point in the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901 corresponding to that video frame can be grasped. Specifically, for example, the computer 109 may display a predetermined marker 964 at the location corresponding to the designated time in the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901. The marker 964 may be, for example, an icon 964a or the like, or a straight line 964b or the like orthogonal to the time axis (a predetermined coordinate axis) (see FIGS. 12 and 14). Also, for example, the computer 109 may display the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901 plotted only up to the designated time, with nothing plotted beyond it (see the trajectory graph 901 in FIG. 14). The computer 109 may also display the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901 with the plot (line) drawn in a different color, thickness or the like up to the designated time than beyond it. FIG. 14 shows a state in which the 4.74-second point of the moving image 950 (e.g., 4.74 seconds after the start of measurement) has been designated, the marker 964 is displayed at the 4.74-second portions of the distance waveform 701 and the swallowing sound waveform 801, and the trajectory graph 901 is plotted up to the 4.74-second point.
 The marker 964 indicates the location corresponding to the current playback position of the moving image 950 (the displayed video frame) and may move as the moving image 950 plays (progresses). The distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901 may likewise be displayed plotted up to the time corresponding to the current playback position of the moving image 950 (the displayed video frame), with nothing plotted beyond it, and the plotting may advance as the moving image 950 plays. Alternatively, the plot (line) of the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901 may be shown in a predetermined state (for example, a particular color or thickness) up to the time corresponding to the current playback position of the moving image 950, with the portion in that predetermined state advancing as the moving image 950 plays. In short, the display device 110 may be made to show, in step with the playback (progress) of the moving image 950, a display from which the point in the distance waveform 701, the swallowing sound waveform 801 or the trajectory graph 901 corresponding to the current playback position can be grasped (a display in which the marker 964 moves, a display in which the plotting advances, and so on).
 In this embodiment, based on the operation of designating an arbitrary point on the graph (distance waveform 701, swallowing sound waveform 801 or trajectory graph 901) or the operation of designating an arbitrary time (point) in the moving image 950, the video frame at a specific time and a display indicating the point on the graph corresponding to that video frame (the marker 964 or the like) are displayed on the same screen; the number of video frames displayed on the same screen may be one or more than one. That is, for example, by performing an operation on the input device 112 that designates a plurality of points on the graph, the video frames corresponding to the respective designated points, that is, a plurality of video frames, may be displayed on the same screen. In this case, the display may be made such that the correspondence between each designated point and each video frame can be understood (for example, by color coding, attaching identification codes, or aligning the positional relationships). Also, for example, by performing an operation on the input device 112 that designates a plurality of times (an operation that designates a plurality of time points of the moving image 950), displays indicating the points on the graph corresponding to the respective designated times (that is, a plurality of markers 964 or the like) may be shown on the same screen.
 The process of, based on an operation designating an arbitrary point on the graph (distance waveform 701, swallowing sound waveform 801 or trajectory graph 901), causing the display device 110 to display the video at the time in the moving image 950 corresponding to the designated point (that is, a display from which the corresponding point in the moving image 950 can be grasped), and the process of, based on an operation designating a predetermined time in the moving image 950, causing the display device 110 to display a display from which the point on the graph corresponding to the designated time can be grasped, can be realized by storing the distance information, the audio information and the moving image 950 in association with one another's time information (for example, the acquisition time of the distance information, the acquisition time of the audio information and the acquisition time of each frame of the moving image 950). In this embodiment, the acquisition of the distance information by the transmitting and receiving coils 102 and 103, the acquisition of the audio information by the microphone 106 and the shooting of the moving image 950 are performed in synchronization, whereby the time information of the distance information, the time information of the audio information and the time information of the moving image 950 are associated with one another. Specifically, based on a measurement start operation on the input device 112 (for example, an operation of clicking (selecting) a measurement start button displayed on the display device 110, or an operation on a predetermined physical button instructing the start of measurement), the measurement of the swallowing motion (laryngeal displacement detection step), the measurement of the swallowing sound (swallowing sound detection step) and the shooting of the moving image 950 (moving image acquisition step) are started simultaneously. The distance information, the audio information and the moving image 950 stored in a predetermined memory as a result are thus stored in a state in which their respective pieces of time information (the acquisition time of the distance information, the acquisition time of the sound information and the acquisition time of each frame of the moving image 950) are associated with one another. Here, "stored in a state in which their time information is associated with one another" includes, for example, the case where the distance information (the distance between the coils 102 and 103), the audio information (the amplitude of the swallowing sound) and the moving image 950 (each frame image constituting the moving image 950) are linked to a single piece of time information.
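This time association could be represented, for example, as records keyed by a common measurement clock; the sketch below assumes 100 Hz distance samples, an envelope on the same 100 Hz grid and an illustrative 30 fps video, and the record layout is an assumption rather than the format actually used by the apparatus.

```python
from dataclasses import dataclass

@dataclass
class SwallowRecord:
    """Distance, sound amplitude and video frame linked to one measurement time."""
    t: float                # seconds from the common measurement start
    distance: float         # coil-to-coil distance sample at time t
    sound_amplitude: float  # swallowing-sound envelope value at time t
    video_frame: int        # index of the video frame captured at time t

def build_records(distances, envelope, fs=100.0, fps=30.0):
    """Associate each 100 Hz sample with the corresponding video frame index."""
    records = []
    for i, (d, a) in enumerate(zip(distances, envelope)):
        t = i / fs
        records.append(SwallowRecord(t, float(d), float(a), int(round(t * fps))))
    return records
```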
 In this embodiment, as shown in FIG. 11, the distance waveform 701 and the swallowing sound waveform 801 are displayed separately, side by side, on the first display screen 2001, and the display area of the moving image 950 lies outside the display areas of the distance waveform 701 and the swallowing sound waveform 801. As shown in FIG. 13, on the second display screen 2002 the distance waveform 701 and the swallowing sound waveform 801 are displayed superimposed; in other words, the distance waveform 701 and the swallowing sound waveform 801 are displayed on a single graph. On the second display screen 2002, the display area of the moving image 950 lies inside the display areas of the distance waveform 701 and the swallowing sound waveform 801; in other words, the moving image 950 is displayed superimposed on the graph of the distance waveform 701 and the graph of the swallowing sound waveform 801. Displaying the distance waveform 701 and the swallowing sound waveform 801 superimposed in this way makes it possible to display the trajectory graph 901 and the moving image 950 larger. Displaying the moving image 950 superimposed on a predetermined graph likewise makes it possible to display that graph and the other graphs larger. The predetermined graph may be, for example, the trajectory graph 901.
 In this embodiment, the measured distance waveform 701 and swallowing sound waveform 801 can be displayed either over the entire measurement time (a first range) or over a partial time range within the measurement time (a second range that is part of the first range). For the explanation here, it is assumed that measurement (and shooting of the moving image 950) was performed for 15 seconds. The measurement (and the shooting of the moving image 950) may be started based on a measurement start operation on the input device 112 and ended based on a measurement end operation on the input device 112 (for example, an operation of clicking a measurement end button displayed on the display device 110, or an operation on a predetermined physical button instructing the end of measurement). Alternatively, the measurement (and the shooting of the moving image 950) may be started based on a measurement start operation on the input device 112 and ended when a predetermined time (for example, 15 seconds) has elapsed.
 In this embodiment, it is possible to select (designate) for which time range within the measurement time the distance waveform 701 and the swallowing sound waveform 801 are displayed. The selection may be made, for example, by designating (e.g., dragging over) an arbitrary range of the displayed distance waveform 701 or swallowing sound waveform 801 with a mouse, a touch panel or the like as the input device 112, or by entering the start and end times of the time range to be displayed with a keyboard as the input device 112. FIGS. 11 and 12 show the distance waveform 701 and the swallowing sound waveform 801 displayed over the entire measurement time. FIGS. 13 and 14 show the distance waveform 701 and the swallowing sound waveform 801 displayed for a partial time range of the measurement time (3.94 to 5.37 seconds), that is, a second range that is part of the time range (first range) displayed in FIGS. 11 and 12. In other words, in this embodiment, based on an operation designating a partial range (time range) of the distance waveform 701 and the swallowing sound waveform 801, the distance waveform 701 and the swallowing sound waveform 801 of the designated range can be displayed. On the screen on which the distance waveform 701 and the swallowing sound waveform 801 of the partial time range are displayed (the second display screen 2002), only the portion of the captured moving image 950 corresponding to that partial time range is played. That is, in the state in which a partial time range of the distance waveform 701 and the swallowing sound waveform 801 is selected and displayed (the state in which the second display screen 2002 is displayed), when playback of the moving image 950 is started based on a playback start operation on the input device 112 (for example, an operation of clicking a playback button displayed on the display device 110, or an operation on a predetermined physical button instructing the start of playback), the portion of the captured moving image 950 from the start to the end of that partial time range is played.
 Note that the distance waveform 701 and the swallowing sound waveform 801 shown in FIGS. 11 and 12 may themselves be based on only part of the measured data.
 Note that, for example, the operation of selecting the time range over which the distance waveform 701 (swallowing sound waveform 801) is displayed may also serve as the operation of designating the analysis range of the distance information (audio information) (an operation relating to the start of analysis). That is, for example, when the motion analysis unit 421 analyzes the distance information (or the audio analysis unit 422 analyzes the swallowing sound), in other words when the trajectory graph 901 is generated, and the time range of the measured distance information (audio information) to be analyzed is designated, the analysis result (for example, the trajectory graph 901) may be displayed on the display device 110 based on the operation designating the time range to be analyzed, together with the distance waveform 701 and the swallowing sound waveform 801 of the analyzed time range. Playback of the moving image of the analyzed time range may also be enabled based on the operation designating the time range to be analyzed. In other words, the second display screen 2002 may be displayed based on the operation designating the analysis range (the operation relating to the start of analysis).
 The biological examination apparatus 100 of this embodiment is a biological examination apparatus 100 comprising an input device 112 that receives operations by an operator, a display device 110, and an information processing device (computer 109), wherein the information processing device 109 executes:
 a process of causing the display device 110 to simultaneously display a graph (such as the distance waveform 701 or the trajectory graph 901) showing the movement of the thyroid cartilage when the subject swallows, based on the detection data acquired by the detection units 102 and 103 that detect the movement of the thyroid cartilage, and a moving image 950, captured by the camera 114, of the subject during swallowing; and
 a process of, in a state in which the graph and the moving image 950 are displayed simultaneously, causing the display device 110 to display, based on an operation by the operator designating an arbitrary point in one of the graph and the moving image 950, a display from which the point in the other of the graph and the moving image 950 corresponding to that arbitrary point can be grasped.
 Specifically, for example, in a state in which the graph and the moving image 950 are displayed on the same screen, the information processing device causes the display device 110 to display, based on an operation by the operator designating an arbitrary point on the graph, the video (video frame) at the time in the moving image 950 corresponding to that arbitrary (designated) point. This makes it possible to grasp the point (video frame) in the moving image 950 corresponding to the designated point.
 Alternatively, in a state in which the graph and the moving image 950 are displayed on the same screen, the information processing device may cause the display device 110 to display, based on an operation by the operator designating an arbitrary time (point) in the moving image 950, a display from which the point on the graph corresponding to that arbitrary (designated) time can be grasped. This makes it possible to grasp the point on the graph corresponding to the designated time.
 Note that "causing the display device to display, based on an operation by the operator designating an arbitrary point in one of the graph and the moving image, a display from which the point in the other of the graph and the moving image corresponding to that arbitrary point can be grasped" only requires that, based on an operation designating an arbitrary point on at least one of the graph and the moving image, a display be made from which the corresponding point in the other can be grasped; it does not also require that, based on a designation operation on the latter, a display be made from which the corresponding point in the former can be grasped.
 With such a configuration, the graph showing the movement of the thyroid cartilage and the moving image of the subject during swallowing are displayed on the same screen, so the graph (detection data) can be checked while watching the movement of the thyroid cartilage during swallowing in the moving image, and the examination relating to swallowing can be performed efficiently. In addition, with such a configuration, at least one of the following becomes possible: displaying, based on an operation designating an arbitrary point on the graph, a display from which the point in the moving image corresponding to that point can be grasped; or displaying, based on an operation designating an arbitrary point in the moving image, a display from which the point on the graph corresponding to that point can be grasped. That is, by designating an arbitrary point on the graph, the video frame in the moving image corresponding to that point can be found, or, by designating an arbitrary time (point) in the moving image, the point on the graph corresponding to that time can be found. This makes it easier, for example, to extract the range of the graph on which analysis should be concentrated, so the examination relating to swallowing can be performed efficiently. It also becomes easy to grasp the relationship between each point on the graph and the actual swallowing motion shown in the moving image, which again allows the examination relating to swallowing to be performed efficiently.
Note that the processing by each device described in the present embodiment may be realized by software, by hardware, or by a combination of software and hardware. The programs constituting the software may be stored, for example, on a non-transitory computer-readable medium. The programs may also be distributed, for example, via a network.
The present invention is not limited to the embodiment described above; the components may be freely combined, and any component may be modified or omitted, without departing from the gist of the invention.
100  Biological examination apparatus
102  Transmitting coil (detection unit)
103  Receiving coil (detection unit)
106  Microphone
109  Computer (information processing device)
110  Display device
112  Input device
114  Camera

Claims (5)

  1.  A biological examination apparatus comprising: an input device that receives an operation by an operator; a display device; and an information processing device,
     wherein the information processing device executes:
     a process of causing the display device to simultaneously display a graph showing the movement of the thyroid cartilage when a subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image, captured by a camera, of the subject during swallowing; and
     a process of causing the display device to display, while the graph and the moving image are displayed simultaneously, based on an operation by the operator designating an arbitrary point in one of the graph and the moving image, a display that makes it possible to identify the point corresponding to that arbitrary point in the other of the graph and the moving image.
  2.  The biological examination apparatus according to claim 1, wherein the graph has a first coordinate axis corresponding to the anteroposterior (front-back) movement of the thyroid cartilage and a second coordinate axis corresponding to the vertical movement of the thyroid cartilage and intersecting the first coordinate axis.
  3.  The biological examination apparatus according to claim 1 or 2, wherein the graph shows the movement of the thyroid cartilage and a swallowing sound acquired by a microphone as a single graph in which the two are temporally associated with each other.
  4.  A biological examination method performed by a biological examination apparatus comprising an input device that receives an operation by an operator, a display device, and an information processing device, the method comprising:
     a step of causing the display device to simultaneously display a graph showing the movement of the thyroid cartilage when a subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image, captured by a camera, of the subject during swallowing; and
     a step of causing the display device to display, while the graph and the moving image are displayed simultaneously, based on an operation by the operator designating an arbitrary point in one of the graph and the moving image, a display that makes it possible to identify the point corresponding to that arbitrary point in the other of the graph and the moving image.
  5.  A program causing a computer to execute:
     a process of causing a display device to simultaneously display a graph showing the movement of the thyroid cartilage when a subject swallows, based on detection data acquired by a detection unit that detects the movement of the thyroid cartilage, and a moving image, captured by a camera, of the subject during swallowing; and
     a process of causing the display device to display, while the graph and the moving image are displayed simultaneously, based on an operation by an operator designating an arbitrary point in one of the graph and the moving image, a display that makes it possible to identify the point corresponding to that arbitrary point in the other of the graph and the moving image.
PCT/JP2021/040940 2021-11-08 2021-11-08 Biological examination apparatus, biological examination method, and program WO2023079729A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/040940 WO2023079729A1 (en) 2021-11-08 2021-11-08 Biological examination apparatus, biological examination method, and program

Publications (1)

Publication Number Publication Date
WO2023079729A1 true WO2023079729A1 (en) 2023-05-11

Family

ID=86240873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/040940 WO2023079729A1 (en) 2021-11-08 2021-11-08 Biological examination apparatus, biological examination method, and program

Country Status (1)

Country Link
WO (1) WO2023079729A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003111748A (en) * 2001-10-04 2003-04-15 Nippon Riko Igaku Kenkyusho:Kk Swallowing sound obtaining device
JP2007260273A (en) * 2006-03-29 2007-10-11 Sumitomo Osaka Cement Co Ltd Swallowing function evaluation instrument
JP2008018094A (en) * 2006-07-13 2008-01-31 Tokyo Giken:Kk Oral motion measuring apparatus
JP2009213592A (en) * 2008-03-10 2009-09-24 Hitachi Computer Peripherals Co Ltd Biological examination apparatus
JP2013031650A (en) * 2011-06-30 2013-02-14 Gifu Univ System and method for measuring ingesting action
JP2020089613A (en) * 2018-12-07 2020-06-11 国立大学法人山梨大学 Swallowing ability measuring system, swallowing ability measuring method, and sensor holder
JP2020110443A (en) * 2019-01-15 2020-07-27 パイオニア株式会社 Blood circulation information calculation apparatus, blood circulation information calculation method, and program

Similar Documents

Publication Publication Date Title
JP7387185B2 (en) Systems, methods and computer program products for physiological monitoring
US9888906B2 (en) Ultrasound diagnostic apparatus, ultrasound image processing method, and non-transitory computer readable recording medium
JP5495415B2 (en) Mandibular anterior tooth movement tracking system, mandibular anterior tooth movement tracking device, and temporomandibular joint noise analyzer
JP6323451B2 (en) Image processing apparatus and program
JP2020089613A (en) Swallowing ability measuring system, swallowing ability measuring method, and sensor holder
WO2024021534A1 (en) Artificial intelligence-based terminal for evaluating airway
WO2018193955A1 (en) Deglutition function testing system using 3d camera
JP5506024B2 (en) Temporomandibular disorder diagnosis support system and apparatus provided with pain detector
WO2023079729A1 (en) Biological examination apparatus, biological examination method, and program
WO2021225081A1 (en) Biological examination device and biological information analysis method
US8251927B2 (en) System and procedure for the analysis of the swallowing process in humans
WO2023100348A1 (en) Biological examination device, biological-information analyzing method, and computer program
JP4630194B2 (en) Biological measuring device and biological measuring method
CN112334061A (en) Monitoring swallowing in a subject
US20210236050A1 (en) Dynamic anatomic data collection and modeling during sleep
JP2014130519A (en) Medical information processing system, medical information processing device, and medical image diagnostic device
JP7405010B2 (en) Diagnostic support system, diagnostic support device and program
JP2007181564A (en) Bioinstrumentation apparatus and bioinstrumentation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21963322

Country of ref document: EP

Kind code of ref document: A1