WO2021152745A1 - Ultrasonic observation device, method for operating ultrasonic observation device, and program for operating ultrasonic observation device - Google Patents

Ultrasonic observation device, method for operating ultrasonic observation device, and program for operating ultrasonic observation device Download PDF

Info

Publication number
WO2021152745A1
WO2021152745A1 (PCT/JP2020/003245)
Authority
WO
WIPO (PCT)
Prior art keywords
frequency spectrum
ultrasonic
unit
ultrasonic observation
data
Prior art date
Application number
PCT/JP2020/003245
Other languages
French (fr)
Japanese (ja)
Inventor
市川 純一
Original Assignee
オリンパス株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by オリンパス株式会社 filed Critical オリンパス株式会社
Priority to PCT/JP2020/003245 priority Critical patent/WO2021152745A1/en
Publication of WO2021152745A1 publication Critical patent/WO2021152745A1/en

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/13Tomography
    • A61B8/14Echo-tomography

Definitions

  • The present invention relates to an ultrasonic observation device for observing a tissue to be observed using ultrasonic waves, an operating method of the ultrasonic observation device, and an operating program of the ultrasonic observation device.
  • Ultrasound may be applied to observe the characteristics of the biological tissue or material to be observed. Specifically, information on the characteristics of the observation target is acquired by transmitting ultrasonic waves to the observation target and performing predetermined signal processing on the ultrasonic echo reflected by the observation target.
  • There is a known technique for generating a feature amount image that shows differences in tissue properties within living tissue by utilizing the frequency feature amounts of ultrasonic waves scattered in the living tissue (see, for example, Patent Document 1).
  • In this technique, a frequency spectrum is calculated by performing a fast Fourier transform (FFT) on a received signal representing the ultrasonic echo and performing frequency analysis, and a feature amount image is generated based on feature quantities extracted by applying approximation processing to the frequency spectrum.
  • Patent Document 1 discloses a technique that can distinguish tissue properties by approximating the frequency spectrum and calculating feature amounts. However, some of the information in the spectrum is lost through the approximation, so discrimination based on such feature amounts may impair the accuracy of tissue property discrimination. In diagnosis in particular, benign and malignant tissues must be distinguished with high accuracy.
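The information loss caused by the approximation can be made concrete with a small numerical sketch (not taken from Patent Document 1; the band, values, and feature names are invented for illustration): when a spectrum is reduced to the slope, intercept, and midband fit of a regression line, two visibly different spectra can yield nearly identical feature amounts.

```python
import numpy as np

def linear_features(freqs_mhz, spectrum_db):
    """Reduce a frequency spectrum to conventional regression-line
    features: slope, intercept, and midband fit (value at band centre)."""
    slope, intercept = np.polyfit(freqs_mhz, spectrum_db, 1)
    midband = slope * freqs_mhz.mean() + intercept
    return np.array([slope, intercept, midband])

# Two invented spectra over the same band: spec_b has a pronounced notch
# that spec_a lacks, offset so the mean level is unchanged.
freqs = np.linspace(3.0, 10.0, 8)            # MHz
spec_a = -0.5 * freqs - 10.0                 # a pure line
spec_b = spec_a + 6.0 / len(freqs)
spec_b[3] -= 6.0                             # notch at 6 MHz

feat_a = linear_features(freqs, spec_a)
feat_b = linear_features(freqs, spec_b)
# The midband fits coincide and the slopes are close, yet the spectra
# differ by several dB at the notch frequency.
print(np.round(feat_a - feat_b, 3), np.max(np.abs(spec_a - spec_b)))
```

A discriminator fed only the three regression features cannot see the notch at all, which is the kind of loss the present invention avoids by inputting the whole spectrum.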
  • The present invention has been made in view of the above, and aims to provide an ultrasonic observation device capable of discriminating, with high accuracy, the characteristics of an observation target obtained from a frequency spectrum, as well as an operating method and an operating program for the ultrasonic observation device.
  • To solve the above problems and achieve the object, the ultrasonic observation apparatus according to the present invention includes: a receiving unit that receives an echo signal of ultrasonic waves reflected by an observation target; a frequency analysis unit that calculates a frequency spectrum by performing frequency analysis using a fast Fourier transform based on the echo signal; and a discrimination unit that discriminates the observation target by inputting the frequency spectrum data received by the receiving unit into a trained model trained with a plurality of teacher data, each consisting of a frequency spectrum acquired from an observation target and disease information of that observation target.
  • In the above invention, the frequency analysis unit performs attenuation correction on the frequency spectrum, the teacher data includes attenuation-corrected frequency spectra, and the discrimination unit inputs the attenuation-corrected frequency spectrum into the trained model.
  • In the above invention, the ultrasonic observation apparatus further includes a B-mode image generation unit that generates B-mode image data by converting the amplitude of the echo signal into brightness; the teacher data includes previously acquired frequency spectra and previously acquired B-mode image data, and the discrimination unit inputs the frequency spectrum and the B-mode image data into the trained model.
  • In the above invention, the discrimination unit outputs the discrimination result, together with the associated frequency spectrum, to the outside as teacher data.
  • In the above invention, the ultrasonic observation apparatus further includes a feature amount information generation unit that calculates a feature amount based on the frequency spectrum calculated by the frequency analysis unit.
  • In the above invention, the ultrasonic observation apparatus further includes a B-mode image generation unit that generates B-mode image data by converting the amplitude of the echo signal into brightness, and a display image generation unit that generates display image data by superimposing visual information related to the feature amount on a B-mode image corresponding to the B-mode image data.
  • In the above invention, the teacher data includes at least two of the acquired frequency spectrum, the acquired B-mode image data, and the acquired feature amount, and the discrimination unit inputs data corresponding to the teacher data into the trained model.
  • In the above invention, the discrimination unit selects one of a plurality of different trained models, each trained using a different type of teacher data, and inputs the data corresponding to the selected model.
  • In the above invention, the ultrasonic observation device further includes a storage unit that stores frequency spectra and the discrimination results for those frequency spectra.
  • In the operating method of the ultrasonic observation device according to the present invention, the receiving unit receives an echo signal of ultrasonic waves reflected by an observation target, the frequency analysis unit calculates a frequency spectrum by performing frequency analysis using a fast Fourier transform based on the echo signal, and the discrimination unit discriminates the observation target by inputting the frequency spectrum data received by the receiving unit into a trained model trained with a plurality of teacher data consisting of frequency spectra acquired from observation targets and disease information of those observation targets.
  • The operating program of the ultrasonic observation device according to the present invention causes the device to receive an echo signal of ultrasonic waves reflected by an observation target, to calculate a frequency spectrum in the frequency analysis unit by performing frequency analysis using a fast Fourier transform based on the echo signal, and to discriminate the observation target in the discrimination unit by inputting the frequency spectrum data received by the receiving unit into a trained model trained with a plurality of teacher data consisting of frequency spectra acquired from observation targets and disease information of those observation targets.
  • FIG. 1 is a schematic view showing a configuration of an ultrasonic observation system including an ultrasonic observation device according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a configuration of an ultrasonic observation system including an ultrasonic observation device according to an embodiment of the present invention.
  • FIG. 3 is a diagram showing an example of a frequency spectrum as learning data for each disease.
  • FIG. 4 is a flowchart showing an outline of the learning process performed by the ultrasonic observation device according to the embodiment of the present invention.
  • FIG. 5 is a flowchart showing an outline of the discrimination process performed by the ultrasonic observation device according to the embodiment of the present invention.
  • FIG. 6 is a diagram showing an example of the discrimination result displayed on the display device of the ultrasonic observation system according to the embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a process performed by the ultrasonic observation apparatus according to the first modification of the embodiment of the present invention.
  • FIG. 8 is a diagram showing an example of a frequency spectrum and a B-mode image as learning data for each disease.
  • FIG. 9 is a block diagram showing a configuration of an ultrasonic observation system including an ultrasonic observation device according to a modification 3 of the embodiment of the present invention.
  • FIG. 10 is a diagram showing an example of a frequency spectrum and a feature amount image as learning data for each disease.
  • FIG. 1 is a schematic view showing a configuration of an ultrasonic observation system including an ultrasonic observation device according to an embodiment of the present invention.
  • The ultrasonic observation system 1 shown in the figure includes an ultrasonic endoscope 2 (ultrasonic probe) that transmits ultrasonic waves to a subject to be observed and receives the ultrasonic waves reflected by the subject, an ultrasonic observation device 3 that generates an ultrasonic image based on the ultrasonic signal (echo signal) acquired by the ultrasonic endoscope 2, and a display device 4 that displays the ultrasonic image generated by the ultrasonic observation device 3.
  • The ultrasonic observation device 3 has a function of wirelessly communicating with databases on the cloud 5 (a body map database 51, a personal information database 52, and a discrimination information database 53). Further, a device different from the ultrasonic observation device 3 (for example, an external server 6) is electrically connected to the cloud 5 by wireless communication.
  • the body map database 51 stores information about a part of the body observed by using the ultrasonic endoscope 2.
  • In the personal information database 52, test results from devices other than the ultrasonic observation device 3 and doctors' diagnosis results are stored in association with each subject.
  • In the discrimination information database 53, frequency spectra and discrimination results are stored in association with each other.
  • the discrimination result is, for example, benign or malignant tissue.
  • The ultrasonic endoscope 2 has an ultrasonic transducer 21 at its tip; it converts an electrical pulse signal received from the ultrasonic observation device 3 into an ultrasonic pulse (acoustic pulse) and irradiates the subject with it, and also converts the ultrasonic echo reflected by the subject into an electrical echo signal expressed as a voltage change and outputs it to the ultrasonic observation device 3.
  • The ultrasonic transducer 21 includes piezoelectric elements arranged one-dimensionally (in a line) or two-dimensionally, and each piezoelectric element transmits and receives ultrasonic waves to and from the subject.
  • The ultrasonic transducer 21 may be a convex transducer, a linear transducer, or a radial transducer.
  • The ultrasonic endoscope 2 usually has an imaging optical system and an imaging element, and is inserted into the digestive tract (esophagus, stomach, duodenum, large intestine) or respiratory tract (trachea, bronchi) of a subject, and can image the digestive tract, the respiratory tract, and surrounding organs (pancreatic duct, gallbladder, bile duct, biliary tract, lymph nodes, mediastinal organs, blood vessels, etc.). The ultrasonic endoscope 2 also has a light guide that guides the illumination light irradiated onto the subject during imaging.
  • the tip of this light guide reaches the tip of the insertion portion of the ultrasonic endoscope 2 into the subject, while the proximal end is connected to the light source device in the ultrasonic observation device 3 that generates illumination light.
  • The ultrasonic probe is not limited to the ultrasonic endoscope 2; an ultrasonic probe without an imaging optical system and an imaging element may be used.
  • FIG. 2 is a block diagram showing a configuration of an ultrasonic observation system including an ultrasonic observation device 3 according to an embodiment of the present invention.
  • The ultrasonic observation device 3 includes a transmission/reception unit 31, a signal processing unit 32, a B-mode image generation unit 33, a display image generation unit 34, a frequency analysis unit 35, a discrimination unit 36, a communication unit 37, a learning unit 38, a control unit 39, and a storage unit 40.
  • The ultrasonic observation device 3 is also provided with an input unit or the like that receives input of various information through a user interface such as a keyboard, mouse, or touch panel.
  • The transmission/reception unit 31 is electrically connected to the ultrasonic endoscope 2 and transmits a transmission signal (pulse signal) composed of high-voltage pulses to the ultrasonic transducer 21 based on a predetermined waveform and transmission timing. The transmission/reception unit 31 also receives the echo signal, an electrical reception signal, from the ultrasonic transducer 21, generates digital radio-frequency (RF) signal data (hereinafter, RF data), and outputs the RF data to the signal processing unit 32 and the frequency analysis unit 35. At this time, the transmission/reception unit 31 may perform amplification correction processing according to the reception depth.
  • When the ultrasonic transducer 21 includes a plurality of elements, the transmission/reception unit 31 has a multi-channel circuit for beam synthesis corresponding to the plurality of elements.
  • The frequency band of the pulse signal transmitted by the transmission/reception unit 31 to the ultrasonic endoscope 2 is preferably a wide band that substantially covers the linear response frequency band of the electroacoustic conversion of the pulse signal into an ultrasonic pulse in the ultrasonic transducer 21. This makes it possible to perform an accurate approximation when executing the frequency spectrum approximation processing described later.
  • The transmission/reception unit 31 also has a function of transmitting various control signals output by the control unit 39 to the ultrasonic endoscope 2, and of receiving various information, including an ID for identification, from the ultrasonic endoscope 2 and transmitting it to the control unit 39.
  • the signal processing unit 32 generates digital B-mode reception data based on the RF data received from the transmission / reception unit 31.
  • The signal processing unit 32 performs known processing such as bandpass filtering, envelope detection, and logarithmic conversion on the RF data to generate the digital B-mode reception data. In the logarithmic conversion, the common logarithm of the RF data divided by the reference voltage Vc is taken and expressed in decibels.
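The logarithmic conversion step can be sketched as follows. This is a minimal illustration, not the device's actual implementation; the reference voltage value and the use of the 20·log10 voltage convention are assumptions.

```python
import numpy as np

V_REF = 1.0  # reference voltage Vc (hypothetical value)

def log_compress(amplitude, v_ref=V_REF):
    """Express a detected amplitude in decibels relative to the
    reference voltage: 20 * log10(|V| / Vc), the usual voltage
    convention for decibel values."""
    amp = np.maximum(np.abs(amplitude), 1e-12)  # guard against log(0)
    return 20.0 * np.log10(amp / v_ref)

out = log_compress(np.array([1.0, 10.0, 0.1]))
print(out)  # [  0.  20. -20.]
```

Log compression maps the wide dynamic range of the envelope onto a range that can be displayed as B-mode brightness.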
  • the signal processing unit 32 outputs the generated B-mode reception data to the B-mode image generation unit 33.
  • The signal processing unit 32 is configured using a general-purpose processor such as a CPU (Central Processing Unit) having arithmetic and control functions, or a dedicated processor such as the various arithmetic circuits that execute specific functions, for example an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit).
  • The B-mode image generation unit 33 generates ultrasonic image data (hereinafter, B-mode image data) in which the amplitude of the echo signal is converted into brightness for display.
  • The B-mode image generation unit 33 performs signal processing on the B-mode reception data received from the signal processing unit 32 using known techniques such as gain processing, contrast processing, and γ correction processing, and generates B-mode image data by thinning out data in the depth direction according to a data step width determined by the display range of the image on the display device 4.
  • The B-mode image is a grayscale image in which the values of R (red), G (green), and B (blue), the variables of the RGB color system adopted as the color space, are made equal.
  • The B-mode image generation unit 33 performs coordinate conversion on the B-mode reception data from the signal processing unit 32, rearranging data on rotational coordinates into data on Cartesian coordinates so that the scanning range is represented spatially correctly, then fills the gaps between the B-mode reception data by interpolation, and thereby generates the B-mode image data.
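The coordinate conversion and gap-filling can be sketched as a nearest-neighbour scan conversion. Real devices typically use finer interpolation (e.g. bilinear); the function name, grid, and angular-tolerance rule below are invented for illustration.

```python
import numpy as np

def scan_convert(lines, angles_rad, depths, grid_x, grid_z):
    """Map B-mode reception data indexed by (sound line, depth) onto a
    Cartesian (x, z) grid, filling gaps by nearest-neighbour lookup.
    `lines` has shape (n_angles, n_depths)."""
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            r = np.hypot(x, z)                # depth along the beam
            th = np.arctan2(x, z)             # angle from the probe axis
            ia = np.argmin(np.abs(angles_rad - th))
            ir = np.argmin(np.abs(depths - r))
            # keep only pixels that fall inside the scanned sector
            if (np.abs(angles_rad - th).min() < np.diff(angles_rad).mean()
                    and depths[0] <= r <= depths[-1]):
                img[iz, ix] = lines[ia, ir]
    return img

# Hypothetical example: an 11-line sector scan of constant echo level.
angles = np.linspace(-0.3, 0.3, 11)           # rad
depths = np.linspace(0.1, 5.0, 50)            # cm
lines = np.full((len(angles), len(depths)), 7.0)
img = scan_convert(lines, angles, depths,
                   np.linspace(-1, 1, 9), np.linspace(0.2, 4.8, 9))
print(img[4, 4])   # a centre pixel inside the sector keeps the echo level
```

Pixels outside the fan-shaped scanning range stay at the background value, which is why the sector test is needed.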
  • the B-mode image generation unit 33 outputs the generated B-mode image data to the display image generation unit 34.
  • the B-mode image generation unit 33 is configured by using a general-purpose processor such as a CPU or a dedicated processor such as an FPGA or ASIC.
  • the display image generation unit 34 generates display image data including the B mode image generated by the B mode image generation unit 33 and the discrimination result of the discrimination unit 36.
  • the frequency analysis unit 35 calculates the frequency spectrum by performing a frequency analysis by performing a fast Fourier transform (FFT) on the RF data generated by the transmission / reception unit 31.
  • the frequency analysis unit 35 is configured by using a general-purpose processor such as a CPU or a dedicated processor such as an FPGA or ASIC.
  • the frequency analysis unit 35 samples the RF data (line data) of each sound line generated by the transmission / reception unit 31 at predetermined time intervals, and generates sample data.
  • the frequency analysis unit 35 calculates frequency spectra at a plurality of locations (data positions) on the RF data by performing FFT processing on the sample data group.
  • The "frequency spectrum" here means the "frequency distribution of intensity at a certain reception depth z" obtained by applying FFT processing to a sample data group.
  • "Intensity" here refers to, for example, any of the following parameters: the voltage of the echo signal, the power of the echo signal, the sound pressure of the ultrasonic echo, the acoustic energy of the ultrasonic echo, their amplitudes or time-integrated values, and combinations thereof.
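The spectrum computation described above can be sketched as a windowed FFT of one sample data group. The sampling frequency, window length, and dB normalization below are assumptions for illustration, not the device's actual parameters; the window functions match those named later (Hamming, Hanning, Blackman).

```python
import numpy as np

FS_MHZ = 50.0       # sampling frequency (assumed)
N_FFT = 256         # samples per analysis window (assumed)

def frequency_spectrum(sample_group, window="hann"):
    """Frequency distribution of intensity for one sample data group:
    window the RF samples, apply the FFT, and express the one-sided
    amplitude spectrum in decibels relative to its peak."""
    w = {"hann": np.hanning, "hamming": np.hamming,
         "blackman": np.blackman}[window](len(sample_group))
    spec = np.fft.rfft(sample_group * w)
    amp = np.maximum(np.abs(spec), 1e-12)
    freqs = np.fft.rfftfreq(len(sample_group), d=1.0 / FS_MHZ)  # MHz
    return freqs, 20.0 * np.log10(amp / amp.max())

# A hypothetical RF sample group: a 5 MHz tone should peak near 5 MHz.
t = np.arange(N_FFT) / FS_MHZ
freqs, spec_db = frequency_spectrum(np.sin(2 * np.pi * 5.0 * t))
print(freqs[np.argmax(spec_db)])   # ≈ 5 MHz
```

Repeating this per sample data group along each sound line yields the spectra at the multiple data positions mentioned above.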
  • the frequency spectrum tends to differ depending on the properties of the living tissue scanned by the ultrasonic waves. This is because the frequency spectrum has a correlation with the size, number density, acoustic impedance, etc. of the scatterer that scatters ultrasonic waves.
  • The "properties of living tissue" here include, for example, malignant tumor (cancer), benign tumor, endocrine tumor, mucinous tumor, normal tissue, cyst, and blood vessel.
  • The discrimination unit 36 reads the trained model created by the learning unit 38 (hereinafter also simply "model") from the storage unit 40, inputs the frequency spectrum calculated by the frequency analysis unit 35 into the model, and discriminates the observation target.
  • the frequency spectrum used here is a frequency spectrum calculated at each position in the analysis target region in the B-mode image.
  • the analysis target region may be a region set by the operator, or a circle inscribed in the tissue detected by contour extraction may be set as the region.
  • the discrimination unit 36 is configured by using a general-purpose processor such as a CPU or a dedicated processor such as an FPGA or ASIC.
  • The model used by the discrimination unit 36 is a model obtained by machine learning, for example a model obtained by deep learning.
  • There are also machine learning methods called the support vector machine and support vector regression.
  • Learning here means calculating the weights, filter coefficients, and offsets of the classifier.
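The patent names deep learning, support vector machines, and support vector regression without fixing an architecture. As an illustration of "calculating the weights and offset of a classifier", here is a minimal logistic-regression sketch that takes whole frequency spectra as input; all data, names, and hyperparameters are invented.

```python
import numpy as np

def train_classifier(spectra, labels, lr=0.1, epochs=500):
    """Learn one weight per frequency bin and an offset so that
    sigmoid(w . spectrum + b) approaches the disease label (0 or 1)."""
    w, b = np.zeros(spectra.shape[1]), 0.0
    for _ in range(epochs):
        s = np.clip(spectra @ w + b, -30, 30)   # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-s))            # predicted probability
        grad = p - labels                       # logistic-loss gradient
        w -= lr * spectra.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def discriminate(spectrum, w, b):
    return 1.0 / (1.0 + np.exp(-(spectrum @ w + b))) > 0.5

# Invented teacher data: disease A spectra slope downward, disease B upward.
rng = np.random.default_rng(0)
f = np.linspace(0, 1, 16)
spec_a = -f + rng.normal(0, 0.05, (20, 16))     # label 0
spec_b = f + rng.normal(0, 0.05, (20, 16))      # label 1
X = np.vstack([spec_a, spec_b])
y = np.array([0] * 20 + [1] * 20)
w, b = train_classifier(X, y)
print(discriminate(-f, w, b), discriminate(f, w, b))
```

The learned (w, b) pair corresponds to the "weights and offset" in the text; a deep-learning model would stack many such weighted layers with nonlinearities.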
  • the communication unit 37 acquires frequency spectrum data and discrimination results (disease name, benign, malignant, etc.) for the frequency spectrum from the discrimination information database 53 of the cloud 5 as learning data.
  • the communication unit 37 is configured by using a general-purpose processor such as a CPU or a dedicated processor such as an FPGA or ASIC.
  • The learning unit 38 creates the above-described discrimination model using the frequency spectrum data acquired by the communication unit 37 and the definitive diagnosis results made by doctors.
  • the model generated by the learning unit 38 is stored in the storage unit 40.
  • The learning unit 38 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as the various arithmetic circuits that execute specific functions, for example an FPGA or an ASIC.
  • the control unit 39 is configured by using a general-purpose processor such as a CPU or a dedicated processor such as an FPGA or ASIC.
  • The control unit 39 comprehensively controls the ultrasonic observation device 3 by reading the information stored in the storage unit 40 and executing various arithmetic processes related to the operating method of the ultrasonic observation device 3. The control unit 39 may also be configured using a CPU or the like shared with the signal processing unit 32, the frequency analysis unit 35, and the discrimination unit 36.
  • the storage unit 40 stores various information necessary for the operation of the ultrasonic observation device 3.
  • The storage unit 40 stores the B-mode image data generated by the B-mode image generation unit 33, the frequency spectra calculated by the frequency analysis unit 35, the trained model generated by the learning unit 38, the discrimination results of the discrimination unit 36, and the like.
  • The storage unit 40 also stores, for example, the information required for amplification processing (the relationship between amplification factor and reception depth) and information on the window functions (Hamming, Hanning, Blackman, etc.) required for frequency analysis processing.
  • the storage unit 40 stores various programs including an operation program for executing the operation method of the ultrasonic observation device 3.
  • the operating program can also be recorded on a computer-readable recording medium such as a hard disk, flash memory, CD-ROM, DVD-ROM, or flexible disk and widely distributed.
  • the various programs described above can also be acquired by downloading them via a communication network.
  • the communication network referred to here is realized by, for example, an existing public line network, LAN (Local Area Network), WAN (Wide Area Network), etc., and may be wired or wireless.
  • The storage unit 40 having the above configuration is realized using a ROM (Read Only Memory) in which various programs and the like are pre-installed, and a RAM (Random Access Memory) that stores calculation parameters and data for each process.
  • FIG. 3 is a diagram showing an example of a frequency spectrum as learning data for each disease.
  • The frequency spectrum data acquired by the communication unit 37 is associated with, for example, diagnosis result data (disease name, benign, malignant, etc.) from a doctor or the like.
  • FIG. 3 shows an example in which a plurality of frequency spectra are collected for each of disease A and disease B.
  • the learning unit 38 extracts the features of each frequency spectrum and creates a model. Then, when there is an input of a new frequency spectrum corresponding to the disease, the learning unit 38 extracts the feature and updates the model.
  • FIG. 4 is a flowchart showing an outline of the learning process performed by the ultrasonic observation device 3 according to the embodiment of the present invention.
  • the communication unit 37 acquires the frequency spectrum data corresponding to the discrimination result from the discrimination information database 53 of the cloud 5 as teacher data (step S11).
  • In step S12, the learning unit 38 performs the learning process, in which features are extracted from the input frequency spectra.
  • For example, the learning unit 38 extracts spectral features from each of the frequency spectra SA and SB shown in FIG. 3.
  • In step S13, the learning unit 38 creates a model based on the features extracted in the learning process.
  • In step S14, the learning unit 38 sets the created model as the discrimination model.
  • the model set in step S14 is stored in the storage unit 40.
  • A discrimination model is created by the process described above.
  • Each time new learning data is input, the learning unit 38 extracts the features of the input frequency spectrum and updates the model.
  • FIG. 5 is a flowchart showing an outline of the discrimination process performed by the ultrasonic observation device 3 having the above configuration.
  • First, the ultrasonic observation device 3 receives, from the ultrasonic endoscope 2, an echo signal as the measurement result of the observation target by the ultrasonic transducer (step S1).
  • the B-mode image generation unit 33 generates B-mode image data based on the echo signal received by the transmission / reception unit 31 and outputs it to the display device 4 (step S2).
  • the display device 4 that has received the display image data including the B-mode image data displays the B-mode image corresponding to the B-mode image data (step S3).
  • the frequency analysis unit 35 calculates the frequency spectrum for all the sample data groups by performing the frequency analysis by the FFT calculation (step S4).
  • Here, the frequency analysis unit 35 performs a plurality of FFT calculations for each sound line in the analysis target region.
  • the result of the FFT calculation is stored in the storage unit 40 together with the reception depth and the reception direction.
  • the frequency analysis unit 35 may perform frequency analysis processing on all the regions where the echo signal is received, or may perform frequency analysis processing only within the set region of interest. Further, the process in step S4 may be executed before the processes in steps S2 and S3, or may be executed at the same time as any of the steps.
  • the discrimination unit 36 uses the frequency spectrum calculated in step S4 and the model created by the learning unit 38 to perform discrimination processing of the echo signal received this time as described above (step S5).
  • the discrimination unit 36 outputs the discrimination result to the display image generation unit 34.
  • the display device 4 displays an image including the discrimination result under the control of the control unit 39 (step S6).
  • On the display device 4, for example, an image in which the B-mode image and the discrimination result are arranged side by side is displayed.
  • FIG. 6 is a diagram showing an example of the discrimination result displayed on the display device of the ultrasonic observation system according to the embodiment of the present invention. Regions A and B are circles inscribed in the outer edges of tissues detected in the B-mode image, and indicate regions whose tissue properties are to be discriminated. The frequency spectra in each of the regions A and B are used for discrimination, and the discrimination results for each region are displayed in the square frames.
  • PDAC stands for "pancreatic ductal adenocarcinoma", and P-NET stands for "pancreatic neuroendocrine tumor". XXX indicates the possibility of another disease.
  • Alternatively, the tissue may be enlarged, and the enlarged view and the probability of the lesion may be displayed in another display area or on another display device.
  • As described above, in the present embodiment, the frequency spectrum calculated by the frequency analysis unit 35 is input into the model created from teacher data to discriminate the observation target. According to the present embodiment, the discrimination process is performed without losing the information possessed by the frequency spectrum, so the characteristics of the observation target obtained from the frequency spectrum can be discriminated with high accuracy.
  • The inputs to the model created by the learning unit 38, and the parameters used by the discrimination unit 36 for discrimination, may include genetic information of an individual or the like in addition to the frequency spectrum.
  • FIG. 7 is a diagram illustrating a process performed by the ultrasonic observation apparatus according to the first modification of the embodiment of the present invention.
  • the configuration of the ultrasonic observation system according to the first modification is the same as the configuration of the ultrasonic observation system according to the embodiment.
  • the processing of the frequency analysis unit 35 is different from that of the above-described embodiment.
  • In the first modification, the frequency analysis unit 35 performs attenuation correction on the calculated frequency spectrum. Specifically, the frequency analysis unit 35 corrects the pre-correction frequency spectrum according to the attenuation amount.
  • The attenuation amount A(f, z) of an ultrasonic wave is the attenuation that occurs while the ultrasonic wave travels back and forth between reception depth 0 and reception depth z, and is defined as the intensity change before and after the round trip. It is empirically known that the attenuation amount A(f, z) is proportional to the frequency in uniform tissue, and it is expressed by the following equation (1):
  • A(f, z) = 2αzf ... (1)
  • Here, α is called the attenuation factor, z (cm) is the reception depth of the ultrasonic wave, and f (MHz) is the frequency. The specific value of the attenuation factor α is determined according to the part of the living body, and its unit is, for example, dB/(cm·MHz). In the present embodiment, the value of the attenuation factor α may also be configured to be changed by user input.
  • Specifically, the frequency analysis unit 35 corrects the frequency spectrum by applying a larger multiplication coefficient as the frequency becomes higher and the depth becomes deeper. For example, performing attenuation correction on the frequency spectrum SC shown in FIG. 7(a) yields the frequency spectrum SC' shown in FIG. 7(b).
  • In the first modification, the teacher data is the frequency spectrum data after attenuation correction.
  • The frequency spectrum in the flowcharts of FIGS. 4 and 5 described above is replaced with the attenuation-corrected frequency spectrum for processing.
  • the frequency analysis unit 35 calculates the frequency spectrum and then performs attenuation correction.
  • the frequency spectrum after attenuation correction is input to the model created from the teacher data to distinguish the observation target.
  • Compared with the conventional case in which the frequency spectrum is approximated to obtain a feature amount for discrimination, the discrimination process is performed without losing the information contained in the frequency spectrum. Therefore, the characteristics of the observation target obtained from the frequency spectrum can be discriminated with high accuracy.
  • Further, the frequency spectrum is attenuation-corrected, and the corrected spectrum is used for learning and discrimination. Since a spectrum that appropriately represents the tissue properties (the size and density of the ultrasonic scatterers) is learned and discriminated, discrimination can be performed with still higher accuracy.
  • FIG. 8 is a diagram showing an example of a frequency spectrum and a B-mode image as learning data for each disease.
  • the frequency spectrum data acquired by the communication unit 37 and the B-mode image data correspond to, for example, a disease determined by a diagnosis by a doctor or the like.
  • FIG. 8 shows an example in which a plurality of frequency spectra and B-mode image data are collected for each of the disease A and the disease B.
  • The frequency spectrum shown in FIG. 8 is the frequency spectrum in a designated area of the B-mode image (e.g., regions Q_A and Q_B).
  • the learning unit 38 extracts the characteristics of the frequency spectrum and the characteristics of the B-mode image data, and creates a model. Then, when there is an input of a new frequency spectrum and B-mode image data associated with the disease, the learning unit 38 extracts the features and updates the model. In the second modification, the model is created by processing in the same manner as the flowchart of FIG. 4 described above.
  • the discrimination process is executed in the same manner as the flowchart of FIG. 5 described above.
  • In the discrimination process of step S5, the frequency spectrum and the B-mode image data are used.
  • the frequency spectrum and the B mode image data are input to the model created from the teacher data to distinguish the observation target.
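The discrimination step above can be sketched as follows. This is a minimal illustration only: a nearest-centroid classifier stands in for the trained model, and all labels, centroids, and data values are hypothetical; the patent does not specify the model architecture.

```python
import numpy as np

# Sketch: the frequency spectrum and the B-mode image data are
# concatenated into one input vector and fed to a "trained model"
# (here a toy nearest-centroid classifier, one centroid per disease).

def make_input(spectrum, b_mode_image):
    """Concatenate spectrum (dB values) and flattened B-mode pixels."""
    return np.concatenate([spectrum, b_mode_image.ravel()])

def discriminate(x, centroids):
    """Return the label of the closest class centroid."""
    labels = list(centroids)
    dists = [np.linalg.norm(x - centroids[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# Toy "trained model" built from hypothetical teacher data.
centroids = {
    "disease A": np.array([0.0, 0.0, 0.0, 10.0]),
    "disease B": np.array([5.0, 5.0, 5.0, 200.0]),
}
x = make_input(np.array([4.0, 6.0, 5.0]), np.array([[190.0]]))
print(discriminate(x, centroids))  # → disease B
```

The point of the sketch is the input construction: because the raw spectrum enters the model directly, no information is discarded by an approximation step before discrimination.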
  • In addition to the frequency spectrum, the B-mode image data is used, and the discrimination process is performed without losing the information contained in the frequency spectrum, compared with the conventional case in which the frequency spectrum is approximated to obtain a feature amount for discrimination. Therefore, the characteristics of the observation target obtained from the frequency spectrum can be discriminated with high accuracy.
  • Further, the result of a doctor observing the B-mode image and making a visual diagnosis can be used for learning and discrimination, so that discrimination can be performed with still higher accuracy.
  • the frequency spectrum used for discrimination in the second modification may be an attenuation-corrected frequency spectrum.
  • FIG. 9 is a block diagram showing a configuration of an ultrasonic observation system including an ultrasonic observation device according to a modification 3 of the embodiment of the present invention.
  • the third modification is different from the above-described embodiment in that the feature amount is calculated from the frequency spectrum.
  • the same components as those according to the above-described embodiment are designated by the same reference numerals.
  • In the third modification, the ultrasonic observation system includes the ultrasonic endoscope 2, an ultrasonic observation device 3A that generates an ultrasonic image based on the echo signal acquired by the ultrasonic endoscope 2, and a display device 4 that displays the ultrasonic image generated by the ultrasonic observation device 3A.
  • the ultrasonic observation device 3A can wirelessly communicate with the databases on the cloud 5 (internal map database 51, personal information database 52, and identification information database 53) (see FIG. 1).
  • The ultrasonic observation device 3A includes a transmission/reception unit 31, a signal processing unit 32, a B-mode image generation unit 33, a display image generation unit 34, a frequency analysis unit 35, a discrimination unit 36, a communication unit 37, a learning unit 38, a control unit 39, a storage unit 40, and a feature amount information generation unit 41.
  • In addition, the ultrasonic observation device 3A is provided with an input unit and the like that receive input of various information, realized by using a user interface such as a keyboard, a mouse, or a touch panel.
  • the feature amount information generation unit 41 calculates the feature amount of the frequency spectrum calculated by the frequency analysis unit 35, for example, in the set area of interest.
  • the feature amount information generation unit 41 calculates the feature amount by approximating the frequency spectrum with a straight line. Attenuation correction may be applied to the frequency spectrum before the approximation process or the regression line obtained by approximation.
  • FIG. 10 is a diagram showing an example of a frequency spectrum and a feature amount image as learning data for each disease.
  • The feature amount information generation unit 41 performs regression analysis of the frequency spectrum in a predetermined frequency band and approximates the frequency spectrum with a linear expression (regression line), thereby calculating feature amounts that characterize the approximated linear expression. For example, for the frequency spectrum S_A shown in FIG. 10, the feature amount information generation unit 41 performs regression analysis in the frequency band F and obtains a regression line L_A by approximating the frequency spectrum S_A with a linear expression.
  • Similarly, the feature amount information generation unit 41 obtains a regression line L_B by approximating the frequency spectrum S_B with a linear expression.
  • The feature amount information generation unit 41 calculates, as feature amounts, the slope a_B and the intercept b_B of the regression line L_B, and the mid-band fit c_B = a_B·f_M + b_B, where f_M is the center frequency of the frequency band F.
  • the slope a of the regression line has a correlation with the size of the ultrasonic scatterer, and generally, the larger the scatterer, the smaller the slope.
  • the intercept b of the regression line has a correlation with the size of the scatterer, the difference in acoustic impedance, the number density (concentration) of the scatterer, and the like. Specifically, the intercept b has a larger value as the scatterer is larger, has a larger value as the difference in acoustic impedance is larger, and has a larger value as the number density of the scatterer is larger.
  • the midband fit c is an indirect parameter derived from the slope a and intercept b, which gives the intensity of the spectrum at the center within a valid frequency band. Therefore, the mid-band fit c has a certain degree of correlation with the brightness of the B-mode image, in addition to the size of the scatterer, the difference in acoustic impedance, and the number density of the scatterer.
  • the feature amount information generation unit 41 may approximate the frequency spectrum with a polynomial of degree 2 or higher by regression analysis.
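The feature-amount calculation described above can be sketched as follows, assuming the mid-band fit is evaluated at the center frequency f_M of the band; the numeric values are illustrative, not taken from the embodiment.

```python
import numpy as np

# Sketch: within the frequency band F the spectrum is approximated
# by a regression line, giving the slope a, the intercept b, and the
# mid-band fit c = a*f_M + b (f_M = center of the band).

def spectrum_features(freqs_mhz, spectrum_db):
    """Return (slope a, intercept b, mid-band fit c) of the regression line."""
    a, b = np.polyfit(freqs_mhz, spectrum_db, 1)  # degree-1 regression
    f_m = (freqs_mhz[0] + freqs_mhz[-1]) / 2.0    # center frequency of band F
    c = a * f_m + b                               # mid-band fit
    return a, b, c

freqs = np.array([2.0, 3.0, 4.0, 5.0, 6.0])  # band F in MHz (illustrative)
spec = -3.0 * freqs + 10.0                   # exactly linear: a = -3, b = 10
a, b, c = spectrum_features(freqs, spec)
print(round(a, 6), round(b, 6), round(c, 6))  # ≈ -3.0 10.0 -2.0
```

A polynomial of degree 2 or higher, as mentioned above, would simply replace the degree argument of the fit.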
  • the feature amount information generation unit 41 generates feature amount image data for displaying the calculated feature amount together with the B mode image in association with the visual information.
  • the feature amount information generation unit 41 is configured by using a general-purpose processor such as a CPU or a dedicated processor such as various arithmetic circuits that execute specific functions such as FPGA and ASIC.
  • the display image generation unit 34 generates display image data by superimposing the visual information related to the feature amount calculated by the feature amount information generation unit 41 on each pixel of the image in the B mode image data.
  • the display image generation unit 34 allocates visual information corresponding to the feature amount of the frequency spectrum.
  • the display image generation unit 34 generates a feature image by associating a hue as visual information with any one of the above-mentioned inclination, intercept, and midband fit, for example.
  • Examples of the visual information related to the feature amount include, besides hue, variables constituting the color space of a predetermined color system, such as saturation, lightness, luminance value, R (red), G (green), and B (blue).
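One possible sketch of assigning visual information to a feature amount is shown below: the feature value is normalized and mapped to a hue, then converted to an RGB triple to be superimposed on the B-mode image. The value range and the particular hue mapping are assumptions for illustration.

```python
import colorsys

# Sketch: normalize a feature value (e.g. the mid-band fit) into [0, 1]
# and map it to a hue, which is converted to 8-bit RGB. The assumed
# range [-50, 0] dB and the blue-to-red mapping are illustrative.

def feature_to_rgb(value, vmin=-50.0, vmax=0.0):
    """Map a feature value to an 8-bit RGB color via the hue channel."""
    t = min(max((value - vmin) / (vmax - vmin), 0.0), 1.0)
    hue = (1.0 - t) * 2.0 / 3.0        # blue (low) -> red (high)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return int(r * 255), int(g * 255), int(b * 255)

print(feature_to_rgb(0.0))    # highest value -> red: (255, 0, 0)
print(feature_to_rgb(-50.0))  # lowest value -> blue: (0, 0, 255)
```

Saturation, lightness, or a single RGB channel could equally carry the feature amount by varying the corresponding argument instead of the hue.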
  • The frequency spectrum data (including the regression line) acquired by the communication unit 37 and the feature amount image data correspond to, for example, diagnosis results (disease name, benign, malignant, etc.) determined by the diagnosis of a doctor or the like.
  • FIG. 10 shows an example in which a plurality of frequency spectra and feature amount image data are collected for each of disease A and disease B.
  • That is, by displaying the visual information related to the feature amount, the feature amount is associated with the diagnosis result determined by the doctor.
  • the learning unit 38 extracts the characteristics of the frequency spectrum and the characteristics of the feature amount image data, and creates a model. Then, when a new frequency spectrum and feature amount image data corresponding to the disease are input, the learning unit 38 extracts the features and updates the model. In the third modification, a model is created by processing in the same manner as the flowchart of FIG. 4 described above.
  • the discrimination process is executed in the same manner as the flowchart of FIG. 5 described above.
  • In the discrimination process of step S5, the frequency spectrum and the feature amount image data are used.
  • the frequency spectrum and the feature amount image data are input to the model created from the teacher data to distinguish the observation target.
  • Compared with the conventional case in which the frequency spectrum is approximated to obtain a feature amount for discrimination, the discrimination process is performed without losing the information contained in the frequency spectrum. Therefore, the characteristics of the observation target obtained from the frequency spectrum can be discriminated with high accuracy.
  • Further, the result of a doctor observing the image in which the visual information related to the feature amount is superimposed on the B-mode image and making a visual diagnosis can be used for learning and discrimination, so that discrimination can be performed with still higher accuracy.
  • an extracorporeal ultrasonic probe that irradiates ultrasonic waves from the body surface of the subject may be applied as the ultrasonic probe.
  • Extracorporeal ultrasound probes are commonly used to observe abdominal organs (liver, gallbladder, bladder), breasts (particularly mammary glands), and thyroid glands.
  • the data for discrimination is stored in the in-hospital server of the hospital where the ultrasonic observation device 3 is arranged.
  • the ultrasonic observation device 3 may be configured to acquire data for discrimination from this in-hospital server.
  • Further, a plurality of mutually different trained models trained using different types of teacher data may be stored in advance, and the discrimination unit may select one of the plurality of trained models and input the data corresponding to the selected trained model.
  • the trained model may be selected by the user by the input unit, or may be selected by the discrimination unit from the input parameters and the like.
  • the frequency spectrum used may be a multi-spectrum.
  • the ultrasonic observation device, the operation method of the ultrasonic observation device, and the operation program of the ultrasonic observation device according to the present invention described above are useful for discriminating the characteristics of the observation target obtained from the frequency spectrum with high accuracy.
  • 1 Ultrasonic observation system
  • 2 Ultrasonic endoscope
  • 3 Ultrasonic observation device
  • 4 Display device
  • 5 Cloud
  • 6 External server
  • 31 Transmission/reception unit
  • 32 Signal processing unit
  • 33 B-mode image generation unit
  • 34 Display image generation unit
  • 35 Frequency analysis unit
  • 36 Discrimination unit
  • 37 Communication unit
  • 38 Learning unit
  • 39 Control unit
  • 40 Storage unit
  • 41 Feature amount information generation unit

Abstract

An ultrasonic observation device according to the present invention is provided with: a reception unit for receiving an echo signal of an ultrasonic wave reflected from an observation subject; a frequency analysis unit for calculating a frequency spectrum by performing frequency analysis by fast Fourier transformation based on the echo signal; and a discrimination unit for discriminating the observation subject by inputting data of the frequency spectrum received by the reception unit to a learned model obtained through learning performed using multiple sets of training data comprising disease information of the observation subject and the frequency spectrum acquired from the observation subject.

Description

Ultrasonic observation device, method for operating ultrasonic observation device, and program for operating ultrasonic observation device
The present invention relates to an ultrasonic observation device for observing a tissue to be observed using ultrasonic waves, a method for operating the ultrasonic observation device, and an operation program for the ultrasonic observation device.
Ultrasound may be applied to observe the characteristics of a biological tissue or material to be observed. Specifically, information on the characteristics of the observation target is acquired by transmitting ultrasonic waves to the observation target and performing predetermined signal processing on the ultrasonic echoes reflected by the observation target. There is also a known technique for generating a feature amount image representing differences in tissue properties in a living tissue by using frequency feature amounts of the ultrasonic waves scattered in the living tissue (see, for example, Patent Document 1). In this technique, a frequency spectrum is calculated by performing frequency analysis through a fast Fourier transform (FFT) operation on the received signal representing the ultrasonic echo, and a feature amount image is generated based on feature amounts extracted by, for example, applying approximation processing to the frequency spectrum.
Japanese Patent No. 5114609
Patent Document 1 discloses a technique that can discriminate tissue properties by approximating the frequency spectrum and calculating feature amounts. However, part of the information in the spectrum is lost by the approximation. Therefore, discrimination based on such reduced feature amounts may affect the accuracy of tissue-property discrimination. In diagnosis in particular, it is required to distinguish between benign and malignant tissues with high accuracy.
The present invention has been made in view of the above, and an object thereof is to provide an ultrasonic observation device capable of discriminating, with high accuracy, the characteristics of an observation target obtained from a frequency spectrum, as well as a method for operating the ultrasonic observation device and an operation program for the ultrasonic observation device.
In order to solve the above-described problems and achieve the object, an ultrasonic observation device according to the present invention includes: a receiving unit that receives an echo signal of an ultrasonic wave reflected by an observation target; a frequency analysis unit that calculates a frequency spectrum by performing frequency analysis by a fast Fourier transform based on the echo signal; and a discrimination unit that discriminates the observation target by inputting data of the frequency spectrum received by the receiving unit to a trained model learned using a plurality of sets of teacher data, each set comprising a frequency spectrum acquired from an observation target and disease information of the observation target.
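The frequency analysis named above can be sketched as follows. This is a minimal illustration of FFT-based spectrum calculation; the sampling rate, window, and signal are assumptions for demonstration, not values from the invention.

```python
import numpy as np

# Sketch: a windowed segment of the received RF echo signal is passed
# through an FFT and the amplitude spectrum is expressed in dB.

def frequency_spectrum(rf_segment, fs_hz):
    """Return (frequencies in MHz, amplitude spectrum in dB)."""
    windowed = rf_segment * np.hanning(len(rf_segment))
    amp = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(rf_segment), d=1.0 / fs_hz)
    spectrum_db = 20.0 * np.log10(amp + 1e-12)  # avoid log of zero
    return freqs / 1e6, spectrum_db

fs = 50e6                          # 50 MHz sampling (illustrative)
t = np.arange(256) / fs
rf = np.sin(2 * np.pi * 5e6 * t)   # simulated 5 MHz echo component
freqs_mhz, spec_db = frequency_spectrum(rf, fs)
peak = freqs_mhz[np.argmax(spec_db)]
print(peak)  # close to 5 MHz
```

The resulting spectrum (frequency vs. dB intensity) is the data structure that would be fed, unapproximated, to the trained model in the discrimination unit.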
Further, in the ultrasonic observation device according to the above invention, the frequency analysis unit performs attenuation correction on the frequency spectrum, the teacher data comprises attenuation-corrected frequency spectra, and the discrimination unit inputs the attenuation-corrected frequency spectrum to the trained model.
Further, the ultrasonic observation device according to the above invention further includes a B-mode image generation unit that generates B-mode image data by converting the amplitude of the echo signal into brightness, the teacher data includes acquired frequency spectra and acquired B-mode image data, and the discrimination unit inputs the frequency spectrum and the B-mode image data to the trained model.
Further, in the ultrasonic observation device according to the above invention, the discrimination unit outputs the discrimination result, together with the associated frequency spectrum, to the outside as teacher data.
Further, the ultrasonic observation device according to the above invention further includes a feature amount information generation unit that calculates a feature amount based on the frequency spectrum calculated by the frequency analysis unit.
Further, the ultrasonic observation device according to the above invention further includes: a B-mode image generation unit that generates B-mode image data by converting the amplitude of the echo signal into brightness; and a display image generation unit that generates display image data by superimposing visual information related to the feature amount on a B-mode image corresponding to the B-mode image data.
Further, in the ultrasonic observation device according to the above invention, the teacher data includes at least two of acquired frequency spectra, acquired B-mode image data, and information on acquired feature amounts, and the discrimination unit inputs data corresponding to the teacher data to the trained model.
Further, in the ultrasonic observation device according to the above invention, the discrimination unit selects one of a plurality of mutually different trained models trained using different types of teacher data, and inputs data corresponding to the selected trained model.
Further, the ultrasonic observation device according to the above invention further includes a storage unit that stores a frequency spectrum and a discrimination result of the frequency spectrum.
Further, in a method for operating an ultrasonic observation device according to the present invention, a receiving unit receives an echo signal of an ultrasonic wave reflected by an observation target, a frequency analysis unit calculates a frequency spectrum by performing frequency analysis by a fast Fourier transform based on the echo signal, and a discrimination unit discriminates the observation target by inputting data of the frequency spectrum received by the receiving unit to a trained model learned using a plurality of sets of teacher data, each set comprising a frequency spectrum acquired from an observation target and disease information of the observation target.
Further, an operation program for an ultrasonic observation device according to the present invention causes the ultrasonic observation device to: receive an echo signal of an ultrasonic wave reflected by an observation target; calculate, by a frequency analysis unit, a frequency spectrum by performing frequency analysis by a fast Fourier transform based on the echo signal; and discriminate, by a discrimination unit, the observation target by inputting data of the frequency spectrum received by the receiving unit to a trained model learned using a plurality of sets of teacher data, each set comprising a frequency spectrum acquired from the observation target and disease information of the observation target.
According to the present invention, the characteristics of the observation target obtained from the frequency spectrum can be discriminated with high accuracy.
FIG. 1 is a schematic view showing the configuration of an ultrasonic observation system including an ultrasonic observation device according to an embodiment of the present invention.
FIG. 2 is a block diagram showing the configuration of an ultrasonic observation system including an ultrasonic observation device according to an embodiment of the present invention.
FIG. 3 is a diagram showing an example of frequency spectra as learning data for each disease.
FIG. 4 is a flowchart showing an outline of the learning process performed by the ultrasonic observation device according to the embodiment of the present invention.
FIG. 5 is a flowchart showing an outline of the discrimination process performed by the ultrasonic observation device according to the embodiment of the present invention.
FIG. 6 is a diagram showing an example of a discrimination result displayed on the display device of the ultrasonic observation system according to the embodiment of the present invention.
FIG. 7 is a diagram illustrating processing performed by the ultrasonic observation device according to Modification 1 of the embodiment of the present invention.
FIG. 8 is a diagram showing an example of frequency spectra and B-mode images as learning data for each disease.
FIG. 9 is a block diagram showing the configuration of an ultrasonic observation system including an ultrasonic observation device according to Modification 3 of the embodiment of the present invention.
FIG. 10 is a diagram showing an example of frequency spectra and feature amount images as learning data for each disease.
Hereinafter, modes for carrying out the present invention (hereinafter referred to as "embodiments") will be described with reference to the accompanying drawings.
(Embodiment)
FIG. 1 is a schematic view showing the configuration of an ultrasonic observation system including an ultrasonic observation device according to an embodiment of the present invention. The ultrasonic observation system 1 shown in the figure includes an ultrasonic endoscope 2 (ultrasonic probe) that transmits ultrasonic waves to a subject to be observed and receives the ultrasonic waves reflected by the subject, an ultrasonic observation device 3 that generates an ultrasonic image based on an ultrasonic signal (echo signal) acquired by the ultrasonic endoscope 2, and a display device 4 that displays the ultrasonic image generated by the ultrasonic observation device 3. The ultrasonic observation device 3 also has a function of wirelessly communicating with databases on a cloud 5 (a body map database 51, a personal information database 52, and a discrimination information database 53). Further, a device other than the ultrasonic observation device 3 (for example, an external server 6) is electrically connected to the cloud 5 by wireless communication. The body map database 51 stores information on the parts of the body observed using the ultrasonic endoscope 2. The personal information database 52 stores, for each subject, examination results other than those of the ultrasonic observation device 3 and diagnosis results of doctors in association with each other. The discrimination information database 53 stores frequency spectra and discrimination results in association with each other. The discrimination result is, for example, whether the tissue is benign or malignant.
The ultrasonic endoscope 2 has an ultrasonic transducer 21 at its distal end, converts an electrical pulse signal received from the ultrasonic observation device 3 into an ultrasonic pulse (acoustic pulse) and irradiates the subject with it, and converts the ultrasonic echo reflected by the subject into an electrical echo signal expressed as a voltage change, which it outputs to the ultrasonic observation device 3. The ultrasonic transducer 21 includes piezoelectric elements arranged one-dimensionally (linearly) or two-dimensionally, and each piezoelectric element transmits and receives ultrasonic waves to and from the subject. The ultrasonic transducer 21 may be a convex transducer, a linear transducer, or a radial transducer.
The ultrasonic endoscope 2 usually has an imaging optical system and an imaging element, and is inserted into the digestive tract (esophagus, stomach, duodenum, large intestine) or respiratory organs (trachea, bronchi) of a subject, making it possible to image the digestive tract, the respiratory organs, and their surrounding organs (pancreas, gallbladder, bile duct, biliary tract, lymph nodes, mediastinal organs, blood vessels, etc.). The ultrasonic endoscope 2 also has a light guide that guides illumination light to be irradiated onto the subject during imaging. The distal end of this light guide reaches the tip of the insertion portion of the ultrasonic endoscope 2 into the subject, while its proximal end is connected to a light source device in the ultrasonic observation device 3 that generates the illumination light. The probe is not limited to the ultrasonic endoscope 2, and an ultrasonic probe without an imaging optical system and an imaging element may be used.
FIG. 2 is a block diagram showing the configuration of an ultrasonic observation system including the ultrasonic observation device 3 according to an embodiment of the present invention. The ultrasonic observation device 3 includes a transmission/reception unit 31, a signal processing unit 32, a B-mode image generation unit 33, a display image generation unit 34, a frequency analysis unit 35, a discrimination unit 36, a communication unit 37, a learning unit 38, a control unit 39, and a storage unit 40. In addition, the ultrasonic observation device 3 is provided with an input unit and the like that receive input of various information through a user interface such as a keyboard, a mouse, or a touch panel.
The transmission/reception unit 31 is electrically connected to the ultrasonic endoscope 2 and transmits a transmission signal (pulse signal) composed of high-voltage pulses to the ultrasonic transducer 21 based on a predetermined waveform and transmission timing. The transmission/reception unit 31 also receives the echo signal, an electrical reception signal, from the ultrasonic transducer 21, generates digital radio-frequency (RF) signal data (hereinafter referred to as RF data), and outputs the generated RF data to the signal processing unit 32 and the frequency analysis unit 35. At this time, the transmission/reception unit 31 may perform amplification correction processing according to the reception depth. When the ultrasonic endoscope 2 is configured to electronically scan an ultrasonic transducer in which a plurality of elements are arranged in an array, the transmission/reception unit 31 has a multi-channel circuit for beam synthesis corresponding to the plurality of elements.
The frequency band of the pulse signal transmitted by the transmission/reception unit 31 to the ultrasonic endoscope 2 is preferably a wide band that substantially covers the linear response frequency band of the electroacoustic conversion of the pulse signal into an ultrasonic pulse in the ultrasonic transducer 21. This makes it possible to obtain an accurate approximation when executing the frequency spectrum approximation processing described later.
The transmission/reception unit 31 also has a function of transmitting the various control signals output by the control unit 39 to the ultrasonic endoscope 2, and of receiving various information, including an identification ID, from the ultrasonic endoscope 2 and transmitting it to the control unit 39.
The signal processing unit 32 generates digital B-mode reception data based on the RF data received from the transmission/reception unit 31, applying known processing such as bandpass filtering, envelope detection, and logarithmic conversion to the RF data. In the logarithmic conversion, the common logarithm of the RF data divided by a reference voltage Vc is taken and expressed as a decibel value. The signal processing unit 32 outputs the generated B-mode reception data to the B-mode image generation unit 33. The signal processing unit 32 is configured using a general-purpose processor such as a CPU (Central Processing Unit) having arithmetic and control functions, or a dedicated processor such as an FPGA (Field Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or other arithmetic circuits that execute specific functions.
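The bandpass filter → envelope detection → logarithmic conversion chain above could be sketched as follows. This is a minimal illustration only, not the device's actual implementation; the filter order, passband, sampling rate, and reference voltage are assumed values not given in the source:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def rf_to_bmode(rf, fs, band=(2e6, 10e6), v_ref=1.0):
    """Bandpass-filter one line of RF data, detect its envelope, and
    log-compress it into decibel values relative to v_ref (Vc).

    rf    : 1-D array of RF samples for one sound line (hypothetical input)
    fs    : sampling frequency in Hz
    band  : passband edges in Hz (assumed values)
    v_ref : reference voltage Vc used in the logarithmic conversion
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, rf)          # bandpass filter
    envelope = np.abs(hilbert(filtered))   # envelope detection
    # common logarithm of (amplitude / Vc), expressed in dB
    return 20.0 * np.log10(envelope / v_ref + 1e-12)

# Example with a synthetic 5 MHz burst sampled at 40 MHz
fs = 40e6
t = np.arange(2048) / fs
rf = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 25e-6) ** 2) / (2 * (5e-6) ** 2))
bmode_line = rf_to_bmode(rf, fs)
```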
The B-mode image generation unit 33 generates ultrasonic image data (hereinafter, B-mode image data) in which the amplitude of the echo signal is converted into luminance for display. The B-mode image generation unit 33 performs signal processing on the B-mode reception data received from the signal processing unit 32 using known techniques such as gain processing, contrast processing, and γ correction processing, and generates the B-mode image data by thinning out data in the depth direction according to a data step width determined by the display range of the image on the display device 4. The B-mode image is a grayscale image in which the values of R (red), G (green), and B (blue), the variables of the RGB color system adopted as the color space, are matched. The B-mode image generation unit 33 applies a coordinate conversion that rearranges data on polar coordinates into data on Cartesian coordinates so that the scanning range is spatially correctly represented in the B-mode reception data from the signal processing unit 32, then fills the gaps between the B-mode reception data by interpolation to generate the B-mode image data. The B-mode image generation unit 33 outputs the generated B-mode image data to the display image generation unit 34. The B-mode image generation unit 33 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as an FPGA or an ASIC.
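The polar-to-Cartesian coordinate conversion with interpolation (scan conversion) described above could be sketched as follows, assuming the sound-line data are arranged as one row per beam angle; the grid size and sector geometry are illustrative assumptions, not values from the source:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def scan_convert(lines, angles, depths, grid=128):
    """Rearrange sound-line data on polar coordinates (angle x depth)
    onto a Cartesian grid, interpolating between neighboring lines.

    lines  : 2-D array, one row per sound line (hypothetical envelope data)
    angles : beam angles in radians, increasing, one per row
    depths : sample depths, increasing, one per column
    """
    # Cartesian grid covering the scanned sector
    x = np.linspace(-depths[-1], depths[-1], grid)
    z = np.linspace(0.0, depths[-1], grid)
    xx, zz = np.meshgrid(x, z)
    r = np.hypot(xx, zz)          # radius of each output pixel
    th = np.arctan2(xx, zz)       # angle of each output pixel
    # Map (angle, radius) back to fractional row/column indices
    ti = np.interp(th, angles, np.arange(len(angles)))
    ri = np.interp(r, depths, np.arange(len(depths)))
    img = map_coordinates(lines, [ti, ri], order=1, cval=0.0)
    # Blank out pixels that fall outside the scanned sector
    img[(th < angles[0]) | (th > angles[-1]) | (r > depths[-1])] = 0.0
    return img

# Example: a uniform sector of 16 lines with 32 samples each
lines = np.ones((16, 32))
angles = np.linspace(-0.5, 0.5, 16)
depths = np.linspace(1.0, 50.0, 32)
img = scan_convert(lines, angles, depths, grid=128)
```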
The display image generation unit 34 generates display image data including the B-mode image generated by the B-mode image generation unit 33 and the discrimination result of the discrimination unit 36.
The frequency analysis unit 35 calculates frequency spectra by applying a fast Fourier transform (FFT) to the RF data generated by the transmission/reception unit 31. The frequency analysis unit 35 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as an FPGA or an ASIC.
The frequency analysis unit 35 samples the RF data (line data) of each sound line generated by the transmission/reception unit 31 at predetermined time intervals to generate sample data. By applying FFT processing to each sample data group, the frequency analysis unit 35 calculates frequency spectra at a plurality of locations (data positions) on the RF data. The "frequency spectrum" here means the "frequency distribution of intensity at a certain reception depth z" obtained by applying FFT processing to a sample data group. The "intensity" here refers to any of parameters such as the voltage of the echo signal, the power of the echo signal, the sound pressure of the ultrasonic echo, or the acoustic energy of the ultrasonic echo, as well as the amplitudes or time-integrated values of these parameters, or combinations thereof.
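Sampling each sound line at fixed intervals and applying a windowed FFT to each sample data group could be sketched as follows. The FFT size, step, and window choice are assumed for illustration (the source later mentions Hamming, Hanning, and Blackman windows as options stored in the storage unit 40):

```python
import numpy as np

def line_spectra(line, fs, n_fft=256, step=128):
    """Split one sound line of RF data into overlapping sample data
    groups and FFT each group, giving the intensity-vs-frequency
    distribution at a series of data positions (reception depths).

    line : 1-D RF data for one sound line (hypothetical input)
    fs   : sampling frequency in Hz
    """
    win = np.hamming(n_fft)                    # window function
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    spectra = []
    for start in range(0, len(line) - n_fft + 1, step):
        group = line[start:start + n_fft] * win      # one sample data group
        power = np.abs(np.fft.rfft(group)) ** 2
        spectra.append(10.0 * np.log10(power + 1e-12))  # intensity in dB
    return freqs, np.array(spectra)  # rows correspond to reception depths

# Example on synthetic noise data
rng = np.random.default_rng(0)
line = rng.normal(size=1024)
freqs, spectra = line_spectra(line, fs=40e6)
```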
In general, when the subject is living tissue, the frequency spectrum tends to differ depending on the properties of the living tissue scanned by the ultrasonic waves. This is because the frequency spectrum is correlated with the size, number density, acoustic impedance, and so on of the scatterers that scatter the ultrasonic waves. The "properties of living tissue" here refer to, for example, malignant tumor (cancer), benign tumor, endocrine tumor, mucinous tumor, normal tissue, cyst, blood vessel, and the like.
The discrimination unit 36 reads the learned model created by the learning unit 38 (hereinafter also simply referred to as the "model") from the storage unit 40 and uses it, inputting the frequency spectra calculated by the frequency analysis unit 35 into the model to perform discrimination. The frequency spectra used here are those calculated at each position within the analysis target region in the B-mode image. The analysis target region may be a region set by the operator, or a circle inscribed in a tissue detected by contour extraction may be set as the region. The discrimination unit 36 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as an FPGA or an ASIC.
The model used by the discrimination unit 36 is a model obtained by machine learning, for example by deep learning. Other applicable methods include the support vector machine and support vector regression. The learning here calculates the weights, filter coefficients, and offsets of the classifier.
The communication unit 37 acquires, as learning data, frequency spectrum data and the discrimination results for those frequency spectra (disease name, benign, malignant, etc.) from the discrimination information database 53 of the cloud 5. The communication unit 37 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as an FPGA or an ASIC.
The learning unit 38 creates the discrimination model described above using the frequency spectrum data acquired by the communication unit 37 and the definitive diagnosis results made by doctors. The model generated by the learning unit 38 is stored in the storage unit 40. The learning unit 38 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as an FPGA, an ASIC, or other arithmetic circuits that execute specific functions.
The control unit 39 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as an FPGA or an ASIC. The control unit 39 performs overall control of the ultrasonic observation device 3 by reading the information stored in the storage unit 40 and executing various arithmetic processes related to the operating method of the ultrasonic observation device 3. The control unit 39 may also be configured using a CPU or the like shared with the signal processing unit 32, the frequency analysis unit 35, and the discrimination unit 36.
The storage unit 40 stores various information necessary for the operation of the ultrasonic observation device 3, including the B-mode image data generated by the B-mode image generation unit 33, the frequency spectra calculated by the frequency analysis unit 35, the learned model generated by the learning unit 38, and the discrimination results of the discrimination unit 36. In addition, the storage unit 40 stores, for example, information necessary for the amplification processing (the relationship between amplification factor and reception depth) and information on the window functions (Hamming, Hanning, Blackman, etc.) necessary for the frequency analysis processing.
The storage unit 40 also stores various programs, including an operating program for executing the operating method of the ultrasonic observation device 3. The operating program can be recorded on a computer-readable recording medium such as a hard disk, flash memory, CD-ROM, DVD-ROM, or flexible disk and widely distributed. The various programs described above can also be acquired by downloading via a communication network. The communication network here is realized by, for example, an existing public line network, a LAN (Local Area Network), or a WAN (Wide Area Network), and may be wired or wireless.
The storage unit 40 having the above configuration is realized using a ROM (Read Only Memory) in which the various programs and the like are pre-installed, a RAM (Random Access Memory) that stores the calculation parameters and data of each process, and the like.
Here, the model creation processing of the learning unit 38 will be described with reference to FIGS. 3 and 4. FIG. 3 is a diagram showing an example of frequency spectra as learning data for each disease. The frequency spectrum data acquired by the communication unit 37 is associated with, for example, the diagnosis results (disease name, benign, malignant, etc.) of a doctor or the like. FIG. 3 shows an example in which a plurality of frequency spectra are collected for each of disease A and disease B. The learning unit 38 extracts the features of each frequency spectrum and creates a model. Then, each time a new frequency spectrum associated with a disease is input, the learning unit 38 extracts its features and updates the model.
FIG. 4 is a flowchart showing an outline of the learning processing performed by the ultrasonic observation device 3 according to the embodiment of the present invention. First, the communication unit 37 acquires, as teacher data, frequency spectrum data associated with discrimination results from the discrimination information database 53 of the cloud 5 (step S11).
In step S12, the learning unit 38 performs the learning processing, in which features are extracted from the input frequency spectra. For example, the learning unit 38 extracts spectral features from each of the frequency spectra SA and SB shown in FIG. 3.
In step S13, the learning unit 38 creates a model based on the features extracted by the learning processing.
In step S14, the learning unit 38 sets the created model as the model for discrimination. The model set in step S14 is stored in the storage unit 40.
The model for discrimination is created by the processing described above. Each time a new frequency spectrum is input, the learning unit 38 extracts the features of the input frequency spectrum and updates the model.
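As one hypothetical realization of the learning and discrimination steps above (the source names the support vector machine among the applicable methods), a classifier could be fit directly on labeled spectra with scikit-learn. The spectra, labels, and shapes below are invented for illustration only, not taken from the source:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical teacher data: dB-valued spectra (129 frequency bins each)
# associated with confirmed diagnoses "disease A" and "disease B".
spectra_a = rng.normal(-40, 2, size=(30, 129)) - np.linspace(0, 10, 129)
spectra_b = rng.normal(-40, 2, size=(30, 129)) - np.linspace(0, 25, 129)
X = np.vstack([spectra_a, spectra_b])
y = np.array(["disease A"] * 30 + ["disease B"] * 30)

# Learning step: fit the classifier on the raw spectra themselves,
# without first collapsing each spectrum to a few approximation features.
model = SVC(kernel="rbf", gamma="scale")
model.fit(X, y)

# Discrimination step: classify a newly acquired spectrum.
new_spectrum = rng.normal(-40, 2, size=129) - np.linspace(0, 24, 129)
result = model.predict(new_spectrum.reshape(1, -1))[0]
```

Feeding the model whole spectra rather than summary features mirrors the embodiment's point that no spectral information is discarded before discrimination.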
FIG. 5 is a flowchart showing an outline of the discrimination processing performed by the ultrasonic observation device 3 having the above configuration. First, the ultrasonic observation device 3 receives, from the ultrasonic endoscope 2, an echo signal as the measurement result of the observation target by the ultrasonic transducer (step S1).
Next, the B-mode image generation unit 33 generates B-mode image data based on the echo signal received by the transmission/reception unit 31 and outputs it to the display device 4 (step S2). The display device 4, having received the display image data including the B-mode image data, displays the B-mode image corresponding to that B-mode image data (step S3).
After that, the frequency analysis unit 35 calculates the frequency spectra for all the sample data groups by performing frequency analysis using the FFT (step S4). The frequency analysis unit 35 performs the FFT a plurality of times for each sound line in the analysis target region, and the FFT results are stored in the storage unit 40 together with the reception depth and reception direction.
In step S4, the frequency analysis unit 35 may perform the frequency analysis processing on the entire region in which the echo signal was received, or only within a set region of interest.
The processing of step S4 may also be executed before the processing of steps S2 and S3, or simultaneously with either of those steps.
Next, the discrimination unit 36 performs the discrimination processing on the echo signal received this time, as described above, using the frequency spectra calculated in step S4 and the model created by the learning unit 38 (step S5). The discrimination unit 36 outputs the discrimination result to the display image generation unit 34.
After that, the display device 4 displays an image including the discrimination result under the control of the control unit 39 (step S6). The display device 4 displays, for example, an image in which the B-mode image and the discrimination result are arranged side by side. FIG. 6 is a diagram showing an example of the discrimination result displayed on the display device of the ultrasonic observation system according to the embodiment of the present invention. Regions A and B are circles inscribed in the outer edges of tissues detected in the B-mode image, and indicate the regions whose tissue properties are to be differentiated. The frequency spectra in each of regions A and B are used for the discrimination, and the discrimination result for each region is displayed in its square frame. Here, "PDAC" stands for pancreatic ductal adenocarcinoma and "P-NET" for pancreatic neuroendocrine tumor, while "XXX" indicates the possibility of another disease. By quantifying and displaying each tissue property based on the learning result, the tissue properties can be used as auxiliary information to assist the doctor's differential diagnosis.
When a tissue is differentiated as a lesion, that tissue may be enlarged, and the enlarged view and the probability of the lesion may be displayed in another display area or on another display device.
In the embodiment described above, the observation target is differentiated by inputting the frequency spectrum calculated by the frequency analysis unit 35 into a model created from teacher data. According to the present embodiment, compared with the conventional approach of approximating the frequency spectrum to obtain feature amounts for discrimination, the discrimination processing is performed without discarding the information contained in the frequency spectrum, so the characteristics of the observation target obtained from the frequency spectrum can be differentiated with high accuracy.
In the embodiment described above, the model created by the learning unit 38 and the parameters used by the discrimination unit 36 for discrimination may include, in addition to the frequency spectrum, an individual's genetic information or the like.
(Modification 1)
Next, Modification 1 of the embodiment of the present invention will be described. FIG. 7 is a diagram illustrating the processing performed by the ultrasonic observation device according to Modification 1 of the embodiment of the present invention. The configuration of the ultrasonic observation system according to Modification 1 is the same as that of the embodiment; Modification 1 differs from the embodiment described above in the processing of the frequency analysis unit 35.
The frequency analysis unit 35 performs attenuation correction, according to the attenuation factor, on the calculated frequency spectrum. In general, the attenuation amount A(f, z) of an ultrasonic wave is the attenuation that occurs while the ultrasonic wave makes a round trip between reception depth 0 and reception depth z, and is defined as the intensity change before and after the round trip. It is empirically known that, in uniform tissue, the attenuation amount A(f, z) is proportional to frequency, and it is expressed by the following equation (1):
  A(f, z) = 2αzf  ...(1)
Here, α is called the attenuation factor, z (cm) is the reception depth of the ultrasonic wave, and f (MHz) is the frequency. When the observation target is a living body, the specific value of the attenuation factor α is determined according to the part of the body; its unit is, for example, dB/(cm·MHz). In the present embodiment, the value of the attenuation factor α may also be made changeable by user input.
According to equation (1), the frequency analysis unit 35 corrects the frequency spectrum by multiplying it by a coefficient that becomes larger as the frequency becomes higher and the reception depth becomes deeper. For example, applying attenuation correction to the frequency spectrum SC shown in FIG. 7(a) yields the frequency spectrum SC' shown in FIG. 7(b).
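For a spectrum already expressed in decibels, applying the round-trip correction of equation (1) amounts to adding 2αzf dB back at each frequency. A minimal sketch follows; the attenuation factor value 0.6 dB/(cm·MHz) is an assumed placeholder, not a value given in the source:

```python
import numpy as np

def attenuation_correct(spectrum_db, freqs_hz, depth_cm, alpha=0.6):
    """Add back the round-trip attenuation A(f, z) = 2*alpha*z*f to a
    dB-valued spectrum measured at reception depth z (cm).

    alpha : attenuation factor in dB/(cm*MHz), assumed value
    """
    f_mhz = np.asarray(freqs_hz) / 1e6
    correction_db = 2.0 * alpha * depth_cm * f_mhz   # equation (1)
    return np.asarray(spectrum_db) + correction_db

# Example: at 2 cm depth with alpha = 0.5, a flat 0 dB spectrum gains
# 0 dB at 0 MHz, 10 dB at 5 MHz, and 20 dB at 10 MHz.
corrected = attenuation_correct(np.zeros(3), np.array([0.0, 5e6, 10e6]),
                                depth_cm=2.0, alpha=0.5)
```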
In the teacher data according to Modification 1, the frequency spectra are the frequency spectrum data after attenuation correction. In Modification 1, the processing follows the flowcharts of FIGS. 4 and 5 described above, with each frequency spectrum read as the attenuation-corrected frequency spectrum. In the frequency analysis processing of step S4, the frequency analysis unit 35 calculates the frequency spectrum and then performs the attenuation correction.
In Modification 1 described above, the observation target is differentiated by inputting the attenuation-corrected frequency spectrum into a model created from teacher data. According to Modification 1, as in the embodiment, compared with the conventional approach of approximating the frequency spectrum to obtain feature amounts for discrimination, the discrimination processing is performed without discarding the information contained in the frequency spectrum, so the characteristics of the observation target obtained from the frequency spectrum can be differentiated with high accuracy.
Furthermore, because Modification 1 attenuation-corrects the frequency spectrum and uses the corrected spectrum, spectra that appropriately represent the tissue properties (the size and density of the ultrasonic scatterers) can be used for learning and discrimination, enabling even higher discrimination accuracy.
(Modification 2)
Next, Modification 2 of the embodiment of the present invention will be described. The configuration of the ultrasonic observation system according to Modification 2 is the same as that of the embodiment. Modification 2 differs from the embodiment described above in the learning data (teacher data) and the discrimination data: in Modification 2, the frequency spectra described above and B-mode image data are both used as teacher data and as discrimination data.
FIG. 8 is a diagram showing an example of frequency spectra and B-mode images as learning data for each disease. The frequency spectrum data and B-mode image data acquired by the communication unit 37 are associated with, for example, a disease determined by the diagnosis of a doctor or the like. FIG. 8 shows an example in which a plurality of frequency spectra and B-mode image data are collected for each of disease A and disease B. The frequency spectra stored in FIG. 8 are those calculated in designated regions (e.g., regions QA and QB) in the B-mode images.
The learning unit 38 extracts the features of the frequency spectra and the features of the B-mode image data, and creates a model. Then, each time a new frequency spectrum and B-mode image data associated with a disease are input, the learning unit 38 extracts their features and updates the model. In Modification 2, the model is created by the same processing as in the flowchart of FIG. 4 described above.
In Modification 2, the discrimination processing is also executed in the same manner as in the flowchart of FIG. 5 described above, except that the discrimination processing of step S5 uses both the frequency spectrum and the B-mode image data.
In Modification 2 described above, the observation target is differentiated by inputting the frequency spectrum and the B-mode image data into a model created from teacher data. According to Modification 2, as in the embodiment, the B-mode image data is used in addition, and compared with the conventional approach of approximating the frequency spectrum to obtain feature amounts for discrimination, the discrimination processing is performed without discarding the information contained in the frequency spectrum, so the characteristics of the observation target obtained from the frequency spectrum can be differentiated with high accuracy.
Furthermore, because Modification 2 uses B-mode image data in addition to the frequency spectrum, the results of doctors visually diagnosing B-mode images can be used for learning and discrimination, enabling even higher discrimination accuracy.
In Modification 2, the frequency spectrum used for discrimination may also be an attenuation-corrected frequency spectrum.
(Modification 3)
Next, Modification 3 of the embodiment of the present invention will be described. FIG. 9 is a block diagram showing the configuration of an ultrasonic observation system including an ultrasonic observation device according to Modification 3 of the embodiment of the present invention. Modification 3 differs from the embodiment described above in that feature amounts are calculated from the frequency spectrum. Components identical to those of the embodiment described above are given the same reference numerals.
The ultrasonic observation system according to Modification 3 includes the ultrasonic endoscope 2, an ultrasonic observation device 3A that generates ultrasonic images based on the echo signals acquired by the ultrasonic endoscope 2, and the display device 4 that displays the ultrasonic images generated by the ultrasonic observation device 3A. The ultrasonic observation device 3A can also communicate wirelessly with the databases on the cloud 5 (the internal map database 51, the personal information database 52, and the discrimination information database 53) (see FIG. 1).
The ultrasonic observation device 3A includes the transmission/reception unit 31, the signal processing unit 32, the B-mode image generation unit 33, the display image generation unit 34, the frequency analysis unit 35, the discrimination unit 36, the communication unit 37, the learning unit 38, the control unit 39, the storage unit 40, and a feature amount information generation unit 41. In addition to these, the ultrasonic observation device 3A is provided with an input unit or the like that is realized using a user interface such as a keyboard, a mouse, or a touch panel and receives input of various information.
 The feature amount information generation unit 41 calculates a feature amount of the frequency spectrum calculated by the frequency analysis unit 35, for example within a set region of interest. The feature amount information generation unit 41 calculates the feature amount by approximating the frequency spectrum with a straight line. Attenuation correction may be applied to the frequency spectrum before the approximation process, or to the regression line obtained by the approximation.
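The passage above leaves the form of the attenuation correction unspecified. A common approach in quantitative ultrasound, shown here only as an illustrative sketch (the coefficient `alpha` and the linear-in-frequency loss model are assumptions, not taken from this application), is to add back a round-trip loss proportional to depth and frequency:

```python
import numpy as np

def attenuation_correct(spectrum_db, freqs_mhz, depth_cm, alpha=0.5):
    """Compensate a power spectrum (in dB) for frequency-dependent
    tissue attenuation.  alpha is an assumed attenuation coefficient in
    dB/(cm*MHz); the round-trip path length is 2 * depth_cm."""
    correction_db = 2.0 * alpha * depth_cm * freqs_mhz  # dB lost on the way
    return spectrum_db + correction_db
```

With `alpha = 0.5` dB/(cm·MHz) and a depth of 1 cm, a 2 MHz component is boosted by 2 dB; higher frequencies receive proportionally larger corrections.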
 FIG. 10 is a diagram showing an example of frequency spectra and feature amount images, as learning data, for each disease. The feature amount information generation unit 41 performs regression analysis of the frequency spectrum in a predetermined frequency band and approximates the frequency spectrum with a linear expression (regression line), thereby calculating feature amounts that characterize the approximated linear expression. For example, in the case of the frequency spectrum SA shown in FIG. 10, the feature amount information generation unit 41 performs regression analysis in the frequency band F and approximates the frequency spectrum SA with a linear expression to obtain a regression line LA. The feature amount information generation unit 41 calculates, as feature amounts, the slope aA and the intercept bA of the regression line LA, and the mid-band fit cA = aA·fM + bA, which is the value on the regression line at the center frequency fM = (fL + fH)/2 of the frequency band F. Similarly, in the case of the frequency spectrum SB shown in FIG. 10, the feature amount information generation unit 41 approximates the frequency spectrum SB with a linear expression to obtain a regression line LB, and calculates the slope aB, the intercept bB, and the mid-band fit cB = aB·fM + bB as feature amounts.
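The slope, intercept, and mid-band fit described above can be computed with an ordinary least-squares line fit over the band F. The sketch below (the function name and the use of NumPy are our own; the application does not prescribe an implementation) restricts the fit to [fL, fH] and evaluates the regression line at its center frequency fM:

```python
import numpy as np

def spectral_features(freqs, spectrum_db, f_low, f_high):
    """Fit a regression line to the spectrum over [f_low, f_high] and
    return (slope a, intercept b, mid-band fit c = a*fM + b)."""
    band = (freqs >= f_low) & (freqs <= f_high)
    a, b = np.polyfit(freqs[band], spectrum_db[band], deg=1)  # line fit
    f_mid = (f_low + f_high) / 2.0      # center frequency fM of band F
    c = a * f_mid + b                   # value on the regression line at fM
    return a, b, c
```

For a spectrum that is exactly linear in the band, the fit recovers the line's slope and intercept, and c is simply the line evaluated at the band center.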
 Of the three pre-correction feature amounts described above, the slope a of the regression line has a correlation with the size of the ultrasonic scatterer; in general, the larger the scatterer, the smaller the slope. The intercept b of the regression line has a correlation with the size of the scatterer, the difference in acoustic impedance, the number density (concentration) of the scatterers, and the like. Specifically, the intercept b has a larger value as the scatterer is larger, as the difference in acoustic impedance is larger, and as the number density of the scatterers is larger. The mid-band fit c is an indirect parameter derived from the slope a and the intercept b, and gives the spectral intensity at the center of the effective frequency band. The mid-band fit c therefore has a certain degree of correlation with the brightness of the B-mode image, in addition to the size of the scatterer, the difference in acoustic impedance, and the number density of the scatterers. Note that the feature amount information generation unit 41 may approximate the frequency spectrum with a polynomial of degree two or higher by regression analysis.
 The feature amount information generation unit 41 generates feature amount image data for displaying the calculated feature amounts, associated with visual information, together with the B-mode image. The feature amount information generation unit 41 is configured using a general-purpose processor such as a CPU, or a dedicated processor such as various arithmetic circuits that execute specific functions, such as an FPGA or an ASIC.
 The display image generation unit 34 generates display image data by superimposing the visual information related to the feature amounts calculated by the feature amount information generation unit 41 on each pixel of the image in the B-mode image data. The display image generation unit 34 assigns visual information corresponding to the feature amount of the frequency spectrum. For example, the display image generation unit 34 generates a feature amount image by associating a hue, as visual information, with any one of the slope, the intercept, and the mid-band fit described above. Besides hue, examples of visual information related to a feature amount include variables of a color space constituting a predetermined color system, such as saturation, lightness, luminance value, R (red), G (green), and B (blue).
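As a rough illustration of assigning hue to a feature amount, the sketch below normalizes a per-pixel feature map and builds an HSV image whose value channel keeps the B-mode brightness. The blue-to-red mapping and the normalization range are hypothetical choices, not specified in this application:

```python
import numpy as np

def feature_to_hue_overlay(bmode_gray, feature_map, f_min, f_max):
    """Map a per-pixel feature value to hue and overlay it on a
    grayscale B-mode image, producing an HSV image.  Hypothetical
    mapping: low feature -> blue (hue 240 deg), high feature -> red."""
    norm = np.clip((feature_map - f_min) / (f_max - f_min), 0.0, 1.0)
    hsv = np.empty(bmode_gray.shape + (3,), dtype=float)
    hsv[..., 0] = (1.0 - norm) * 240.0 / 360.0  # hue channel (0..1 scale)
    hsv[..., 1] = 1.0                           # full saturation
    hsv[..., 2] = bmode_gray                    # keep B-mode brightness
    return hsv
```

Keeping the B-mode luminance in the value channel preserves the anatomical context while the hue encodes the feature amount, matching the superimposition described above.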
 The frequency spectrum data (including the regression lines) and the feature amount image data acquired by the communication unit 37 are associated with, for example, diagnosis results (disease name, benign, malignant, etc.) determined by the diagnosis of a doctor or the like. FIG. 10 shows an example in which a plurality of frequency spectra and feature amount image data are collected for each of disease A and disease B. The diagnosis result associated with a feature amount is the result of a diagnosis made by a doctor viewing the display of the visual information related to the feature amount.
 The learning unit 38 extracts the features of the frequency spectra and the features of the feature amount image data, and creates a model. Then, when a new frequency spectrum and new feature amount image data associated with a disease are input, the learning unit 38 extracts their features and updates the model. In Modification 3, processing is performed in the same manner as in the flowchart of FIG. 4 described above, and a model is created.
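The application does not disclose the internal structure of the model that the learning unit 38 creates and updates. Purely as a stand-in for that create/update/discriminate cycle, the sketch below maintains a per-disease mean spectrum and classifies a new spectrum by nearest centroid; any practical system would use a far richer model, and all names here are our own:

```python
import numpy as np

class SpectrumModel:
    """Minimal sketch of the learn/update cycle: keep a running
    per-disease mean feature vector and discriminate a new spectrum
    by nearest centroid.  Illustrative only."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, spectrum, label):
        # incorporate a newly labeled spectrum into the model
        self.sums[label] = self.sums.get(label, 0.0) + np.asarray(spectrum, float)
        self.counts[label] = self.counts.get(label, 0) + 1

    def discriminate(self, spectrum):
        # return the label whose mean spectrum is closest
        spectrum = np.asarray(spectrum, float)
        centroids = {k: self.sums[k] / self.counts[k] for k in self.sums}
        return min(centroids, key=lambda k: np.linalg.norm(spectrum - centroids[k]))
```

The `update` method mirrors the incremental behavior described above: each new labeled example refines the stored model rather than requiring retraining from scratch.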
 Further, in Modification 3, discrimination processing is executed in the same manner as in the flowchart of FIG. 5 described above. In the discrimination processing of step S5, the frequency spectrum and the feature amount image data are used.
 In Modification 3 described above, an observation target is discriminated by inputting the frequency spectrum and the feature amount image data into a model created from teacher data. According to Modification 3, as in the embodiment, the discrimination processing is performed without losing the information contained in the frequency spectrum, unlike the conventional case in which the frequency spectrum is approximated to obtain feature amounts for discrimination; therefore, the characteristics of the observation target obtained from the frequency spectrum can be discriminated with high accuracy.
 Further, since Modification 3 uses the feature amount image data in addition to the frequency spectrum, the result of a doctor visually diagnosing an image in which the visual information related to the feature amounts is superimposed on the B-mode image can be used for learning and discrimination, enabling discrimination with even higher accuracy.
 Although embodiments for carrying out the present invention have been described above, the present invention should not be limited only to the above-described embodiments, and may include various embodiments not described herein. In the first and second embodiments described above, an extracorporeal ultrasonic probe that irradiates ultrasonic waves from the body surface of the subject may be applied as the ultrasonic probe. Extracorporeal ultrasonic probes are typically used to observe abdominal organs (liver, gallbladder, bladder), breasts (particularly the mammary glands), and the thyroid gland.
 In the above-described embodiment, an example of acquiring data for discrimination from a database on the cloud has been described; however, the data for discrimination may be stored, for example, in an in-hospital server of the hospital in which the ultrasonic observation device 3 is installed, and the ultrasonic observation device 3 may be configured to acquire the data for discrimination from this in-hospital server.
 Further, in the above-described embodiment, a plurality of mutually different trained models trained using different types of teacher data may be stored in advance, and the discrimination unit may select one of the plurality of trained models and input the data corresponding to the selected trained model. In this case, the trained model may be selected by the user via the input unit, or the discrimination unit may select it based on input parameters or the like. Further, in the above-described embodiment, the frequency spectrum used may be a multi-spectrum.
 The ultrasonic observation device, the method for operating the ultrasonic observation device, and the operating program for the ultrasonic observation device according to the present invention described above are useful for discriminating, with high accuracy, the characteristics of an observation target obtained from a frequency spectrum.
 1 Ultrasonic observation system
 2 Ultrasonic endoscope
 3 Ultrasonic observation device
 4 Display device
 5 Cloud
 6 External server
 31 Transmission/reception unit
 32 Signal processing unit
 33 B-mode image generation unit
 34 Display image generation unit
 35 Frequency analysis unit
 36 Discrimination unit
 37 Communication unit
 38 Learning unit
 39 Control unit
 40 Storage unit
 41 Feature amount information generation unit

Claims (11)

  1.  An ultrasonic observation device comprising:
     a receiving unit that receives an echo signal of ultrasonic waves reflected by an observation target;
     a frequency analysis unit that performs frequency analysis based on the echo signal to calculate a frequency spectrum; and
     a discrimination unit that discriminates the observation target by inputting the data of the frequency spectrum received by the receiving unit into a trained model trained using a plurality of pieces of teacher data, each consisting of a frequency spectrum acquired from an observation target and disease information of the observation target.
  2.  The ultrasonic observation device according to claim 1, wherein
     the frequency analysis unit applies attenuation correction to the frequency spectrum,
     the teacher data consists of attenuation-corrected frequency spectra, and
     the discrimination unit inputs the attenuation-corrected frequency spectrum into the trained model.
  3.  The ultrasonic observation device according to claim 1, further comprising
     a B-mode image generation unit that generates B-mode image data in which the amplitude of the echo signal is converted into luminance, wherein
     the teacher data includes acquired frequency spectra and acquired B-mode image data, and
     the discrimination unit inputs the frequency spectrum and the B-mode image data into the trained model.
  4.  The ultrasonic observation device according to claim 1, wherein the discrimination unit outputs the discrimination result, together with the associated frequency spectrum, to the outside as the teacher data.
  5.  The ultrasonic observation device according to claim 1, further comprising a feature amount information generation unit that calculates a feature amount based on the frequency spectrum calculated by the frequency analysis unit.
  6.  The ultrasonic observation device according to claim 5, further comprising:
     a B-mode image generation unit that generates B-mode image data in which the amplitude of the echo signal is converted into luminance; and
     a display image generation unit that generates display image data by superimposing visual information related to the feature amount on a B-mode image corresponding to the B-mode image data.
  7.  The ultrasonic observation device according to claim 6, wherein
     the teacher data includes at least two of an acquired frequency spectrum, acquired B-mode image data, and information on an acquired feature amount, and
     the discrimination unit inputs the data corresponding to the teacher data into the trained model.
  8.  The ultrasonic observation device according to claim 1, wherein the discrimination unit selects one of a plurality of mutually different trained models trained using different types of teacher data, and inputs the data corresponding to the selected trained model.
  9.  The ultrasonic observation device according to claim 1, further comprising a storage unit that stores a frequency spectrum and a discrimination result of the frequency spectrum.
  10.  A method for operating an ultrasonic observation device, comprising:
     receiving, by a receiving unit, an echo signal of ultrasonic waves reflected by an observation target;
     performing, by a frequency analysis unit, frequency analysis by a fast Fourier transform based on the echo signal to calculate a frequency spectrum; and
     discriminating, by a discrimination unit, the observation target by inputting the data of the frequency spectrum received by the receiving unit into a trained model trained using a plurality of pieces of teacher data, each consisting of a frequency spectrum acquired from an observation target and disease information of the observation target.
  11.  An operating program for an ultrasonic observation device, the program causing the ultrasonic observation device to execute:
     receiving, by a receiving unit, an echo signal of ultrasonic waves reflected by an observation target;
     performing, by a frequency analysis unit, frequency analysis by a fast Fourier transform based on the echo signal to calculate a frequency spectrum; and
     discriminating, by a discrimination unit, the observation target by inputting the data of the frequency spectrum received by the receiving unit into a trained model trained using a plurality of pieces of teacher data, each consisting of a frequency spectrum acquired from an observation target and disease information of the observation target.
PCT/JP2020/003245 2020-01-29 2020-01-29 Ultrasonic observation device, method for operating ultrasonic observation device, and program for operating ultrasonic observation device WO2021152745A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/003245 WO2021152745A1 (en) 2020-01-29 2020-01-29 Ultrasonic observation device, method for operating ultrasonic observation device, and program for operating ultrasonic observation device


Publications (1)

Publication Number Publication Date
WO2021152745A1 true WO2021152745A1 (en) 2021-08-05

Family

ID=77078762



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08293025A (en) * 1995-04-20 1996-11-05 Olympus Optical Co Ltd Image sorting device
JP2010503902A (en) * 2006-09-12 2010-02-04 ボストン サイエンティフィック リミテッド System and method for generating individual classifiers
WO2012063976A1 (en) * 2010-11-11 2012-05-18 オリンパスメディカルシステムズ株式会社 Ultrasound diagnostic device, operation method of ultrasound diagnostic device, and operation program for ultrasound diagnostic device
CN103479398A (en) * 2013-09-16 2014-01-01 华南理工大学 Method of detecting hepatic tissue microstructure based on ultrasonic radio frequency flow analysis
JP2016168310A (en) * 2015-03-15 2016-09-23 フィンガルリンク株式会社 Intravascular foreign substance fluoroscopic apparatus and intravascular foreign substance fluoroscopic method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20917168; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20917168; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP