US10426444B2 - Ultrasonic diagnosis apparatus, medical image processing apparatus and medical image processing method - Google Patents
- Publication number
- US10426444B2 (application US14/732,251)
- Authority
- US
- United States
- Prior art keywords
- calibration information
- image data
- ultrasonic
- scale mark
- image
- Prior art date
- Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion)
Classifications
- A—HUMAN NECESSITIES; A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE; A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/06—Measuring blood flow
- A61B8/13—Tomography; A61B8/14—Echo-tomography
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/461—Displaying means of special interest
- A61B8/463—Displaying multiple images, or images and diagnostic data, on one display
- A61B8/467—Characterised by special input means
- A61B8/48—Diagnostic techniques; A61B8/488—Diagnostic techniques involving Doppler signals
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves; A61B8/5207—Involving processing of raw data to produce diagnostic data, e.g. for generating an image
- A61B8/58—Testing, adjusting or calibrating the diagnostic device; A61B8/585—Automatic set-up of the device
Definitions
- Embodiments of the present invention relate to an ultrasonic diagnosis apparatus, a medical image processing apparatus and a medical image processing method.
- An ultrasonic diagnosis apparatus is a diagnosis apparatus which displays an image of intravital information. Compared with other image diagnosis apparatuses such as an X-ray diagnosis apparatus and an X-ray computed tomography apparatus, the ultrasonic diagnosis apparatus is inexpensive and involves no radiation exposure, and is useful for real-time observation in a non-invasive manner. The range of applications of the ultrasonic diagnosis apparatus is wide: it is applied to diagnosis of circulatory organs such as the heart, abdominal organs such as the liver and kidney, peripheral blood vessels, obstetrics and gynecology, and breast cancer.
- The ultrasonic diagnosis apparatus includes a function of storing (saving) the image data, which is displayed on the screen of the monitor, in a database as still images or as a moving picture.
- Calibration information (information relating to a living body, and setting information of the system) is set as additional information for the image data stored in the database, for occasions of measuring an intravital conversion distance or a blood flow velocity by using this image data.
- When image data for which the calibration information is set is displayed, each location on the ultrasonic image can be measured.
- However, when the calibration information is set, the amount of data increases.
- For this reason, image data may be stored in the database in a state in which the calibration information is not set.
- The object of the present invention is to provide an ultrasonic diagnosis apparatus and a medical image processing apparatus which can automatically generate calibration information from image data.
- FIG. 1 is a view illustrating a block configuration of an ultrasonic diagnosis apparatus according to an embodiment.
- FIG. 2 is a block diagram which mainly illustrates a functional configuration of a control processor 26 shown in FIG. 1 .
- FIG. 3 is a flowchart illustrating a process procedure of the ultrasonic diagnosis apparatus according to the embodiment.
- FIG. 4 is a flowchart illustrating a process procedure of a calibration information generation process.
- FIG. 5 is a view illustrating an example of image data selected by an operator, among image data stored in a database 25 .
- FIG. 6 is a view illustrating an example of a result of a binarization process executed on the image data shown in FIG. 5 .
- FIG. 7 is a view illustrating an example of a display screen of the monitor 14 when calibration information is displayed together with image data.
- FIG. 8 is a flowchart illustrating a flow of a generation method of calibration information according to Modification 1.
- FIG. 9 is a flowchart illustrating a flow of a generation method of calibration information according to a second embodiment.
- An ultrasonic diagnosis apparatus comprises: an ultrasonic probe; an image generating processor configured to generate an ultrasonic image based on an echo signal which is obtained through the ultrasonic probe; a monitor configured to display the ultrasonic image; a memory configured to store image data including the ultrasonic image displayed on a screen of the monitor; and a calibration information generation processor configured to generate calibration information, including a conversion value for performing a predetermined measurement with respect to the ultrasonic image included in the image data, based on a positional relationship between at least one scale mark existing in the image data and the ultrasonic image.
- FIG. 1 is a view illustrating a block configuration of an ultrasonic diagnosis apparatus according to the present embodiment.
- The ultrasonic diagnosis apparatus includes an ultrasonic diagnosis apparatus body (hereinafter simply referred to as "apparatus body") 11, an ultrasonic probe 12, an input device 13, and a monitor 14.
- The apparatus body 11 includes a transmitter/receiver 21, a B mode processor 22, a Doppler processor 23, an image generator 24, a database 25, a control processor (CPU) 26, and an interface unit 27.
- The transmitter/receiver 21, etc. included in the apparatus body 11 may be configured as hardware such as integrated circuits, or may be configured as software programs implemented as software modules. The functions of the respective structural elements will be described below.
- The ultrasonic probe 12 includes: a plurality of piezoelectric transducers which generate ultrasonic waves based on a driving signal from the transmitter/receiver 21, and convert reflected waves from a subject P to electric signals; a matching layer provided on the piezoelectric transducers; and a backing member which prevents backward propagation of ultrasonic waves from the piezoelectric transducers. When ultrasonic waves are transmitted from the ultrasonic probe 12, the ultrasonic waves are successively reflected by discontinuous planes of acoustic impedance of intravital tissues, and are received by the ultrasonic probe 12 as an echo signal.
- The amplitude of this echo signal depends on the difference of acoustic impedance at the discontinuous plane where the reflection occurred.
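This dependence can be made concrete with the standard pressure reflection coefficient at a plane boundary between media of acoustic impedances $Z_1$ and $Z_2$ (a general acoustics relation, not specific to this patent):

```latex
R = \frac{Z_2 - Z_1}{Z_2 + Z_1}
```

The larger the impedance mismatch across the discontinuous plane, the larger $|R|$ and hence the stronger the echo amplitude.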
- An echo produced when transmitted ultrasonic pulses are reflected by a moving blood flow or by the wall of the heart, etc. undergoes a frequency deviation, by the Doppler effect, that depends on the velocity component of the moving body in the ultrasonic transmission direction.
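The frequency deviation mentioned above is the classical Doppler shift; for a reflector moving at speed $v$ at angle $\theta$ to the beam, with transmit frequency $f_0$ and sound speed $c$ in tissue, the echo frequency is shifted by (standard relation, not taken from the patent text):

```latex
f_d = \frac{2 v f_0 \cos\theta}{c}
```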
- The input device 13 includes a trackball 13a, various switch buttons 13b, a mouse 13c and a keyboard 13d for inputting to the apparatus body 11 various instructions from the operator, conditions, a region-of-interest (ROI) setting instruction, various image quality condition setting instructions, etc.
- The monitor 14 displays intravital morphological information and blood flow information as images, based on a video signal from the image generator 24.
- The transmitter/receiver 21 includes a pulse generator, a transmission delay unit and a pulser, which are not shown.
- The pulse generator repeatedly generates rate pulses for forming ultrasonic waves at a predetermined rate frequency fr Hz (period: 1/fr second).
- The transmission delay unit imparts to the rate pulse of each channel a delay time which is necessary for focusing the ultrasonic waves into a beam shape for each channel and for determining the directivity of transmission.
- The pulser applies a driving pulse to the ultrasonic probe 12 at a timing based on the rate pulse for each channel.
- The transmitter/receiver 21 also includes a pre-amplifier, a reception delay unit and an adder, which are not shown.
- The pre-amplifier amplifies the echo signal, which is taken in through the ultrasonic probe 12, for each channel.
- The reception delay unit imparts a delay time, which is necessary for determining the directivity of reception, to the amplified echo signal, and the adder then executes an addition process.
- Thereby, a reflected component from the direction corresponding to the directivity of reception of the echo signal is emphasized, and a comprehensive beam of ultrasonic transmission/reception is formed by the reception directivity and transmission directivity.
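The reception focusing described above (a per-channel delay followed by addition) can be sketched as a minimal delay-and-sum. This is an illustration, not the apparatus's actual implementation; the function name and the assumption of integer sample delays are ours:

```python
import numpy as np

def delay_and_sum(channel_signals, delays_samples):
    """Sum per-channel echo signals after applying integer sample delays.

    channel_signals: (n_channels, n_samples) array of received echoes.
    delays_samples:  per-channel reception-focusing delay, in samples.
    """
    n_ch, n_s = channel_signals.shape
    out = np.zeros(n_s)
    for ch in range(n_ch):
        d = delays_samples[ch]
        # Shift each channel by its focusing delay, then accumulate;
        # echoes from the focal direction line up and add coherently.
        out[d:] += channel_signals[ch, :n_s - d]
    return out
```

With matched delays, a pulse arriving at slightly different times on each channel is aligned before the addition, which is exactly how the reflected component from the reception direction is emphasized.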
- The B mode processor 22 executes a B mode process on the echo signal from the transmitter/receiver 21. Specifically, the B mode processor 22 executes envelope detection of the echo signal from the transmitter/receiver 21, and executes logarithmic compression of the envelope-detected echo signal. Thereby, the B mode processor 22 generates data of a B mode image which represents the intensity of the echo signal by luminance. The generated data of the B mode image is supplied to the image generator 24.
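The two steps named here, envelope detection and logarithmic compression, can be sketched as follows. This is a simplified illustration rather than the apparatus's actual signal chain; the function names and the 60 dB display range are assumptions:

```python
import numpy as np

def envelope(rf_line):
    """Envelope detection of one RF scan line via the analytic signal
    (a numpy-only Hilbert transform)."""
    n = rf_line.size
    X = np.fft.fft(rf_line)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    # Zero the negative frequencies, double the positive ones,
    # then take the magnitude of the analytic signal.
    return np.abs(np.fft.ifft(X * h))

def log_compress(env, dynamic_range_db=60.0):
    """Logarithmic compression: map the normalized envelope onto
    [0, 1] display luminance over the given dynamic range."""
    env = env / (env.max() + 1e-12)
    db = 20.0 * np.log10(env + 1e-12)
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)
```

Log compression is what lets weak tissue echoes and strong specular echoes share one luminance scale on the B mode image.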
- The Doppler processor 23 executes a Doppler process on the echo signal from the transmitter/receiver 21, and generates data of a Doppler spectrum image (an image of the Doppler mode) in a range gate. Specifically, the Doppler processor 23 executes quadrature detection of the echo signal from the transmitter/receiver 21. The Doppler processor 23 extracts a signal in a preset range gate from the quadrature-detected echo signal. The Doppler processor 23 then executes spectrum analysis of the signal in the range gate by FFT (Fast Fourier Transform), and calculates a flow velocity value (flow velocity spectrum).
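The range-gate spectrum analysis can be sketched as below. This is an illustrative reduction of the step, not the processor's actual algorithm; the Hann window and the fftshift ordering are our assumptions:

```python
import numpy as np

def doppler_spectrum(iq_gate, prf_hz):
    """Power spectrum of quadrature (I/Q) samples from one range gate.

    iq_gate: complex samples acquired at the pulse repetition frequency.
    Returns (frequencies_hz, power_spectrum), centered on zero Doppler.
    """
    n = iq_gate.size
    window = np.hanning(n)                # taper to reduce spectral leakage
    spec = np.fft.fftshift(np.fft.fft(iq_gate * window))
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / prf_hz))
    return freqs, np.abs(spec) ** 2
```

Repeating this over successive sample blocks and stacking the spectra column by column yields the variation with time of the flow velocity spectrum that the Doppler spectrum image displays.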
- The Doppler processor 23 generates data of a Doppler spectrum image which is indicative of the variation with time of the flow velocity spectrum.
- The generated data of the Doppler spectrum image is supplied to the image generator 24.
- A color Doppler mode process for generating data of a color Doppler image may also be executed on the echo signal from the transmitter/receiver 21.
- The image generator 24 is composed of a DSC (Digital Scan Converter).
- The image generator 24 converts the data of the B mode image from the B mode processor 22 to image data which can be displayed on the monitor 14.
- The image generator 24 likewise converts the data of the Doppler spectrum image from the Doppler processor 23 to image data which can be displayed on the monitor 14.
- The image data (display image data of the monitor 14) including the ultrasonic image displayed on the screen of the monitor 14 is stored (accumulated) in the database 25, and can be accessed by the operator, for example, after diagnosis.
- In the ultrasonic diagnosis apparatus, there is a case in which a plurality of ultrasonic images are simultaneously displayed on one screen; thus, a plurality of ultrasonic images may be included in the image data stored in the database 25.
- Numerical values and scale marks for representing an intravital conversion distance, etc. in the ultrasonic image included in the image data are depicted (embedded) in the image data.
- Image data for which information that is used when a predetermined measurement is executed on an ultrasonic image is not set as additional information may also be stored in the database 25.
- The additional information refers to, for example, information relating to a living body, such as depth of field; information as to how many ultrasonic images are included (displayed) in the image data and in what combination of modes (e.g. B mode, Doppler mode) the ultrasonic images are included; or information of a conversion value per pixel, etc. in the ultrasonic image.
- The control processor 26 includes a function as an information processing device (computer), and controls the operation of the apparatus body 11.
- The control processor 26 reads out, for example, a control program for executing an image process, etc. (to be described later) from an internal storage device (not shown), and executes arithmetic operations, control, etc. relating to various processes. Incidentally, the details of the processes which are executed by the control processor 26 will be described later.
- The interface unit 27 is an interface relating to the input device 13, a network, and an external storage device (not shown).
- The data, analysis results, etc. of the ultrasonic images obtained in the ultrasonic diagnosis apparatus can be transferred to other devices by the interface unit 27 over the network.
- The control processor 26 includes a controller 261, an image management processor 262, an image processor 263, a calibration information generation processor 264, and an image display processor 265. It is assumed that these components 261 to 265 are realized by the control processor 26 executing the above-described control program.
- The controller 261 instructs the image management processor 262 to read out the image data, which was selected by the operator, from the database 25.
- The operation for selecting arbitrary image data can be performed, for example, by using the input device 13.
- The image management processor 262 reads out the image data, which was selected by the operator, from the database 25.
- The image data read out of the database 25 is temporarily stored inside the image management processor 262.
- The image processor 263 determines whether additional information is set for the image data that was read out of the database 25 by the image management processor 262. If additional information is not set, the image processor 263 acquires the image data which is temporarily stored in the image management processor 262, and delivers the image data to the calibration information generation processor 264. On the other hand, if additional information is set, the image data, which is temporarily stored in the image management processor 262, is delivered to the image display processor 265 and displayed on the monitor 14.
- Based on a plurality of numerical values existing in the image data delivered from the image management processor 262 and the positions of a plurality of scale marks existing near the respective numerical values, the calibration information generation processor 264 generates calibration information including a conversion value (hereinafter referred to as "calibration value") per pixel in an ultrasonic image included in the image data.
- When a plurality of ultrasonic images are included in the image data delivered from the image management processor 262, the calibration information generation processor 264 generates the calibration information for each image.
- The calibration information generation processor 264 sets the generated calibration information inside the image management processor 262 as the additional information of the image data delivered from the image management processor 262.
- The image display processor 265 displays on the monitor 14 the image data selected by the operator (the image data read out of the database 25). At this time, the image display processor 265 displays the calibration information, which was set as the additional information of the image data selected by the operator, together with the image data.
- The image data stored in the database 25 is image data at a time when an ultrasonic image was displayed on the screen of the monitor 14, as described above, and includes at least one ultrasonic image.
- The operator can select desired image data from among a list of image data stored in the database 25 by operating the input device 13.
- Hereinafter, the image data selected by the operator is referred to as "selected image data".
- The controller 261 receives operation information from the input device 13, and instructs the image management processor 262 to read in the selected image data.
- The image management processor 262 reads in the selected image data from the database 25 (step S1). At this time, the image management processor 262 temporarily stores the selected image data, which was read in from the database 25, inside the image management processor 262. In addition, the image management processor 262 delivers the selected image data to the image display processor 265, and delivers to the image processor 263 information of the memory (e.g. address) in which the selected image data is temporarily stored.
- The image processor 263 determines whether additional information exists (i.e. whether additional information is set) for the selected image data (step S2).
- The additional information is information which is used when a predetermined measurement is executed for an ultrasonic image included in the selected image data, and is, for instance, information relating to a living body or setting information of the system.
- If additional information is not set, the image processor 263 acquires the selected image data, which is temporarily stored inside the image management processor 262, by using the information delivered from the image management processor 262, in order to generate calibration information, and delivers this selected image data to the calibration information generation processor 264.
- The calibration information generation processor 264 executes a process (hereinafter referred to as "calibration information generation process") for generating calibration information which is used to measure, for example, an intravital conversion distance, a blood flow velocity, etc. (step S3).
- The calibration information generated here includes, for example, a conversion value (calibration value) per pixel in the ultrasonic image included in the selected image data. Incidentally, the details of the calibration information generation process will be described later.
- The calibration information generation processor 264 sets the generated calibration information as the additional information inside the image management processor 262 (step S4).
- The calibration information generation processor 264 delivers to the image display processor 265 information in which the calibration value included in the calibration information is converted to a character string (i.e. a character string representing the calibration value).
- The image display processor 265 displays on the monitor 14 the selected image data which was delivered from the image management processor 262, for example, in accordance with an instruction from the controller 261 (step S5).
- The information (the character string representing the calibration value), which was delivered from the calibration information generation processor 264, is displayed on the monitor 14 together with the selected image data.
- Incidentally, when it has been determined in the above-described step S2 that the additional information exists, the processes of steps S3 and S4 are not executed, and the process of step S5 is executed.
- The calibration information generation processor 264 acquires the selected image data delivered from the image processor 263 (that image data, among the image data stored in the database 25, which was selected by the operator).
- Numerical values (character strings) and scale marks for representing, for example, the intravital conversion distance in the ultrasonic image included in the selected image data are depicted in the selected image data.
- The calibration information generation processor 264 executes a binarization process on the acquired selected image data (step S11). This binarization process is executed based on a luminance which can distinguish the ultrasonic image included in the image data from the character strings, such as numerical values, and the scale marks.
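A minimal sketch of such a luminance-threshold binarization follows. The threshold value 200 is our assumption; the patent only states that the luminance is chosen so that the overlay graphics are distinguishable from the ultrasonic image:

```python
import numpy as np

def binarize_overlay(image_gray, threshold=200):
    """Separate bright overlay graphics (numerals, scale marks) from the
    ultrasonic image by a luminance threshold.

    image_gray: 2-D array of 8-bit luminance values.
    Returns a boolean mask that is True where overlay pixels lie.
    """
    return image_gray >= threshold
```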
- The calibration information generation processor 264 extracts numerical values existing in the selected image data which was subjected to the binarization process (step S12).
- Specifically, the calibration information generation processor 264 extracts a numerical value group arranged in a row in an X-axis direction (horizontal direction) (hereinafter referred to as "X-axis directional numerical value group") in the selected image data which was subjected to the binarization process. Similarly, the calibration information generation processor 264 extracts a numerical value group arranged in a row in a Y-axis direction (vertical direction) (hereinafter referred to as "Y-axis directional numerical value group") in the selected image data which was subjected to the binarization process. If the intervals of the extracted numerical values are equal, it is highly likely that these numerical values represent, for example, the intravital conversion distance. Incidentally, a plurality of X-axis directional numerical value groups and Y-axis directional numerical value groups may be extracted.
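The equal-interval test on the extracted numeral positions can be sketched in a few lines (an illustrative helper; the one-pixel tolerance is our assumption):

```python
def equally_spaced(positions, tolerance=1):
    """True if sorted pixel positions form a (near-)uniform grid,
    i.e. all neighboring gaps agree to within `tolerance` pixels."""
    positions = sorted(positions)
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    return len(gaps) > 0 and max(gaps) - min(gaps) <= tolerance
```

A candidate group that passes this test is likely an axis annotation rather than, say, scattered bright speckle in the image.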
- The calibration information generation processor 264 extracts scale marks near the extracted numerical values in the selected image data which was subjected to the binarization process (step S13).
- Specifically, a predetermined mark disposed at the nearest position in the up-and-down direction or left-and-right direction of the numerical values is extracted as a scale mark.
- The calibration information generation processor 264 extracts a scale mark group existing at regular intervals in the X-axis direction (hereinafter referred to as "X-axis directional scale mark group") along the X-axis directional numerical value group extracted in step S12.
- Similarly, the calibration information generation processor 264 extracts a scale mark group existing at regular intervals in the Y-axis direction (hereinafter referred to as "Y-axis directional scale mark group") along the Y-axis directional numerical value group extracted in step S12.
- The scale marks include, for instance, a mark of a predetermined shape, and a straight line defined by a specific length.
- Incidentally, a plurality of X-axis directional scale mark groups and Y-axis directional scale mark groups may be extracted.
- When a mark exists on a straight line connecting the extracted scale marks (or on an extension line of this straight line), the calibration information generation processor 264 extracts this mark as a supplementary mark.
- By searching in this manner for a supplementary mark along the extracted scale marks, it becomes possible to extract a proper supplementary mark.
- The scale marks extracted by the calibration information generation processor 264 are scale marks for representing, for instance, an intravital conversion distance.
- Hereinafter, a supplementary mark group existing on a straight line connecting the X-axis directional scale mark group is referred to as "X-axis directional supplementary mark group",
- and a supplementary mark group existing on a straight line connecting the Y-axis directional scale mark group is referred to as "Y-axis directional supplementary mark group".
- The calibration information generation processor 264 stores the intervals of the extracted scale marks (the interval of the X-axis directional scale mark group and the interval of the Y-axis directional scale mark group), for example, inside this calibration information generation processor 264 (step S14).
- The interval of the X-axis directional scale mark group is the number of pixels between neighboring scale marks of the X-axis directional scale mark group.
- Similarly, the interval of the Y-axis directional scale mark group is the number of pixels between neighboring scale marks of the Y-axis directional scale mark group.
- In addition, the calibration information generation processor 264 stores the positions (X coordinate, Y coordinate) of the marks existing at both ends in the selected image data, among the extracted scale marks and supplementary marks. Specifically, the position of the mark existing on the leftmost side and the position of the mark existing on the rightmost side (hereinafter referred to as "X-axis directional both-end positions") of the X-axis directional scale mark group and supplementary mark group are stored. Similarly, the position of the mark existing on the uppermost side and the position of the mark existing on the lowermost side (hereinafter referred to as "Y-axis directional both-end positions") of the Y-axis directional scale mark group and supplementary mark group are stored.
- Hereinafter, a combination pattern of the X-axis directional numerical value group extracted in step S12 and the X-axis directional scale mark group (and supplementary mark group) extracted in step S13 is referred to as "X-axis directional pattern".
- Similarly, a combination pattern of the Y-axis directional numerical value group extracted in step S12 and the Y-axis directional scale mark group (and supplementary mark group) extracted in step S13 is referred to as "Y-axis directional pattern".
- The calibration information generation processor 264 calculates a calibration value (a conversion value per pixel) for each of the X-axis directional pattern and the Y-axis directional pattern (step S15).
- Specifically, the calibration information generation processor 264 calculates a calibration value for the X-axis directional pattern (hereinafter referred to as "X-axis directional calibration value") by dividing the difference between neighboring numerical values of the X-axis directional numerical value group by the number of pixels (the interval of the X-axis directional scale mark group) between the scale marks existing near these neighboring numerical values.
- Similarly, the calibration information generation processor 264 calculates a calibration value for the Y-axis directional pattern (hereinafter referred to as "Y-axis directional calibration value") by dividing the difference between neighboring numerical values of the Y-axis directional numerical value group by the number of pixels (the interval of the Y-axis directional scale mark group) between the scale marks existing near these neighboring numerical values.
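The division described above reduces to one line. For example, if neighboring axis numerals "0" and "1.0" (in cm) have scale marks 50 pixels apart, the calibration value is 0.02 cm per pixel (the helper name is illustrative, not from the patent):

```python
def calibration_value(value_a, value_b, pixels_between):
    """Conversion value per pixel: the difference between two neighboring
    axis numerals divided by the pixel count between their scale marks."""
    return abs(value_b - value_a) / pixels_between
```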
- The calibration values calculated here are used as calibration information at a time of executing a predetermined measurement with respect to an ultrasonic image included in the selected image data.
- When a character string representing a unit exists near the X-axis directional pattern, this character string is used as the unit of the X-axis directional calibration value.
- Similarly, when a character string representing a unit exists near the Y-axis directional pattern, this character string is used as the unit of the Y-axis directional calibration value.
- When no such character string exists, a preset unit, for example, is used as the unit of the X-axis directional or Y-axis directional calibration value.
- The calibration information generation processor 264 specifies a region of the ultrasonic image included in the selected image data (hereinafter simply referred to as "image region"), based on the X-axis directional both-end positions and Y-axis directional both-end positions stored inside this calibration information generation processor 264 (step S16).
- This image region is a rectangle (rectangular region) which is set such that, for example, a starting point is set at the X-coordinate of the mark existing on the leftmost side in the X-axis direction and the Y-coordinate of the mark existing on the uppermost side in the Y-axis direction, an end point is set at the X-coordinate of the mark existing on the rightmost side in the X-axis direction and the Y-coordinate of the mark existing on the lowermost side in the Y-axis direction, and the straight line connecting the starting point and the end point is set as a diagonal of the rectangle.
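The rectangle construction from the both-end positions can be sketched as follows (an illustrative helper, assuming the mark coordinates have already been collected):

```python
def image_region(x_positions, y_positions):
    """Rectangle enclosing the ultrasonic image, from the both-end mark
    positions: the starting point is (leftmost x, uppermost y), the end
    point is (rightmost x, lowermost y), and the segment between them
    is the rectangle's diagonal."""
    start = (min(x_positions), min(y_positions))
    end = (max(x_positions), max(y_positions))
    return start, end
```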
- The calibration information generation processor 264 generates calibration information including the X-axis directional calibration value and the Y-axis directional calibration value with respect to the image region specified in step S16 (i.e. the combination of the X-axis directional pattern and the Y-axis directional pattern).
- The calibration information generated in the calibration information generation process is set inside the image management processor 262 as additional information of the selected image data, as described above. In this manner, with the calibration information set inside the image management processor 262, this calibration information can be used at a time of executing a predetermined measurement with respect to the ultrasonic image included in the selected image data.
- FIG. 5 illustrates an example of image data (selected image data) selected by an operator, among the image data stored in the database 25.
- The selected image data illustrated in FIG. 5 is image data at a time when two ultrasonic images are simultaneously displayed on the monitor 14.
- The ultrasonic image which exists in the upper part of FIG. 5 is a B mode image.
- The ultrasonic image which exists in the lower part of FIG. 5 is a Doppler mode image.
- FIG. 6 illustrates an example of a result of the binarization process executed on the image data shown in FIG. 5 .
- numerical values and scale marks existing in the selected image data become easier to recognize.
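As a rough illustration of the binarization step, a fixed-threshold sketch follows. The threshold value and function name are assumptions, since this passage does not specify the binarization method used in the embodiment.

```python
def binarize(gray, threshold=128):
    # Pixels at or above the threshold become 1 (the numerals and scale
    # marks, which are rendered brightly); all other pixels become 0,
    # making the overlay graphics easier to recognize.
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```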
- the X-axis directional numerical value group, i.e. "−6.0", "−5.0", "−4.0", "−3.0", "−2.0", "−1.0" and "0", is referred to as "first X-axis directional numerical value group".
- the Y-axis directional numerical value group, i.e. "40" and "20", is referred to as "first Y-axis directional numerical value group".
- the Y-axis directional numerical value group, i.e. "0", "1", "2", "3" and "4", is referred to as "second Y-axis directional numerical value group".
- a scale mark group (hereinafter referred to as "first X-axis directional scale mark group"), which exists near the first X-axis directional numerical value group, is extracted.
- as the first X-axis directional scale mark group, for example, a plurality of scale marks including a scale mark 100 existing on the upper side of numerical value "−6.0" are extracted.
- the first X-axis directional scale mark group also includes scale marks similar to the scale mark 100, which exist near the other numerical values "−5.0", "−4.0", "−3.0", "−2.0", "−1.0", and "0".
- a plurality of supplementary marks (hereinafter referred to as "first X-axis directional supplementary mark group"), which include a supplementary mark 101 existing on a straight line (an extension line of a straight line) connecting the first X-axis directional scale mark group, are also extracted.
- a combination pattern of the first X-axis directional numerical value group, scale mark group and supplementary mark group is referred to as "first X-axis directional pattern 103" for convenience' sake.
- a scale mark group (hereinafter referred to as "first Y-axis directional scale mark group"), which exists near the first Y-axis directional numerical value group, is extracted.
- as the first Y-axis directional scale mark group, for example, a plurality of scale marks including a scale mark 110 existing on the left side of numerical value "40" are extracted.
- the first Y-axis directional scale mark group also includes scale marks similar to the scale mark 110, which exist near the other numerical value "20".
- a combination pattern of the first Y-axis directional numerical value group and scale mark group is referred to as "first Y-axis directional pattern 111" for convenience' sake.
- a scale mark group (hereinafter referred to as “second Y-axis directional scale mark group”), which exists near the second Y-axis directional numerical value group, is extracted.
- as the second Y-axis directional scale mark group, for example, a plurality of scale marks including a scale mark 120 existing on the left side of numerical value "0" are extracted.
- the second Y-axis directional scale mark group also includes scale marks similar to the scale mark 120 , which exist near the other numerical values “1”, “2”, “3” and “4”.
- a plurality of supplementary marks (hereinafter referred to as "second Y-axis directional supplementary mark group"), which include a supplementary mark 121 existing on a straight line connecting the second Y-axis directional scale mark group, are also extracted.
- a combination pattern of the second Y-axis directional numerical value group, scale mark group and supplementary mark group is referred to as "second Y-axis directional pattern 122" for convenience' sake.
- an X-axis directional numerical value which corresponds to the above-described second Y-axis directional numerical value group does not exist.
- an X-axis directional scale mark group (hereinafter referred to as "second X-axis directional scale mark group"), which is composed of a scale mark 120 and a scale mark 130 that is similar in shape to the scale mark 120, is extracted.
- a plurality of supplementary marks (hereinafter referred to as "second X-axis directional supplementary mark group"), which include a supplementary mark 131 existing on a straight line (an extension line of a straight line) connecting the second X-axis directional scale mark group, are also extracted.
- a combination pattern of the second X-axis directional scale mark group and supplementary mark group is referred to as "second X-axis directional pattern 132" for convenience' sake.
- the interval of the scale marks in the above-described first X-axis directional pattern 103 is stored in the inside of the calibration information generation processor 264 .
- the number of pixels between neighboring scale marks (here, scale marks of the first X-axis directional scale mark group existing on the upper side of, e.g., numerical value "−6.0" and numerical value "−5.0") is stored.
- the interval (pixel number) between scale marks is similarly stored with respect to the second X-axis directional pattern, first Y-axis directional pattern 111 and second Y-axis directional pattern 122 .
- "first X-axis directional both-end positions", "first Y-axis directional both-end positions", "second X-axis directional both-end positions", and "second Y-axis directional both-end positions" are stored.
- the positions of marks (scale marks or supplementary marks) existing at both ends in the second X-axis directional pattern 132 are stored.
- As regards the first X-axis directional both-end positions and first Y-axis directional both-end positions, it is assumed that an exceptional process is executed. To give a concrete description of the first X-axis directional both-end positions: in the selected image data illustrated in FIG. 6, a straight line connecting the first X-axis directional scale mark group (and supplementary mark group) is depicted. In such a case, it is assumed that the positions of both ends of this straight line are stored. Incidentally, the same applies to the first Y-axis directional both-end positions.
- a calibration value for the first X-axis directional pattern 103 (hereinafter referred to as "first X-axis directional calibration value") is calculated.
- This first X-axis directional calibration value is calculated by dividing the difference between neighboring numerical values (in this case, for example, numerical value "−6.0" and numerical value "−5.0") of the above-described first X-axis directional numerical value group by the interval (pixel number) between scale marks in the first X-axis directional pattern 103 (scale marks existing near the respective neighboring numerical values).
- a character string representing a unit does not exist near the first X-axis directional pattern.
- the unit of the first X-axis directional calibration value is set to be "sec/pix".
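The calculation described above, i.e. the difference between neighboring axis values divided by the pixel interval between their scale marks, can be written compactly. The function and argument names are illustrative, not taken from the embodiment.

```python
def calibration_value(value_a, value_b, pixel_interval):
    # Physical unit per pixel: e.g. neighboring X-axis values -6.0 and
    # -5.0 whose scale marks are 50 pixels apart yield 1.0 / 50 = 0.02
    # (in "sec/pix" for the first X-axis directional pattern).
    return abs(value_b - value_a) / pixel_interval
```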
- a calibration value for the first Y-axis directional pattern (hereinafter referred to as "first Y-axis directional calibration value") is calculated.
- the unit of the first Y-axis directional calibration value is set to be "cm/sec/pix".
- the unit of the corresponding X axis is, in many cases, “sec/pix”.
- the unit of the above-described first X-axis directional calibration value may be determined to be “sec/pix”.
- a calibration value for the second Y-axis directional pattern (hereinafter referred to as "second Y-axis directional calibration value") is calculated.
- a character string representing a unit does not exist near the second Y-axis directional pattern.
- the unit of the second Y-axis directional calibration value is set to be "cm/pix".
- the calibration value for the second X-axis directional pattern 132 (hereinafter referred to as “second X-axis directional calibration value”) is set to be the same value as the calibration value for the pattern (in this case, the second Y-axis directional pattern 122 ) which is in contact with the second X-axis directional pattern 132 in the selected image data illustrated in FIG. 6 .
- the unit of the second X-axis directional calibration value is set to be the same as the unit of the second Y-axis directional calibration value.
- two image regions (hereinafter referred to as "first and second image regions") are specified.
- the first image region specified here is a rectangular region formed by the first X-axis directional both-end positions and first Y-axis directional both-end positions.
- the second image region specified here is a rectangular region formed by the second X-axis directional both-end positions and second Y-axis directional both-end positions.
- a region inside the frame 140 may be specified as the first image region by utilizing this frame 140 .
- a region other than the above-described first image region may be specified as the second image region, or, for example, such a configuration may be adopted that the operator is prompted to input the second image region.
- first calibration information including the first X-axis directional calibration value and first Y-axis directional calibration value is generated.
- the first calibration information is information which is used at a time of executing a predetermined measurement with respect to the ultrasonic image corresponding to the above-described first image region.
- second calibration information including the second X-axis directional calibration value and second Y-axis directional calibration value is generated.
- the second calibration information is information which is used at a time of executing a predetermined measurement with respect to the ultrasonic image corresponding to the above-described second image region.
- the first and second calibration information which was generated as described above, is set as additional information of the selected image data in the inside of the calibration information generation processor 264 .
- the first and second calibration information together with the selected image data, is displayed on the monitor 14 .
- the first X-axis directional calibration value and first Y-axis directional calibration value (the character strings representing these calibration values), which are included in the first calibration information, are displayed together with the selected image data.
- the second X-axis directional calibration value and second Y-axis directional calibration value (the character strings representing these calibration values), which are included in the second calibration information, are displayed together with the selected image data.
- FIG. 7 illustrates an example of a display screen of the monitor 14 at a time when the first and second calibration information is displayed together with the selected image data.
- Area 1 represents the above-described second image region
- Area 2 represents the above-described first image region.
- the calibration information including the conversion value (conversion value per pixel) for executing a predetermined measurement with respect to the ultrasonic image included in this image data is generated.
- exact measurement can be executed by using the calibration information, even with respect to the image data for which no additional information is set.
- a work load for manually setting the calibration information can be reduced.
- when the image data stored in the database 25 includes a plurality of ultrasonic images, by specifying an image region with respect to each ultrasonic image, proper calibration information can be generated with respect to each image region.
- with a program for this measurement running on the above-described control processor 26, it is also possible, on the ultrasonic diagnosis apparatus according to the present embodiment, to execute a predetermined measurement process on an ultrasonic image included in image data selected by the operator (e.g. a measurement process of an internal organ or diseased part, blood flow velocity, etc. displayed on the ultrasonic image), based, for example, on marks (caliper) for measurement set by the operator and the calibration information set as additional information.
- since the regions (first and second image regions) of the respective ultrasonic images included in the above-described image data are specified, it is also possible to execute a process of detecting an error, for instance, in such a case that the marks for measurement set by the operator fall out of a predetermined region.
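A caliper measurement of the kind described above reduces to scaling the pixel displacement by the per-axis calibration values and, for the error check, testing whether a caliper falls inside the specified image region. All names here are assumptions for illustration; the distance formula only applies when both axes share a spatial unit (e.g. "cm/pix" on a B mode image).

```python
import math

def measure_distance(caliper_a, caliper_b, x_cal, y_cal):
    # Convert the pixel displacement between two caliper marks into
    # physical units using the per-axis calibration values (e.g. cm/pix).
    dx = (caliper_b[0] - caliper_a[0]) * x_cal
    dy = (caliper_b[1] - caliper_a[1]) * y_cal
    return math.hypot(dx, dy)

def inside_region(point, start, end):
    # Error-detection check: a caliper set outside the rectangular image
    # region spanned by `start` and `end` should be rejected.
    return start[0] <= point[0] <= end[0] and start[1] <= point[1] <= end[1]
```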
- the description has been given on the assumption that the calibration information is generated in the apparatus body 11 .
- the calibration information may be generated, for example, in a medical image processing apparatus on the outside of the ultrasonic diagnosis apparatus.
- FIG. 8 is a flowchart illustrating a flow of a generation method of calibration information according to Modification 1. Incidentally, the flowchart illustrated in FIG. 8 differs from the flowchart illustrated in FIG. 4 in that steps S 22 and S 23 are added.
- the calibration information generation processor 264 determines whether numerical values exist in the selected image data (step S 22 ). If it is determined that numerical values exist in the selected image data, the calibration information generation processor 264 executes, in steps S 23 a to S 28 , the same process as in the above-described steps S 12 to S 17 .
- the control processor 26 displays, for example, on the monitor 14 , such a message that “Numerical values indicating the interval of scale marks (distance per interval, time, velocity, etc.) do not exist on the selected image. Input numerical values for determining the interval of scale marks (distance per interval, time, velocity, etc.).” Responding to this message, the user inputs desired numerical values from the input device 13 .
- the calibration information generation processor 264 determines the interval of scale marks (distance per interval, time, velocity, etc.) by using the input numerical values (step S 23 b ), and executes the process of steps S 24 to S 28 .
- in step S 23 b, numerical values for determining the unit of scale marks are input by a manual operation of the user.
- the method is not restricted to this example. From the standpoint of reducing an input work load on the user, the user may select desired numerical values, for example, from among a plurality of preset numerical values.
- numerical values for determining the unit of scale marks may be set by using default values which are preset in the apparatus.
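The fallback order of Modification 1, i.e. numerical values extracted from the image first, then a value input (or selected) by the user, then a preset default, might be sketched as follows; the function and argument names are assumptions.

```python
def determine_interval(extracted_values, user_input=None, default=1.0):
    # Prefer numerical values extracted from the image; otherwise fall
    # back to a value supplied by the user, and finally to a preset
    # default value held in the apparatus.
    if extracted_values:
        return abs(extracted_values[1] - extracted_values[0])
    if user_input is not None:
        return user_input
    return default
```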
- the generation method of calibration information illustrated in FIG. 4 and FIG. 8 has been described as being executed in such an order that, after the selected image data is subjected to the binarization process, the numerical values in the image are extracted, and the scale marks near them are extracted.
- the method is not restricted to this example. After the scale marks in the binarized selected image are extracted, the numerical values near them may be extracted.
- in step S 13 of FIG. 4 and step S 24 of FIG. 8, a typical extraction process of scale marks was described.
- in a Doppler mode image, there exists character information inherent to the Doppler mode, such as a unit of velocity of cm/s (m/s), etc., and numerical values (time instants) of "0, −1, −2, . . . ".
- Such inherent character information may be extracted, scale marks existing near this information may be extracted as a baseline of the Doppler waveform, and calibration information may be generated by using this.
- calibration information including an imaging mode of an ultrasonic image included in the selected image is generated and presented.
- FIG. 9 is a flowchart illustrating a flow of a generation method of calibration information according to the second embodiment. Incidentally, the flowchart illustrated in FIG. 9 differs from the flowchart illustrated in FIG. 4 in that a process of step S 37 is added.
- the calibration information generation processor 264 determines an imaging mode of each ultrasonic image included in the image data, by using the numerical value (character string) information extracted in step S 32 and the calibration value (physical unit per pixel) calculated in step S 35 .
- the calibration information generation processor 264 determines that the imaging mode of the ultrasonic image, to which these numerical values are allocated, is the Doppler mode.
- the calibration information generation processor 264 determines that the ultrasonic image corresponding to the calibration values is a B mode.
- the calibration information generation processor 264 determines that the ultrasonic image corresponding to the calibration values is an M mode (step S 37 ).
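The mode determination of step S 37 can be paraphrased as a rule table over the calibration-value units. The unit strings and rule details below are assumptions illustrating the idea, not the claimed determination logic.

```python
def infer_imaging_mode(x_unit, y_unit, has_signed_values=False):
    # Velocity per pixel on the Y axis (or signed numerical values such
    # as "-6.0 ... 0") suggests a Doppler image; distance per pixel on
    # both axes suggests a B mode image; time on the X axis with
    # distance on the Y axis suggests an M mode image.
    if y_unit == "cm/sec/pix" or has_signed_values:
        return "Doppler"
    if x_unit == "cm/pix" and y_unit == "cm/pix":
        return "B"
    if x_unit == "sec/pix" and y_unit == "cm/pix":
        return "M"
    return "unknown"
```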
- the imaging mode may also be determined based on a manual operation of the user, etc., like the above-described Modification 1.
- the calibration information generation processor 264 generates calibration information including the imaging mode determined in step S 37 and the calibration values calculated in step S 35 , and displays the calibration information in a predetermined mode in which the association with each ultrasonic image included in the selected image is explicitly indicated (step S 38 ).
- the calibration information, which includes not only the calibration values as the physical unit per pixel but also the imaging mode of each ultrasonic image included in the selected image, can be provided to the user. Accordingly, even for secondarily stored data whose image can be reproduced on a personal computer but whose calibration information cannot, the calibration values and imaging mode of each ultrasonic image included in the selected image can be provided as needed. As a result, even in the case where an image which does not conform to specific standards is handled on a personal computer or a tablet computer, the calibration information can be acquired and used on the spot, increasing the range of general-purpose uses of medical images and contributing to an improvement in the quality of diagnosis with imaging and of medical services.
- processor means, for example, a central processing unit (CPU), a graphics processing unit (GPU), an application specific integrated circuit (ASIC), a programmable logical device (e.g., a simple programmable logic device (SPLD), a complex programmable logic device (CPLD), and a field programmable gate array (FPGA)), or the like.
- programs may be directly incorporated in the processing circuitry, instead of being stored in the storage memory 12.
- the processing circuitry reads programs incorporated in circuitry and executes the programs to realize predetermined functions.
- Each function (each component) in the present embodiment need not correspond to a single processing circuit and may be realized by a plurality of processing circuits. Conversely, two or more functions (two or more components) may be realized by a single processing circuit.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Surgery (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Hematology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
Description
Claims (14)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2012279562 | 2012-12-21 | ||
| JP2012-279562 | 2012-12-21 | ||
| PCT/JP2013/084515 WO2014098248A1 (en) | 2012-12-21 | 2013-12-24 | Ultrasonic diagnostic device and medical image processing device |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2013/084515 Continuation WO2014098248A1 (en) | 2012-12-21 | 2013-12-24 | Ultrasonic diagnostic device and medical image processing device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20150265255A1 US20150265255A1 (en) | 2015-09-24 |
| US10426444B2 true US10426444B2 (en) | 2019-10-01 |
Family
ID=50978560
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/732,251 Active 2035-11-27 US10426444B2 (en) | 2012-12-21 | 2015-06-05 | Ultrasonic diagnosis apparatus, medical image processing apparatus and medical image processing method |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US10426444B2 (en) |
| JP (1) | JP6208002B2 (en) |
| CN (1) | CN104869910B (en) |
| WO (1) | WO2014098248A1 (en) |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH04282142A (en) | 1991-03-12 | 1992-10-07 | Fuji Electric Co Ltd | Ultrasonic diagnostic device |
| US5331419A (en) * | 1991-03-26 | 1994-07-19 | Kyocera Corporation | Size display system for electronic camera |
| JPH0739546A (en) | 1993-07-27 | 1995-02-10 | Ge Yokogawa Medical Syst Ltd | Apparatus for ultrasonic examination |
| JP2000262532A (en) | 1999-03-15 | 2000-09-26 | Toshiba Corp | Ultrasound therapy device, support image display device for ultrasound therapy, and ultrasound diagnostic device |
| JP2004229924A (en) | 2003-01-30 | 2004-08-19 | Aloka System Engineering Co Ltd | Ultrasonic diagnostic system, ultrasonic diagnostic apparatus and image data processor |
| US6817982B2 (en) * | 2002-04-19 | 2004-11-16 | Sonosite, Inc. | Method, apparatus, and product for accurately determining the intima-media thickness of a blood vessel |
| JP2005081082A (en) | 2003-09-11 | 2005-03-31 | Toshiba Corp | Ultrasonic diagnostic apparatus and ultrasonic data compression method |
| JP2009261520A (en) | 2008-04-23 | 2009-11-12 | Ge Medical Systems Global Technology Co Llc | Ultrasonic imaging system |
| US20130259363A1 (en) * | 2012-03-30 | 2013-10-03 | Brother Kogyo Kabushiki Kaisha | Image processing device generating binary image data by selecting specific component |
| US9158986B2 (en) * | 2013-02-06 | 2015-10-13 | Nidec Sankyo Corporation | Character segmentation device and character segmentation method |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS60199437A (en) * | 1984-03-24 | 1985-10-08 | 株式会社東芝 | Ultrasonic diagnostic apparatus |
| US20120136248A1 (en) * | 2010-11-25 | 2012-05-31 | Toshiba Medical Systems Corporation | Ultrasound diagnosis apparatus, image generating method, and image processing apparatus |
-
2013
- 2013-12-24 JP JP2013266079A patent/JP6208002B2/en active Active
- 2013-12-24 WO PCT/JP2013/084515 patent/WO2014098248A1/en not_active Ceased
- 2013-12-24 CN CN201380066773.3A patent/CN104869910B/en active Active
-
2015
- 2015-06-05 US US14/732,251 patent/US10426444B2/en active Active
Patent Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH04282142A (en) | 1991-03-12 | 1992-10-07 | Fuji Electric Co Ltd | Ultrasonic diagnostic device |
| US5331419A (en) * | 1991-03-26 | 1994-07-19 | Kyocera Corporation | Size display system for electronic camera |
| JPH0739546A (en) | 1993-07-27 | 1995-02-10 | Ge Yokogawa Medical Syst Ltd | Apparatus for ultrasonic examination |
| JP2000262532A (en) | 1999-03-15 | 2000-09-26 | Toshiba Corp | Ultrasound therapy device, support image display device for ultrasound therapy, and ultrasound diagnostic device |
| US6817982B2 (en) * | 2002-04-19 | 2004-11-16 | Sonosite, Inc. | Method, apparatus, and product for accurately determining the intima-media thickness of a blood vessel |
| JP2004229924A (en) | 2003-01-30 | 2004-08-19 | Aloka System Engineering Co Ltd | Ultrasonic diagnostic system, ultrasonic diagnostic apparatus and image data processor |
| JP2005081082A (en) | 2003-09-11 | 2005-03-31 | Toshiba Corp | Ultrasonic diagnostic apparatus and ultrasonic data compression method |
| JP2009261520A (en) | 2008-04-23 | 2009-11-12 | Ge Medical Systems Global Technology Co Llc | Ultrasonic imaging system |
| US20130259363A1 (en) * | 2012-03-30 | 2013-10-03 | Brother Kogyo Kabushiki Kaisha | Image processing device generating binary image data by selecting specific component |
| US9158986B2 (en) * | 2013-02-06 | 2015-10-13 | Nidec Sankyo Corporation | Character segmentation device and character segmentation method |
Non-Patent Citations (2)
| Title |
|---|
| International Search Report dated Feb. 25, 2014 in PCT/JP2013/084515 filed Dec. 24, 2013. |
| International Written Opinion dated Feb. 25, 2014 in PCT/JP2013/084515 filed Dec. 24, 2013 with English translation. |
Also Published As
| Publication number | Publication date |
|---|---|
| JP6208002B2 (en) | 2017-10-04 |
| CN104869910B (en) | 2017-05-31 |
| WO2014098248A1 (en) | 2014-06-26 |
| US20150265255A1 (en) | 2015-09-24 |
| JP2014138840A (en) | 2014-07-31 |
| CN104869910A (en) | 2015-08-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10828010B2 (en) | Image diagnosis apparatus and method for dynamically focusing tracked ultrasound probe with multimodal imaging system | |
| JP5762076B2 (en) | Ultrasonic diagnostic apparatus, ultrasonic image processing apparatus, and medical image diagnostic apparatus | |
| CN104640506B (en) | Diagnostic ultrasound equipment, medical image-processing apparatus and medical image processing method | |
| CN101721226B (en) | Ultrasound imaging apparatus, medical image processing apparatus, display apparatus, and display method | |
| JP2023113952A (en) | Analysis device and analysis program | |
| CN103169500B (en) | Diagnostic ultrasound equipment, medical diagnostic imaging apparatus and medical image processing method | |
| JP5872216B2 (en) | Ultrasonic diagnostic apparatus and ultrasonic image processing apparatus | |
| JP5984244B2 (en) | Ultrasonic diagnostic apparatus, ultrasonic diagnostic apparatus control program, and medical image display method | |
| JP5209025B2 (en) | Ultrasonic diagnostic equipment | |
| JP5417048B2 (en) | Ultrasonic diagnostic apparatus and ultrasonic diagnostic program | |
| JP2011224346A (en) | Ultrasound diagnosis apparatus, image processing apparatus, and image processing method | |
| WO2014081006A1 (en) | Ultrasonic diagnostic device, image processing device, and image processing method | |
| JP2013111434A (en) | Image processor, ultrasonic diagnostic apparatus and image processing program | |
| JP2010119842A (en) | Imt measuring region setting method and ultrasound system therefor | |
| US20240122577A1 (en) | Ultrasonic diagnostic apparatus | |
| CN113229847B (en) | Image processing device, ultrasonic diagnostic device, and computer program product | |
| JP6358192B2 (en) | Ultrasonic diagnostic apparatus and control method of ultrasonic diagnostic apparatus | |
| WO2020149191A1 (en) | Image analyzing device | |
| EP3364881B1 (en) | Ultrasound imaging apparatus and controlling method for the same | |
| JP7457571B2 (en) | Ultrasound diagnostic device and diagnostic support method | |
| JP5535596B2 (en) | Ultrasonic diagnostic equipment | |
| JP2020138017A (en) | Ultrasonic diagnostic equipment | |
| US10426444B2 (en) | Ultrasonic diagnosis apparatus, medical image processing apparatus and medical image processing method | |
| JP2008099931A (en) | Medical image diagnostic apparatus, medical image display apparatus, and program | |
| JP6104529B2 (en) | Ultrasonic diagnostic apparatus, image generation apparatus, and image display apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: TOSHIBA MEDICAL SYSTEMS CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AKIYAMA, MITSUO;REEL/FRAME:035796/0168 Effective date: 20150522 Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AKIYAMA, MITSUO;REEL/FRAME:035796/0168 Effective date: 20150522 |
|
| AS | Assignment |
Owner name: TOSHIBA MEDICAL SYSTEMS CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KABUSHIKI KAISHA TOSHIBA;REEL/FRAME:039127/0669 Effective date: 20160608 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| AS | Assignment |
Owner name: CANON MEDICAL SYSTEMS CORPORATION, JAPAN Free format text: CHANGE OF NAME;ASSIGNOR:TOSHIBA MEDICAL SYSTEMS CORPORATION;REEL/FRAME:049637/0787 Effective date: 20180104 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |