WO2021003711A1 - Ultrasound imaging device and method, apparatus and storage medium for detecting B-lines - Google Patents

Ultrasound imaging device and method, apparatus and storage medium for detecting B-lines

Info

Publication number
WO2021003711A1
WO2021003711A1 (application PCT/CN2019/095473)
Authority
WO
WIPO (PCT)
Prior art keywords
image
line
lung
ultrasound
processor
Prior art date
Application number
PCT/CN2019/095473
Other languages
English (en)
French (fr)
Inventor
王勃
牛乾
丛龙飞
刘硕
Original Assignee
深圳迈瑞生物医疗电子股份有限公司 (Shenzhen Mindray Bio-Medical Electronics Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳迈瑞生物医疗电子股份有限公司 (Shenzhen Mindray Bio-Medical Electronics Co., Ltd.)
Priority to CN201980097733.2A (published as CN114007513A)
Priority to PCT/CN2019/095473 (published as WO2021003711A1)
Publication of WO2021003711A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Definitions

  • The invention relates to the technical field of ultrasound imaging, and in particular to an ultrasound imaging device and a method, apparatus, and storage medium for detecting B-lines.
  • Ultrasound imaging is a medical imaging technology used to image organs and soft tissues in the human body, and it plays an important role in clinical medicine.
  • Lung ultrasound has great application value in identifying and diagnosing pulmonary exudative lesions. It has good sensitivity and specificity for the diagnosis of many lung diseases and can even replace chest CT examination in the diagnosis of lung diseases in emergency and critical care medicine, saving time and cost in clinical practice and helping to save patients' lives in time.
  • Unlike ultrasound of other body parts, lung ultrasound imaging in most cases does not reflect direct images of lung tissue but rather a series of artifacts, which can be defined according to the display characteristics of lung ultrasound images.
  • When a normally aerated lung is examined, ultrasound can only detect the pleura; however, as the air content decreases, the echo dropout effect between the lung and surrounding tissue diminishes, and ultrasound can to some extent reflect images of deeper regions, producing typical image signs.
  • Lung ultrasound has been widely used and valued in critical and emergency care, and recognizing typical image signs can help medical staff rapidly diagnose lung diseases.
  • However, current lung ultrasound examination mainly relies on manual measurement, which is time-consuming and laborious; the examination results are largely affected by the operator's level of experience, and the degree of intelligence is low.
  • The present invention mainly provides an ultrasound imaging device and a method, apparatus, and storage medium for detecting B-lines, so as to improve the intelligence of lung ultrasound examination.
  • According to a first aspect, an embodiment provides an ultrasound imaging device, including:
  • an ultrasound probe;
  • a transmitting circuit for exciting the ultrasound probe to emit ultrasonic beams toward the lungs;
  • a receiving circuit and a beamforming module for receiving echoes of the ultrasonic beams to obtain ultrasonic echo signals;
  • a processor configured to process the ultrasonic echo signals to obtain at least one frame of lung ultrasound image; the processor is further configured to identify image signs of the lung ultrasound image, the image signs including at least B-lines, and to calculate parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of a B-line coverage percentage and a B-line interval, where the B-line coverage percentage is the percentage of the lung detection area occupied by B-lines and the B-line interval is the distance between adjacent B-lines;
  • a human-computer interaction device connected to the processor for detecting user input and displaying detection results, the detection results including the parameter information.
  • According to a second aspect, an embodiment provides a method for automatically detecting B-lines in lung ultrasound images, including: acquiring at least one frame of lung ultrasound image; identifying image signs of the lung ultrasound image, the image signs including at least B-lines;
  • calculating parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of a B-line coverage percentage and a B-line interval, where the B-line coverage percentage is the percentage of the lung detection area occupied by B-lines and the B-line interval is the distance between adjacent B-lines; and displaying the parameter information.
  • According to a third aspect, an embodiment provides an apparatus for automatically detecting B-lines in lung ultrasound images, including:
  • an acquisition module for acquiring at least one frame of lung ultrasound image;
  • a determination module for determining the lung detection area of the lung ultrasound image;
  • an identification module for identifying, within the lung detection area, image signs of the corresponding lung ultrasound image, the image signs including at least B-lines;
  • a calculation module for calculating parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of a B-line coverage percentage and a B-line interval, where the B-line coverage percentage is the percentage of the lung detection area occupied by B-lines and the B-line interval is the distance between adjacent B-lines;
  • a display module for displaying the parameter information.
  • According to a fourth aspect, an embodiment provides a computer-readable storage medium including a program that can be executed by a processor to implement the method described above.
  • With the above embodiments, the ultrasound imaging device can automatically detect image signs in lung ultrasound images, the image signs including at least B-lines, and can automatically calculate the percentage of the lung detection area occupied by B-lines and/or the distance between B-lines, realizing quantitative analysis of B-lines and improving the intelligence of lung ultrasound examination.
  • FIG. 1 is a schematic structural diagram of an ultrasound imaging device in an embodiment of the present invention;
  • FIG. 2 is a flowchart of a method for automatically detecting B-lines in lung ultrasound images in an embodiment of the present invention;
  • FIG. 3 is a flowchart of a method for automatically detecting B-lines in lung ultrasound images in a specific embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a processor in a specific embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the display effect of displaying a target image and quantitative results in a specific embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of an apparatus for automatically detecting B-lines in lung ultrasound images in an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of another apparatus for automatically detecting B-lines in lung ultrasound images in an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of yet another apparatus for automatically detecting B-lines in lung ultrasound images in an embodiment of the present invention.
  • Unless otherwise specified, "connected" and "coupled" in this application include both direct and indirect connection (coupling).
  • When quantitatively evaluating B-lines, an available method is, for example, to manually count the number of B-lines.
  • When there are few B-lines this poses no great problem, but as their number increases the B-lines merge with one another and become difficult to distinguish, making it difficult to quantitatively evaluate B-lines by their count.
  • Moreover, manual measurement is time-consuming and laborious and relies on operator experience, which is very unfavorable for the promotion and application of lung ultrasound in acute and critical care.
  • Intelligent automatic B-line identification and quantitative analysis tools supporting multiple parameters are expected to improve the efficiency and accuracy of quantitative B-line analysis and allow lung ultrasound to play a greater role in acute and critical care.
  • In the embodiments of the present invention, the ultrasound imaging device recognizes image signs in the acquired lung ultrasound images, the image signs including at least B-lines, and then calculates the number of B-lines, the percentage of the lung detection area occupied by B-lines, and/or the distance between B-lines, completing automatic detection and quantitative analysis of the B-lines.
  • FIG. 1 is a schematic structural diagram of an ultrasound imaging device provided by an embodiment of the present invention.
  • The ultrasound imaging device includes an ultrasound probe 01, a transmitting circuit 02, a receiving circuit 03, a beamforming module 04, a processor 05, and a human-computer interaction device 06; the transmitting circuit 02 and the receiving circuit 03 can be connected to the ultrasound probe 01 through a transmit/receive selection switch 07.
  • The transmitting circuit 02 sends delay-focused transmission pulses of a certain amplitude and polarity to the ultrasound probe 01 through the transmit/receive selection switch 07, exciting the ultrasound probe 01 to emit ultrasonic beams toward target tissue (for example, organs, tissues, or blood vessels in a human or animal body).
  • In the embodiments of the present invention, the ultrasonic beams are emitted toward the lungs.
  • The receiving circuit 03 receives the echoes of the ultrasonic beams through the transmit/receive selection switch 07 to obtain ultrasonic echo signals and sends them to the beamforming module 04; the beamforming module 04 performs focusing delay, weighting, channel summation, and other processing on the ultrasonic echo signals to obtain beamformed ultrasonic echo signals, which are then sent to the processor 05 for related processing to obtain the desired ultrasound images or a video file composed of ultrasound images.
  • The ultrasound probe 01 usually includes an array of multiple transducer elements. Each time ultrasound is transmitted, all of the elements of the ultrasound probe 01, or a subset of them, take part in the transmission; each participating element is excited by the transmission pulse and emits ultrasound, and the waves emitted by these elements superimpose during propagation to form the synthesized ultrasonic beam transmitted toward the scan target, which in the embodiments of the present invention is the ultrasonic beam emitted toward the lungs.
  • The human-computer interaction device 06 is connected to the processor 05.
  • For example, the processor 05 can be connected to the human-computer interaction device 06 through an external input/output port.
  • The human-computer interaction device 06 can detect user input, which may be, for example, control instructions for the ultrasound transmit/receive timing, operation input instructions for editing or annotating the ultrasound image, or other instruction types.
  • The operation instructions obtained when the user edits, annotates, or measures the ultrasound image are used for measurements of the target tissue.
  • The human-computer interaction device 06 may include one or a combination of a keyboard, mouse, scroll wheel, trackball, mobile input device (such as a mobile device with a touch display screen, a mobile phone, and the like), multifunction knob, and so on.
  • Accordingly, the corresponding external input/output port can be a wireless communication module, a wired communication module, or a combination of the two.
  • The external input/output port can also be implemented based on USB, bus protocols such as CAN, and/or wired network protocols.
  • The human-computer interaction device 06 also includes a display, which can display the ultrasound images obtained by the processor 05.
  • While displaying ultrasound images, the display can also provide the user with a graphical interface for human-computer interaction.
  • One or more controlled objects are provided on the graphical interface, and the user can use the human-computer interaction device 06 to input operation instructions to control these objects and thereby perform the corresponding control operations.
  • For example, an icon displayed on the graphical interface can be operated with the human-computer interaction device to perform a specific function, such as annotating an ultrasound image.
  • In practice, the display may be a touch screen.
  • The display in this embodiment may include one display or multiple displays.
  • In the embodiments of the present invention, the processor 05 is used to process the ultrasonic echo signals obtained by the beamforming module 04 to obtain at least one frame of lung ultrasound image; the processor 05 is also used to identify the image signs of the lung ultrasound image.
  • The image signs include at least B-lines, and the parameter information of the lung ultrasound images whose image signs are B-lines is calculated.
  • The parameter information includes at least one of the number of B-lines, the B-line coverage percentage, and the B-line interval.
  • The B-line coverage percentage is the percentage of the lung detection area occupied by B-lines, and the B-line interval is the distance between adjacent B-lines.
  • The human-computer interaction device 06 displays the detection results through the display, the detection results including the parameter information calculated by the processor 05.
  • Based on this, an embodiment of the present invention also provides a method for automatically detecting B-lines in lung ultrasound images.
  • Its flowchart is shown in FIG. 2, and the method may include the following steps:
  • Step 101: Acquire at least one frame of lung ultrasound image, for example, a video file including one or more frames of ultrasound images.
  • The processor 05 acquires at least one frame of lung ultrasound image collected by the ultrasound probe 01 in real time, or the processor 05 may read at least one frame of lung ultrasound image from a storage device.
  • Step 102: Identify the image signs.
  • After acquiring at least one frame of lung ultrasound image, the processor 05 identifies the image signs of each frame, the image signs including at least B-lines. In one embodiment, the processor 05 may, based on a target detection algorithm, identify from the acquired lung ultrasound images only those with B-lines.
  • Step 103: Calculate the parameter information of the B-lines.
  • After identifying the B-lines in the lung ultrasound images, the processor 05 calculates the parameter information of the lung ultrasound images whose image signs are B-lines.
  • The parameter information includes at least one of the B-line coverage percentage and the B-line interval.
  • The B-line coverage percentage is the percentage of the lung detection area occupied by B-lines, and the B-line interval is the distance between adjacent B-lines.
  • When calculating the B-line coverage percentage of a frame, the lung detection area of that frame may be determined first, and the percentage of that area occupied by the identified B-lines is then calculated; the lung detection area may be determined by a deep learning method, or the position of the pleural line may be identified first and the far-field region beyond the pleural line position taken as the lung detection area of the lung ultrasound image.
  • The region occupied by the B-lines refers to the region of all B-lines in the lung ultrasound image and can be determined during B-line recognition.
  • The processor 05 may determine the identified pixels belonging to B-lines and take the region delimited by those pixels as the region occupied by the B-lines.
  • When calculating the B-line interval, the distance between adjacent B-lines at the position of the pleural line can be calculated.
  • Step 104: Display the parameter information.
  • The processor 05 sends the calculated parameter information to the human-computer interaction device 06 for display.
  • The ultrasound imaging device and the method for automatically detecting B-lines in lung ultrasound images provided by the embodiments of the present invention can automatically detect the image signs of lung ultrasound images and calculate the B-line coverage percentage and/or B-line interval of lung ultrasound images whose image signs are B-lines, realizing intelligent B-line identification and multi-parameter quantitative analysis and improving the intelligence of lung ultrasound examination.
  • Calculating the B-line coverage percentage avoids the problem that, when B-lines are numerous, they merge and become indistinguishable so that their count cannot be used for quantitative evaluation; it also overcomes the low efficiency of manual measurement and the susceptibility of detection results to human factors, improving measurement efficiency and the accuracy of the detection results.
  • FIG. 3 provides a specific method for automatically detecting B-lines in lung ultrasound images.
  • In this embodiment, the structure of the processor 05 is shown in FIG. 4 and may include an acquisition unit 51, an image selection unit 52, an image analysis unit 53, a result selection unit 54, and a scoring unit 55.
  • Specifically, the method may include the following steps:
  • Step 201: Acquire at least one frame of lung ultrasound image.
  • The processor 05 acquires, through the acquisition unit 51, at least one frame of lung ultrasound image collected by the ultrasound probe 01 in real time; alternatively, the acquisition unit 51 may read at least one frame of lung ultrasound image from a storage device.
  • Step 202: Screen out the images to be analyzed.
  • After acquiring at least one frame of lung ultrasound image, the processor 05 inputs the lung ultrasound images to the image selection unit 52, which screens out the images to be analyzed.
  • The images to be analyzed are lung ultrasound images with pathological features. Image screening can eliminate useless images, such as images without image information, images without lung signs, and blurred images, improving detection efficiency and reducing false detections.
  • Step 203: Identify the image signs of the images to be analyzed.
  • After the image selection unit 52 screens out the images to be analyzed, they are input to the image analysis unit 53, which identifies the image signs of each frame to be analyzed; the identified image signs contain at least B-lines.
  • Specifically, the image analysis unit 53 first determines the lung detection area of each frame to be analyzed.
  • The image analysis unit 53 may determine the lung detection area of each frame according to a deep learning method: a large number of lung regions are annotated in advance, and a machine is trained to recognize this region through a target detection algorithm.
  • The target detection algorithm may be, for example, the Faster region-based convolutional neural network (Faster RCNN) algorithm.
  • The image analysis unit 53 may also determine the lung detection area of each frame by image processing; for example, it may first identify the position of the pleural line of the image to be analyzed and then take the far-field region beyond the pleural line position as the lung detection area, where the pleural line position can be determined from the recognized bright horizontal line feature in the near field or by means of deep learning.
  • A bat sign sometimes appears in lung ultrasound images and contains the pleural line; therefore, the image analysis unit 53 may also use the bat sign of the image to be analyzed to determine the lung detection area, that is, when a bat sign is recognized, the position of the pleural line can be determined from it.
  • After determining the lung detection area, the image analysis unit 53 identifies image signs within it.
  • Common image signs in lung ultrasound images include A-lines, B-lines, the bat sign, the seashore sign, the lung sliding sign, the shred sign, and so on.
  • The detected image signs may include B-lines, lung consolidation, and the like.
  • The image analysis unit 53 may identify image signs within the lung detection area according to a deep learning method, that is, by annotating a large number of pathologies and training a machine to recognize them through a target detection algorithm.
  • The target detection algorithm may be, for example, the Faster RCNN algorithm.
  • The image analysis unit 53 may also identify image signs within the lung detection area using image processing: since a B-line is a discrete vertical reverberation artifact extending from the pleural line to the bottom of the display without fading, vertical linear features along the sound beam direction can be detected within the lung detection area to obtain the B-lines, where the linear features can be recognized by template matching or similar methods. In practice, B-lines can be divided into single B-lines and diffuse B-lines according to their width.
  • Step 204: Determine the lung ultrasound images with B-lines.
  • After identifying the image signs of each frame to be analyzed, the image analysis unit 53 determines, according to those image signs, the lung ultrasound images with B-lines from among the images to be analyzed.
  • Step 205: Calculate the parameter information of the B-lines.
  • After the lung ultrasound images with B-lines are determined, for each such frame the image analysis unit 53 calculates the B-line coverage percentage and/or the B-line interval from the identified B-lines, realizing quantitative analysis of the B-lines.
  • When calculating the B-line coverage percentage, the image analysis unit 53 may first determine the region occupied by the B-lines and then calculate its percentage of the corresponding lung detection area.
  • The region occupied by the B-lines refers to the region occupied by all B-lines in the lung ultrasound image and can be taken as the region delimited by the positions of the B-line pixels.
  • The B-line pixels can be obtained during B-line recognition. On this basis, when determining the region occupied by the B-lines, the image analysis unit 53 may first determine, within the lung detection area, the identified pixels belonging to B-lines and take the region belonging to B-lines as the region they occupy.
  • When calculating the B-line interval, the image analysis unit 53 may first determine the position of the pleural line of the lung ultrasound image and then calculate, from the identified B-lines, the distance between adjacent B-lines at the pleural line, or alternatively the distance between adjacent B-lines at a preset distance from the pleural line. Calculating the B-line interval can help medical staff assess the condition of the lungs.
  • In practice, the image analysis unit 53 may also count the number of B-lines from the identified B-lines.
  • When evaluating the number of B-lines, each B-line can be clearly distinguished while they are few; as their number grows, however, the B-lines merge and become hard to distinguish, and to obtain a more accurate measure the B-line coverage percentage can be calculated.
  • Therefore, in one embodiment, after identifying the B-lines, the image analysis unit 53 may count the B-lines when each one can be clearly distinguished (few B-lines), calculate the B-line coverage percentage, or do both at the same time; when the B-lines cannot be clearly distinguished (so many that they merge), the B-line coverage percentage can be calculated and used to reflect the number of B-lines.
  • Step 206: Determine the target image.
  • After the parameter information of the B-lines is calculated, the result selection unit 54 of the processor 05 may select at least one frame from the lung ultrasound images as the target image according to a preset rule.
  • For example, the preset rule may be the largest number of B-lines or the largest B-line coverage percentage.
  • The result selection unit 54 may select from the lung ultrasound images the frame with the most B-lines as the target image, or select the frame with the largest B-line coverage percentage as the target image.
  • In practice, the selection criterion for the target image can be set by the user.
  • Step 207: Score the target image.
  • The scoring unit 55 scores the selected at least one frame of target image to obtain a scoring result for each frame; the scoring result reflects the correlation between the target image and the associated condition.
  • The scoring form may be any combination of at least one of numbers, letters, and text, and the score may be determined from one or more of the calculated parameter items.
  • For example, one scoring rule may be: when the target image shows no abnormality or contains no more than two clear B-lines, the score is N or 0; when the target image contains three or more clear B-lines, the score is B1 or 1; when the B-line interval in the target image is smaller than a preset value (diffuse B-lines), the score is B2 or 2; and when the image signs of the target image indicate lung consolidation, the score is C or 3.
  • In practice, the scoring unit 55 can obtain not only the score at a single lung position but also the sum of the scores of the target images corresponding to the scan positions of the lung, that is, the scores of the scan positions are added to obtain a scoring result for the whole lung.
  • Step 208: Display the target image and the quantitative results.
  • The processor 05 may send the selected target image and its corresponding parameter information and scoring result to the human-computer interaction device 06 for display.
  • The human-computer interaction device 06 may also highlight in the target image at least one of the lung detection area, the B-lines, and the annotation lines of the B-line intervals.
  • FIG. 5 is a schematic diagram of the display effect: the dashed line marks the lung detection area, the vertical solid lines within it are the detected B-lines, and the upper-right region shows the quantitative analysis results of the target image.
  • The quantitative analysis results may include the scoring result, the number of B-lines, and the B-line coverage percentage.
  • The table in the lower-right region lists the B-line intervals.
  • The method for automatically detecting B-lines in lung ultrasound images provided by this embodiment first screens the acquired lung ultrasound images for images to be analyzed that have pathological features, eliminating useless images, improving detection efficiency, and lowering the false-detection rate; it then determines the lung detection area of each frame to be analyzed and identifies image signs within it, the image signs including at least B-lines; it next determines the lung ultrasound images with B-lines according to the identified image signs and calculates, for each of them, at least one of the B-line coverage percentage, B-line interval, and B-line count, realizing automatic detection and quantitative analysis of B-lines.
  • At least one frame may then be selected from the lung ultrasound images as the target image according to the calculated B-line parameters, the target image may be scored, and finally the target image with its corresponding parameter information and scoring result is displayed.
  • This method overcomes the low efficiency of manual measurement, avoids the influence of human factors on the detection results, improves the intelligence of lung ultrasound examination, and also improves measurement efficiency and the accuracy of the detection results.
  • In the above method, during image screening the processor 05 screens out images to be analyzed that have pathological features, then identifies the image signs of the images to be analyzed, and then determines the lung ultrasound images with B-lines according to the identified image signs.
  • Alternatively, the processor 05 may screen out only the images with B-lines during image screening, that is, directly identify the lung ultrasound images with B-lines from the acquired at least one frame of lung ultrasound image.
  • For example, the lung ultrasound images with B-lines may be identified from the acquired lung ultrasound images based on a target detection algorithm.
  • The foregoing embodiment is described taking as an example determining the target image and scoring it.
  • In that case, the human-computer interaction device 06 displays the target image and its corresponding parameter information and scoring result. In practice, the target image need not be scored.
  • After the processor calculates the parameter information and selects at least one frame of lung ultrasound image as the target image according to the preset rule, the human-computer interaction device 06 may directly display the target image and its corresponding parameter information.
  • In the above method, detection is fully automatic; in practice, a combination of automatic and manual approaches may also be used, for example automatically identifying B-lines and manually marking lung ultrasound images without B-lines.
  • Specifically, the image analysis unit 53 of the processor 05 may determine, from the identified image signs of the images to be analyzed, the lung ultrasound images whose image signs are not B-lines; the human-computer interaction device 06 detects the user's operation instruction to mark such images and sends the instruction to the processor 05, which then marks the lung ultrasound images whose image signs are not B-lines according to the instruction.
  • As shown in FIG. 6, an apparatus for automatically detecting B-lines in lung ultrasound images includes an acquisition module 61, a determination module 62, an identification module 63, a calculation module 64, and a display module 65.
  • The acquisition module 61 is used to acquire at least one frame of lung ultrasound image.
  • The determination module 62 is used to determine the lung detection area of the lung ultrasound image acquired by the acquisition module 61; for example, the determination module 62 may determine the lung detection area according to a deep learning method, or it may first identify the pleural line position of the lung ultrasound image and then take the far-field region beyond the pleural line position as the lung detection area of that frame.
  • The identification module 63 is configured to identify, within the lung detection area determined by the determination module 62, the image signs of the corresponding lung ultrasound image, the image signs including at least B-lines.
  • The calculation module 64 is used to calculate the parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of the B-line coverage percentage and the B-line interval, where the B-line coverage percentage is the percentage of the lung detection area occupied by B-lines and the B-line interval is the distance between adjacent B-lines.
  • When the parameter information is the B-line interval, the calculation module 64 is specifically configured to determine the pleural line position of the lung ultrasound image whose image signs are B-lines and then calculate, from the identified B-lines, the distance between adjacent B-lines at the pleural line.
  • When the parameter information is the B-line coverage percentage, the calculation module 64 is used to calculate, from the identified B-lines, the percentage of the corresponding lung detection area occupied by them; specifically, the calculation module 64 may first determine the identified region belonging to B-lines, take it as the region occupied by the B-lines, and then calculate its percentage of the corresponding lung detection area.
  • The display module 65 is used to display the parameter information calculated by the calculation module 64.
  • As shown in FIG. 7, another apparatus includes an acquisition module 61, a determination module 62, an identification module 63, a calculation module 64, a display module 65, a selection module 66, and a scoring module 67.
  • The functions of the acquisition module 61, determination module 62, identification module 63, and calculation module 64 correspond one-to-one to those in FIG. 6.
  • The selection module 66 is configured to select at least one frame from the lung ultrasound images acquired by the acquisition module 61 as the target image according to a preset rule; for example, one frame may be selected as the target image, and the calculation module 64 may also count the B-lines.
  • The preset rule may be the largest number of B-lines or the largest B-line coverage percentage.
  • In that case, the selection module 66 may select, according to the results of the calculation module 64, the frame with the most B-lines from the lung ultrasound images acquired by the acquisition module 61 as the target image, or select the frame with the largest B-line coverage percentage as the target image.
  • The display module 65 is used to display the target image determined by the selection module 66 and its corresponding parameter information.
  • After the target image is determined, the scoring module 67 may also score the selected at least one frame of target image to obtain a scoring result, which reflects the correlation between the target image and the associated condition.
  • The display module 65 can then display the target image together with its corresponding parameter information and scoring result.
  • The display module 65 may also highlight in the target image at least one of the lung detection area, the B-lines, and the annotation lines of the B-line intervals.
  • As shown in FIG. 8, yet another apparatus includes an acquisition module 61, a determination module 62, an identification module 63, a calculation module 64, a display module 65, a screening module 68, a non-B-line determination module 69, and a marking module 60.
  • The acquisition module 61 is used to acquire at least one frame of lung ultrasound image.
  • The screening module 68 is configured to screen out the images to be analyzed, i.e., lung ultrasound images with pathological features, from the lung ultrasound images acquired by the acquisition module 61.
  • Unlike the apparatus of FIG. 6, in the apparatus of FIG. 8 the determination module 62 is used to determine the lung detection area of the images to be analyzed screened out by the screening module 68, and the identification module 63 is used to identify the image signs of the images to be analyzed within the lung detection area and to determine, according to those signs, the lung ultrasound images with B-lines from among them.
  • The function of the calculation module 64 is the same as in the apparatus of FIG. 6.
  • The non-B-line determination module 69 is configured to determine, from the image signs of the images to be analyzed identified by the identification module 63, the lung ultrasound images whose image signs are not B-lines.
  • The marking module 60 is configured to receive an operation instruction to mark the lung ultrasound images whose image signs are not B-lines and to mark, according to that instruction, the images so determined by the non-B-line determination module 69.
  • This apparatus may also include the selection module 66 and scoring module 67 of the apparatus of FIG. 7; in that case, the selection module 66 is used to select at least one frame from the images to be analyzed screened out by the screening module 68 as the target image according to the preset rule.
  • Any tangible, non-transitory computer-readable storage medium may be used, including magnetic storage devices (hard disks, floppy disks, and the like), optical storage devices (CD-ROM, DVD, Blu-ray discs, and the like), flash memory, and/or the like.
  • These computer program instructions may be loaded onto a general-purpose computer, special-purpose computer, or other programmable data processing device to form a machine, such that the instructions executed on the computer or other programmable data processing apparatus create means for implementing the specified functions.
  • The computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory form an article of manufacture including means that implement the specified functions.
  • The computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operation steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the specified functions.
  • "Coupled" refers to physical, electrical, magnetic, optical, communicative, functional, and/or any other connection.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

An ultrasound imaging device and a method, apparatus, and storage medium for detecting B-lines. The ultrasound imaging device includes an ultrasound probe (01), a transmitting circuit (02) that excites the ultrasound probe (01) to emit ultrasonic beams toward the lungs, a receiving circuit (03) and a beamforming module (04) that receive echoes of the ultrasonic beams to obtain ultrasonic echo signals, a processor (05), and a human-computer interaction device (06). The processor (05) processes the ultrasonic echo signals to obtain at least one frame of lung ultrasound image, identifies image signs of the lung ultrasound image (including at least B-lines), calculates the B-line coverage percentage and/or B-line interval of lung ultrasound images whose image signs are B-lines, and sends the results to the human-computer interaction device (06) for display. The device and method realize quantitative analysis of B-lines and improve the intelligence of lung ultrasound examination.

Description

Ultrasound imaging device and method, apparatus and storage medium for detecting B-lines
Technical Field
The present invention relates to the technical field of ultrasound imaging, and in particular to an ultrasound imaging device and a method, apparatus, and storage medium for detecting B-lines.
Background
Ultrasound imaging is a medical imaging technology for imaging organs and soft tissue in the human body and plays an important role in clinical medicine. Lung ultrasound has great application value in identifying and diagnosing pulmonary exudative lesions; it has good sensitivity and specificity for the diagnosis of many lung diseases and can even replace chest CT in diagnosing lung diseases in emergency and critical care medicine, saving time and cost in clinical practice and helping to save patients' lives in time.
Unlike ultrasound examination of other parts of the body, lung ultrasound imaging in most cases does not reflect direct images of lung tissue but rather a series of artifacts, which can be defined according to the display characteristics of lung ultrasound images. When a normally aerated lung is examined, ultrasound can only detect the pleura; however, as the air content decreases, the echo dropout effect between the lung and surrounding tissue diminishes, and ultrasound can to some extent reflect images of deeper regions, producing typical image signs.
Technical Problem
In recent years, lung ultrasound has been increasingly widely used and valued in critical and emergency care, and recognizing typical image signs helps medical staff rapidly diagnose lung diseases. However, current lung ultrasound examination mainly relies on manual measurement, which is time-consuming and laborious; the results are largely affected by the operator's level of experience, and the degree of intelligence is low.
Technical Solution
The present invention mainly provides an ultrasound imaging device and a method, apparatus, and storage medium for detecting B-lines, so as to improve the intelligence of lung ultrasound examination.
According to a first aspect, an embodiment provides an ultrasound imaging device, including:
an ultrasound probe;
a transmitting circuit for exciting the ultrasound probe to emit ultrasonic beams toward the lungs;
a receiving circuit and a beamforming module for receiving echoes of the ultrasonic beams to obtain ultrasonic echo signals;
a processor for processing the ultrasonic echo signals to obtain at least one frame of lung ultrasound image, the processor being further configured to identify image signs of the lung ultrasound image, the image signs including at least B-lines, and to calculate parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of a B-line coverage percentage and a B-line interval, the B-line coverage percentage being the percentage of the lung detection area occupied by B-lines, and the B-line interval being the distance between adjacent B-lines;
a human-computer interaction device connected to the processor for detecting user input and displaying detection results, the detection results including the parameter information.
According to a second aspect, an embodiment provides a method for automatically detecting B-lines in lung ultrasound images, including:
acquiring at least one frame of lung ultrasound image;
identifying image signs of the lung ultrasound image, the image signs including at least B-lines;
calculating parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of a B-line coverage percentage and a B-line interval, the B-line coverage percentage being the percentage of the lung detection area occupied by B-lines, and the B-line interval being the distance between adjacent B-lines;
displaying the parameter information.
According to a third aspect, an embodiment provides an apparatus for automatically detecting B-lines in lung ultrasound images, including:
an acquisition module for acquiring at least one frame of lung ultrasound image;
a determination module for determining the lung detection area of the lung ultrasound image;
an identification module for identifying, within the lung detection area, image signs of the corresponding lung ultrasound image, the image signs including at least B-lines;
a calculation module for calculating parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of a B-line coverage percentage and a B-line interval, the B-line coverage percentage being the percentage of the lung detection area occupied by B-lines, and the B-line interval being the distance between adjacent B-lines;
a display module for displaying the parameter information.
According to a fourth aspect, an embodiment provides a computer-readable storage medium including a program that can be executed by a processor to implement the method described above.
Beneficial Effects
With the ultrasound imaging device and the method, apparatus, and storage medium for detecting B-lines of the above embodiments, the ultrasound imaging device can automatically detect image signs in lung ultrasound images, the image signs including at least B-lines, and can automatically calculate the percentage of the lung detection area occupied by B-lines and/or the distance between B-lines, thereby realizing quantitative analysis of B-lines and improving the intelligence of lung ultrasound examination.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an ultrasound imaging device in an embodiment of the present invention;
FIG. 2 is a flowchart of a method for automatically detecting B-lines in lung ultrasound images in an embodiment of the present invention;
FIG. 3 is a flowchart of a method for automatically detecting B-lines in lung ultrasound images in a specific embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a processor in a specific embodiment of the present invention;
FIG. 5 is a schematic diagram of the display effect of displaying a target image and quantitative results in a specific embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for automatically detecting B-lines in lung ultrasound images in an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of another apparatus for automatically detecting B-lines in lung ultrasound images in an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of yet another apparatus for automatically detecting B-lines in lung ultrasound images in an embodiment of the present invention.
Detailed Description of the Invention
The present invention is further described in detail below through specific embodiments in conjunction with the drawings. The features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Unless otherwise specified, "connected" and "coupled" in this application include both direct and indirect connection (coupling).
When the air content in the lung decreases, exudate, transudate, collagen, blood, and the like increase the lung density, the echo dropout effect between the lung and surrounding tissue diminishes, and ultrasound can to some extent reflect images of deeper regions. This phenomenon produces vertical mixed echoes called B-lines. In lung ultrasound images, a B-line appears as a discrete vertical reverberation artifact that arises at the pleural line and extends to the bottom of the screen without fading, moving in synchrony with lung sliding.
When quantitatively evaluating B-lines, an available method is, for example, to manually count them. When there are few B-lines this poses no great problem, but as their number increases the B-lines merge with one another and become difficult to distinguish, making it difficult to quantitatively evaluate B-lines by their count. Moreover, manual measurement is time-consuming and laborious and depends on operator experience, which is very unfavorable for the promotion and application of lung ultrasound in acute and critical care. Intelligent automatic B-line identification and quantitative analysis tools supporting multiple parameters are expected to improve the efficiency and accuracy of quantitative B-line analysis and allow lung ultrasound to play a greater role in acute and critical care.
In the embodiments of the present invention, the ultrasound imaging device recognizes image signs in the acquired lung ultrasound images, the image signs including at least B-lines, and then calculates the number of B-lines, the percentage of the lung detection area occupied by B-lines, and/or the distance between B-lines, completing automatic detection and quantitative analysis of B-lines.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of an ultrasound imaging device provided by an embodiment of the present invention. The ultrasound imaging device includes an ultrasound probe 01, a transmitting circuit 02, a receiving circuit 03, a beamforming module 04, a processor 05, and a human-computer interaction device 06; the transmitting circuit 02 and the receiving circuit 03 may be connected to the ultrasound probe 01 through a transmit/receive selection switch 07.
During ultrasound imaging, the transmitting circuit 02 sends delay-focused transmission pulses of a certain amplitude and polarity to the ultrasound probe 01 through the transmit/receive selection switch 07, exciting the ultrasound probe 01 to emit ultrasonic beams toward target tissue (for example, organs, tissues, or blood vessels in a human or animal body); in the embodiments of the present invention, the beams are emitted toward the lungs. After a certain delay, the receiving circuit 03 receives the echoes of the ultrasonic beams through the transmit/receive selection switch 07 to obtain ultrasonic echo signals and sends them to the beamforming module 04; the beamforming module 04 performs focusing delay, weighting, channel summation, and other processing on the ultrasonic echo signals to obtain beamformed ultrasonic echo signals, which are then sent to the processor 05 for related processing to obtain the desired ultrasound images or a video file composed of ultrasound images.
The ultrasound probe 01 usually includes an array of multiple transducer elements. Each time ultrasound is transmitted, all of the elements of the ultrasound probe 01, or a subset of them, take part in the transmission. Each participating element is excited by the transmission pulse and emits ultrasound; the waves emitted by these elements superimpose during propagation to form the synthesized ultrasonic beam transmitted toward the scan target, which in the embodiments of the present invention is the ultrasonic beam emitted toward the lungs.
The human-computer interaction device 06 is connected to the processor 05; for example, the processor 05 may be connected to the human-computer interaction device 06 through an external input/output port. The human-computer interaction device 06 can detect user input, which may be, for example, control instructions for the ultrasound transmit/receive timing, operation input instructions for editing or annotating ultrasound images, or other instruction types. The operation instructions obtained when the user edits, annotates, or measures an ultrasound image are generally used for measurements of the target tissue. The human-computer interaction device 06 may include one or a combination of a keyboard, mouse, scroll wheel, trackball, mobile input device (such as a mobile device with a touch display screen, a mobile phone, and the like), multifunction knob, and so on; accordingly, the corresponding external input/output port may be a wireless communication module, a wired communication module, or a combination of the two, and may also be implemented based on USB, bus protocols such as CAN, and/or wired network protocols.
The human-computer interaction device 06 also includes a display, which can display the ultrasound images obtained by the processor 05. In addition, while displaying ultrasound images, the display can provide the user with a graphical interface for human-computer interaction; one or more controlled objects are provided on the graphical interface, and the user can use the human-computer interaction device 06 to input operation instructions to control these objects and thereby perform the corresponding control operations. For example, an icon displayed on the graphical interface can be operated with the human-computer interaction device to perform a specific function, such as annotating an ultrasound image. In practice, the display may be a touch screen, and the display in this embodiment may include one display or multiple displays.
In the embodiments of the present invention, the processor 05 is configured to process the ultrasonic echo signals obtained by the beamforming module 04 to obtain at least one frame of lung ultrasound image. The processor 05 is further configured to identify image signs of the lung ultrasound image, the image signs including at least B-lines, and to calculate parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of the number of B-lines, the B-line coverage percentage, and the B-line interval, where the B-line coverage percentage is the percentage of the lung detection area occupied by B-lines and the B-line interval is the distance between adjacent B-lines. The human-computer interaction device 06 displays the detection results through the display, the detection results including the parameter information calculated by the processor 05.
Based on the ultrasound imaging device of the above embodiment, an embodiment of the present invention further provides a method for automatically detecting B-lines in lung ultrasound images; its flowchart is shown in FIG. 2, and the method may include the following steps:
Step 101: Acquire at least one frame of lung ultrasound image, for example, a video file including one or more frames of ultrasound images.
The processor 05 acquires at least one frame of lung ultrasound image collected by the ultrasound probe 01 in real time, or the processor 05 may read at least one frame of lung ultrasound image from a storage device.
Step 102: Identify the image signs.
After acquiring at least one frame of lung ultrasound image, the processor 05 identifies the image signs of each frame, the image signs including at least B-lines. In one embodiment, the processor 05 may, based on a target detection algorithm, identify from the acquired lung ultrasound images only those with B-lines.
Step 103: Calculate the parameter information of the B-lines.
After identifying the B-lines in the lung ultrasound images, the processor 05 calculates the parameter information of the lung ultrasound images whose image signs are B-lines; the parameter information includes at least one of the B-line coverage percentage and the B-line interval, where the B-line coverage percentage is the percentage of the lung detection area occupied by B-lines and the B-line interval is the distance between adjacent B-lines.
For example, when calculating the B-line coverage percentage of a frame of lung ultrasound image, the lung detection area of that frame may be determined first, and then the percentage of that lung detection area occupied by the identified B-lines is calculated. The lung detection area may be determined by a deep learning method, or the position of the pleural line of the lung ultrasound image may be identified first and the far-field region beyond the pleural line position taken as the lung detection area. The region occupied by the B-lines refers to the region of all B-lines in the lung ultrasound image, which can be determined during B-line recognition: specifically, the processor 05 may determine the identified pixels belonging to B-lines and take the region delimited by those pixels as the region occupied by the B-lines.
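As a minimal sketch of this ratio, assuming the recognition step yields a boolean B-line pixel mask and a boolean lung-detection-area mask of the same shape (the mask names are illustrative, not from the patent):

```python
import numpy as np

def b_line_coverage_percent(b_line_mask, lung_area_mask):
    """Percentage of the lung detection area covered by B-line pixels."""
    # count only B-line pixels that fall inside the lung detection area
    b_in_area = np.logical_and(b_line_mask, lung_area_mask)
    area_px = np.count_nonzero(lung_area_mask)
    if area_px == 0:
        return 0.0
    return 100.0 * np.count_nonzero(b_in_area) / area_px
```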
For example, when calculating the B-line interval in a frame of lung ultrasound image, the position of the pleural line of that frame may be determined first, and then the distance between adjacent B-lines at the pleural line is calculated from the identified B-lines; alternatively, the distance between adjacent B-lines at a preset distance from the pleural line may be calculated.
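A sketch of the interval computation, assuming the B-lines detected in the image row at (or at a preset offset below) the pleural line are represented by their lateral center positions in pixels and that the lateral pixel pitch in millimetres is known; both representations are illustrative assumptions:

```python
import numpy as np

def b_line_intervals_mm(b_line_centers_px, pixel_pitch_mm):
    """Distances between laterally adjacent B-lines, in millimetres."""
    centers = np.sort(np.asarray(b_line_centers_px, dtype=float))
    return np.diff(centers) * pixel_pitch_mm

# e.g. three B-lines centred at columns 40, 95 and 180 with 0.3 mm pixels:
# b_line_intervals_mm([40, 95, 180], 0.3) -> array([16.5, 25.5])
```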
Step 104: Display the parameter information.
The processor 05 sends the calculated parameter information to the human-computer interaction device 06 for display.
The ultrasound imaging device and the method for automatically detecting B-lines in lung ultrasound images provided by the embodiments of the present invention can automatically detect the image signs of lung ultrasound images and calculate the B-line coverage percentage and/or B-line interval of lung ultrasound images whose image signs are B-lines, realizing intelligent B-line identification and multi-parameter quantitative analysis and improving the intelligence of lung ultrasound examination. Calculating the B-line coverage percentage avoids the problem that, when B-lines are numerous, they merge and become indistinguishable so that their count cannot be used for quantitative evaluation; it also overcomes the low efficiency of manual measurement and the susceptibility of detection results to human factors, improving measurement efficiency and the accuracy of the results.
To make the purpose of the present invention clearer, further examples are given on the basis of the above embodiments.
Referring to FIG. 3, FIG. 3 provides a specific method for automatically detecting B-lines in lung ultrasound images. In this embodiment, the structure of the processor 05 is shown in FIG. 4 and may include an acquisition unit 51, an image selection unit 52, an image analysis unit 53, a result selection unit 54, and a scoring unit 55. Specifically, the method may include the following steps:
Step 201: Acquire at least one frame of lung ultrasound image.
The processor 05 acquires, through the acquisition unit 51, at least one frame of lung ultrasound image collected by the ultrasound probe 01 in real time; alternatively, the acquisition unit 51 may read at least one frame of lung ultrasound image from a storage device.
Step 202: Screen out the images to be analyzed.
After acquiring at least one frame of lung ultrasound image, the processor 05 inputs the images to the image selection unit 52, which screens out the images to be analyzed, i.e., lung ultrasound images with pathological features. Image screening can eliminate useless images, such as images without image information, images without lung signs, and blurred images, improving detection efficiency and reducing false detections.
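A crude sketch of such screening, assuming grayscale frames and treating near-empty or very low-contrast frames as useless; the thresholds are illustrative assumptions, not values from the patent:

```python
import numpy as np

def keep_for_analysis(frame, min_mean=10.0, min_std=8.0):
    """Drop frames with almost no image information or very low contrast."""
    f = np.asarray(frame, dtype=float)
    return f.mean() > min_mean and f.std() > min_std
```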
Step 203: Identify the image signs of the images to be analyzed.
After the image selection unit 52 screens out the images to be analyzed, they are input to the image analysis unit 53, which identifies the image signs of each frame to be analyzed; the identified image signs contain at least B-lines.
Specifically, the image analysis unit 53 first determines the lung detection area of each frame to be analyzed. In one embodiment, the image analysis unit 53 may determine the lung detection area according to a deep learning method: a large number of lung regions are annotated in advance, and a machine is trained to recognize this region through a target detection algorithm such as the Faster region-based convolutional neural network (Faster RCNN) algorithm. In another embodiment, the image analysis unit 53 may determine the lung detection area through image processing; for example, it may first identify the position of the pleural line in the image to be analyzed and then take the far-field region beyond the pleural line position as the lung detection area, where the pleural line position can be determined from the recognized bright horizontal line feature in the near field or by means of deep learning. In addition, a bat sign sometimes appears in lung ultrasound images and contains the pleural line; therefore, the image analysis unit 53 may also use the bat sign of the image to be analyzed to determine the lung detection area, that is, when a bat sign is recognized, the position of the pleural line can be determined from it.
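A minimal sketch of the image-processing route, taking the pleural line as the brightest horizontal band within a near-field depth window of a grayscale B-mode frame; the window bounds and smoothing width are illustrative assumptions:

```python
import numpy as np

def find_pleural_line_row(frame, near_field_rows=(20, 120)):
    """Row index of the pleural-line candidate.

    frame: 2-D grayscale B-mode image, rows = depth, columns = lateral position.
    The pleural line is approximated as the brightest horizontal line
    inside the near-field window.
    """
    top, bottom = near_field_rows
    profile = frame[top:bottom].mean(axis=1)                       # brightness per depth row
    profile = np.convolve(profile, np.ones(5) / 5.0, mode="same")  # light smoothing
    return top + int(np.argmax(profile))
```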
After determining the lung detection area, the image analysis unit 53 identifies image signs within it. Common image signs in lung ultrasound images include A-lines, B-lines, the bat sign, the seashore sign, the lung sliding sign, the shred sign, and so on; the detected image signs may include B-lines, lung consolidation, and the like. In one embodiment, the image analysis unit 53 may identify image signs within the lung detection area according to a deep learning method, that is, by annotating a large number of pathologies and training a machine to recognize them through a target detection algorithm, for example the Faster RCNN algorithm. In another embodiment, the image analysis unit 53 may identify image signs within the lung detection area using image processing: since a B-line is a discrete vertical reverberation artifact extending from the pleural line to the bottom of the display screen without fading, vertical linear features along the sound beam direction can be detected within the lung detection area to obtain the B-lines, where the linear features can be recognized by template matching or similar methods. In practice, B-lines can be divided into single B-lines and diffuse B-lines according to their width.
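The following sketches that image-processing route under simple assumptions: B-line candidates are columns whose mean brightness below the pleural line stands out from the background, contiguous candidate columns are grouped into one line, and the lateral width separates single from diffuse B-lines. The threshold and the width cut-off are illustrative, not values from the patent.

```python
import numpy as np

def detect_b_line_columns(frame, pleural_row, rel_thresh=1.5, diffuse_width_px=25):
    """Detect vertical B-line-like features below the pleural line.

    Returns a list of (start_col, end_col, kind) tuples, where kind is
    'single' or 'diffuse' depending on the lateral width of the feature.
    """
    region = np.asarray(frame, dtype=float)[pleural_row:, :]
    col_mean = region.mean(axis=0)                    # mean brightness per column
    candidates = col_mean > rel_thresh * np.median(col_mean)

    lines, start = [], None
    for col, is_b in enumerate(candidates):
        if is_b and start is None:
            start = col                               # a bright run begins
        elif not is_b and start is not None:
            width = col - start                       # the run ended at col - 1
            kind = "diffuse" if width >= diffuse_width_px else "single"
            lines.append((start, col - 1, kind))
            start = None
    if start is not None:                             # run reaching the image edge
        width = len(candidates) - start
        kind = "diffuse" if width >= diffuse_width_px else "single"
        lines.append((start, len(candidates) - 1, kind))
    return lines
```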
Step 204: Determine the lung ultrasound images with B-lines.
After identifying the image signs of each frame to be analyzed, the image analysis unit 53 determines, according to those image signs, the lung ultrasound images with B-lines from among the images to be analyzed.
Step 205: Calculate the parameter information of the B-lines.
After the lung ultrasound images with B-lines are determined, for each such frame the image analysis unit 53 calculates the B-line coverage percentage and/or the B-line interval from the identified B-lines, realizing quantitative analysis of the B-lines. When calculating the B-line coverage percentage, the image analysis unit 53 may first determine the region occupied by the B-lines and then calculate its percentage of the corresponding lung detection area. The region occupied by the B-lines refers to the region occupied by all B-lines in the lung ultrasound image and can be taken as the region delimited by the positions of the B-line pixels, which can be obtained during B-line recognition. On this basis, when determining the region occupied by the B-lines, the image analysis unit 53 may first determine, within the lung detection area, the identified pixels belonging to B-lines and take the region belonging to B-lines as the region they occupy.
When calculating the B-line interval, the image analysis unit 53 may first determine the position of the pleural line of the lung ultrasound image and then calculate, from the identified B-lines, the distance between adjacent B-lines at the pleural line, or alternatively the distance between adjacent B-lines at a preset distance from the pleural line. Calculating the B-line interval can help medical staff assess the condition of the lungs.
In practice, the image analysis unit 53 may also count the number of B-lines from the identified B-lines. When evaluating the number of B-lines, each B-line can be clearly distinguished while they are few, but as their number grows the B-lines merge and become hard to distinguish; in that case, to obtain a more accurate measure of the B-lines, the B-line coverage percentage can be calculated. Therefore, in one embodiment, after identifying the B-lines, the image analysis unit 53 may count the B-lines when each one can be clearly distinguished (few B-lines), calculate the B-line coverage percentage, or do both at the same time; when the B-lines cannot be clearly distinguished (so many that they merge), the B-line coverage percentage can be calculated and used to reflect the number of B-lines.
Step 206: Determine the target image.
After the image analysis unit 53 calculates the parameter information of the B-lines, the result selection unit 54 of the processor 05 may select at least one frame from the lung ultrasound images as the target image according to a preset rule. For example, in a specific embodiment, the preset rule may be the largest number of B-lines or the largest B-line coverage percentage, and the result selection unit 54 selects from the lung ultrasound images the frame with the most B-lines, or the frame with the largest B-line coverage percentage, as the target image. In practice, the selection criterion for the target image can be set by the user.
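A sketch of that selection, assuming each analyzed frame carries its computed parameters in a small record; the record type and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class FrameResult:
    frame_index: int
    b_line_count: int
    coverage_percent: float

def select_target(results, rule="max_coverage"):
    """Pick the target frame by the preset rule (most B-lines or largest coverage)."""
    if rule == "max_count":
        return max(results, key=lambda r: r.b_line_count)
    return max(results, key=lambda r: r.coverage_percent)
```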
Step 207: Score the target image.
After the result selection unit 54 determines the target image, the scoring unit 55 scores the selected at least one frame of target image to obtain a scoring result for each frame; the scoring result reflects the correlation between the target image and the associated condition.
The scoring form may be any combination of at least one of numbers, letters, and text, and the score may be determined from one or more of the calculated parameter items. For example, one scoring rule may be: when the target image shows no abnormality or contains no more than two clear B-lines, the score is N or 0; when the target image contains three or more clear B-lines, the score is B1 or 1; when the B-line interval in the target image is smaller than a preset value (diffuse B-lines), the score is B2 or 2; and when the image signs of the target image indicate lung consolidation, the score is C or 3.
In practice, the scoring unit 55 can obtain not only the score at a single lung position but also the sum of the scores of the target images corresponding to each scan position of the lung, that is, the scores of the scan positions are added to obtain a scoring result for the whole lung.
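A direct sketch of this example scoring rule and of the whole-lung sum, assuming the per-frame inputs below have already been extracted; the precedence of the conditions and the preset diffuse-interval value are illustrative assumptions:

```python
from typing import Iterable

def score_frame(clear_b_lines, min_interval_mm, has_consolidation,
                diffuse_interval_mm=3.0):
    """Example rule: 0 = N (normal), 1 = B1, 2 = B2 (diffuse), 3 = C (consolidation)."""
    if has_consolidation:
        return 3          # C: image signs indicate lung consolidation
    if min_interval_mm is not None and min_interval_mm < diffuse_interval_mm:
        return 2          # B2: B-line interval below the preset value
    if clear_b_lines >= 3:
        return 1          # B1: three or more clear B-lines
    return 0              # N: no abnormality, or at most two clear B-lines

def whole_lung_score(position_scores: Iterable[int]) -> int:
    """Add the per-scan-position scores to obtain the whole-lung result."""
    return sum(position_scores)
```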
Step 208: Display the target image and the quantitative results.
After obtaining the scoring result of the target image, the processor 05 may send the selected target image and its corresponding parameter information and scoring result to the human-computer interaction device 06 for display. In one embodiment, the human-computer interaction device 06 may also highlight in the target image at least one of the lung detection area, the B-lines, and the annotation lines of the B-line intervals.
FIG. 5 is a schematic diagram of such a display: the dashed line marks the lung detection area, the vertical solid lines within it are the detected B-lines, and the upper-right region shows the quantitative analysis results of the target image, which may include the scoring result, the number of B-lines, and the B-line coverage percentage; the table in the lower-right region lists the B-line intervals.
The method for automatically detecting B-lines in lung ultrasound images provided by this embodiment first screens the acquired lung ultrasound images for images to be analyzed that have pathological features, eliminating useless images and thereby improving detection efficiency and lowering the false-detection rate; it then determines the lung detection area of each frame to be analyzed and identifies image signs within it, the image signs including at least B-lines; it next determines the lung ultrasound images with B-lines according to the identified image signs and calculates, for each of them, at least one of the B-line coverage percentage, B-line interval, and B-line count, realizing automatic detection and quantitative analysis of B-lines. Afterwards, at least one frame may be selected from the lung ultrasound images as the target image according to the calculated B-line parameters, the target image may be scored, and finally the target image with its corresponding parameter information and scoring result is displayed, giving a unified, intuitive quantitative result that helps medical staff judge the patient's lung condition together with other examination results. The method overcomes the low efficiency of manual measurement, avoids the influence of human factors on the detection results, improves the intelligence of lung ultrasound examination, and also improves measurement efficiency and the accuracy of the detection results.
In the method of the above embodiment, during image screening the processor 05 screens out images to be analyzed that have pathological features, then identifies the image signs of the images to be analyzed, and then determines the lung ultrasound images with B-lines according to the identified image signs. Differently, in another specific embodiment, the processor 05 may screen out only the images with B-lines during image screening, that is, directly identify the lung ultrasound images with B-lines from the acquired at least one frame of lung ultrasound image, for example based on a target detection algorithm.
The above embodiment is described taking as an example determining the target image and scoring it, with the human-computer interaction device 06 displaying the target image together with its parameter information and scoring result. In practice, the target image need not be scored: after the processor calculates the parameter information and selects at least one frame as the target image according to the preset rule, the human-computer interaction device 06 directly displays the target image and its corresponding parameter information.
In the method of the above embodiment, detection is implemented fully automatically; in practice, a combination of automatic and manual approaches may also be used, for example automatically identifying B-lines and manually marking lung ultrasound images without B-lines. Specifically, the image analysis unit 53 of the processor 05 may determine, from the identified image signs of the images to be analyzed, the lung ultrasound images whose image signs are not B-lines; the human-computer interaction device 06 detects the user's operation instruction to mark such images and sends the instruction to the processor 05, which then marks the lung ultrasound images whose image signs are not B-lines according to the instruction.
As shown in FIG. 6, an apparatus for automatically detecting B-lines in lung ultrasound images is provided, including an acquisition module 61, a determination module 62, an identification module 63, a calculation module 64, and a display module 65. The acquisition module 61 acquires at least one frame of lung ultrasound image. The determination module 62 determines the lung detection area of the lung ultrasound image acquired by the acquisition module 61; for example, it may do so by a deep learning method, or it may first identify the pleural line position of the lung ultrasound image and then take the far-field region beyond the pleural line position as the lung detection area of that frame. The identification module 63 identifies, within the lung detection area determined by the determination module 62, the image signs of the corresponding lung ultrasound image, the image signs including at least B-lines. The calculation module 64 calculates the parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of the B-line coverage percentage and the B-line interval, where the B-line coverage percentage is the percentage of the lung detection area occupied by B-lines and the B-line interval is the distance between adjacent B-lines. When the parameter information is the B-line interval, the calculation module 64 determines the pleural line position of the lung ultrasound image whose image signs are B-lines and then calculates, from the identified B-lines, the distance between adjacent B-lines at the pleural line. When the parameter information is the B-line coverage percentage, the calculation module 64 calculates, from the identified B-lines, the percentage of the corresponding lung detection area occupied by them; specifically, it may first determine the identified region belonging to B-lines, take it as the region occupied by the B-lines, and then calculate its percentage of the corresponding lung detection area. The display module 65 displays the parameter information calculated by the calculation module 64.
Based on FIG. 6, as shown in FIG. 7, another apparatus for automatically detecting B-lines in lung ultrasound images is provided, including an acquisition module 61, a determination module 62, an identification module 63, a calculation module 64, a display module 65, a selection module 66, and a scoring module 67. The functions of the acquisition module 61, determination module 62, identification module 63, and calculation module 64 correspond one-to-one to those in FIG. 6. The selection module 66 selects at least one frame from the lung ultrasound images acquired by the acquisition module 61 as the target image according to a preset rule; for example, one frame may be selected as the target image, the calculation module 64 may also count the B-lines, and the preset rule may be the largest B-line count or the largest B-line coverage percentage, in which case the selection module 66 selects, according to the results of the calculation module 64, the frame with the most B-lines or the frame with the largest B-line coverage percentage as the target image. The display module 65 displays the target image determined by the selection module 66 and its corresponding parameter information. In one embodiment, after the selection module 66 determines the target image, the scoring module 67 may score the selected at least one frame of target image to obtain a scoring result that reflects the correlation between the target image and the associated condition; the display module 65 can then display the target image together with its parameter information and scoring result. In addition, the display module 65 may highlight in the target image at least one of the lung detection area, the B-lines, and the annotation lines of the B-line intervals.
Based on FIG. 6, as shown in FIG. 8, yet another apparatus for automatically detecting B-lines in lung ultrasound images is provided, including an acquisition module 61, a determination module 62, an identification module 63, a calculation module 64, a display module 65, a screening module 68, a non-B-line determination module 69, and a marking module 60. The acquisition module 61 acquires at least one frame of lung ultrasound image. The screening module 68 screens out the images to be analyzed, i.e., lung ultrasound images with pathological features, from the images acquired by the acquisition module 61. Unlike the apparatus of FIG. 6, in the apparatus of FIG. 8 the determination module 62 determines the lung detection area of the images to be analyzed screened out by the screening module 68, and the identification module 63 identifies the image signs of the images to be analyzed within the lung detection area and determines, according to those signs, the lung ultrasound images with B-lines from among them. The function of the calculation module 64 is the same as in the apparatus of FIG. 6. The non-B-line determination module 69 determines, from the image signs of the images to be analyzed identified by the identification module 63, the lung ultrasound images whose image signs are not B-lines. The marking module 60 receives an operation instruction to mark the lung ultrasound images whose image signs are not B-lines and marks, according to that instruction, the images so determined by the non-B-line determination module 69. In addition, this apparatus may also include the selection module 66 and scoring module 67 of the apparatus of FIG. 7, in which case the selection module 66 selects at least one frame from the images to be analyzed screened out by the screening module 68 as the target image according to the preset rule.
This document has been described with reference to various exemplary embodiments. However, those skilled in the art will recognize that changes and modifications can be made to the exemplary embodiments without departing from the scope hereof. For example, the various operation steps, and the components for performing them, can be implemented in different ways depending on the particular application or in consideration of any number of cost functions associated with the operation of the system (for example, one or more steps may be deleted, modified, or combined into other steps).
In addition, as understood by those skilled in the art, the principles herein may be reflected in a computer program product on a computer-readable storage medium preloaded with computer-readable program code. Any tangible, non-transitory computer-readable storage medium may be used, including magnetic storage devices (hard disks, floppy disks, and the like), optical storage devices (CD-ROM, DVD, Blu-ray discs, and the like), flash memory, and/or the like. These computer program instructions may be loaded onto a general-purpose computer, special-purpose computer, or other programmable data processing device to form a machine, such that the instructions executed on the computer or other programmable data processing apparatus create means for implementing the specified functions. The computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory form an article of manufacture including means that implement the specified functions. The computer program instructions may also be loaded onto a computer or other programmable data processing device to cause a series of operation steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the specified functions.
Although the principles herein have been shown in various embodiments, many modifications of structures, arrangements, proportions, elements, materials, and components particularly adapted to specific environments and operating requirements may be used without departing from the principles and scope of this disclosure. The above modifications and other changes or amendments are intended to be included within the scope hereof.
The foregoing detailed description has been given with reference to various embodiments. However, those skilled in the art will recognize that various amendments and changes can be made without departing from the scope of this disclosure. Accordingly, this disclosure is to be considered in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within its scope. Likewise, advantages of various embodiments, other advantages, and solutions to problems have been described above; however, the benefits, advantages, solutions to problems, and any element that may produce them, or make them more explicit, are not to be construed as critical, required, or essential. As used herein, the term "comprising" and any other variants thereof denote non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed or not belonging to such process, method, system, article, or device. Furthermore, the term "coupled" as used herein and any other variants thereof refer to physical, electrical, magnetic, optical, communicative, functional, and/or any other connection.
Those skilled in the art will recognize that many changes may be made to the details of the above embodiments without departing from the underlying principles of the invention. The scope of the invention should therefore be determined by the following claims.

Claims (42)

  1. An ultrasound imaging device, comprising:
    an ultrasound probe;
    a transmitting circuit for exciting the ultrasound probe to emit ultrasonic beams toward the lungs;
    a receiving circuit and a beamforming module for receiving echoes of the ultrasonic beams to obtain ultrasonic echo signals;
    a processor for processing the ultrasonic echo signals to obtain at least one frame of lung ultrasound image, the processor being further configured to identify image signs of the lung ultrasound image, the image signs including at least B-lines, and to calculate parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of a B-line coverage percentage and a B-line interval, the B-line coverage percentage being the percentage of the lung detection area occupied by B-lines, and the B-line interval being the distance between adjacent B-lines;
    a human-computer interaction device connected to the processor for detecting user input and displaying detection results, the detection results including the parameter information.
  2. The ultrasound imaging device according to claim 1, wherein the parameter information includes the B-line interval, and when calculating the B-line interval the processor is configured to:
    determine the position of the pleural line;
    calculate, from the identified B-lines, the distance between adjacent B-lines at the position of the pleural line.
  3. The ultrasound imaging device according to claim 1, wherein the processor is configured to screen out images to be analyzed from the at least one frame of lung ultrasound image, the images to be analyzed being lung ultrasound images with pathological features, to identify the image signs of the images to be analyzed, and to determine the lung ultrasound images with B-lines from the images to be analyzed according to the image signs.
  4. The ultrasound imaging device according to claim 3, wherein the processor further determines, according to the identified image signs of the images to be analyzed, the lung ultrasound images whose image signs are not B-lines from the images to be analyzed; the human-computer interaction device is further configured to detect the user's operation instruction to mark the lung ultrasound images whose image signs are not B-lines and to send the operation instruction to the processor;
    the processor is further configured to mark the lung ultrasound images whose image signs are not B-lines according to the operation instruction.
  5. The ultrasound imaging device according to claim 1, wherein the processor is configured to identify the lung ultrasound images with B-lines from the at least one frame of lung ultrasound image.
  6. The ultrasound imaging device according to any one of claims 1 to 5, wherein when identifying the image signs of a lung ultrasound image the processor is configured to:
    determine the lung detection area of the lung ultrasound image;
    identify the image signs within the lung detection area.
  7. The ultrasound imaging device according to claim 6, wherein the processor determines the lung detection area of the lung ultrasound image according to a deep learning method.
  8. The ultrasound imaging device according to claim 6, wherein when determining the lung detection area of the lung ultrasound image the processor is configured to identify the position of the pleural line of the lung ultrasound image and take the far-field region beyond the pleural line position as the lung detection area of that lung ultrasound image.
  9. The ultrasound imaging device according to claim 6, wherein the parameter information includes the B-line coverage percentage, and when calculating the B-line coverage percentage the processor is configured to calculate, from the identified B-lines, the percentage of the corresponding lung detection area occupied by the B-lines.
  10. The ultrasound imaging device according to claim 9, wherein the processor is configured to determine the identified region belonging to the B-lines and take it as the region occupied by the B-lines.
  11. The ultrasound imaging device according to claim 6, wherein the processor identifies the image signs within the lung detection area according to a deep learning method.
  12. The ultrasound imaging device according to claim 6, wherein the image sign is a B-line, and the processor is configured to detect, within the lung detection area, vertical linear features along the sound beam direction to obtain the corresponding image signs.
  13. The ultrasound imaging device according to claim 1, wherein after calculating the parameter information the processor is further configured to select at least one frame from the lung ultrasound images as a target image according to a preset rule;
    the human-computer interaction device is configured to display the target image and its corresponding parameter information.
  14. The ultrasound imaging device according to claim 1, wherein after calculating the parameter information the processor is further configured to select at least one frame from the lung ultrasound images as a target image according to a preset rule, and to score the selected at least one frame of target image to obtain a scoring result, the scoring result reflecting the correlation between the target image and an associated condition.
  15. The ultrasound imaging device according to claim 13 or 14, wherein the parameter information further includes the number of B-lines, the target image is one frame of image, and the preset rule includes: the largest number of B-lines or the largest B-line coverage percentage.
  16. The ultrasound imaging device according to claim 14, wherein the processor is further configured to calculate the sum of the scores of the target images corresponding to the scan positions of the lung to obtain a scoring result for the whole lung.
  17. The ultrasound imaging device according to claim 14, wherein the human-computer interaction device is configured to display the target image and its corresponding parameter information and scoring result.
  18. The ultrasound imaging device according to claim 13 or 14, wherein the human-computer interaction device is further configured to highlight in the target image at least one of the lung detection area, the B-lines, and the annotation lines of the B-line intervals.
  19. The ultrasound imaging device according to claim 1, wherein the processor is further configured to read at least one frame of lung ultrasound image from a storage device.
  20. A method for automatically detecting B-lines in lung ultrasound images, comprising:
    acquiring at least one frame of lung ultrasound image;
    identifying image signs of the lung ultrasound image, the image signs including at least B-lines;
    calculating parameter information of lung ultrasound images whose image signs are B-lines, the parameter information including at least one of a B-line coverage percentage and a B-line interval, the B-line coverage percentage being the percentage of the lung detection area occupied by B-lines, and the B-line interval being the distance between adjacent B-lines;
    displaying the parameter information.
  21. The method according to claim 20, wherein the parameter information includes the B-line interval, and calculating the parameter information of the lung ultrasound images whose image signs are B-lines comprises:
    determining the position of the pleural line of a lung ultrasound image whose image signs are B-lines;
    calculating, from the identified B-lines, the distance between adjacent B-lines at the position of the pleural line.
  22. The method according to claim 20, wherein identifying the image signs of the lung ultrasound image comprises:
    screening out images to be analyzed from the lung ultrasound images, the images to be analyzed being lung ultrasound images with pathological features;
    identifying the image signs of the images to be analyzed, and determining the lung ultrasound images with B-lines from the images to be analyzed according to the image signs.
  23. The method according to claim 22, further comprising:
    determining, according to the identified image signs of the images to be analyzed, the lung ultrasound images whose image signs are not B-lines from the images to be analyzed;
    receiving an operation instruction to mark the lung ultrasound images whose image signs are not B-lines, and marking them according to the operation instruction.
  24. The method according to claim 20, wherein identifying the image signs of the lung ultrasound image comprises:
    identifying the lung ultrasound images with B-lines from the lung ultrasound images based on a target detection algorithm.
  25. The method according to any one of claims 20 to 24, wherein identifying the image signs of a lung ultrasound image comprises:
    determining the lung detection area of the lung ultrasound image;
    identifying the image signs within the lung detection area.
  26. The method according to claim 25, wherein determining the lung detection area of the lung ultrasound image comprises:
    determining the lung detection area of the lung ultrasound image according to a deep learning method;
    or,
    identifying the position of the pleural line of the lung ultrasound image, and taking the far-field region beyond the pleural line position as the lung detection area of that frame of lung ultrasound image.
  27. The method according to claim 25, wherein the parameter information includes the B-line coverage percentage, and calculating the parameter information of the lung ultrasound images whose image signs are B-lines comprises:
    calculating, from the identified B-lines, the percentage of the corresponding lung detection area occupied by the B-lines.
  28. The method according to claim 27, wherein calculating, from the identified B-lines, the percentage of the corresponding lung detection area occupied by the B-lines comprises:
    determining the identified region belonging to the B-lines and taking it as the region occupied by the B-lines;
    calculating the percentage of the corresponding lung detection area occupied by the B-line region.
  29. The method according to claim 20, wherein after calculating the parameter information of the lung ultrasound images whose image signs are B-lines, the method further comprises:
    selecting at least one frame from the lung ultrasound images as a target image according to a preset rule;
    displaying the target image and its corresponding parameter information.
  30. The method according to claim 20, wherein after calculating the parameter information of the lung ultrasound images whose image signs are B-lines, the method further comprises:
    scoring at least one selected frame of target image to obtain a scoring result, the target image being selected from the lung ultrasound images according to a preset rule, and the scoring result reflecting the correlation between the target image and an associated condition.
  31. The method according to claim 29 or 30, wherein the parameter information further includes the number of B-lines, the target image is one frame of image, and the preset rule includes: the largest number of B-lines or the largest B-line coverage percentage.
  32. The method according to claim 30, further comprising:
    calculating the sum of the scores of the target images corresponding to the scan positions of the lung to obtain a scoring result for the whole lung.
  33. The method according to claim 30, wherein when displaying the parameter information the method further comprises:
    synchronously displaying the target image and its corresponding scoring result.
  34. The method according to claim 29 or 30, further comprising:
    highlighting in the target image at least one of the lung detection area, the B-lines, and the annotation lines of the B-line intervals.
  35. An ultrasound imaging device, comprising:
    an ultrasound probe;
    a transmitting circuit for exciting the ultrasound probe to emit ultrasonic beams toward the lungs;
    a receiving circuit and a beamforming module for receiving echoes of the ultrasonic beams to obtain ultrasonic echo signals;
    a processor for processing the ultrasonic echo signals to obtain a video file including multiple frames of lung ultrasound images, the processor being further configured to determine the lung detection area of a lung ultrasound image and to identify, within the lung detection area, image signs of the lung ultrasound image, the image signs including at least B-lines, and the processor being further configured to determine quantitative parameter information of lung ultrasound images whose image signs are B-lines, the quantitative parameter information including at least one of the number of B-lines, a B-line coverage percentage, and a B-line interval;
    a human-computer interaction device connected to the processor for detecting user input and displaying detection results, the detection results including the parameter information.
  36. The ultrasound imaging device according to claim 35, wherein the processor is configured to screen out images to be analyzed from the multiple frames of lung ultrasound images of the video file, the images to be analyzed being lung ultrasound images with pathological features, to identify the image signs of the images to be analyzed, and to determine the lung ultrasound images with B-lines from the images to be analyzed according to the image signs.
  37. The ultrasound imaging device according to claim 35, wherein when determining the lung detection area of a lung ultrasound image the processor is configured to identify the position of the pleural line of the lung ultrasound image and take the far-field region beyond the pleural line position as the lung detection area of that lung ultrasound image.
  38. The ultrasound imaging device according to claim 35, wherein the parameter information includes the B-line coverage percentage, and when determining the B-line coverage percentage the processor is configured to calculate, from the identified B-lines, the percentage of the corresponding lung detection area occupied by the B-lines.
  39. The ultrasound imaging device according to claim 38, wherein the processor is configured to determine the identified region belonging to the B-lines and take it as the region occupied by the B-lines.
  40. The ultrasound imaging device according to claim 35, wherein the parameter information includes the B-line interval, and when determining the B-line interval the processor is configured to:
    determine the position of the pleural line;
    calculate, from the identified B-lines, the distance between adjacent B-lines at the position of the pleural line, or calculate, from the identified B-lines, the distance between adjacent B-lines at a preset distance from the pleural line.
  41. The ultrasound imaging device according to claim 35, wherein the parameter information includes the number of B-lines, and when determining the number of B-lines the processor is configured to count the B-lines identified within the lung detection area of the lung ultrasound image.
  42. A computer-readable storage medium, comprising a program that can be executed by a processor to implement the method according to any one of claims 20 to 34.
PCT/CN2019/095473 2019-07-10 2019-07-10 Ultrasound imaging device and method, apparatus and storage medium for detecting B-lines WO2021003711A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980097733.2A 2019-07-10 2019-07-10 Ultrasound imaging device and method, apparatus and storage medium for detecting B-lines
PCT/CN2019/095473 2019-07-10 2019-07-10 Ultrasound imaging device and method, apparatus and storage medium for detecting B-lines WO2021003711A1 (zh)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/095473 WO2021003711A1 (zh) 2019-07-10 2019-07-10 超声成像设备及检测b线的方法、装置、存储介质

Publications (1)

Publication Number Publication Date
WO2021003711A1 (zh)

Family

ID=74114313

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/095473 WO2021003711A1 (zh) Ultrasound imaging device and method, apparatus and storage medium for detecting B-lines

Country Status (2)

Country Link
CN (1) CN114007513A (zh)
WO (1) WO2021003711A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819773A * 2021-01-28 2021-05-18 清华大学 Quantitative evaluation method for ultrasound images
CN113763353A * 2021-09-06 2021-12-07 杭州类脑科技有限公司 Lung ultrasound image detection system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11185311B2 (en) * 2015-09-17 2021-11-30 Koninklijke Philips N.V. Distinguishing lung sliding from external motion
US10667793B2 (en) * 2015-09-29 2020-06-02 General Electric Company Method and system for enhanced visualization and selection of a representative ultrasound image by automatically detecting B lines and scoring images of an ultrasound scan
EP3518771B1 (en) * 2016-09-29 2020-09-02 General Electric Company Method and system for enhanced visualization and selection of a representative ultrasound image by automatically detecting b lines and scoring images of an ultrasound scan
EP3482689A1 (en) * 2017-11-13 2019-05-15 Koninklijke Philips N.V. Detection, presentation and reporting of b-lines in lung ultrasound

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070167797A1 (en) * 2003-11-07 2007-07-19 Michalakis Averkiou System and method for ultrasound perfusion imaging
US20120302885A1 (en) * 2011-05-27 2012-11-29 Samsung Medison Co., Ltd. Providing a measuring item candidate group for measuring size of a target object in an ultrasound system
CN104116523A * 2013-04-25 2014-10-29 深圳迈瑞生物医疗电子股份有限公司 Ultrasound image analysis system and analysis method therefor
US10324065B2 (en) * 2014-01-06 2019-06-18 Samsung Electronics Co., Ltd. Ultrasound diagnostic apparatus, ultrasound image capturing method, and computer-readable recording medium
CN109310398A * 2016-03-24 2019-02-05 皇家飞利浦有限公司 Ultrasound system and method for detecting lung sliding
CN108038875A * 2017-12-07 2018-05-15 浙江大学 Lung ultrasound image recognition method and device

Also Published As

Publication number Publication date
CN114007513A (zh) 2022-02-01

Similar Documents

Publication Publication Date Title
JP6841907B2 Method, system and non-transitory computer-readable medium for enhanced visualization and selection of a representative ultrasound image by automatically detecting B-lines and scoring images of an ultrasound scan
EP3554380B1 Target probe placement for lung ultrasound
US20170086790A1 Method and system for enhanced visualization and selection of a representative ultrasound image by automatically detecting b lines and scoring images of an ultrasound scan
EP3463098B1 Medical ultrasound image processing device
JP7022217B2 Echo window artifact classification and visual indicators for an ultrasound system
CN107157515B Ultrasound system and method for detecting blood vessels
JP7285826B2 Detection, presentation and reporting of B-lines in lung ultrasound
US11403778B2 Fetal development monitoring
US11931201B2 Device and method for obtaining anatomical measurements from an ultrasound image
JP6648587B2 Ultrasonic diagnostic apparatus
JP2020503099A Prenatal ultrasound imaging
CN111511288A Ultrasound lung assessment
WO2021003711A1 Ultrasound imaging device and method, apparatus and storage medium for detecting B-lines
KR20150000261A Ultrasound system and method for providing a reference image corresponding to an ultrasound image
WO2021087687A1 Ultrasound image analysis method, ultrasound imaging system, and computer storage medium
CN116194048A Ultrasound measurement method and system for the diaphragm
US20220361852A1 Ultrasonic diagnostic apparatus and diagnosis assisting method
WO2020037673A1 Ultrasound elastography device and method for processing an elasticity image
CN115299986A Ultrasound imaging device and ultrasound examination method thereof
JP2009148499A Ultrasonic diagnostic apparatus
US20230320694A1 Graphical user interface for providing ultrasound imaging guidance
WO2021042242A1 Ultrasound imaging device and method for processing ultrasonic echo signals thereof
US11382595B2 Methods and systems for automated heart rate measurement for ultrasound motion modes
JP2013052131A Ultrasonic diagnostic apparatus and vascular stenosis improvement display program
JP7457571B2 Ultrasonic diagnostic apparatus and diagnosis support method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19936909

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19936909

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/06/2022)
