WO2021127930A1 - Trauma ultrasound detection method and system, mobile device and storage medium - Google Patents


Info

Publication number
WO2021127930A1
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasound
artificial intelligence
diagnosis
dark area
liquid dark
Prior art date
Application number
PCT/CN2019/127644
Other languages
English (en)
Chinese (zh)
Inventor
陈萱
胡书剑
熊麟霏
鲍玉婷
伍利
刘健
牟峰
Original Assignee
深圳华大智造科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳华大智造科技股份有限公司 filed Critical 深圳华大智造科技股份有限公司
Priority to PCT/CN2019/127644 priority Critical patent/WO2021127930A1/fr
Priority to CN201980100817.7A priority patent/CN114599291A/zh
Publication of WO2021127930A1 publication Critical patent/WO2021127930A1/fr

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00: Diagnosis using ultrasonic, sonic or infrasonic waves

Definitions

  • the present invention relates to the field of ultrasound detection, and in particular to a trauma ultrasound detection method, system, mobile device and storage medium.
  • Medical ultrasound examination is a medical imaging diagnosis technology that uses ultrasound equipment to emit high-frequency sound waves and record the reflected waves generated by the tissue structure in the organism.
  • Traditional desktop ultrasound equipment is usually bulky and heavy, making it inconvenient to move and transport; its use usually has to take place in a clinic.
  • Portable ultrasound devices, roughly the size of a notebook computer, free ultrasound examination from the constraint of having to be performed in a specific space.
  • Handheld ultrasound usually consists of a handheld ultrasound probe and a mobile device. Small enough to fit in a pocket, it can meet scenarios with high portability requirements, such as battlefield rescue or disease screening in remote areas.
  • In addition, the affordable price of handheld ultrasound greatly reduces the difficulty of popularizing and promoting the use of ultrasound.
  • In a first aspect, a trauma ultrasound diagnosis method is provided, including:
  • the analysis result of the artificial intelligence model is displayed.
  • the method further includes: if there is a liquid dark area, segmenting the liquid dark area using the artificial intelligence model, and analyzing the orientation, shape and/or size of the liquid dark area.
  • the analysis result is either that there is no liquid dark area, or the distribution, shape and/or size of the liquid dark area.
  • the analysis result of the artificial intelligence model is displayed by: outputting the name of the part where the liquid dark area exists, outputting the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and/or outputting the orientation, size and/or shape of each liquid dark area.
  • the artificial intelligence model uses a lightweight deep convolutional neural network as a model, and the trauma ultrasound diagnosis method is implemented using mobile devices.
  • the "using an artificial intelligence model to analyze the detected part based on the ultrasound image video stream to determine whether there is a liquid dark area" includes: using a fully convolutional neural network to perform multiple convolution operations on each frame of the ultrasound video image to obtain one or more feature images; passing the one or more feature images through a global pooling layer, a fully connected layer and an activation function to obtain a probability; and determining whether there is a liquid dark area according to the probability.
  • the "segmenting the liquid dark area using the artificial intelligence model, and analyzing the orientation, shape and/or size of the liquid dark area" includes: using a fully convolutional neural network to perform multiple convolution operations on each frame of the ultrasound video image to obtain one or more feature images; subjecting each of the one or more feature images to a 1*1 convolution operation to obtain a numerical matrix with the same size as the frame of the ultrasound video image, wherein each value in the numerical matrix corresponds to a pixel of that frame; segmenting, through the activation function, the numerical distribution corresponding to the liquid dark area from the numerical matrix; and calculating the area and/or fluid volume of each liquid dark area according to the numerical distribution.
  • the method further includes: forming an artificial intelligence-assisted diagnosis and treatment report, and displaying and/or storing the artificial intelligence-assisted diagnosis and treatment report.
  • the method further includes: using an artificial intelligence model to analyze which part of the diagnosed part is.
  • the method further includes: using the artificial intelligence model to determine whether all parts that need to be diagnosed have been diagnosed; if all parts that need to be diagnosed have been diagnosed, displaying the analysis results of the artificial intelligence model and/or forming the artificial intelligence-assisted diagnosis and treatment report; if any part that needs to be diagnosed has not yet been diagnosed, continuing to obtain the electrical signal converted from the ultrasonic echo.
  • In a second aspect, a trauma ultrasound diagnosis system is provided, including:
  • a signal acquisition module for acquiring an electrical signal converted by an ultrasonic echo, the ultrasonic echo being the ultrasonic echo received when the ultrasonic probe detects the subject;
  • a video image signal generating module configured to generate an ultrasound image video stream according to the electrical signal;
  • the artificial intelligence model includes:
  • the liquid dark area judgment module is configured to receive the ultrasound image video stream, and analyze whether there is a liquid dark area in the detection part based on the ultrasound image video stream;
  • the display module is used for receiving the ultrasound image video stream and displaying the ultrasound video image on a display unit, and for receiving the analysis result of the artificial intelligence model and displaying the analysis result on the display unit.
  • the artificial intelligence model further includes a liquid dark area segmentation module, which is used to segment the liquid dark area and to analyze the orientation, shape and/or size of the liquid dark area.
  • the analysis result of the artificial intelligence model is that there is no liquid dark zone or the distribution, shape and/or size of the liquid dark zone.
  • the display module is used to output the name of the part where the liquid dark area exists, the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and/or the orientation, size and/or shape of each liquid dark area.
  • the artificial intelligence model adopts a lightweight deep convolutional neural network as a model, and the system is installed on a mobile device.
  • the liquid dark area judgment module is used to perform multiple convolution operations on each frame of ultrasound video image using a fully convolutional neural network to obtain one or more feature images, and pass the one or more feature images through A probability is obtained after the global pooling layer, the fully connected layer, and the activation function, and it is determined whether there is a liquid dark zone according to the probability.
  • the liquid dark area segmentation module is used to perform multiple convolution operations on each frame of the ultrasound video image using a fully convolutional neural network to obtain one or more feature images, and to subject each of the one or more feature images to a 1*1 convolution operation to obtain a numerical matrix with the same size as the frame of the ultrasound video image, wherein each value in the numerical matrix corresponds to a pixel of that frame.
  • the liquid dark area segmentation module is also used to segment the numerical distribution corresponding to the liquid dark area from the numerical matrix, and to calculate the area and/or amount of fluid accumulation of each liquid dark area based on that distribution.
  • the system further includes an auxiliary diagnosis and treatment report generation module, which is used to generate an artificial intelligence-assisted diagnosis and treatment report according to the analysis result of the artificial intelligence model.
  • the system further includes an auxiliary diagnosis and treatment report storage module, which is used to store the artificial intelligence-assisted diagnosis and treatment report.
  • the display module is also used to display the artificial intelligence-assisted diagnosis and treatment report.
  • the system further includes a diagnosis part judgment module, which is used to judge which part is being diagnosed.
  • the system further includes a diagnosis completion judgment module, which is used to judge whether all parts that need to be diagnosed have been diagnosed; the auxiliary diagnosis and treatment report generation module is used to generate the auxiliary diagnosis and treatment report when all parts that need to be diagnosed have been diagnosed, and/or the display module is used to display the analysis result of the artificial intelligence model on the display unit when the diagnosis completion judgment module judges that all parts that need to be diagnosed have been diagnosed.
  • In a third aspect, a mobile device is provided, including a communication unit, a display unit, a processing unit, and a storage unit.
  • the storage unit stores a plurality of program modules, which are loaded and executed by the processing unit to implement the above trauma ultrasound diagnosis method.
  • In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored; the computer program is executed by a processing unit to implement the above trauma ultrasound diagnosis method.
  • the trauma ultrasound diagnosis method, system, mobile device and storage medium provided by the embodiments of the present invention apply artificial-intelligence-enabled medical care on mobile devices to the ultrasound diagnosis of trauma, so that while medical staff perform an ultrasound scan, it can be judged quickly, accurately and in real time whether there is blood or fluid accumulation in the detected part of the subject, as well as its specific location and size. Therefore, with the assistance of artificial intelligence, even medical staff without extensive experience in ultrasound use can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The trauma ultrasound diagnosis method provided in this embodiment thus reduces the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • In addition, applying artificial intelligence trauma ultrasound diagnosis and treatment on mobile devices avoids dependence on a mobile network.
  • Figure 1 is a schematic flowchart of a trauma ultrasound diagnosis method in the first embodiment of the present invention.
  • Figure 2 is a schematic diagram of an exemplary artificial intelligence analysis of whether a liquid dark area exists.
  • Figure 3 is a schematic diagram of an exemplary artificial intelligence method for segmenting liquid dark areas.
  • Figure 4 is a schematic flowchart of a trauma ultrasound diagnosis method in the second embodiment of the present invention.
  • Figure 5 is a schematic flowchart of a trauma ultrasound diagnosis method in the third embodiment of the present invention.
  • Figure 6 is a system block diagram of a trauma ultrasound diagnosis system in the fourth embodiment of the present invention.
  • Figure 7 is a schematic diagram of the artificial intelligence module analyzing whether a liquid dark area exists.
  • Figure 8 is a schematic diagram of the artificial intelligence module segmenting the liquid dark area.
  • Figure 9 is a system block diagram of a trauma ultrasound diagnosis system in the fifth embodiment of the present invention.
  • Figure 10 is a system block diagram of a trauma ultrasound diagnosis system in the sixth embodiment of the present invention.
  • Figure 11 is a system block diagram of a trauma ultrasound diagnosis system in the seventh embodiment of the present invention.
  • Figure 12 is a system block diagram of a trauma ultrasound diagnosis system in the eighth embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a trauma ultrasound diagnosis method in Embodiment 1 of the present invention. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
  • the trauma ultrasound diagnosis method in the first embodiment includes the following steps.
  • Step S11 Obtain the electrical signal corresponding to the ultrasound echo sent by the ultrasound probe at the mobile device side, and generate an ultrasound image video stream.
  • a communication connection is established between the mobile device and a handheld ultrasound probe, and the handheld ultrasound probe obtains the ultrasound echo of the detected part by being swept over the detected part of the examinee.
  • the ultrasonic echo is converted into an electrical signal by the handheld ultrasonic device and sent to the mobile device, so that the mobile device can continuously obtain the electrical signal corresponding to the ultrasonic echo sent by the ultrasonic probe and generate an ultrasonic image video stream.
  • Step S12 Display the ultrasound video image on the mobile device.
  • Step S13 Use the artificial intelligence model to analyze the detected part based on the ultrasound image video stream to determine whether there is a liquid dark area of effusion or blood accumulation. If there is a liquid dark area, go to step S14; if there is no liquid dark area, go to step S15.
  • Step S14 Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
  • step S12 is not necessarily related to steps S13 and S14, and step S12 can be performed simultaneously with steps S13 and S14, or performed before or after steps S13 and S14.
  • the artificial intelligence model uses a lightweight deep convolutional neural network as the model, and uses the convolutional layers in the convolutional neural network to filter each frame of the input ultrasound image video stream through convolution kernels to extract feature information.
  • the artificial intelligence model includes multiple interconnected convolutional layers, which extract, level by level, features of each frame of the ultrasound video image with richer semantic information and greater discriminability, so that accurate predictions can be made on the results.
  • Each convolutional layer contains many parameters and requires a large amount of computation to extract features.
  • the convolutional layers are therefore optimized: within an acceptable accuracy range, the amount of parameters and computation is reduced as much as possible, so that the artificial intelligence model can achieve optimal performance when computing resources are limited.
  • the optimization of the convolutional layers can refer to the "MobileNet" lightweight network proposed by Google.
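By way of a purely illustrative sketch (not part of the disclosed embodiments), the parameter saving behind a MobileNet-style lightweight network can be seen by comparing a standard convolution with its depthwise-separable replacement; the layer sizes below are arbitrary example values.

```python
# Hypothetical illustration: parameter counts for a standard convolution
# versus the depthwise-separable convolution used in MobileNet-style networks.
def standard_conv_params(in_ch, out_ch, k):
    # Each of the out_ch filters spans all in_ch channels with a k*k kernel.
    return in_ch * out_ch * k * k

def depthwise_separable_params(in_ch, out_ch, k):
    # Depthwise step: one k*k filter per input channel,
    # followed by a 1*1 pointwise convolution that mixes channels.
    return in_ch * k * k + in_ch * out_ch

# Example layer: 3*3 kernels, 64 input channels, 128 output channels.
std = standard_conv_params(64, 128, 3)        # 73728 parameters
sep = depthwise_separable_params(64, 128, 3)  # 8768 parameters
print(std, sep, round(std / sep, 1))          # roughly 8.4x fewer parameters
```

The same substitution applied across many layers is what lets such a model run with acceptable accuracy on the limited computing resources of a mobile device.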
  • the artificial intelligence model analysis is divided into two stages.
  • In the first stage, fully convolutional neural network 1 is used to analyze whether there is a dark area of effusion.
  • fully convolutional neural network 1 passes each frame of the ultrasound video image through multiple convolution operations of multiple convolutional layers to obtain one or more feature images; the one or more feature images are highly refined and reduced in dimension by a global pooling layer, and a fully connected layer then performs feature classification and synthesis. Finally, the output of the fully connected layer is connected to activation function 1.
  • activation function 1 can be a "Softmax" function or a "Sigmoid" function; it compresses the value of the feature tensor output by the fully connected layer into the range 0 to 1.
  • the value or array obtained through activation function 1 is a probability. In this embodiment, if the probability is less than 0.5, it is considered that there is no liquid dark area, and the artificial intelligence model returns the judgment result of no liquid dark area after step S13; if the probability is greater than or equal to 0.5, it is considered that a liquid dark area exists, and the artificial intelligence model returns the judgment result that a liquid dark area exists after step S13.
  • In other embodiments, depending on the specific situation and the neural network model and parameters used, the probability threshold for the presence or absence of a liquid dark area can be any value between 0 and 1.
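The first-stage decision described above (global pooling, a fully connected layer, a sigmoid, then a threshold) can be sketched in miniature as follows. This is a hypothetical toy, not the disclosed model: the feature values, weights and bias are invented, and a real network would learn them and use many more channels.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def global_average_pool(feature_map):
    # feature_map: 2D list (one feature image); collapse spatial dims to one value.
    flat = [v for row in feature_map for v in row]
    return sum(flat) / len(flat)

def classify_frame(feature_maps, weights, bias, threshold=0.5):
    # Pool each feature image, apply one fully connected unit, then a sigmoid.
    pooled = [global_average_pool(fm) for fm in feature_maps]
    logit = sum(w * p for w, p in zip(weights, pooled)) + bias
    return sigmoid(logit) >= threshold  # True means a liquid dark area is present

# Toy feature images standing in for convolutional outputs (illustrative only).
features = [[[0.2, 0.8], [0.6, 0.4]],
            [[0.9, 0.7], [0.5, 0.3]]]
print(classify_frame(features, weights=[2.0, 3.0], bias=-2.0))  # True
```

Raising or lowering `threshold` reproduces the remark that the probability limit can be any value between 0 and 1 depending on the model and parameters used.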
  • In the second stage, fully convolutional neural network 2 is used to segment and measure the existing liquid dark areas.
  • fully convolutional neural network 2 passes each frame of the ultrasound video image through multiple convolution operations of multiple convolutional layers to obtain one or more feature images; each of the one or more feature images is then subjected to a convolution operation with a 1*1 convolutional layer to obtain a numerical matrix with the same size as the original ultrasound video image frame, where each value in the numerical matrix corresponds to a pixel in the original frame.
  • an activation function compresses each value in the numerical matrix into the range 0 to 1.
  • the size of each value indicates the probability that the corresponding pixel in the ultrasound image belongs to the liquid dark area. In this embodiment, a value greater than or equal to 0.5 indicates that the pixel belongs to the liquid dark area, and a value less than 0.5 indicates that it does not.
  • based on the magnitude of each value, the numerical distribution corresponding to the liquid dark area is segmented from the numerical matrix. Further, combining the spatial scale that the ultrasound probe assigns to each pixel, the area and/or amount of fluid accumulation of each liquid dark area is calculated. In other embodiments, depending on the specific situation and the neural network model and parameters used, the probability threshold for judging whether the pixel corresponding to each value belongs to the liquid dark area can be any value between 0 and 1.
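The second stage (per-pixel sigmoid thresholding of the 1*1-convolution output, then an area estimate from the probe's per-pixel spatial scale) can likewise be sketched as a toy. This is an illustrative assumption, not the disclosed model: the score matrix is invented, and `pixel_area_cm2` is a hypothetical stand-in for the probe's spatial calibration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def segment_and_measure(score_matrix, pixel_area_cm2, threshold=0.5):
    # score_matrix: per-pixel outputs of the final 1*1 convolution, same size
    # as the ultrasound frame. The sigmoid maps each score to a probability;
    # pixels at or above the threshold are assigned to the liquid dark area.
    mask = [[1 if sigmoid(v) >= threshold else 0 for v in row]
            for row in score_matrix]
    # The area follows from the probe's spatial scale per pixel.
    n_pixels = sum(sum(row) for row in mask)
    return mask, n_pixels * pixel_area_cm2

scores = [[-3.0, 1.2, 2.5],
          [-1.0, 0.4, 3.1],
          [-2.2, -0.3, 0.9]]
mask, area = segment_and_measure(scores, pixel_area_cm2=0.01)
print(mask)   # binary dark-area mask
print(area)   # estimated area in cm^2
```

Note that with a sigmoid, thresholding the probability at 0.5 is equivalent to thresholding the raw score at 0, which is why the negative scores above fall outside the mask.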
  • In other embodiments, the results of the multiple convolution operations performed on each frame of the ultrasound video image in the first stage can be reused in the second stage, with the liquid dark area segmented, and its orientation, shape and/or size calculated, on that result.
  • Step S15 Display the analysis result of the artificial intelligence model.
  • the analysis result of the artificial intelligence model is displayed on the mobile terminal.
  • the analysis result of the artificial intelligence model may be "no liquid dark area exists" or the distribution, shape and/or size of the liquid dark area.
  • the display can take any one, or a combination of two or more, of the following forms: outputting the name of the part where effusion exists, outputting the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and outputting the size and/or shape of each effusion.
  • Step S16 Form an artificial intelligence-assisted diagnosis and treatment report.
  • Step S17 Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
  • the artificial intelligence-assisted diagnosis and treatment report may be stored in a mobile device, or the artificial intelligence-assisted diagnosis and treatment report may also be uploaded and saved to a cloud server.
  • the trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medical care on mobile devices to the ultrasound diagnosis of trauma, so that while medical staff perform an ultrasound scan, it can be judged quickly, accurately and in real time whether there is blood or fluid accumulation in the detected part of the subject.
  • In addition, applying artificial intelligence trauma ultrasound diagnosis and treatment on mobile devices avoids dependence on a mobile network.
  • the trauma ultrasound diagnosis method may include steps S11-S15 without including steps S16 and S17.
  • the trauma ultrasound diagnosis method may further include the step of establishing a communication connection between the mobile device and the ultrasound probe; step S11 starts after the communication connection between the mobile device and the ultrasound probe is established.
  • FIG. 4 is a schematic flowchart of a trauma ultrasound diagnosis method in Embodiment 2 of the present invention. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
  • the trauma ultrasound diagnosis method in the second embodiment includes the following steps.
  • Step S21 Obtain the electrical signal corresponding to the ultrasonic echo sent by the ultrasonic probe at the mobile device side, and generate an ultrasonic image video stream.
  • Step S22 Display the ultrasound video image on the mobile device.
  • Steps S21 and S22 and their related descriptions in this embodiment are respectively consistent with steps S11 and S12 and their related descriptions in Embodiment 1.
  • For steps S11 and S12 and their related descriptions, please refer to Embodiment 1; they will not be repeated here.
  • Step S23 Use the artificial intelligence model to analyze which part is being diagnosed.
  • the parts that can be diagnosed by the artificial intelligence model may be the regions around the liver, spleen, pericardium, pelvic cavity or lungs.
  • the artificial intelligence model can judge which part is being diagnosed based on the user's input; for example, the user selects the part to be diagnosed via the mobile device, and the artificial intelligence model judges accordingly.
  • the artificial intelligence model can also judge which part is being diagnosed based on the ultrasound image video stream, according to the different parameters or image characteristics of different parts.
  • Step S24 Use the artificial intelligence model to analyze the diagnosed part based on the ultrasound image video stream to determine whether there is a liquid dark area of effusion or blood accumulation. If there is a liquid dark area, go to step S25; if there is no liquid dark area, go to step S26.
  • Step S25 Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
  • Step S26 Display the analysis result of the artificial intelligence model on the mobile terminal.
  • Step S27 Form an artificial intelligence-assisted diagnosis and treatment report.
  • Step S28 Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
  • Steps S24-S28 and their related descriptions in this embodiment are respectively consistent with steps S13-S17 and their related descriptions in Embodiment 1.
  • For steps S13-S17 and their related descriptions, please refer to Embodiment 1; they will not be repeated here.
  • step S22 and steps S23-S25 are not necessarily sequential.
  • Step S22 can be performed simultaneously with steps S23-S25, or performed before or after steps S23-S25.
  • the trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medical care on mobile devices to the ultrasound diagnosis of trauma, so that the part being scanned by medical staff can be determined quickly and accurately, and whether there is blood or fluid accumulation in the detected part, as well as its specific location and size, can be judged in real time. Therefore, with the assistance of artificial intelligence, even medical staff without extensive experience in ultrasound use can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The trauma ultrasound diagnosis method provided in this embodiment thus reduces the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • In addition, applying artificial intelligence trauma ultrasound diagnosis and treatment on mobile devices avoids dependence on a mobile network.
  • FIG. 5 is a schematic flowchart of a trauma ultrasound diagnosis method in Embodiment 3 of the present invention. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
  • the trauma ultrasound diagnosis method in the third embodiment includes the following steps.
  • Step S31 Obtain the electrical signal corresponding to the ultrasound echo sent by the ultrasound probe at the mobile device side, and generate an ultrasound image video stream.
  • Step S32 Display the ultrasound video image on the mobile device.
  • Step S33 Use the artificial intelligence model to analyze which part of the diagnosed part is.
  • Step S34 Use the artificial intelligence model to analyze the diagnosed part based on the ultrasound image video stream to determine whether there is a liquid dark area of effusion or blood accumulation. If there is a liquid dark area, go to step S35; if there is no liquid dark area, go to step S36.
  • Step S35 Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
  • Steps S31-S35 and their related descriptions in this embodiment are respectively consistent with steps S21-S25 and their related descriptions in the second embodiment.
  • For steps S21-S25 and their related descriptions, please refer to the second embodiment; they will not be repeated here.
  • step S32 and steps S33-S35 are not necessarily sequential.
  • Step S32 can be performed simultaneously with steps S33-S35, or performed before or after steps S33-S35.
  • Step S36 Display the analysis result of the artificial intelligence model on the mobile terminal.
  • Step S37 Use the artificial intelligence model to determine whether all the diagnostic parts that need to be diagnosed have been diagnosed, if yes, go to step S38, if not, go to step S31 to start ultrasound diagnosis of the next part.
  • Step S38 forming an artificial intelligence-assisted diagnosis and treatment report.
  • Step S39 Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
  • Steps S37-S39 and their related descriptions in this embodiment are respectively consistent with steps S26-S28 and their related descriptions in the second embodiment.
  • For steps S26-S28 and their related descriptions, please refer to the second embodiment; they will not be repeated here.
  • step S37 may be performed before step S36. If step S37 determines that all parts requiring diagnosis have been diagnosed, the flow proceeds to step S36, and after step S36 is completed, it proceeds to step S38; alternatively, step S38 is executed at the same time as step S36. If step S37 determines that there are still parts to be diagnosed, the flow returns to step S31 to start ultrasound diagnosis of the next part.
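The loop of Embodiment 3 (scan a part, analyze it, then return to step S31 until every required part has been diagnosed, and only then form the report) can be sketched as a small, hypothetical driver; `run_exam` and `analyze` are invented names standing in for the acquisition-and-analysis steps, not functions from the disclosure.

```python
def run_exam(parts_to_scan, analyze):
    # Hypothetical driver for the Embodiment 3 flow: scan each required part,
    # record the model's finding, and repeat until every part is covered.
    report = {}
    remaining = list(parts_to_scan)
    while remaining:                  # step S37: parts still undiagnosed, back to S31
        part = remaining.pop(0)
        report[part] = analyze(part)  # steps S31-S36 for this part
    return report                     # steps S38-S39: form and store the report

findings = run_exam(["liver", "spleen", "pericardium"],
                    analyze=lambda part: "no liquid dark area")
print(sorted(findings))
```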
  • the trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medical care on mobile devices to the ultrasound diagnosis of trauma, so that medical staff can be assisted quickly and accurately in performing ultrasound diagnosis of all parts to be examined, and whether there is blood or fluid accumulation in the detected part, as well as its specific location and size, can be judged in real time. Therefore, with the assistance of artificial intelligence, even medical staff without extensive experience in ultrasound use can quickly interpret trauma ultrasound video images and quickly complete the trauma ultrasound diagnosis of all parts to be examined. The trauma ultrasound diagnosis method provided in this embodiment thus reduces the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • In addition, applying artificial intelligence trauma ultrasound diagnosis and treatment on mobile devices avoids dependence on a mobile network.
  • Figures 1-5 illustrate the trauma ultrasound diagnosis method in different embodiments of the present invention.
  • Using artificial intelligence to interpret ultrasound video images reduces the skill requirements on medical staff and greatly improves the accuracy and efficiency of trauma ultrasound diagnosis.
  • the functional modules and hardware architecture of the software system implementing the trauma ultrasound diagnosis method are introduced below in conjunction with FIG. 6. It should be understood that the embodiments are for illustrative purposes only and do not limit the scope of the patent application to this structure.
  • FIG. 6 is a system frame diagram of the trauma ultrasound diagnosis system in the fourth embodiment of the present invention.
  • the trauma ultrasound diagnosis system may include multiple functional modules composed of program code segments.
  • the program code of each program segment of the trauma ultrasound diagnosis system can be stored in a memory of a computer device, such as a mobile device, and executed by at least one processor in the computer device, so as to realize a trauma ultrasound diagnosis function.
  • the trauma ultrasound diagnosis system 60 can be divided into multiple functional modules according to the functions it performs, each functional module executing the steps of the corresponding embodiments in FIG. 1, FIG. 3, or FIG. 4 to realize the trauma ultrasound diagnosis function.
  • the functional modules of the trauma ultrasound diagnosis system 60 include: a signal acquisition module 61, a video image signal generation module 62, an artificial intelligence model 63, and a display module 64. The functions of each functional module will be described in detail in the following embodiments.
  • the signal acquisition module 61 is used to acquire the electrical signal corresponding to the ultrasonic echo sent by the ultrasonic probe 70.
  • the video image signal generating module 62 is configured to generate an ultrasound image video stream according to the electrical signal.
  • the artificial intelligence model 63 includes a liquid dark area judgment module 631 and a liquid dark area segmentation module 633.
  • the liquid dark area judging module 631 is configured to receive the ultrasound image video stream, and analyze and detect whether there is a liquid dark area with effusion or hemorrhage based on the ultrasound image video stream.
  • the liquid dark area segmentation module 633 is used to segment the liquid dark area and analyze the orientation, shape and/or size of the liquid dark area.
  • the artificial intelligence model uses a lightweight deep convolutional neural network, whose convolutional layers filter each frame of the input ultrasound image video stream through convolution kernels to extract feature information.
  • the artificial intelligence model includes multiple interconnected convolutional layers, which extract from each frame of the ultrasound video image, level by level, features with richer semantic information and greater discriminability, so that accurate predictions can be made on the result.
  • Each convolutional layer contains many parameters and requires a lot of calculations to extract features.
  • the convolutional layers are optimized so that, within an acceptable accuracy range, the number of parameters and the amount of computation are reduced as much as possible, allowing the artificial intelligence model to achieve optimal performance with limited computing resources.
  • the optimization of the convolutional layers can refer to the "MobileNet" lightweight network proposed by Google.
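As a hedged illustration of why such a lightweight network reduces the parameter and computation amount (the patent names MobileNet only by reference), MobileNet replaces a standard convolution with a depthwise convolution followed by a 1×1 pointwise convolution. The layer sizes below are chosen purely for illustration:

```python
def conv_params(k, c_in, c_out):
    """Parameter count of a standard k x k convolution (bias terms omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution, as in MobileNet."""
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernels, 32 input channels, 64 output channels.
standard = conv_params(3, 32, 64)                  # 18432 parameters
separable = depthwise_separable_params(3, 32, 64)  # 288 + 2048 = 2336
print(standard, separable, round(standard / separable, 1))
```

For this illustrative layer the separable form needs roughly 8 times fewer parameters, which is the kind of saving that makes on-device inference on a mobile phone feasible.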
  • the liquid dark area determination module 631 uses a fully convolutional neural network 1 to analyze whether there is a dark area of effusion.
  • the fully convolutional neural network 1 passes each frame of the ultrasound video image through multiple convolution operations in multiple convolutional layers to obtain one or more feature images; the feature images are passed through a global pooling layer, which highly purifies the features and reduces their dimensions, and then through a fully connected layer for feature classification and synthesis. Finally, the output of the fully connected layer is connected to activation function 1.
  • the activation function 1 can be a "Softmax" function or a "Sigmoid" function.
  • the activation function 1 compresses the value of the feature tensor output by the fully connected layer to the range of 0 to 1.
  • the value or array obtained through activation function 1 is a probability. In this embodiment, if the probability is less than 0.5, it is considered that there is no liquid dark zone, and the liquid dark zone judgment module 631 outputs a judgment result of no liquid dark zone; if the probability is greater than or equal to 0.5, it is considered that a liquid dark zone exists, and the liquid dark zone judgment module 631 outputs a judgment result that a liquid dark zone exists.
  • the probability limit for setting the presence or absence of a liquid dark zone can be any value between 0 and 1 according to specific conditions and different neural network models and parameters used.
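The classification head described above (global pooling, a fully connected layer, then an activation compressing the output to 0..1 with a 0.5 cutoff) can be sketched as follows. The NumPy implementation, array shapes, and weights are illustrative assumptions, not the patent's actual model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def judge_liquid_dark_area(feature_maps, fc_weights, fc_bias=0.0, threshold=0.5):
    """feature_maps: (C, H, W) features produced by the convolutional layers.
    Global average pooling -> fully connected layer -> sigmoid activation,
    then compare the resulting probability against the threshold."""
    pooled = feature_maps.mean(axis=(1, 2))       # global pooling: shape (C,)
    logit = float(pooled @ fc_weights) + fc_bias  # fully connected layer
    prob = sigmoid(logit)                         # compressed to the range 0..1
    return prob, prob >= threshold                # True => liquid dark zone present
```

With `threshold=0.5` this reproduces the decision rule of the embodiment; as the surrounding text notes, the cutoff may be set to any value between 0 and 1 depending on the model and parameters used.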
  • the liquid dark area segmentation module 633 uses the fully convolutional network 2 to segment and measure the liquid dark area when the liquid dark area judging module 631 determines that there is a liquid dark area.
  • the fully convolutional neural network 2 passes each frame of the ultrasound video image through multiple convolution operations in multiple convolutional layers to obtain one or more feature images; each of these feature images is then convolved with a 1*1 convolutional layer to obtain a numerical matrix with the same size as the original ultrasound video image frame, where each value in the numerical matrix corresponds to a pixel in the original frame.
  • the activation function compresses each value in the numerical matrix to the range of 0 to 1.
  • each value indicates the probability that the corresponding pixel in the ultrasound image belongs to the liquid dark zone. In this embodiment, a value greater than or equal to 0.5 indicates that the pixel belongs to the liquid dark area, and a value less than 0.5 indicates that it does not.
  • based on each value in the numerical matrix, the segmentation module then determines how the liquid dark area is distributed. Further, it combines the spatial scale corresponding to each pixel of the handheld ultrasound probe to calculate the area and/or fluid volume of each liquid dark zone.
  • the probability limit for judging whether the pixel corresponding to each value in the numerical matrix belongs to the liquid dark zone can be any value between 0 and 1 .
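The per-pixel thresholding and the area computation from the probe's spatial scale can be sketched like this; the probability matrix and the per-pixel area value are illustrative assumptions:

```python
import numpy as np

def segment_and_measure(prob_matrix, pixel_area_mm2, threshold=0.5):
    """prob_matrix: numerical matrix from the 1*1 convolution + activation,
    one value in 0..1 per pixel of the original ultrasound frame.
    pixel_area_mm2: physical area covered by one pixel, derived from the
    probe's spatial scale. Returns the binary mask of the liquid dark area
    and an estimate of its physical area."""
    mask = prob_matrix >= threshold  # True where the pixel belongs to the dark area
    area_mm2 = float(mask.sum()) * pixel_area_mm2
    return mask, area_mm2
```

A volume estimate would additionally require the slice thickness or multiple scan planes, which this per-frame sketch does not model.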
  • the liquid dark area segmentation module 633 can reuse the results of the multiple convolution operations that the liquid dark area judgment module 631 applies to each frame of the ultrasound video image through the multiple convolutional layers, segmenting the liquid dark area from those results and calculating the position, shape and/or size of the liquid dark area.
  • the display module 64 is used to receive the ultrasound image video stream and display the ultrasound video image on a display unit 80, and to receive the analysis result of the artificial intelligence model 63 and display the analysis result on the display unit 80.
  • the analysis result of the artificial intelligence model 63 may be "there is no liquid dark zone" or the distribution, shape and/or size of the liquid dark zone.
  • the display module 64 may display the analysis result by outputting the name of the part where the effusion exists, the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and the size and/or shape of each effusion.
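Superimposing the segmentation mask on an ultrasound frame, as the display module does, might look like the following alpha-blend sketch; the red tint and the blend factor are presentation choices assumed here, not specified by the patent:

```python
import numpy as np

def overlay_mask(frame_gray, mask, alpha=0.4):
    """frame_gray: (H, W) grayscale ultrasound frame with values 0..255.
    mask: (H, W) boolean liquid-dark-area mask. Blends a semi-transparent
    red highlight over masked pixels and returns an (H, W, 3) RGB image."""
    rgb = np.repeat(frame_gray[..., None], 3, axis=-1).astype(np.float32)
    red = np.array([255.0, 0.0, 0.0])
    rgb[mask] = (1.0 - alpha) * rgb[mask] + alpha * red  # blend only masked pixels
    return rgb.astype(np.uint8)
```

Keeping the unmasked pixels untouched lets the medical staff still read the underlying B-mode image while the highlighted region marks the suspected effusion.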
  • the trauma ultrasound diagnosis system 60 provided in this embodiment applies artificial-intelligence-enabled medical technology to trauma ultrasound diagnosis and can, while medical staff perform the ultrasound scan, quickly and accurately determine whether there is blood accumulation or fluid effusion in the detected part of the subject, as well as its specific location and size. With the assistance of artificial intelligence, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The trauma ultrasound diagnosis system provided in this embodiment therefore reduces the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • the application of the trauma ultrasound diagnosis system 60 to mobile devices can avoid dependence on the mobile network.
  • FIG. 9 is a system frame diagram of the trauma ultrasound diagnosis system in the fifth embodiment of the present invention.
  • the trauma ultrasound diagnosis system 90 of the fifth embodiment further includes an auxiliary diagnosis and treatment report generation module 91 and an auxiliary diagnosis and treatment report storage module 93.
  • the auxiliary diagnosis and treatment report generation module 91 is used to generate an artificial intelligence-assisted diagnosis and treatment report according to the analysis result of the artificial intelligence model 63.
  • the auxiliary diagnosis and treatment report storage module 93 is used to store the artificial intelligence-assisted diagnosis and treatment report.
  • the auxiliary diagnosis and treatment report storage module 93 may store the artificial intelligence-assisted diagnosis and treatment report in the storage unit 82 of the mobile device, or upload and save it to a cloud server 84.
  • the display module 95 in this embodiment is also used to display the artificial intelligence-assisted diagnosis and treatment report.
  • the beneficial effects of this embodiment can refer to the beneficial effects of the fourth embodiment.
  • the trauma ultrasound diagnosis system 90 of this embodiment generates and stores artificial intelligence-assisted diagnosis and treatment reports, which allows the reports to be reused and supports subsequent further verification of the artificial intelligence model.
  • FIG. 10 is a system frame diagram of the trauma ultrasound diagnosis system in the sixth embodiment of the present invention.
  • the artificial intelligence model 101 of the trauma ultrasound diagnosis system 100 in the sixth embodiment further includes a diagnosis part judgment module 102, which is used to determine which part is being diagnosed.
  • the parts that can be diagnosed by the artificial intelligence model 101 may include the area around the liver, the area around the spleen, the pericardium, the area around the pelvis, or the lungs, and the diagnostic part judgment module 102 can determine which part is being diagnosed according to the input of the user.
  • in addition to judging the part to be diagnosed according to the user's selection, the diagnostic part judgment module 102 may also determine which part is being diagnosed based on the ultrasound image video stream, using the different parameters or images characteristic of different parts.
  • the beneficial effects of this embodiment can refer to the beneficial effects of the fifth embodiment.
  • the trauma ultrasound diagnosis system 100 of this embodiment uses the artificial intelligence model 101 to determine which part is being diagnosed, which can further improve the accuracy of diagnosis.
  • FIG. 11 is a system frame diagram of the trauma ultrasonic diagnosis system in the seventh embodiment of the present invention.
  • the artificial intelligence model 111 of the trauma ultrasound diagnosis system 110 in the seventh embodiment also includes a diagnosis completion judgment module 112, which is used to judge whether all parts that need to be diagnosed have been diagnosed.
  • the auxiliary diagnosis and treatment report generating module 113 in this embodiment is used to generate an auxiliary diagnosis and treatment report when the diagnosis completion judgment module 112 judges that all diagnosis parts that need to be diagnosed have been diagnosed.
  • the display module 64 is configured to display the analysis result of the artificial intelligence model on the display unit 80 when the diagnosis completion judgment module judges that all parts that need to be diagnosed have been diagnosed.
  • the beneficial effects of this embodiment can be referred to the beneficial effects of the sixth embodiment.
  • the trauma ultrasound diagnosis system 110 of this embodiment uses the artificial intelligence model 111 to determine whether the diagnosis of all parts to be diagnosed is complete, which can avoid missing the detection of key parts.
  • FIG. 12 is a schematic diagram of functional modules of a mobile device in the eighth embodiment of the present invention.
  • the mobile device 120 includes a processing unit 121, a storage unit 122, a communication unit 123, a display unit 124, and a built-in ultrasonic testing program 125 and an artificial intelligence model 126.
  • the ultrasonic testing program 125 and the artificial intelligence model 126 can be installed on the mobile device 120 in the form of an application program, using the processing unit 121, the storage unit 122, the communication unit 123 and the display unit 124 of the mobile device 120 to complete the ultrasonic diagnosis of trauma.
  • the mobile device 120 may be, but is not limited to, a smart phone, a tablet computer, etc.
  • the electronic device that completes the ultrasonic diagnosis of trauma by installing the ultrasonic detection program 125 and the artificial intelligence model 126 is not limited to the mobile device 120, and may also be another terminal device with computing capability, such as a desktop computer.
  • the communication unit 123 is used to communicate with an ultrasonic probe 70, obtain from the ultrasonic probe 70 the electrical signal corresponding to the ultrasonic echo, and transmit the electrical signal to the ultrasonic testing program 125 for use.
  • the communication unit 123 and the ultrasound probe 70 may be connected via a wired connection using a wired communication technology, or via a wireless connection using a wireless communication technology.
  • the communication unit 123 may be a signal transceiving unit matching the corresponding communication mode.
  • the storage unit 122 is used to store the ultrasonic testing program 125 and the artificial intelligence model 126 and the data (such as parameters) they need to use or the data generated (such as analysis results, diagnosis reports, etc.).
  • the processing unit 121 is used to execute the ultrasonic detection program 125 and the artificial intelligence model 126 to complete the ultrasonic diagnosis of trauma.
  • when the processing unit 121 executes the ultrasonic detection program 125 and the artificial intelligence model 126, the steps of the trauma ultrasonic diagnosis method in the above method embodiments are implemented, or the functions of each module in the trauma ultrasound diagnosis system in the above-mentioned embodiments are realized.
  • both the ultrasonic testing program 125 and the artificial intelligence model 126 may be divided into one or more modules, and the one or more modules are stored in the storage unit 122 and executed by the processing unit 121 to complete the various embodiments of the present invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the ultrasonic testing program 125 or the artificial intelligence model 126 in the electronic device.
  • the ultrasonic testing program 125 can be divided into the modules 61, 62 and 64 in the fourth embodiment and FIG. 6, and the artificial intelligence model 126 can be divided into the modules 631 and 633 in the fourth embodiment and FIG. 6.
  • the ultrasonic testing program 125 can be divided into the modules 61, 62, 91, 93 and 95 in the fifth embodiment and FIG. 9, and the artificial intelligence model 126 can be divided into the modules 631 and 633 in the fourth embodiment.
  • the ultrasonic testing program 125 can be divided into the modules 61, 62, 91, 93 and 95 in the sixth embodiment and FIG. 10, and the artificial intelligence model 126 can be divided into the modules 631, 633, and 102 in the sixth embodiment and FIG. 10; or, the ultrasonic testing program 125 can be divided into the modules 61, 62, 91, 93, and 95 in the seventh embodiment and FIG. 11, and the artificial intelligence model 126 can be divided into the modules 631, 633, 102, and 112 in the seventh embodiment and FIG. 11.
  • the schematic diagram of FIG. 12 is only an example of the electronic device, and does not constitute a limitation on the mobile device 120.
  • the mobile device 120 may include more or fewer components than shown in the figure, combine certain components, or use different components; for example, the mobile device 120 may also include input and output devices.
  • the so-called processing unit 121 can be any type of processor that can run the artificial intelligence model 126, such as a digital signal processor (DSP), an application specific integrated circuit (ASIC), etc.
  • the processing unit 121 uses various interfaces and lines to connect the various parts of the mobile device 120.
  • if the integrated module of the mobile device 120 is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the present invention can implement all or part of the processes in the above-mentioned embodiment methods by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium.
  • when executed, the computer program can realize the steps of each of the above method embodiments.
  • the content contained in the computer-readable medium can be appropriately added or deleted according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
  • the mobile device 120 provided in this embodiment applies artificial-intelligence-enabled medical technology to the mobile device, so that the mobile device, matched with the ultrasound probe 70, can complete the trauma ultrasound diagnosis and can, in real time during the medical staff's ultrasound scan, quickly and accurately determine whether there is blood accumulation or fluid effusion in the detected part of the subject, as well as its specific location and size. With the assistance of artificial intelligence, even medical staff without extensive experience in ultrasound can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The mobile device provided in this embodiment therefore reduces the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • the application of trauma ultrasound diagnosis to mobile devices can also avoid dependence on mobile networks.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

A trauma ultrasound diagnosis method and system, a mobile device, and a computer-readable storage medium. The method comprises: obtaining an electrical signal converted from an ultrasonic echo and generating an ultrasound image video stream, the ultrasonic echo being the echo received when an ultrasound probe examines a subject; displaying an ultrasound video image; using an artificial intelligence model to analyze a detected part on the basis of the ultrasound image video stream and determine whether a liquid dark area exists; and displaying an analysis result of the artificial intelligence model. The method improves the accuracy and efficiency of ultrasound diagnosis and avoids dependence on a mobile network.
PCT/CN2019/127644 2019-12-23 2019-12-23 Procédé et système de détection ultrasonore de trauma, dispositif mobile et support de stockage WO2021127930A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/127644 WO2021127930A1 (fr) 2019-12-23 2019-12-23 Procédé et système de détection ultrasonore de trauma, dispositif mobile et support de stockage
CN201980100817.7A CN114599291A (zh) 2019-12-23 2019-12-23 创伤超声检测方法、系统、移动设备及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/127644 WO2021127930A1 (fr) 2019-12-23 2019-12-23 Procédé et système de détection ultrasonore de trauma, dispositif mobile et support de stockage

Publications (1)

Publication Number Publication Date
WO2021127930A1 true WO2021127930A1 (fr) 2021-07-01

Family

ID=76573414

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/127644 WO2021127930A1 (fr) 2019-12-23 2019-12-23 Procédé et système de détection ultrasonore de trauma, dispositif mobile et support de stockage

Country Status (2)

Country Link
CN (1) CN114599291A (fr)
WO (1) WO2021127930A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140288424A1 (en) * 2013-03-09 2014-09-25 West Virginia University System and Device for Tumor Characterization Using Nonlinear Elastography Imaging
CN108463174A (zh) * 2015-12-18 2018-08-28 皇家飞利浦有限公司 用于表征对象的组织的装置和方法
CN108652672A (zh) * 2018-04-02 2018-10-16 中国科学院深圳先进技术研究院 一种超声成像系统、方法及装置
CN110327016A (zh) * 2019-06-11 2019-10-15 清华大学 基于光学影像与光学治疗的智能型微创诊疗一体化系统
US20190336107A1 (en) * 2017-01-05 2019-11-07 Koninklijke Philips N.V. Ultrasound imaging system with a neural network for image formation and tissue characterization
CN110556183A (zh) * 2019-09-20 2019-12-10 林于慧 一种应用在中医设备上的快速诊断设备和方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140288424A1 (en) * 2013-03-09 2014-09-25 West Virginia University System and Device for Tumor Characterization Using Nonlinear Elastography Imaging
CN108463174A (zh) * 2015-12-18 2018-08-28 皇家飞利浦有限公司 用于表征对象的组织的装置和方法
US20190336107A1 (en) * 2017-01-05 2019-11-07 Koninklijke Philips N.V. Ultrasound imaging system with a neural network for image formation and tissue characterization
CN108652672A (zh) * 2018-04-02 2018-10-16 中国科学院深圳先进技术研究院 一种超声成像系统、方法及装置
CN110327016A (zh) * 2019-06-11 2019-10-15 清华大学 基于光学影像与光学治疗的智能型微创诊疗一体化系统
CN110556183A (zh) * 2019-09-20 2019-12-10 林于慧 一种应用在中医设备上的快速诊断设备和方法

Also Published As

Publication number Publication date
CN114599291A (zh) 2022-06-07

Similar Documents

Publication Publication Date Title
US11960571B2 (en) Method and apparatus for training image recognition model, and image recognition method and apparatus
EP3445250B1 (fr) Analyse d'image échocardiographique
US9314225B2 (en) Method and apparatus for performing ultrasound imaging
US8861824B2 (en) Ultrasonic diagnostic device that provides enhanced display of diagnostic data on a tomographic image
WO2019197427A1 (fr) Système ultrasonore à réseau neuronal artificiel permettant de récupérer des paramètres d'imagerie pour patient récurrent
JP2002253552A (ja) リモート閲覧用ステーションにおいて画像及びリポートを結合する方法及び装置
CN111448614B (zh) 用于分析超声心动图的方法和装置
CN114565577A (zh) 一种基于多模态影像组学的颈动脉易损性分级方法及系统
WO2021127930A1 (fr) Procédé et système de détection ultrasonore de trauma, dispositif mobile et support de stockage
US11510656B2 (en) Ultrasound imaging method and ultrasound imaging system therefor
US20240115245A1 (en) Method and system for expanding function of ultrasonic imaging device
US20230137369A1 (en) Aiding a user to perform a medical ultrasound examination
CN111696085B (zh) 一种肺冲击伤伤情现场快速超声评估方法及设备
CN115813433A (zh) 基于二维超声成像的卵泡测量方法和超声成像系统
CN112270974A (zh) 一种基于人工智能的智能辅助医学影像工作站
JP2019118694A (ja) 医用画像生成装置
CN113792740A (zh) 眼底彩照的动静脉分割方法、系统、设备及介质
KR20150107515A (ko) 의료진단을 위한 의료영상 처리장치 및 그 방법
JP2002163635A (ja) 診断部位の超音波画像から得られた特徴量に基づき階層型ニューラルネットワークを利用してびまん性肝疾患を診断支援するシステム、及びその診断支援方法
CN112515705A (zh) 用于投影轮廓启用的计算机辅助检测(cad)的方法和系统
TWI494549B (zh) 利用多核心支援向量迴歸之背光模組輝度檢測方法及檢測器
US20240184854A1 (en) Method and apparatus for training image recognition model, and image recognition method and apparatus
Ibrahim et al. Inexpensive 1024-channel 3D telesonography system on FPGA
US20220361854A1 (en) Dematerialized, multi-user system for the acquisition, generation and processing of ultrasound images
US11170544B2 (en) Application of machine learning to iterative and multimodality image reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19957632

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19957632

Country of ref document: EP

Kind code of ref document: A1