WO2021127930A1 - Trauma ultrasonic detection method and system, mobile device, and storage medium - Google Patents



Publication number
WO2021127930A1
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasound, artificial intelligence, diagnosis, dark area, liquid dark
Application number
PCT/CN2019/127644
Other languages
French (fr)
Chinese (zh)
Inventor
陈萱
胡书剑
熊麟霏
鲍玉婷
伍利
刘健
牟峰
Original Assignee
深圳华大智造科技股份有限公司
Application filed by 深圳华大智造科技股份有限公司
Priority to CN201980100817.7A (CN114599291A)
Priority to PCT/CN2019/127644
Publication of WO2021127930A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves

Definitions

  • The present invention relates to the field of ultrasonic detection, and in particular to a trauma ultrasound detection method, system, mobile device, and storage medium.
  • Medical ultrasound examination is a medical imaging diagnosis technology that uses ultrasound equipment to emit high-frequency sound waves and record the reflected waves generated by the tissue structure in the organism.
  • Traditional desktop ultrasound equipment is usually bulky and heavy, making it inconvenient to move and transport; examinations with desktop equipment usually must be performed in a clinic.
  • Portable ultrasound devices are about the size of a notebook computer, freeing ultrasound examination from the constraint of being performed in a specific space.
  • Handheld ultrasound usually consists of a handheld ultrasound probe and a mobile device. Being small enough to fit in a pocket, it suits scenarios with high portability requirements, such as battlefield rescue or disease screening in remote areas.
  • In addition, the affordable price of handheld ultrasound greatly lowers the barrier to popularizing ultrasound.
  • In a first aspect, an embodiment of the present invention provides a trauma ultrasound diagnosis method, including: acquiring an electrical signal converted from an ultrasonic echo received when an ultrasound probe detects a subject; generating an ultrasound image video stream from the electrical signal; using an artificial intelligence model to analyze the detected part based on the ultrasound image video stream to determine whether a liquid dark area exists; and displaying the analysis result of the artificial intelligence model.
  • In some embodiments, the method further includes: if a liquid dark area exists, segmenting the liquid dark area using the artificial intelligence model and analyzing the position, shape and/or size of the liquid dark area.
  • In some embodiments, the analysis result is either that no liquid dark area exists or the distribution, shape and/or size of the liquid dark area.
  • In some embodiments, displaying the analysis result of the artificial intelligence model includes: outputting the name of the part where the liquid dark area exists, outputting the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and/or outputting the position, size and/or shape of each liquid dark area.
  • In some embodiments, the artificial intelligence model adopts a lightweight deep convolutional neural network, and the trauma ultrasound diagnosis method is implemented on a mobile device.
  • In some embodiments, "using an artificial intelligence model to analyze the detected part based on the ultrasound image video stream to determine whether a liquid dark area exists" includes: performing multiple convolution operations on each frame of the ultrasound video with a fully convolutional neural network to obtain one or more feature images; passing the one or more feature images through a global pooling layer, a fully connected layer, and an activation function to obtain a probability; and determining whether a liquid dark area exists according to the probability.
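As an illustrative aside (not part of the patent text), the classification tail described above can be sketched in NumPy. The shapes, weights, and single-logit output here are assumptions for the example, not the patent's actual network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dark_zone_probability(feature_maps, fc_weights, fc_bias):
    """feature_maps: (C, H, W) array produced by the convolutional backbone."""
    # Global average pooling reduces each of the C feature maps to one scalar.
    pooled = feature_maps.mean(axis=(1, 2))      # shape (C,)
    # The fully connected layer synthesizes the pooled features into a logit.
    logit = fc_weights @ pooled + fc_bias
    # The activation compresses the logit into 0..1, read as a probability.
    return sigmoid(logit)

rng = np.random.default_rng(0)
features = rng.standard_normal((8, 16, 16))      # 8 feature maps, 16x16 each
w, b = rng.standard_normal(8), 0.0
p = dark_zone_probability(features, w, b)
has_dark_zone = p >= 0.5                         # decision threshold
```

With all-zero features and weights, the sigmoid of a zero logit is exactly 0.5, which is the decision boundary used by the embodiment.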
  • In some embodiments, "segmenting the liquid dark area using the artificial intelligence model, and analyzing the position, shape and/or size of the liquid dark area" includes: performing multiple convolution operations on each frame of the ultrasound video with a fully convolutional neural network to obtain one or more feature images; applying a 1×1 convolution operation to each of the one or more feature images to obtain a numerical matrix with the same size as the frame, wherein each value in the numerical matrix corresponds to one pixel of the frame; segmenting, through the activation function, the numerical distribution corresponding to the liquid dark area from the numerical matrix; and calculating the area and/or fluid volume of each liquid dark area according to the numerical distribution.
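The 1×1 convolution step mentioned above can be illustrated with a minimal NumPy sketch (assumed channel count and weights; not the patent's implementation). A 1×1 convolution is a per-pixel weighted sum across channels, which is why the output matrix has exactly the spatial size of the input frame:

```python
import numpy as np

def conv1x1(feature_maps, weights, bias=0.0):
    """feature_maps: (C, H, W); weights: (C,). Returns an (H, W) matrix."""
    # Mix the C channels at every pixel; spatial dimensions are preserved.
    return np.tensordot(weights, feature_maps, axes=([0], [0])) + bias

rng = np.random.default_rng(1)
feats = rng.standard_normal((4, 32, 32))         # 4 feature images, 32x32
score_matrix = conv1x1(feats, rng.standard_normal(4))
assert score_matrix.shape == (32, 32)            # same size as the frame
```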
  • the method further includes: forming an artificial intelligence-assisted diagnosis and treatment report, and displaying and/or storing the artificial intelligence-assisted diagnosis and treatment report.
  • In some embodiments, the method further includes: using the artificial intelligence model to identify which part is being diagnosed.
  • In some embodiments, the method further includes: using the artificial intelligence model to determine whether all parts that need to be diagnosed have been diagnosed; if so, displaying the analysis results of the artificial intelligence model and/or generating the artificial intelligence-assisted diagnosis and treatment report; if any part that needs to be diagnosed has not yet been diagnosed, continuing to acquire the electrical signal converted from the ultrasonic echo.
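The "all parts diagnosed?" loop can be sketched as follows (illustrative only; the region names echo the anatomical sites listed later in the text, and the set-based bookkeeping is an assumption, not the patent's mechanism):

```python
# Regions that the workflow requires before a report may be formed.
REQUIRED_REGIONS = ["perihepatic", "perisplenic", "pericardial", "pelvic"]

def diagnosis_complete(diagnosed):
    """True only when every required region has been examined."""
    return all(region in diagnosed for region in REQUIRED_REGIONS)

diagnosed = set()
diagnosed.update(["perihepatic", "pericardial"])
assert not diagnosis_complete(diagnosed)     # keep acquiring echo signals
diagnosed.update(["perisplenic", "pelvic"])
assert diagnosis_complete(diagnosed)         # now form the report
```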
  • In a second aspect, an embodiment of the present invention provides a trauma ultrasound diagnosis system, including:
  • a signal acquisition module for acquiring an electrical signal converted by an ultrasonic echo, the ultrasonic echo being the ultrasonic echo received when the ultrasonic probe detects the subject;
  • a video image signal generating module configured to generate an ultrasound image video stream according to the electrical signal;
  • the artificial intelligence model includes:
  • the liquid dark area judgment module is configured to receive the ultrasound image video stream, and analyze whether there is a liquid dark area in the detection part based on the ultrasound image video stream;
  • the display module is used for receiving the ultrasound image video stream and displaying the ultrasound video image on a display unit, and for receiving the analysis result of the artificial intelligence model and displaying the analysis result on the display unit.
  • In some embodiments, the artificial intelligence model further includes a liquid dark area segmentation module, which is used to segment the liquid dark area and analyze the position, shape and/or size of the liquid dark area.
  • In some embodiments, the analysis result of the artificial intelligence model is either that no liquid dark area exists or the distribution, shape and/or size of the liquid dark area.
  • In some embodiments, the display module is used to output the name of the part where the liquid dark area exists, the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and/or the position, size and/or shape of each liquid dark area.
  • the artificial intelligence model adopts a lightweight deep convolutional neural network as a model, and the system is installed on a mobile device.
  • In some embodiments, the liquid dark area judgment module is used to perform multiple convolution operations on each frame of the ultrasound video with a fully convolutional neural network to obtain one or more feature images, pass the one or more feature images through a global pooling layer, a fully connected layer, and an activation function to obtain a probability, and determine whether a liquid dark area exists according to the probability.
  • In some embodiments, the liquid dark area segmentation module is used to perform multiple convolution operations on each frame of the ultrasound video with a fully convolutional neural network to obtain one or more feature images, and to apply a 1×1 convolution operation to each of the one or more feature images to obtain a numerical matrix with the same size as the frame, wherein each value in the numerical matrix corresponds to one pixel of the frame.
  • The liquid dark area segmentation module is also used to segment the numerical distribution corresponding to the liquid dark area from the numerical matrix, and to calculate the area and/or fluid volume of each liquid dark area based on that distribution.
  • system further includes an auxiliary diagnosis and treatment report generation module, and the auxiliary diagnosis and treatment report generation module is used to generate an artificial intelligence auxiliary diagnosis and treatment report according to the analysis result of the artificial intelligence model.
  • system further includes an auxiliary diagnosis and treatment report storage module, and the auxiliary diagnosis and treatment report storage module is used to store the artificial intelligence auxiliary diagnosis and treatment report.
  • the display module is also used to display the artificial intelligence-assisted diagnosis and treatment report.
  • In some embodiments, the system further includes a diagnosis part judgment module, which is used to identify which part is being diagnosed.
  • In some embodiments, the system further includes a diagnosis completion judgment module, which is used to judge whether all parts that need to be diagnosed have been diagnosed. When the diagnosis completion judgment module judges that all such parts have been diagnosed, the auxiliary diagnosis and treatment report generation module generates the auxiliary diagnosis and treatment report, and/or the display module displays the analysis result of the artificial intelligence model on the display unit.
  • In a third aspect, an embodiment of the present invention provides a mobile device including a communication unit, a display unit, a processing unit, and a storage unit.
  • The storage unit stores a plurality of program modules, which are loaded and executed by the processing unit to implement the above trauma ultrasound diagnosis method.
  • An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processing unit, the computer program implements the above trauma ultrasound diagnosis method.
  • The trauma ultrasound diagnosis method, system, mobile device, and storage medium provided by the embodiments of the present invention apply artificial-intelligence-enabled medicine on mobile devices for ultrasound diagnosis of trauma. While medical staff perform the ultrasound scan, the system judges quickly, accurately, and in real time whether blood or fluid has accumulated in the detected part of the subject, along with its specific location and size. With this assistance, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The method therefore reduces the skill requirements placed on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • In addition, running artificial intelligence trauma ultrasound diagnosis directly on the mobile device avoids dependence on a mobile network.
  • Fig. 1 is a schematic flowchart of a trauma ultrasound diagnosis method in the first embodiment of the present invention.
  • Fig. 2 is a schematic diagram of an exemplary artificial intelligence analysis of whether a liquid dark area exists.
  • Fig. 3 is a schematic diagram of an exemplary artificial intelligence segmentation of liquid dark areas.
  • Fig. 4 is a schematic flowchart of a trauma ultrasound diagnosis method in the second embodiment of the present invention.
  • Fig. 5 is a schematic flowchart of a trauma ultrasound diagnosis method in the third embodiment of the present invention.
  • Fig. 6 is a system block diagram of a trauma ultrasound diagnosis system in the fourth embodiment of the present invention.
  • Fig. 7 is a schematic diagram of the artificial intelligence module analyzing whether a liquid dark area exists.
  • Fig. 8 is a schematic diagram of the artificial intelligence module segmenting the liquid dark area.
  • Fig. 9 is a system block diagram of a trauma ultrasound diagnosis system in the fifth embodiment of the present invention.
  • Fig. 10 is a system block diagram of a trauma ultrasound diagnosis system in the sixth embodiment of the present invention.
  • Fig. 11 is a system block diagram of a trauma ultrasound diagnosis system in the seventh embodiment of the present invention.
  • Fig. 12 is a system block diagram of a trauma ultrasound diagnosis system in the eighth embodiment of the present invention.
  • Liquid dark area judgment module 631
  • FIG. 1 is a schematic flowchart of a trauma ultrasound diagnosis method in Embodiment 1 of the present invention. According to different needs, the order of the steps in the flowchart may be changed, and some steps may be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
  • The trauma ultrasound diagnosis method in the first embodiment includes the following steps.
  • Step S11 Obtain the electrical signal corresponding to the ultrasound echo sent by the ultrasound probe at the mobile device side, and generate an ultrasound image video stream.
  • In detail, a communication connection is established between the mobile device and a handheld ultrasound probe, and the probe acquires the ultrasonic echo of the detected part by being swept across that part of the subject.
  • The handheld ultrasound device converts the ultrasonic echo into an electrical signal and sends it to the mobile device, so that the mobile device can continuously obtain the electrical signal corresponding to the ultrasonic echo and generate an ultrasound image video stream.
  • Step S12 Display the ultrasound video image on the mobile device.
  • Step S13 Use the artificial intelligence model to analyze the detected part based on the ultrasound image video stream and determine whether there is a liquid dark area of effusion or blood accumulation. If there is a liquid dark area, go to step S14; if not, go to step S15.
  • Step S14 Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
  • Note that step S12 is not necessarily ordered relative to steps S13 and S14; step S12 may be performed simultaneously with steps S13 and S14, or before or after them.
  • The artificial intelligence model adopts a lightweight deep convolutional neural network, whose convolutional layers filter each frame of the input ultrasound image video stream through convolution kernels to extract feature information.
  • The model contains multiple interconnected convolutional layers, which extract, level by level, features with richer semantic information and greater discriminability from each frame, facilitating accurate predictions of the result.
  • Each convolutional layer contains many parameters, and extracting features requires a large amount of computation.
  • The convolutional layers are therefore optimized so that, within an acceptable accuracy range, the parameter count and computation are reduced as much as possible, allowing the artificial intelligence model to perform well when computing resources are limited.
  • The optimization of the convolutional layers can follow the lightweight MobileNet network proposed by Google.
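To make the MobileNet-style saving concrete (illustrative arithmetic, not from the patent): MobileNet replaces a standard K×K convolution with a depthwise K×K convolution plus a pointwise 1×1 convolution, shrinking the parameter count to roughly a fraction 1/C_out + 1/K² of the original.

```python
def standard_conv_params(k, c_in, c_out):
    # A standard KxK convolution mixes channels and space together.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise KxK (one filter per input channel) + pointwise 1x1 mixing.
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)          # 73728 parameters
sep = depthwise_separable_params(k, c_in, c_out)    # 8768 parameters
ratio = sep / std                                   # equals 1/c_out + 1/k**2
```

For this example layer the separable form needs under 12% of the parameters, which is the kind of saving that makes on-device inference feasible.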
  • the artificial intelligence model analysis is divided into two stages.
  • In the first stage, fully convolutional neural network 1 analyzes whether a dark area of effusion exists.
  • Network 1 passes each frame of the ultrasound video through multiple convolution operations across multiple convolutional layers to obtain one or more feature images. These feature images are highly refined and dimensionally reduced by a global pooling layer, then classified and synthesized by a fully connected layer; finally, the output of the fully connected layer is fed into activation function 1.
  • Activation function 1 may be a Softmax or Sigmoid function; it compresses the values of the feature tensor output by the fully connected layer into the range 0 to 1.
  • The value or array obtained through activation function 1 is a probability. In this embodiment, a probability below 0.5 is taken to mean no liquid dark area exists, and the artificial intelligence model returns that judgment after step S13.
  • If the probability is 0.5 or above, a liquid dark area is considered to exist, and the artificial intelligence model returns that judgment after step S13.
  • In other embodiments, depending on the specific situation and the neural network model and parameters used, the probability threshold for the presence of a liquid dark area can be any value between 0 and 1.
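A hedged sketch of this activation-and-threshold step (the two-class output layout, with index 1 meaning "dark area present", is an assumption for the example):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; output sums to 1.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def has_liquid_dark_zone(logits, threshold=0.5):
    """logits: two-class output of the fully connected layer."""
    p_present = softmax(np.asarray(logits, dtype=float))[1]
    return bool(p_present >= threshold)

# The 0.5 limit is the embodiment's choice; it stays configurable, as the
# text notes that other thresholds between 0 and 1 may be used.
assert has_liquid_dark_zone([0.2, 2.1])
assert not has_liquid_dark_zone([3.0, -1.0])
```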
  • In the second stage, fully convolutional network 2 segments and measures the existing liquid dark areas.
  • Network 2 passes each frame of the ultrasound video through multiple convolution operations across multiple convolutional layers to obtain one or more feature images; each of these feature images then undergoes a convolution with a 1×1 convolutional layer, yielding a numerical matrix the same size as the original frame, in which each value corresponds to one pixel of the original frame.
  • An activation function compresses each value in the numerical matrix into the range 0 to 1.
  • The magnitude of each value indicates the probability that the corresponding pixel in the ultrasound image belongs to a liquid dark area. In this embodiment, a value of 0.5 or above indicates that the pixel belongs to a liquid dark area, and a value below 0.5 that it does not.
  • The numerical distribution of the corresponding liquid dark area is thus segmented from the matrix according to these values. Further, combining the spatial scale that the ultrasound probe assigns to each pixel, the area and/or fluid volume of each liquid dark area is calculated. In other embodiments, depending on the specific situation and the neural network model and parameters used, the probability threshold for deciding whether a pixel belongs to a liquid dark area can be any value between 0 and 1.
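The per-pixel decision and area measurement can be sketched as follows (the 0.01 cm² per-pixel scale is an illustrative assumption; in practice it comes from the probe's imaging geometry):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dark_zone_area(score_matrix, pixel_area_cm2, threshold=0.5):
    """score_matrix: (H, W) output of the 1x1 convolution."""
    probs = sigmoid(score_matrix)        # each value now lies in (0, 1)
    mask = probs >= threshold            # per-pixel membership decision
    # Area follows from the count of dark-zone pixels and the spatial
    # scale the probe assigns to one pixel.
    return mask, mask.sum() * pixel_area_cm2

scores = np.full((100, 100), -3.0)       # background: low scores
scores[40:60, 40:60] = 3.0               # a 20x20-pixel fluid region
mask, area = dark_zone_area(scores, pixel_area_cm2=0.01)
assert mask.sum() == 400
assert abs(area - 4.0) < 1e-9            # 400 pixels * 0.01 cm^2
```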
  • To reduce redundant computation, the feature images produced by the first stage's multiple convolution operations on each frame can be reused in the second stage, with the liquid dark area segmented and measured on that result, yielding the position, shape and/or size of the liquid dark area.
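The feature-sharing idea can be sketched with trivial stand-in functions (none of these bodies reflect the patent's networks; the point is only that the backbone runs once per frame and both stages read its output):

```python
calls = {"backbone": 0}

def backbone(frame):
    # Stand-in for the shared convolutional layers; counts invocations.
    calls["backbone"] += 1
    return [v * 2 for v in frame]            # placeholder "feature images"

def classify(features):
    return sum(features) > 0                 # stand-in presence decision

def segment(features):
    return [v > 0 for v in features]         # stand-in per-pixel mask

frame = [1.0, -2.0, 3.0]
features = backbone(frame)                   # run the convolutions once
present = classify(features)                 # stage 1 reuses the features
mask = segment(features)                     # stage 2 reuses the same result
assert calls["backbone"] == 1                # no duplicated convolution work
```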
  • Step S15 Display the analysis result of the artificial intelligence model.
  • the analysis result of the artificial intelligence model is displayed on the mobile terminal.
  • The analysis result of the artificial intelligence model may be "no liquid dark area exists" or the distribution, shape and/or size of the liquid dark area.
  • The display may use any one, or a combination of two or more, of the following: outputting the name of the part where fluid accumulation exists, displaying the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and outputting the size and/or shape of each effusion.
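Superimposing the segmentation mask on a frame for display can be sketched as a simple alpha blend (grayscale frames and the 0.5 blend factor are assumptions for the example, not the patent's rendering method):

```python
import numpy as np

def overlay_mask(frame, mask, highlight=255.0, alpha=0.5):
    """frame: (H, W) grayscale image; mask: (H, W) boolean dark-zone mask."""
    out = frame.astype(float).copy()
    # Blend masked pixels toward the highlight intensity so the dark zone
    # stands out while the underlying anatomy remains visible.
    out[mask] = (1 - alpha) * out[mask] + alpha * highlight
    return out

frame = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
shown = overlay_mask(frame, mask)
```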
  • Step S16 forming an artificial intelligence-assisted diagnosis and treatment report.
  • Step S17 Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
  • the artificial intelligence-assisted diagnosis and treatment report may be stored in a mobile device, or the artificial intelligence-assisted diagnosis and treatment report may also be uploaded and saved to a cloud server.
  • The trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medicine on mobile devices for ultrasound diagnosis of trauma: while medical staff perform the ultrasound scan, it judges quickly, accurately, and in real time whether blood or fluid has accumulated in the detected part of the subject.
  • In addition, running artificial intelligence trauma ultrasound diagnosis directly on the mobile device avoids dependence on a mobile network.
  • In other embodiments, the trauma ultrasound diagnosis method may include steps S11-S15 without S16 and S17.
  • The trauma ultrasound diagnosis method may further include a step of establishing a communication connection between the mobile device and the ultrasound probe; step S11 starts after this connection is established.
  • FIG. 4 is a schematic flowchart of a trauma ultrasound diagnosis method in Embodiment 2 of the present invention. According to different needs, the order of the steps in the flowchart may be changed, and some steps may be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
  • The trauma ultrasound diagnosis method in the second embodiment includes the following steps.
  • Step S21 Obtain the electrical signal corresponding to the ultrasonic echo sent by the ultrasonic probe at the mobile device side, and generate an ultrasonic image video stream.
  • Step S22 Display the ultrasound video image on the mobile device.
  • Steps S21 and S22 and their related descriptions in this embodiment are consistent with steps S11 and S12 in Embodiment 1, respectively; please refer to Embodiment 1 for details, which are not repeated here.
  • Step S23 Use the artificial intelligence model to identify which part is being diagnosed.
  • The parts that the artificial intelligence model can diagnose include the regions around the liver, spleen, pericardium, pelvic cavity, or lungs.
  • The artificial intelligence model can determine which part is being diagnosed from the user's input; for example, the user selects the part to be diagnosed on the mobile device, and the model follows that selection.
  • The artificial intelligence model can also determine which part is being diagnosed from the ultrasound image video stream itself, based on the differing parameters or appearance of different parts.
  • Step S24 Use the artificial intelligence model to analyze the diagnosed part based on the ultrasound image video stream and determine whether there is a liquid dark area of effusion or blood accumulation. If there is a liquid dark area, go to step S25; if not, go to step S26.
  • Step S25 Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
  • Step S26 Display the analysis result of the artificial intelligence model on the mobile terminal.
  • Step S27 Form an artificial intelligence-assisted diagnosis and treatment report.
  • Step S28 Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
  • Steps S24-S28 and their related descriptions in this embodiment are consistent with steps S13-S17 in Embodiment 1, respectively; please refer to Embodiment 1 for details, which are not repeated here.
  • step S22 and steps S23-S25 are not necessarily sequential.
  • Step S22 can be performed simultaneously with steps S23-S25, or performed before or after steps S23-S25.
  • The trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medicine on mobile devices for ultrasound diagnosis of trauma: it quickly and accurately identifies the part being scanned and judges in real time whether blood or fluid has accumulated there, along with its specific location and size. With this assistance, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The method therefore reduces the skill requirements placed on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • In addition, running artificial intelligence trauma ultrasound diagnosis directly on the mobile device avoids dependence on a mobile network.
  • FIG. 5 is a schematic flowchart of a trauma ultrasound diagnosis method in Embodiment 3 of the present invention. According to different needs, the order of the steps in the flowchart may be changed, and some steps may be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
  • The trauma ultrasound diagnosis method in the third embodiment includes the following steps.
  • Step S31 Obtain the electrical signal corresponding to the ultrasound echo sent by the ultrasound probe at the mobile device side, and generate an ultrasound image video stream.
  • Step S32 Display the ultrasound video image on the mobile device.
  • Step S33 Use the artificial intelligence model to analyze which part of the diagnosed part is.
  • Step S34 Use the artificial intelligence model to analyze the diagnosed part based on the ultrasound image video stream and determine whether there is a liquid dark area of effusion or blood accumulation. If there is a liquid dark area, go to step S35; if not, go to step S36.
  • Step S35 Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
  • Steps S31-S35 and their related descriptions in this embodiment are consistent with steps S21-S25 in Embodiment 2, respectively; please refer to Embodiment 2 for details, which are not repeated here.
  • step S32 and steps S33-S35 are not necessarily sequential.
  • Step S32 can be performed simultaneously with steps S33-S35, or performed before or after steps S33-S35.
  • Step S36 Display the analysis result of the artificial intelligence model on the mobile terminal.
  • Step S37 Use the artificial intelligence model to determine whether all parts that need to be diagnosed have been diagnosed. If yes, go to step S38; if not, return to step S31 to start ultrasound diagnosis of the next part.
  • Step S38 forming an artificial intelligence-assisted diagnosis and treatment report.
  • Step S39 Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
  • Steps S37-S39 and their related descriptions in this embodiment are consistent with steps S26-S28 in Embodiment 2, respectively; please refer to Embodiment 2 for details, which are not repeated here.
  • In other embodiments, step S37 may be performed before step S36: if step S37 determines that all parts needing diagnosis have been diagnosed, the flow proceeds to step S36 and then to step S38 (or step S38 is executed at the same time as step S36); if step S37 determines that some parts still await diagnosis, the flow returns to step S31 to start ultrasound diagnosis of the next part.
  • The trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medicine on mobile devices for ultrasound diagnosis of trauma: it quickly and accurately assists medical staff in performing ultrasound diagnosis of all parts to be examined, judging in real time whether blood or fluid has accumulated in the detected part, along with its specific location and size. With this assistance, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and complete trauma ultrasound diagnosis of all parts to be examined. The method therefore reduces the skill requirements placed on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • In addition, running artificial intelligence trauma ultrasound diagnosis directly on the mobile device avoids dependence on a mobile network.
  • Figures 1-5 illustrate the trauma ultrasound diagnosis method in different embodiments of the present invention.
  • Using artificial intelligence to interpret ultrasound video images reduces the skill requirements placed on medical staff and greatly improves the accuracy and efficiency of trauma ultrasound diagnosis.
  • The functional modules and hardware architecture of the software system implementing the trauma ultrasound diagnosis method are introduced below in conjunction with FIG. 6. It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
  • FIG. 6 is a system frame diagram of the trauma ultrasound diagnosis system in the fourth embodiment of the present invention.
  • the trauma ultrasound diagnosis system may include multiple functional modules composed of program code segments.
  • the program code of each program segment of the trauma ultrasound diagnosis system can be stored in a memory of a computer device, such as a mobile device, and executed by at least one processor in the computer device, so as to realize a trauma ultrasound diagnosis function.
  • the trauma ultrasound diagnosis system 60 can be divided into multiple functional modules according to the functions it performs, each functional module executing the corresponding steps of the embodiments of FIG. 1, FIG. 3, or FIG. 4 so as to realize the trauma ultrasound diagnosis function.
  • the functional modules of the trauma ultrasound diagnosis system 60 include: a signal acquisition module 61, a video image signal generation module 62, an artificial intelligence model 63, and a display module 64. The functions of each functional module will be described in detail in the following embodiments.
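  • the module decomposition above can be sketched as a minimal pipeline. This is purely illustrative: the function names, the single-frame simplification, and the toy "dark pixel" rule are assumptions for demonstration, not the disclosed implementation.

```python
# Illustrative wiring of modules 61-64; all names and logic are hypothetical.

def signal_acquisition(read_probe):            # signal acquisition module 61
    """Obtain the electrical signal from the ultrasonic probe."""
    return read_probe()

def generate_video_stream(signal):             # video image signal module 62
    """Turn the electrical signal into an ultrasound image video stream."""
    return [signal]                            # simplified: one frame

def artificial_intelligence_model(frames):     # artificial intelligence model 63
    """Toy stand-in for the liquid dark area analysis."""
    return any(v < 0.2 for frame in frames for v in frame)

def display(frames, result):                   # display module 64
    return f"{len(frames)} frame(s); liquid dark area detected: {result}"

echo_signal = [0.9, 0.1, 0.8]                  # mock probe reading
stream = generate_video_stream(signal_acquisition(lambda: echo_signal))
print(display(stream, artificial_intelligence_model(stream)))
```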
  • the signal acquisition module 61 is used to acquire the electrical signal corresponding to the ultrasonic echo sent by the ultrasonic probe 70.
  • the video image signal generating module 62 is configured to generate an ultrasound image video stream according to the electrical signal.
  • the artificial intelligence model 63 includes a liquid dark area judgment module 631 and a liquid dark area segmentation module 633.
  • the liquid dark area judging module 631 is configured to receive the ultrasound image video stream, and analyze and detect whether there is a liquid dark area with effusion or hemorrhage based on the ultrasound image video stream.
  • the liquid dark area segmentation module 633 is used to segment the liquid dark area and analyze the orientation, shape and/or size of the liquid dark area.
  • the artificial intelligence model uses a lightweight deep convolutional neural network, whose convolutional layers filter each frame of the input ultrasound image video stream through convolution kernels to extract feature information.
  • the artificial intelligence model includes multiple interconnected convolutional layers, which extract, level by level, features with richer semantic information and greater distinguishability from each frame of the ultrasound video image, so that accurate predictions can be made from the results.
  • Each convolutional layer contains many parameters and requires a lot of calculations to extract features.
  • the convolutional layers are optimized so that, within an acceptable accuracy range, the number of parameters and the amount of computation are reduced as much as possible, allowing the artificial intelligence model to achieve optimal performance with limited computing resources.
  • the optimization of the convolutional layers can refer to the MobileNet lightweight network proposed by Google.
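  • as a concrete illustration of this trade-off, the parameter counts of a standard convolution and of a MobileNet-style depthwise separable convolution can be compared. The layer sizes below (3x3 kernel, 32 input channels, 64 output channels) are hypothetical examples, not values from this disclosure.

```python
# Parameter-count comparison between a standard convolution and a
# MobileNet-style depthwise separable convolution (illustrative sizes).

def standard_conv_params(k, c_in, c_out):
    # One k x k kernel per (input channel, output channel) pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k kernel per input channel.
    # Pointwise step: 1 x 1 convolution mixing channels.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 32, 64)          # 18432 parameters
sep = depthwise_separable_params(3, 32, 64)    # 2336 parameters
print(std, sep, round(std / sep, 1))
```

For this example the depthwise separable form needs roughly 8x fewer parameters, which is why such layers suit mobile devices with limited computing resources.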
  • the liquid dark area determination module 631 uses a fully convolutional neural network 1 to analyze whether there is a dark area of effusion.
  • the fully convolutional neural network 1 passes each frame of the ultrasound video image through multiple convolution operations in multiple convolutional layers to obtain one or more feature images; the feature images are passed through a global pooling layer, which highly purifies the features and reduces their dimensions, and then through a fully connected layer for feature classification and synthesis. Finally, the output of the fully connected layer is connected to activation function 1.
  • activation function 1 can be a "Softmax" function or a "Sigmoid" function.
  • activation function 1 compresses the values of the feature tensor output by the fully connected layer into the range of 0 to 1.
  • the value or array obtained through activation function 1 is a probability. In this embodiment, if the probability is less than 0.5, it is considered that no liquid dark area exists.
  • in that case, the liquid dark area judgment module 631 outputs the judgment result that no liquid dark area exists; if the probability is greater than or equal to 0.5, it is considered that a liquid dark area exists, and the liquid dark area judgment module 631 outputs the judgment result that a liquid dark area exists.
  • the probability threshold for determining the presence or absence of a liquid dark area can be set to any value between 0 and 1, according to specific conditions and the particular neural network model and parameters used.
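  • the judgment path described above (global pooling, fully connected layer, activation, configurable threshold) can be sketched with NumPy as follows. The weights are random placeholders, and the single fully connected output is a simplification of the described network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def judge_liquid_dark_area(feature_maps, w, b, threshold=0.5):
    """Global average pooling -> fully connected layer -> sigmoid.

    feature_maps: (C, H, W) feature images from the convolutional layers.
    Returns (probability, has_dark_area).
    """
    pooled = feature_maps.mean(axis=(1, 2))   # global pooling: shape (C,)
    logit = pooled @ w + b                    # fully connected layer
    prob = sigmoid(logit)                     # compress to (0, 1)
    return prob, prob >= threshold            # compare with the threshold

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))      # mock feature images
w = rng.standard_normal(8)                    # placeholder FC weights
prob, found = judge_liquid_dark_area(feats, w, b=0.0)
print(prob, found)
```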
  • the liquid dark area segmentation module 633 uses the fully convolutional network 2 to segment and measure the liquid dark area when the liquid dark area judging module 631 determines that there is a liquid dark area.
  • the fully convolutional neural network 2 passes each frame of the ultrasound video image through multiple convolution operations in multiple convolutional layers to obtain one or more feature images; each of these feature images then undergoes a convolution operation with a 1*1 convolutional layer to obtain a numerical matrix of the same size as the original ultrasound video image frame, where each value in the numerical matrix corresponds to a pixel in the original frame.
  • the activation function compresses each value in the numerical matrix into the range of 0 to 1.
  • each value indicates the probability that the corresponding pixel in the ultrasound image belongs to the liquid dark area. In this embodiment, a value greater than or equal to 0.5 indicates that the pixel belongs to the liquid dark area, and a value less than 0.5 indicates that it does not.
  • the activation function then segments, from the numerical matrix, the value distribution corresponding to each liquid dark area. Further, combined with the spatial scale that each pixel of the handheld ultrasound probe represents, the area and/or effusion volume of each liquid dark area is calculated.
  • the probability threshold for judging whether the pixel corresponding to each value in the numerical matrix belongs to the liquid dark area can be any value between 0 and 1.
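  • the per-pixel decision and the area computation from the pixel scale can be sketched as follows. The 0.5 threshold follows this embodiment, while the pixel area value is an invented example; in practice it comes from the handheld probe's imaging parameters.

```python
import numpy as np

def segment_dark_area(logit_matrix, pixel_area_mm2, threshold=0.5):
    """Apply a sigmoid to the matrix produced by the 1*1 convolution,
    threshold each value, and estimate the dark-area size.

    logit_matrix: (H, W) matrix, same size as the ultrasound frame.
    pixel_area_mm2: physical area represented by one pixel (probe-dependent).
    """
    probs = 1.0 / (1.0 + np.exp(-logit_matrix))  # compress to (0, 1)
    mask = probs >= threshold                    # True = pixel in dark area
    area_mm2 = mask.sum() * pixel_area_mm2       # area from the pixel scale
    return mask, area_mm2

logits = np.array([[ 3.0, -2.0],
                   [-1.0,  4.0]])                # toy 2x2 "frame"
mask, area = segment_dark_area(logits, pixel_area_mm2=0.04)
print(mask)
print(area)   # 2 pixels above threshold -> 0.08 mm^2
```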
  • alternatively, the liquid dark area segmentation module 633 can reuse the results of the multiple convolution operations that the liquid dark area judgment module 631 performs on each frame of the ultrasound video image through the multiple convolutional layers, segmenting the liquid dark area from those results and calculating its position, shape and/or size.
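  • this sharing amounts to one convolutional backbone feeding two heads. A schematic sketch follows; the backbone here is a dummy stand-in for the real convolutional layers, and all shapes and weights are hypothetical.

```python
import numpy as np

def backbone(frame):
    # Stand-in for the shared convolutional layers: the real model uses a
    # stack of (lightweight) convolutions; here we just produce two dummy
    # feature maps with the same spatial size as the input frame.
    return np.stack([frame, frame * 0.5])        # shape (C=2, H, W)

def judgment_head(features):
    # Classification branch: global pooling + a made-up fully connected sum.
    pooled = features.mean(axis=(1, 2))
    return 1.0 / (1.0 + np.exp(-pooled.sum()))   # sigmoid probability

def segmentation_head(features):
    # Segmentation branch: a 1*1-convolution-like channel collapse.
    return features.sum(axis=0)                  # (H, W) per-pixel logits

frame = np.ones((4, 4))
feats = backbone(frame)                          # computed once ...
p = judgment_head(feats)                         # ... used by both heads
logits = segmentation_head(feats)
print(p > 0.5, logits.shape)
```

Running the backbone once per frame and sharing its feature images between the judgment and segmentation heads avoids duplicated convolution work, which matters on resource-limited mobile devices.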
  • the display module 64 is used for receiving the ultrasound image video stream and displaying the ultrasound video image on a display unit 80, and for receiving the analysis result of the artificial intelligence model 63 and displaying the analysis result on the display unit 80.
  • the analysis result of the artificial intelligence model 63 may be "there is no liquid dark zone" or the distribution, shape and/or size of the liquid dark zone.
  • the display module 64 may display the analysis result by outputting the name of the part where the effusion exists, the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and/or the size and/or shape of each effusion.
  • the trauma ultrasound diagnosis system 60 provided in this embodiment applies artificial-intelligence-enabled medical care to trauma ultrasound diagnosis, and can, while medical staff perform the ultrasound scan, quickly and accurately determine whether there is blood or fluid accumulation in the detected part of the subject, as well as its specific location and size. With the assistance of artificial intelligence, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and take corresponding treatment measures. Therefore, the trauma ultrasound diagnosis system provided in this embodiment reduces the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • the application of the trauma ultrasound diagnosis system 60 to mobile devices can avoid dependence on the mobile network.
  • FIG. 9 is a system frame diagram of the trauma ultrasound diagnosis system in the fifth embodiment of the present invention.
  • the trauma ultrasound diagnosis system 90 of the fifth embodiment further includes an auxiliary diagnosis and treatment report generation module 91 and an auxiliary diagnosis and treatment report storage module 93.
  • the auxiliary diagnosis and treatment report generation module 91 is used to generate an artificial intelligence-assisted diagnosis and treatment report according to the analysis result of the artificial intelligence model 63.
  • the auxiliary diagnosis and treatment report storage module 93 is used to store the artificial intelligence-assisted diagnosis and treatment report.
  • the auxiliary diagnosis and treatment report storage module 93 may store the artificial intelligence-assisted diagnosis and treatment report in the storage unit 82 of the mobile device, or upload it to a cloud server 84 for storage.
  • the display module 95 in this embodiment is also used to display the artificial intelligence-assisted diagnosis and treatment report.
  • for the beneficial effects of this embodiment, reference may be made to the beneficial effects of the fourth embodiment.
  • by generating and storing artificial intelligence-assisted diagnosis and treatment reports, the trauma ultrasound diagnosis system 90 of this embodiment makes the diagnosis and treatment reports available for repeated use and for subsequent further verification of the artificial intelligence model.
  • FIG. 10 is a system frame diagram of the trauma ultrasound diagnosis system in the sixth embodiment of the present invention.
  • the artificial intelligence model 101 of the trauma ultrasound diagnosis system 100 in the sixth embodiment further includes a diagnosis part judgment module 102, which is used to analyze which part is diagnosed.
  • the parts that can be diagnosed by the artificial intelligence model 101 may be around the liver, around the spleen, the pericardium, around the pelvis, or the lungs, and the diagnostic part judgment module 102 can judge which part is being diagnosed according to the user's input.
  • in addition to judging the part to be diagnosed according to the user's selection, the diagnostic part judgment module 102 may also determine which part is being diagnosed from the ultrasound image video stream, based on the parameters or image characteristics that differ between parts.
  • for the beneficial effects of this embodiment, reference may be made to the beneficial effects of the fifth embodiment.
  • by using the artificial intelligence model 101 to determine which part is being diagnosed, the trauma ultrasound diagnosis system 100 of this embodiment can further improve the accuracy of diagnosis.
  • FIG. 11 is a system frame diagram of the trauma ultrasonic diagnosis system in the seventh embodiment of the present invention.
  • the artificial intelligence model 111 of the trauma ultrasound diagnosis system 110 in the seventh embodiment also includes a diagnosis completion judgment module 112, which is used to judge whether all parts that need to be diagnosed have completed diagnosis.
  • the auxiliary diagnosis and treatment report generating module 113 in this embodiment is used to generate an auxiliary diagnosis and treatment report when the diagnosis completion judgment module 112 judges that all diagnosis parts that need to be diagnosed have been diagnosed.
  • the display module 64 is configured to display the analysis result of the artificial intelligence model on the display unit 80 when the diagnosis completion judgment module judges that all parts that need to be diagnosed have completed diagnosis.
  • for the beneficial effects of this embodiment, reference may be made to the beneficial effects of the sixth embodiment.
  • by using the artificial intelligence model 111 to determine whether the diagnosis of all parts to be diagnosed is complete, the trauma ultrasound diagnosis system 110 of this embodiment can avoid missing the detection of key parts.
  • FIG. 12 is a schematic diagram of functional modules of a mobile device in the eighth embodiment of the present invention.
  • the mobile device 120 includes a processing unit 121, a storage unit 122, a communication unit 123, a display unit 124, and a built-in ultrasonic testing program 125 and an artificial intelligence model 126.
  • the ultrasonic testing program 125 and the artificial intelligence model 126 can be installed on the mobile device 120 in the form of an application program, and use the processing unit 121, the storage unit 122, the communication unit 123 and the display unit 124 of the mobile device 120 to complete the ultrasonic diagnosis of trauma.
  • the mobile device 120 may be, but is not limited to, a smart phone, a tablet computer, etc.
  • the electronic device that completes the ultrasonic diagnosis of trauma by installing the ultrasonic detection program 125 and the artificial intelligence model 126 is not limited to the mobile device 120, and may also be another terminal device with computing capability, such as a desktop computer.
  • the communication unit 123 is used to communicate with an ultrasonic probe 70, obtain from the ultrasonic probe 70 the electrical signal corresponding to the ultrasonic echo, and transmit the electrical signal to the ultrasonic testing program 125 for use.
  • the communication unit 123 and the ultrasound probe 70 may be connected via a wired connection, using a wired communication technology, or may be a wireless connection, using a wireless communication technology.
  • the communication unit 123 may be a signal transceiving unit matching the corresponding communication mode.
  • the storage unit 122 is used to store the ultrasonic testing program 125 and the artificial intelligence model 126 and the data (such as parameters) they need to use or the data generated (such as analysis results, diagnosis reports, etc.).
  • the processing unit 121 is used to execute the ultrasonic detection program 125 and the artificial intelligence model 126 to complete the ultrasonic diagnosis of trauma.
  • when the processing unit 121 executes the ultrasonic detection program 125 and the artificial intelligence model 126, the steps of the trauma ultrasonic diagnosis method in the above method embodiments are implemented, or the functions of each module in the trauma ultrasound diagnosis system of the above embodiments are realized.
  • both the ultrasonic testing program 125 and the artificial intelligence model 126 may be divided into one or more modules, and the one or more modules are stored in the storage unit 122 and executed by the processing unit 121 to complete various embodiments of the present invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the ultrasonic testing program 125 or the artificial intelligence model 126 in the electronic device 10.
  • the ultrasonic testing program 125 can be divided into the modules 61, 62 and 64 of the fourth embodiment and FIG. 6, and the artificial intelligence model 126 can be divided into the modules 631 and 633 of the fourth embodiment.
  • or, the ultrasonic testing program 125 can be divided into the modules 61, 62, 91, 93 and 95 of the fifth embodiment and FIG. 9, and the artificial intelligence model 126 can be divided into the modules 631 and 633 of the fourth embodiment.
  • or, the ultrasonic testing program 125 can be divided into the modules 61, 62, 91, 93 and 95 of the sixth embodiment and FIG. 10, and the artificial intelligence model 126 can be divided into the modules 631, 633, and 102 of the sixth embodiment and FIG. 10; or, the ultrasonic testing program 125 can be divided into the modules 61, 62, 91, 93 and 95 of the seventh embodiment and FIG. 11, and the artificial intelligence model 126 can be divided into the modules 631, 633, 102, and 112 of the seventh embodiment and FIG. 11.
  • the schematic diagram of FIG. 12 is only an example of the electronic device 10 and does not constitute a limitation on the mobile device 120.
  • the mobile device 120 may include more or fewer components than shown in the figure, combine certain components, or have different components; for example, the mobile device 120 may also include input and output devices.
  • the processing unit 121 can be any type of processor capable of running the artificial intelligence model 126, such as a digital signal processor (DSP) or an application-specific integrated circuit (ASIC).
  • the processing unit 121 uses various interfaces and lines to connect the various parts of the mobile device 120.
  • if the integrated module of the mobile device 120 is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the present invention can implement all or part of the processes in the methods of the above embodiments by means of a computer program instructing the relevant hardware.
  • the computer program can be stored in a computer-readable storage medium.
  • when executed, the computer program can realize the steps of each of the above method embodiments.
  • the content contained in the computer-readable medium can be appropriately added or deleted according to the requirements of legislation and patent practice in the jurisdiction. For example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
  • the mobile device 120 provided in this embodiment applies artificial-intelligence-enabled medical care to the mobile device, so that the mobile device can cooperate with the ultrasound probe 70 to complete trauma ultrasound diagnosis, and can, while medical staff perform the ultrasound scan, quickly and accurately determine in real time whether there is blood or fluid accumulation in the detected part of the subject, as well as its specific location and size. With the assistance of artificial intelligence, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and take corresponding treatment measures. Therefore, the trauma ultrasound diagnosis method provided in this embodiment reduces the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis.
  • the application of trauma ultrasound diagnosis to mobile devices can also avoid dependence on mobile networks.

Abstract

A trauma ultrasonic diagnosis method and system, a mobile device, and a computer-readable storage medium. The method comprises: obtaining an electric signal converted from an ultrasonic echo and generating an ultrasonic image video stream, the ultrasonic echo being an ultrasonic echo received when an ultrasonic probe detects a detected person; displaying an ultrasonic video image; using an artificial intelligence model to analyze a detected part on the basis of the ultrasonic image video stream, and determine whether a fluid sonolucent area exists; and displaying an analysis result of the artificial intelligence model. The method improves the accuracy and efficiency of ultrasonic diagnosis, and avoids dependence on a mobile network.

Description

Trauma ultrasonic detection method, system, mobile device and storage medium

Technical field

The present invention relates to the field of ultrasonic detection, and in particular to a trauma ultrasonic detection method, system, mobile device, and storage medium.

Background

Medical ultrasound examination is a medical imaging diagnosis technology that uses ultrasound equipment to emit high-frequency sound waves and record the echoes reflected by tissue structures in the body. Traditional desktop ultrasound equipment is usually bulky and heavy and inconvenient to move and transport, so examinations with desktop ultrasound equipment usually need to be performed in a clinic. As a miniaturized, lightweight version of desktop ultrasound, portable ultrasound is about the size of a notebook computer, freeing ultrasound examination from the limitation of having to be performed in a specific space. However, even portable ultrasound equipment still weighs several kilograms, which remains a great challenge for long-distance, long-term carrying. Handheld ultrasound usually consists of a handheld ultrasound probe and a mobile device; its pocketable size can meet scenarios with high portability requirements, such as battlefield rescue or disease screening in remote areas. In addition, the affordability of handheld ultrasound greatly reduces the difficulty of popularizing and promoting the use of ultrasound.

In addition, because skilled ultrasound diagnosis requires a long learning period for doctors, in some places where there is no skilled ultrasound diagnostician, ultrasound equipment cannot be used to complete ultrasound diagnosis. Therefore, applying artificial intelligence in ultrasound equipment to assist doctors in diagnosis and treatment is an effective solution that can shorten doctors' learning curve and enable them to quickly get started with ultrasound diagnosis.

Limited by the limited computing power of handheld ultrasound, artificial-intelligence-enabled handheld ultrasound examination usually requires deploying the intelligent model on a cloud server and transmitting data over a network connection. However, deploying the artificial intelligence model in the cloud is greatly constrained by the quality of the network environment: network bandwidth and network latency directly affect the data transmission between the handheld ultrasound and the cloud server, which may prevent the requirement of displaying intelligent analysis results in real time from being met; and when there is no network connection, data cannot be transmitted at all, so the artificial intelligence analysis function of handheld ultrasound becomes completely unusable.
Summary of the invention

In order to solve at least one of the above problems in the prior art and/or other potential problems, it is necessary to provide a trauma ultrasound diagnosis method, system, mobile device, and computer-readable storage medium.

In a first aspect, a trauma ultrasound diagnosis method is provided, the method including:

acquiring an electrical signal converted from an ultrasonic echo and generating an ultrasound image video stream, the ultrasonic echo being received when an ultrasonic probe detects a subject;

displaying the ultrasound video image;

using an artificial intelligence model to analyze the detected part based on the ultrasound image video stream and determine whether a liquid dark area exists; and

displaying the analysis result of the artificial intelligence model.

Further, the method also includes: if a liquid dark area exists, using the artificial intelligence model to segment the liquid dark area and analyze its position, shape and/or size.

Further, the analysis result is that no liquid dark area exists, or the distribution, shape and/or size of the liquid dark area.

Further, if a liquid dark area exists, displaying the analysis result of the artificial intelligence model includes: outputting the name of the part where the liquid dark area exists, the ultrasound video image frame superimposed with the liquid dark area segmentation mask, and/or the position, size and/or shape of each liquid dark area.

Further, the artificial intelligence model uses a lightweight deep convolutional neural network as the model, and the trauma ultrasound diagnosis method is implemented on a mobile device.

Further, "using an artificial intelligence model to analyze the detected part based on the ultrasound image video stream and determine whether a liquid dark area exists" includes: using a fully convolutional neural network to perform multiple convolution operations on each frame of the ultrasound video image to obtain one or more feature images; passing the one or more feature images through a global pooling layer, a fully connected layer and an activation function to obtain a probability; and judging whether a liquid dark area exists according to the probability.

Further, "using the artificial intelligence model to segment the liquid dark area and analyze its position, shape and/or size" includes: using a fully convolutional neural network to perform multiple convolution operations on each frame of the ultrasound video image to obtain one or more feature images; performing a 1*1 convolution operation on each of the one or more feature images to obtain a numerical matrix of the same size as the frame, each value in the numerical matrix corresponding to a pixel of the frame; segmenting, via an activation function, the value distribution corresponding to the liquid dark area from the numerical matrix; and calculating the area and/or effusion volume of each liquid dark area according to the value distribution.

Further, the method also includes: forming an artificial intelligence-assisted diagnosis and treatment report, and displaying and/or storing the artificial intelligence-assisted diagnosis and treatment report.

Further, the method also includes: using the artificial intelligence model to analyze which part is being diagnosed.

Further, the method also includes: using the artificial intelligence model to judge whether all parts that need to be diagnosed have completed diagnosis; if all parts that need to be diagnosed have completed diagnosis, displaying the analysis result of the artificial intelligence model and/or forming the artificial intelligence-assisted diagnosis and treatment report; if any part that needs to be diagnosed has not completed diagnosis, continuing to acquire the electrical signal converted from the ultrasonic echo.
第二方面,提供一种创伤超声诊断系统,所述系统包括:In a second aspect, a trauma ultrasound diagnosis system is provided, and the system includes:
信号获取模块,用于获取由超声回波转换的电信号,所述超声回波为超声探头检测被检测者时接收的超声回波;A signal acquisition module for acquiring an electrical signal converted by an ultrasonic echo, the ultrasonic echo being the ultrasonic echo received when the ultrasonic probe detects the subject;
视频图像信号生成模块,用于根据所述电信号,生成超声图像 视频流;A video image signal generating module, configured to generate an ultrasound image video stream according to the electrical signal;
人工智能模型,所述人工智能模型包括:Artificial intelligence model, the artificial intelligence model includes:
液性暗区判断模块,用于接收所述超声图图像视频流、基于所述超声图像视频流分析检测部位是否存在液性暗区;及The liquid dark area judgment module is configured to receive the ultrasound image video stream, and analyze whether there is a liquid dark area in the detection part based on the ultrasound image video stream; and
显示模块,用于接收所述超声图像视频流并显示超声视频图像于一显示单元,及用于接收所述人工智能模型的分析结果并将所述分析结果显示于所述显示单元上。The display module is used for receiving the ultrasound image video stream and displaying the ultrasound video image on a display unit, and for receiving the analysis result of the artificial intelligence model and displaying the analysis result on the display unit.
进一步地,所述人工智能模型还包括:液性暗区分割模块,所述液性暗区分割模块用于分割液性暗区,并分析所述液性暗区的方位、形状及/或大小。Further, the artificial intelligence model further includes: a liquid dark area segmentation module, the liquid dark area segmentation module is used to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area .
进一步地,所述人工智能模型的分析结果是不存在液性暗区或者是液性暗区的分布、形状及/或大小。Further, the analysis result of the artificial intelligence model is that there is no liquid dark zone or the distribution, shape and/or size of the liquid dark zone.
进一步地,所述显示模块用于输出存在液性暗区的部位名称、叠加了液性暗区分割蒙版的超声视频图像帧、及/或输出每块液性暗区的方位、大小及/或形状。Further, the display module is used to output the name of the part where the liquid dark area exists, the ultrasonic video image frame superimposed with the liquid dark area segmentation mask, and/or output the orientation, size and/or of each liquid dark area Or shape.
进一步地,所述人工智能模型采用轻量化的深度卷积神经网络作为模型,所述系统安装于移动设备上。Further, the artificial intelligence model adopts a lightweight deep convolutional neural network as a model, and the system is installed on a mobile device.
进一步地,所述液性暗区判断模块用于利用全卷积神经网络对每一帧超声视频图像进行多次卷积运算获得一或多个特征图像,将所述一或多个特征图像经全局池化层、全连接层及激活函数后获得一概率,及根据所述概率判断是否存在液性暗区。Further, the liquid dark area judgment module is used to perform multiple convolution operations on each frame of ultrasound video image using a fully convolutional neural network to obtain one or more feature images, and pass the one or more feature images through A probability is obtained after the global pooling layer, the fully connected layer, and the activation function, and it is determined whether there is a liquid dark zone according to the probability.
进一步地,所述液性暗区分割模块用于利用全卷积神经网络对每一帧超声视频图像进行多次卷积运算获得一或多个特征图像,将所述一或多个特征图像中的每一特征图像再经1*1卷积运算获得与该帧超声视频图像大小相同的数值矩阵,其中所述数值矩阵中的 每一数值对应该帧超声视频图像的一像素点,所述液性暗区分割模块还用于从所述数值矩阵中分割出对应液性暗区的数值分布,并根据所述数值分布计算出每一液性暗区的面积及/或积液量。Further, the liquid dark area segmentation module is used to perform multiple convolution operations on each frame of ultrasound video image by using a fully convolutional neural network to obtain one or more feature images, and combine the one or more feature images Each feature image of the 1*1 convolution operation is performed to obtain a numerical matrix with the same size as the frame of the ultrasound video image, wherein each value in the numerical matrix corresponds to a pixel of the frame of the ultrasound video image, and the liquid The dark area segmentation module is also used to segment the numerical value distribution corresponding to the dark area of the liquid from the numerical matrix, and calculate the area and/or amount of liquid accumulation of each dark area of the liquid based on the value distribution.
进一步地,所述系统还包括辅助诊疗报告生成模块,所述辅助诊疗报告生成模块用于根据所述人工智能模型的分析结果生成人工智能辅助诊疗报告。Further, the system further includes an auxiliary diagnosis and treatment report generation module, and the auxiliary diagnosis and treatment report generation module is used to generate an artificial intelligence auxiliary diagnosis and treatment report according to the analysis result of the artificial intelligence model.
进一步地,所述系统还包括辅助诊疗报告存储模块,所述辅助诊疗报告存储模块用于存储所述人工智能辅助诊疗报告。Further, the system further includes an auxiliary diagnosis and treatment report storage module, and the auxiliary diagnosis and treatment report storage module is used to store the artificial intelligence auxiliary diagnosis and treatment report.
进一步地,所述显示模块还用于显示所述人工智能辅助诊疗报告。Further, the display module is also used to display the artificial intelligence-assisted diagnosis and treatment report.
进一步地,所述系统还包括诊断部位判断模块,所述诊断部位判断模块用于判断所诊断部位是哪一部位。Further, the system further includes a diagnosis part judgment module, and the diagnosis part judgment module is used to judge which part the diagnosed part is.
进一步地，所述系统还包括诊断完成判断模块，所述诊断完成判断模块用于判断所有需要诊断的诊断部位是否完成诊断，所述辅助诊疗报告生成模块用于在所述诊断完成判断模块判断所有需要诊断的诊断部位已完成诊断时，生成所述辅助诊疗报告，及/或，所述显示模块用于在所述诊断完成判断模块判断所有需要诊断的诊断部位已完成诊断时，显示所述人工智能模型的分析结果于所述显示单元上。Further, the system also includes a diagnosis completion judgment module, which is used to judge whether all parts requiring diagnosis have been diagnosed. The auxiliary diagnosis and treatment report generation module is used to generate the auxiliary diagnosis and treatment report when the diagnosis completion judgment module judges that all parts requiring diagnosis have been diagnosed, and/or the display module is used to display the analysis result of the artificial intelligence model on the display unit when the diagnosis completion judgment module judges that all parts requiring diagnosis have been diagnosed.
第三方面，提供一种移动设备，所述移动设备包括：通信单元、显示单元、处理单元及存储单元，所述存储单元中存储有多个程序模块，所述多个程序模块由所述处理单元加载并执行上述的创伤超声诊断方法。In a third aspect, a mobile device is provided. The mobile device includes a communication unit, a display unit, a processing unit, and a storage unit. The storage unit stores a plurality of program modules, which are loaded by the processing unit to execute the above-mentioned trauma ultrasound diagnosis method.
第四方面,提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理单元执行时实现上述的创伤超声诊断方法。In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program is executed by a processing unit to realize the above-mentioned trauma ultrasound diagnosis method.
本发明的实施例提供的创伤超声诊断方法、系统、移动设备及存储介质，将人工智能赋能医疗应用于移动设备进行创伤的超声诊断，能够快速准确地在医护人员进行超声扫查的过程中，实时判断被检测者被检测部位是否存在积血、积液以及其的具体位置、大小。因此，在人工智能辅助下，即使是没有丰富经验的超声使用经验的医护人员，也能够快速的解读创伤超声视频图像，并做出相应的治疗措施。因此，本实施例提供的创伤超声诊断方法降低了对医护人员的相关技能要求、大大提高了超声诊断的准确性与效率。而将人工智能创伤超声诊疗应用于移动设备，可避免对移动网络的依赖。The trauma ultrasound diagnosis method, system, mobile device, and storage medium provided by the embodiments of the present invention apply artificial-intelligence-enabled medicine to mobile devices for ultrasound diagnosis of trauma, and can quickly and accurately judge in real time, while medical staff perform an ultrasound scan, whether blood or fluid accumulation exists in the examined part of the subject, as well as its specific location and size. Therefore, with artificial intelligence assistance, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The trauma ultrasound diagnosis method provided by these embodiments thus lowers the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis. Applying artificial-intelligence trauma ultrasound diagnosis on a mobile device also avoids dependence on a mobile network.
附图说明Description of the drawings
为了更清楚地说明本发明实施例的技术方案,下面将对本发明实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to explain the technical solutions of the embodiments of the present invention more clearly, the following will briefly introduce the drawings needed in the embodiments of the present invention. Obviously, the drawings in the following description are only some embodiments of the present invention. For those of ordinary skill in the art, without creative work, other drawings can be obtained from these drawings.
图1为本发明实施例一中的创伤超声诊断方法的流程示意图。Fig. 1 is a schematic flowchart of a wound ultrasonic diagnosis method in the first embodiment of the present invention.
图2为例举的人工智能分析是否存在液性暗区的方法示意图。Figure 2 is a schematic diagram of an exemplary artificial intelligence analysis of whether there is a liquid dark zone.
图3为例举的人工智能分割液性暗区的方法示意图。Figure 3 is a schematic diagram of an exemplary artificial intelligence method for dividing liquid dark areas.
图4为本发明实施例二中的创伤超声诊断方法的流程示意图。Fig. 4 is a schematic flowchart of a wound ultrasonic diagnosis method in the second embodiment of the present invention.
图5为本发明实施例三中的创伤超声诊断方法的流程示意图。Fig. 5 is a schematic flow chart of a wound ultrasonic diagnosis method in the third embodiment of the present invention.
图6为本发明实施例四中的创伤超声诊断系统的系统框架图。Fig. 6 is a system frame diagram of a trauma ultrasonic diagnosis system in the fourth embodiment of the present invention.
图7为人工智能模块分析是否存在液性暗区的示意图。Figure 7 is a schematic diagram of the artificial intelligence module analyzing whether there is a liquid dark zone.
图8为人工智能模块分割液性暗区的示意图。Figure 8 is a schematic diagram of the artificial intelligence module dividing the liquid dark area.
图9为本发明实施例五中的创伤超声诊断系统的系统框架图。Fig. 9 is a system frame diagram of a trauma ultrasound diagnosis system in the fifth embodiment of the present invention.
图10为本发明实施例六中的创伤超声诊断系统的系统框架图。Fig. 10 is a system frame diagram of a trauma ultrasonic diagnosis system in the sixth embodiment of the present invention.
图11为本发明实施例七中的创伤超声诊断系统的系统框架图。Fig. 11 is a system frame diagram of a trauma ultrasonic diagnosis system in the seventh embodiment of the present invention.
图12为本发明实施例八中的创伤超声诊断系统的系统框架图。Fig. 12 is a system frame diagram of a trauma ultrasonic diagnosis system in the eighth embodiment of the present invention.
如下具体实施方式将结合上述附图进一步说明本发明。The following specific embodiments will further illustrate the present invention in conjunction with the above-mentioned drawings.
主要元件符号说明Symbol description of main components
步骤                    S11-S17、S21-S28、S31-S39Step S11-S17, S21-S28, S31-S39
创伤超声诊断系统        60、90、100、110Trauma ultrasound diagnosis system 60, 90, 100, 110
信号获取模块            61 Signal acquisition module 61
视频图像信号生成模块    62Video image signal generation module 62
人工智能模型            63、101、111、126 Artificial intelligence model 63, 101, 111, 126
显示模块                64、95 Display module 64, 95
超声探头                70、130 Ultrasound probe 70, 130
液性暗区判断模块        631Liquid dark area judgment module 631
液性暗区分割模块        633Liquid dark zone segmentation module 633
显示单元                80 Display unit 80
辅助诊疗报告生成模块    91、113Auxiliary diagnosis and treatment report generation module 91, 113
辅助诊疗报告存储模块    93Auxiliary diagnosis and treatment report storage module 93
存储单元                82 Storage unit 82
云端服务器              84 Cloud server 84
诊断部位判断模块        102Diagnosis part judgment module 102
诊断完成判断模块        112Diagnosis completion judgment module 112
移动设备                120 Mobile equipment 120
处理单元                121 Processing unit 121
存储单元                122 Storage unit 122
通信单元                123 Communication unit 123
显示单元                124 Display unit 124
超声检测程序            125 Ultrasound detection program 125
具体实施方式Detailed ways
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
需要说明的是，当组件被称为“固定于”、“安装于”另一个组件，它可以直接在另一个组件上或者也可以存在居中的组件。当一个组件被认为是“设置于”另一个组件，它可以是直接设置在另一个组件上或者可能同时存在居中组件。本文所使用的术语“及/或”包括一个或多个相关的所列项目的所有的和任意的组合。It should be noted that when a component is referred to as being "fixed to" or "mounted on" another component, it can be directly on the other component or an intervening component may be present. When a component is considered to be "disposed on" another component, it can be directly disposed on the other component or an intervening component may be present at the same time. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
实施例一Example one
请参阅图1所示,为本发明实施例一中的创伤超声诊断方法的流程示意图。根据不同的需求,该流程图中步骤的顺序可以改变,某些步骤可以省略。为了便于说明,仅示出了与本发明实施例相关的部分。Please refer to FIG. 1, which is a schematic flowchart of a wound ultrasonic diagnosis method in Embodiment 1 of the present invention. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
如图1所示,所述实施例一中的创伤超声诊断方法包括以下步骤。As shown in Fig. 1, the wound ultrasound diagnosis method in the first embodiment includes the following steps.
步骤S11、在移动设备端获取超声探头发送的对应超声回波的电信号,生成超声图像视频流。Step S11: Obtain the electrical signal corresponding to the ultrasound echo sent by the ultrasound probe at the mobile device side, and generate an ultrasound image video stream.
在本实施例中，将所述移动设备与一手持超声探头建立通信连接，利用所述手持超声探头通过在被检测者的被检测部位打图而获得被检测者被检测部位的超声回波，所述超声回波被所述手持超声设备转换成电信号发送给所述移动设备，从而使移动设备可持续获得超声探头发送的对应超声回波的电信号并生成超声图像视频流。In this embodiment, a communication connection is established between the mobile device and a handheld ultrasound probe, and the handheld ultrasound probe obtains ultrasound echoes from the examined part by scanning the examined part of the subject. The ultrasound echoes are converted into electrical signals by the handheld ultrasound device and sent to the mobile device, so that the mobile device can continuously obtain the electrical signals corresponding to the ultrasound echoes sent by the ultrasound probe and generate an ultrasound image video stream.
步骤S12、显示超声视频图像于所述移动设备上。Step S12: Display the ultrasound video image on the mobile device.
步骤S13、利用人工智能模型基于所述超声图像视频流分析被检测部位，判断是否存在积液、积血的液性暗区，若存在液性暗区，进入步骤S14，若不存在液性暗区，进入步骤S15。Step S13: Use the artificial intelligence model to analyze the examined part based on the ultrasound image video stream and judge whether a liquid dark area of effusion or blood accumulation exists. If a liquid dark area exists, go to step S14; if not, go to step S15.
步骤S14、利用所述人工智能模型分割所述液性暗区,并分析所述液性暗区的方位、形状及/或大小。Step S14: Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
在本实施例中，步骤S12与步骤S13、S14并没有必然的先后关系，步骤S12可以是与步骤S13、S14同时进行、或者先于或晚于步骤S13、S14进行。In this embodiment, there is no fixed order between step S12 and steps S13 and S14; step S12 may be performed simultaneously with, before, or after steps S13 and S14.
在本实施例中，所述人工智能模型采用轻量化的深度卷积神经网络作为模型，利用卷积神经网络中的卷积层，通过卷积核对输入的超声图像视频流的每一帧进行过滤以提取特征信息。所述人工智能模型包括多个卷积层，卷积层与卷积层之间相互连接，逐级对每一帧超声视频图像提取语义信息更丰富、更具分辨性的特征，以便于对结果进行准确的预测。每层卷积层包含众多参数，需要大量的运算进行特征的提取，为使人工智能模型成功部署于移动设备上使用，在本实施例中，对卷积层进行了优化，在可接受的精度范围内，尽可能地减少了参数量和计算量，以使人工智能模型在计算资源受限的情况中达到性能优化。对卷积层的优化可参考谷歌提出的“mobilenet”轻量化网络。In this embodiment, the artificial intelligence model uses a lightweight deep convolutional neural network. The convolutional layers of the network filter each frame of the input ultrasound image video stream through convolution kernels to extract feature information. The artificial intelligence model includes multiple interconnected convolutional layers that extract, level by level, features with richer semantic information and greater discriminability from each frame of the ultrasound video image, so that accurate predictions can be made. Each convolutional layer contains many parameters and requires a large amount of computation for feature extraction. To deploy the artificial intelligence model successfully on mobile devices, in this embodiment the convolutional layers are optimized: within an acceptable accuracy range, the number of parameters and the amount of computation are reduced as much as possible, so that the model performs well under limited computing resources. For the optimization of the convolutional layers, reference may be made to the lightweight "MobileNet" network proposed by Google.
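As a rough illustration of why a MobileNet-style layer is lighter (a sketch of the general technique, not the patent's actual network), the parameter counts of a standard convolution and a depthwise-separable convolution can be compared:

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # A standard conv layer learns c_out filters, each of size k*k*c_in.
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise step: one k*k filter per input channel.
    # Pointwise step: a 1*1 conv mixing c_in channels into c_out.
    return k * k * c_in + c_in * c_out

# Example: a 3*3 layer with 64 input and 128 output channels.
std = standard_conv_params(3, 64, 128)        # 73728 parameters
sep = depthwise_separable_params(3, 64, 128)  # 8768 parameters
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

The roughly 8x reduction in parameters (and, correspondingly, in multiply-accumulate operations) is what makes on-device inference without a network connection practical.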
请参阅图2与图3所示，在本实施例中，所述人工智能模型分析分成两个阶段，第一阶段，利用全卷积神经网络1分析是否存在积液暗区，在超声图像视频流输出至人工智能模型后，所述全卷积神经网络1将每一帧的超声视频图像经过多个卷积层的多次卷积运算，获得一或多个特征图像，所述一或多个特征图像经全局池化层后特征得到高度提纯，维度缩减，之后再经全连接层进行特征分类与综合，最终，全连接层的输出连接激活函数1，所述激活函数1可以是“Softmax”函数或“Sigmoid”函数，所述激活函数1将全连接层输出的特征张量中的数值压缩至0~1的范围内。经过激活函数1获得的数值或数组为一概率，在本实施例中，概率小于0.5被认为不存在液性暗区，所述人工智能模型在步骤S13后返回无液性暗区的判断结果，而概率大于等于0.5被认为存在液性暗区，所述人工智能模型在步骤S13后返回存在液性暗区的判断结果。在其他实施例中，根据具体情况以及采用的神经网络模型与参数的不同，设定有无液性暗区的概率界限可以为0-1之间的任意值。第二阶段，利用全卷积网络2对存在的液性暗区进行分割并测量。所述全卷积神经网络2将每一帧的超声视频图像经过多个卷积层的多次卷积运算，获得一或多个特征图像，所述一或多个特征图像中的每一特征图像再经1*1的卷积层进行卷积运算，获得与原超声视频图像帧大小相同的数值矩阵，数值矩阵中的每一个数值代表原超声视频图像帧中对应的像素点，所述激活函数将数值矩阵中的每一个数值压缩至0~1的范围内，每一数值的大小表明其所对应的超声图像中的像素点属于液性暗区的概率大小，在本实施方式中，大于等于0.5表明该像素点属于液性暗区，小于0.5表明该像素点不属于液性暗区，所述激活函数再根据数值矩阵中的每一个数值的大小分割出数值矩阵中对应液性暗区的数值分布。进一步地，所述激活函数结合超声探头每个像素点对应的空间尺度，计算出每一液性暗区的面积及/或积液量。在其他实施例中，根据具体情况以及采用的神经网络模型与参数的不同，判断数值矩阵中每一数值对应的像素点是否属于液性暗区的概率界限可以为0-1之间的任意值。Referring to Figures 2 and 3, in this embodiment the analysis by the artificial intelligence model is divided into two stages. In the first stage, fully convolutional neural network 1 analyzes whether a dark area of effusion exists. After the ultrasound image video stream is output to the artificial intelligence model, fully convolutional neural network 1 passes each frame of the ultrasound video image through multiple convolution operations of multiple convolutional layers to obtain one or more feature images. After a global pooling layer, the features of the one or more feature images are highly refined and their dimensions are reduced; a fully connected layer then classifies and synthesizes the features. Finally, the output of the fully connected layer is connected to activation function 1, which may be a "Softmax" function or a "Sigmoid" function and compresses the values in the feature tensor output by the fully connected layer into the range of 0 to 1. The value or array obtained through activation function 1 is a probability. In this embodiment, a probability below 0.5 is considered to indicate no liquid dark area, and the artificial intelligence model returns a judgment of no liquid dark area after step S13; a probability of 0.5 or above is considered to indicate a liquid dark area, and the artificial intelligence model returns a judgment that a liquid dark area exists after step S13. In other embodiments, depending on the specific situation and the neural network model and parameters used, the probability threshold for the presence or absence of a liquid dark area can be any value between 0 and 1. In the second stage, fully convolutional network 2 segments and measures the existing liquid dark areas. Fully convolutional neural network 2 passes each frame of the ultrasound video image through multiple convolution operations of multiple convolutional layers to obtain one or more feature images, and each of the one or more feature images is then subjected to a convolution operation with a 1*1 convolutional layer to obtain a numerical matrix of the same size as the original ultrasound video image frame, in which each value corresponds to one pixel of the original frame. The activation function compresses each value in the numerical matrix into the range of 0 to 1; the magnitude of each value indicates the probability that the corresponding pixel in the ultrasound image belongs to a liquid dark area. In this embodiment, a value of 0.5 or above indicates that the pixel belongs to a liquid dark area, and a value below 0.5 indicates that it does not. The activation function then segments, according to the magnitude of each value in the numerical matrix, the value distribution corresponding to the liquid dark areas. Further, combining the spatial scale corresponding to each pixel of the ultrasound probe, the area and/or fluid volume of each liquid dark area is calculated. In other embodiments, depending on the specific situation and the neural network model and parameters used, the probability threshold for judging whether the pixel corresponding to each value in the numerical matrix belongs to a liquid dark area can be any value between 0 and 1.
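The two stages described above can be sketched in NumPy (a minimal illustration under assumed, untrained weights — not the patent's actual network): stage one applies global average pooling, a fully connected layer, and a sigmoid to feature maps to obtain a frame-level probability; stage two applies a 1*1 convolution (equivalent to a per-pixel weighted sum across channels) and a sigmoid to obtain a per-pixel probability map, thresholds it at 0.5, and converts the resulting mask into an area using the per-pixel spatial scale.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stage1_classify(features, fc_w, fc_b, threshold=0.5):
    """Frame-level probability that a liquid dark area exists.

    features: (C, H, W) feature maps from the convolutional backbone.
    """
    pooled = features.mean(axis=(1, 2))            # global average pooling -> (C,)
    prob = float(sigmoid(pooled @ fc_w + fc_b))    # fully connected layer + sigmoid
    return prob, prob >= threshold

def stage2_segment(features, pw_w, pixel_area_mm2, threshold=0.5):
    """Per-pixel probability map via a 1*1 convolution, plus the masked area."""
    logits = np.tensordot(pw_w, features, axes=([0], [0]))  # (H, W)
    prob_map = sigmoid(logits)
    mask = prob_map >= threshold                   # pixels belonging to a dark area
    area_mm2 = float(mask.sum()) * pixel_area_mm2  # pixel count * per-pixel scale
    return mask, area_mm2
```

Because both heads consume the same backbone feature maps, the convolution results computed for stage one can be reused by stage two, which is what makes the two-stage design cheap enough for a mobile device.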
在另一实施例中，第二阶段中可以利用第一阶段中将每一帧的超声视频图像经过多个卷积层的多次卷积运算结果，在此结果上分割液性暗区及计算液性暗区的方位、形状及/或大小。In another embodiment, the second stage may reuse the results of the multiple convolution operations that each frame of the ultrasound video image underwent in the first stage, and segment the liquid dark areas and calculate their orientation, shape and/or size on the basis of those results.
步骤S15、显示所述人工智能模型的分析结果。Step S15: Display the analysis result of the artificial intelligence model.
在本实施例中，显示所述人工智能模型的分析结果于所述移动终端，根据不同情况，所述人工智能模型的分析结果可以是“不存在液性暗区”或者液性暗区的分布、形状及/或大小。当所述人工智能模型的分析结果是液性暗区的分布、形状及/或大小时，所述显示可以是以下述的任一方式或者下述的两或多个方式的组合：输出存在积液的部位名称、叠加了液性暗区分割蒙版的超声视频图像帧、输出每块积液的大小及/或形状。In this embodiment, the analysis result of the artificial intelligence model is displayed on the mobile terminal. Depending on the situation, the analysis result may be "no liquid dark area" or the distribution, shape and/or size of the liquid dark areas. When the analysis result is the distribution, shape and/or size of the liquid dark areas, the display may take any one, or a combination of two or more, of the following forms: outputting the name of the part where effusion exists, the ultrasound video image frame overlaid with the liquid dark area segmentation mask, and/or the size and/or shape of each effusion.
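Overlaying the segmentation mask on a frame, as this display step describes, can be sketched as simple alpha blending (an illustrative NumPy sketch; the tint color and opacity are assumptions, not values specified by the embodiment):

```python
import numpy as np

def overlay_mask(frame_gray, mask, color=(255, 0, 0), alpha=0.4):
    """Blend a binary dark-area mask onto a grayscale ultrasound frame.

    frame_gray: (H, W) uint8 image; mask: (H, W) bool array.
    Returns an (H, W, 3) uint8 RGB image with masked pixels tinted.
    """
    rgb = np.stack([frame_gray] * 3, axis=-1).astype(np.float32)
    tint = np.asarray(color, dtype=np.float32)
    rgb[mask] = (1.0 - alpha) * rgb[mask] + alpha * tint  # blend only masked pixels
    return rgb.astype(np.uint8)
```

A semi-transparent tint keeps the underlying echo texture visible, so the operator can still judge the anatomy beneath the highlighted region.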
步骤S16、形成人工智能辅助诊疗报告。Step S16, forming an artificial intelligence-assisted diagnosis and treatment report.
步骤S17、显示及/或存储所述人工智能辅助诊疗报告。Step S17: Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
在本实施例中,所述人工智能辅助诊疗报告可以存储于移动设备,或者,所述人工智能辅助诊疗报告还可上传并保存至云端服务器。In this embodiment, the artificial intelligence-assisted diagnosis and treatment report may be stored in a mobile device, or the artificial intelligence-assisted diagnosis and treatment report may also be uploaded and saved to a cloud server.
本实施例提供的创伤超声诊断方法，将人工智能赋能医疗应用于移动设备进行创伤的超声诊断，能够快速准确地在医护人员进行超声扫查的过程中，实时判断被检测者被检测部位是否存在积血、积液以及其的具体位置、大小。因此，在人工智能辅助下，即使是没有丰富经验的超声使用经验的医护人员，也能够快速的解读创伤超声视频图像，并做出相应的治疗措施。因此，本实施例提供的创伤超声诊断方法降低了对医护人员的相关技能要求、大大提高了超声诊断的准确性与效率。而将人工智能创伤超声诊疗应用于移动设备，可避免对移动网络的依赖。The trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medicine to a mobile device for ultrasound diagnosis of trauma, and can quickly and accurately judge in real time, while medical staff perform an ultrasound scan, whether blood or fluid accumulation exists in the examined part of the subject, as well as its specific location and size. Therefore, with artificial intelligence assistance, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The trauma ultrasound diagnosis method provided in this embodiment thus lowers the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis. Applying artificial-intelligence trauma ultrasound diagnosis on a mobile device also avoids dependence on a mobile network.
在另一实施例中,所述创伤超声诊断方法可以包括步骤S11-S15,而不包括S16与S17。In another embodiment, the wound ultrasound diagnosis method may include steps S11-S15, but does not include S16 and S17.
在另一实施例中,所述创伤超声诊断方法还可包括步骤:建立所述移动设备与超声探头的通信连接,在移动设备与超声探头的通信连接建立后,开始步骤S11。In another embodiment, the ultrasonic diagnosis method for trauma may further include the step of establishing a communication connection between the mobile device and the ultrasound probe, and after the communication connection between the mobile device and the ultrasound probe is established, step S11 is started.
实施例二Example two
请参阅图4所示,为本发明实施例二中的创伤超声诊断方法的流程示意图。根据不同的需求,该流程图中步骤的顺序可以改变,某些步骤可以省略。为了便于说明,仅示出了与本发明实施例相关的部分。Please refer to FIG. 4, which is a schematic flowchart of a wound ultrasonic diagnosis method in Embodiment 2 of the present invention. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
如图4所示,所述实施例二中的创伤超声诊断方法包括以下步骤。As shown in Figure 4, the wound ultrasound diagnosis method in the second embodiment includes the following steps.
步骤S21、在移动设备端获取超声探头发送的对应超声回波的电信号,生成超声图像视频流。Step S21: Obtain the electrical signal corresponding to the ultrasonic echo sent by the ultrasonic probe at the mobile device side, and generate an ultrasonic image video stream.
步骤S22、显示超声视频图像于所述移动设备上。Step S22: Display the ultrasound video image on the mobile device.
本实施例中步骤S21、S22及其相关描述分别与实施例一中步骤S11、S12及其相关描述一致,具体请参阅实施例一,在此不进行赘述。Steps S21, S22 and their related descriptions in this embodiment are respectively consistent with steps S11, S12 and their related descriptions in Embodiment 1. For details, please refer to Embodiment 1, which will not be repeated here.
步骤S23、利用人工智能模型分析所诊断部位是哪一部位。Step S23: Use the artificial intelligence model to analyze which part of the diagnosed part is.
本实施例中，人工智能模型可诊断的部位可以是肝周、脾周、心包、盆腔周围或肺部，人工智能模型可以根据用户的输入判断是哪一诊断部位，例如，用户通过移动设备选择需诊断的部位，人工智能模型根据用户的选择判断是哪一诊断部位，人工智能模型也可基于所述超声图像视频流、根据不同部位的不同参数或图片判断是哪一诊断部位。In this embodiment, the parts that can be diagnosed by the artificial intelligence model may be the perihepatic, perisplenic, pericardial, peripelvic, or pulmonary regions. The artificial intelligence model may determine which part is being diagnosed from the user's input; for example, the user selects the part to be diagnosed on the mobile device, and the artificial intelligence model judges which part it is from that selection. The artificial intelligence model may also judge which part is being diagnosed from the ultrasound image video stream, according to the different parameters or images of the different parts.
步骤S24、利用人工智能模型基于所述超声图像视频流分析所诊断部位，判断是否存在积液、积血的液性暗区，若存在液性暗区，进入步骤S25，若不存在液性暗区，进入步骤S26。Step S24: Use the artificial intelligence model to analyze the diagnosed part based on the ultrasound image video stream and judge whether a liquid dark area of effusion or blood accumulation exists. If a liquid dark area exists, go to step S25; if not, go to step S26.
步骤S25、利用所述人工智能模型分割所述液性暗区,并分析所述液性暗区的方位、形状及/或大小。Step S25: Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
步骤S26、显示所述人工智能模型的分析结果于所述移动终端。Step S26: Display the analysis result of the artificial intelligence model on the mobile terminal.
步骤S27、形成人工智能辅助诊疗报告。Step S27: Form an artificial intelligence-assisted diagnosis and treatment report.
步骤S28、显示及/或存储所述人工智能辅助诊疗报告。Step S28: Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
本实施例中步骤S24-S28及其相关描述分别与实施例一中步骤S13-S17及其相关描述一致,具体请参阅实施例一,在此不进行赘述。Steps S24-S28 and related descriptions in this embodiment are respectively consistent with steps S13-S17 and related descriptions in Embodiment 1. For details, please refer to Embodiment 1, and will not be repeated here.
本实施例中,步骤S22与步骤S23-S25并没有必然的先后关系,步骤S22可以是与步骤S23-S25同时进行、或者先于或晚于步骤S23-S25进行。In this embodiment, step S22 and steps S23-S25 are not necessarily sequential. Step S22 can be performed simultaneously with steps S23-S25, or performed before or after steps S23-S25.
本实施例提供的创伤超声诊断方法，将人工智能赋能医疗应用于移动设备进行创伤的超声诊断，能够快速准确地判断医护人员进行超声扫查的部位，实时判断被检测部位是否存在积血、积液以及其的具体位置、大小。因此，在人工智能辅助下，即使是没有丰富经验的超声使用经验的医护人员，也能够快速的解读创伤超声视频图像，并做出相应的治疗措施。因此，本实施例提供的创伤超声诊断方法降低了对医护人员的相关技能要求、大大提高了超声诊断的准确性与效率。而将人工智能创伤超声诊疗应用于移动设备，可避免对移动网络的依赖。The trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medicine to a mobile device for ultrasound diagnosis of trauma, and can quickly and accurately identify the part being scanned by medical staff and judge in real time whether blood or fluid accumulation exists in the examined part, as well as its specific location and size. Therefore, with artificial intelligence assistance, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The trauma ultrasound diagnosis method provided in this embodiment thus lowers the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis. Applying artificial-intelligence trauma ultrasound diagnosis on a mobile device also avoids dependence on a mobile network.
实施例三Example three
请参阅图5所示,为本发明实施例三中的创伤超声诊断方法的流程示意图。根据不同的需求,该流程图中步骤的顺序可以改变,某些步骤可以省略。为了便于说明,仅示出了与本发明实施例相关 的部分。Please refer to FIG. 5, which is a schematic flowchart of a wound ultrasound diagnosis method in Embodiment 3 of the present invention. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted. For ease of description, only the parts related to the embodiment of the present invention are shown.
如图5所示,所述实施例三中的创伤超声诊断方法包括以下步骤。As shown in Fig. 5, the wound ultrasound diagnosis method in the third embodiment includes the following steps.
步骤S31、在移动设备端获取超声探头发送的对应超声回波的电信号,生成超声图像视频流。Step S31: Obtain the electrical signal corresponding to the ultrasound echo sent by the ultrasound probe at the mobile device side, and generate an ultrasound image video stream.
步骤S32、显示超声视频图像于所述移动设备上。Step S32: Display the ultrasound video image on the mobile device.
步骤S33、利用人工智能模型分析所诊断部位是哪一部位。Step S33: Use the artificial intelligence model to analyze which part of the diagnosed part is.
步骤S34、利用人工智能模型基于所述超声图像视频流分析所诊断部位，判断是否存在积液、积血的液性暗区，若存在液性暗区，进入步骤S35，若不存在液性暗区，进入步骤S36。Step S34: Use the artificial intelligence model to analyze the diagnosed part based on the ultrasound image video stream and judge whether a liquid dark area of effusion or blood accumulation exists. If a liquid dark area exists, go to step S35; if not, go to step S36.
步骤S35、利用所述人工智能模型分割所述液性暗区,并分析所述液性暗区的方位、形状及/或大小。Step S35: Use the artificial intelligence model to segment the liquid dark area, and analyze the orientation, shape and/or size of the liquid dark area.
本实施例中步骤S31-S35及其相关描述分别与实施例二中步骤S21-S25及其相关描述一致,具体请参阅实施例二,在此不进行赘述。Steps S31-S35 and related descriptions in this embodiment are respectively consistent with steps S21-S25 and related descriptions in the second embodiment. For details, please refer to the second embodiment, which will not be repeated here.
本实施例中,步骤S32与步骤S33-S35并没有必然的先后关系,步骤S32可以是与步骤S33-S35同时进行、或者先于或晚于步骤S33-S35进行。In this embodiment, step S32 and steps S33-S35 are not necessarily sequential. Step S32 can be performed simultaneously with steps S33-S35, or performed before or after steps S33-S35.
步骤S36、显示所述人工智能模型的分析结果于所述移动终端。Step S36: Display the analysis result of the artificial intelligence model on the mobile terminal.
步骤S37、利用人工智能模型判断所有需要诊断的诊断部位是否完成诊断,若是,进入步骤S38,若否,进入步骤S31,开始下一部位的超声诊断。Step S37: Use the artificial intelligence model to determine whether all the diagnostic parts that need to be diagnosed have been diagnosed, if yes, go to step S38, if not, go to step S31 to start ultrasound diagnosis of the next part.
步骤S38、形成人工智能辅助诊疗报告。Step S38, forming an artificial intelligence-assisted diagnosis and treatment report.
步骤S39、显示及/或存储所述人工智能辅助诊疗报告。Step S39: Display and/or store the artificial intelligence-assisted diagnosis and treatment report.
本实施例中步骤S37-S39及其相关描述分别与实施例二中步骤S26-S28及其相关描述一致,具体请参阅实施例二,在此不进行赘 述。Steps S37-S39 and related descriptions in this embodiment are respectively consistent with steps S26-S28 and related descriptions in the second embodiment. For details, please refer to the second embodiment, which will not be repeated here.
可以理解,在另一实施例中,步骤S37可以是在步骤S36之前进行,若步骤S37判断出所有需要诊断的诊断部位均已完成诊断,则流程进入步骤S36,完成步骤S36后进入步骤S38,或者,在执行步骤S36的同时执行步骤S38。若步骤S37判断出还有需要诊断的诊断部位未完成诊断,则流程进入步骤S31,开始下一部位的超声诊断。It can be understood that, in another embodiment, step S37 may be performed before step S36. If step S37 determines that all diagnostic parts that need to be diagnosed have been diagnosed, the flow proceeds to step S36, and after step S36 is completed, it proceeds to step S38. Alternatively, step S38 is executed at the same time as step S36 is executed. If it is determined in step S37 that there are still parts to be diagnosed that have not been diagnosed, the flow proceeds to step S31 to start ultrasound diagnosis of the next part.
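The site-by-site loop of steps S31-S37 can be sketched as follows (an illustrative sketch only; the site list follows the regions named in embodiment two, and the function names and per-site result format are assumptions):

```python
# Regions named in embodiment two as diagnosable parts.
SITES = ["perihepatic", "perisplenic", "pericardial", "peripelvic", "pulmonary"]

def next_undiagnosed(completed):
    """Return the next part still needing a scan, or None when all are done (step S37)."""
    for site in SITES:
        if site not in completed:
            return site
    return None

def run_exam(scan_one_site):
    """Drive the loop: scan each part in turn, then hand results to report generation (step S38)."""
    completed = {}
    while (site := next_undiagnosed(completed)) is not None:
        completed[site] = scan_one_site(site)  # steps S31-S36 for one part
    return completed
```

In a real system `scan_one_site` would wrap the acquisition, display, and AI-analysis steps for that part; the point of the sketch is only that the report is generated once every required site has a result.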
本实施例提供的创伤超声诊断方法，将人工智能赋能医疗应用于移动设备进行创伤的超声诊断，能够快速准确地协助医护人员进行所有待检查部位的超声诊断，实时判断被检测部位是否存在积血、积液以及其的具体位置、大小。因此，在人工智能辅助下，即使是没有丰富经验的超声使用经验的医护人员，也能够快速的解读创伤超声视频图像，快速完成所有待检查部位的创伤超声诊断。因此，本实施例提供的创伤超声诊断方法降低了对医护人员的相关技能要求、大大提高了超声诊断的准确性与效率。而将人工智能创伤超声诊疗应用于移动设备，可避免对移动网络的依赖。The trauma ultrasound diagnosis method provided in this embodiment applies artificial-intelligence-enabled medicine to a mobile device for ultrasound diagnosis of trauma, can quickly and accurately assist medical staff in performing ultrasound diagnosis of all parts to be examined, and judges in real time whether blood or fluid accumulation exists in the examined part, as well as its specific location and size. Therefore, with artificial intelligence assistance, even medical staff without extensive ultrasound experience can quickly interpret trauma ultrasound video images and quickly complete trauma ultrasound diagnosis of all parts to be examined. The trauma ultrasound diagnosis method provided in this embodiment thus lowers the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis. Applying artificial-intelligence trauma ultrasound diagnosis on a mobile device also avoids dependence on a mobile network.
图1-5介绍了本发明不同实施例中的创伤超声诊断方法，利用人工智能对超声视频图像进行诊断，降低了对医护人员的相关技能的要求、大大提高了超声对创伤诊断的准确性与效率。下面结合图6，对实现所述创伤超声诊断方法的软件系统的功能模块以及硬件装置架构进行介绍。应该了解，所述实施例仅为说明之用，在专利申请范围上并不受此结构的限制。Figures 1-5 describe the trauma ultrasound diagnosis methods in different embodiments of the present invention, which use artificial intelligence to diagnose ultrasound video images, lowering the skill requirements on medical staff and greatly improving the accuracy and efficiency of ultrasound diagnosis of trauma. The functional modules and hardware architecture of the software system implementing the trauma ultrasound diagnosis method are introduced below with reference to FIG. 6. It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
Embodiment 4
Please refer to FIG. 6, which is a system framework diagram of the trauma ultrasound diagnosis system in the fourth embodiment of the present invention. In this embodiment, the trauma ultrasound diagnosis system may include multiple functional modules composed of program code segments. The program code of each segment of the trauma ultrasound diagnosis system may be stored in the memory of a computer apparatus, such as a mobile device, and executed by at least one processor of the computer apparatus to realize the trauma ultrasound diagnosis function.
In this embodiment, the trauma ultrasound diagnosis system 60 may be divided into multiple functional modules according to the functions it performs, where each functional module is used to execute the steps in the embodiments corresponding to FIG. 1, FIG. 3, or FIG. 4 so as to realize the trauma ultrasound diagnosis function. In this embodiment, the functional modules of the trauma ultrasound diagnosis system 60 include: a signal acquisition module 61, a video image signal generation module 62, an artificial intelligence model 63, and a display module 64. The function of each functional module will be described in detail in the following embodiments.
The signal acquisition module 61 is used to acquire the electrical signal corresponding to the ultrasonic echo sent by the ultrasonic probe 70.
The video image signal generation module 62 is used to generate an ultrasound image video stream according to the electrical signal.
The artificial intelligence model 63 includes a liquid dark area judgment module 631 and a liquid dark area segmentation module 633. The liquid dark area judgment module 631 is used to receive the ultrasound image video stream and, based on the ultrasound image video stream, analyze whether a liquid dark area of effusion or hemorrhage exists at the detected part. The liquid dark area segmentation module 633 is used to segment the liquid dark area and analyze the orientation, shape, and/or size of the liquid dark area.
In this embodiment, the artificial intelligence model uses a lightweight deep convolutional neural network as the model, and uses the convolutional layers of the convolutional neural network to filter each frame of the input ultrasound image video stream through convolution kernels so as to extract feature information. The artificial intelligence model includes multiple convolutional layers connected to one another, which extract, level by level, features of each ultrasound video image frame that are semantically richer and more discriminative, so as to make accurate predictions of the results. Each convolutional layer contains numerous parameters and requires a large amount of computation to extract features. To successfully deploy the artificial intelligence model on mobile devices, in this embodiment the convolutional layers are optimized: within an acceptable accuracy range, the number of parameters and the amount of computation are reduced as much as possible, so that the artificial intelligence model achieves optimized performance under limited computing resources. For the optimization of the convolutional layers, reference may be made to the lightweight "MobileNet" network proposed by Google.
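The scale of the parameter savings behind a MobileNet-style optimization can be illustrated with a short calculation. This sketch is not the patent's implementation; it merely compares the parameter count of a standard convolutional layer with that of a depthwise separable layer (MobileNet's core building block), for an assumed example of a 3*3 kernel with 32 input and 64 output channels.

```python
def standard_conv_params(k, c_in, c_out):
    # A standard k*k convolution learns one k*k*c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # One depthwise k*k filter per input channel, then a 1*1 pointwise convolution.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 32, 64)        # 3*3*32*64 = 18432
dws = depthwise_separable_params(3, 32, 64)  # 3*3*32 + 32*64 = 2336
print(std, dws, round(std / dws, 1))
```

For this example the separable layer uses roughly 8x fewer parameters, which is one way such networks reduce computation enough to run on mobile devices with limited computing resources.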
Referring to FIG. 7 and FIG. 8, in this embodiment the liquid dark area judgment module 631 uses fully convolutional neural network 1 to analyze whether a dark area of effusion exists. After the ultrasound image video stream is output to the artificial intelligence model, fully convolutional neural network 1 passes each ultrasound video image frame through multiple convolution operations of multiple convolutional layers to obtain one or more feature images. The features of these feature images are highly refined and dimensionally reduced by a global pooling layer, and are then classified and synthesized by a fully connected layer. Finally, the output of the fully connected layer is connected to activation function 1, which may be a "Softmax" function or a "Sigmoid" function; activation function 1 compresses the values of the feature tensor output by the fully connected layer into the range of 0 to 1. The value or array obtained through activation function 1 is a probability. In this embodiment, a probability less than 0.5 is considered to indicate that no liquid dark area exists, and the liquid dark area judgment module 631 outputs a judgment result of no liquid dark area; a probability greater than or equal to 0.5 is considered to indicate that a liquid dark area exists, and the liquid dark area judgment module 631 outputs a judgment result that a liquid dark area exists. In other embodiments, depending on the specific situation and on the neural network model and parameters used, the probability threshold for the presence or absence of a liquid dark area may be set to any value between 0 and 1. When the liquid dark area judgment module 631 judges that a liquid dark area exists, the liquid dark area segmentation module 633 uses fully convolutional network 2 to segment and measure the liquid dark area. Fully convolutional neural network 2 passes each ultrasound video image frame through multiple convolution operations of multiple convolutional layers to obtain one or more feature images. Each of the one or more feature images then undergoes a convolution operation with a 1*1 convolutional layer to obtain a numerical matrix of the same size as the original ultrasound video image frame, in which each value corresponds to a pixel of the original frame. The activation function compresses each value in the numerical matrix into the range of 0 to 1; the magnitude of each value indicates the probability that the corresponding pixel of the ultrasound image belongs to a liquid dark area. In this embodiment, a value greater than or equal to 0.5 indicates that the pixel belongs to a liquid dark area, and a value less than 0.5 indicates that it does not. According to each value in the numerical matrix, the numerical distribution corresponding to the liquid dark area is then segmented out. Further, combining the spatial scale corresponding to each pixel of the handheld ultrasound probe, the area and/or fluid volume of each liquid dark area is calculated. In other embodiments, depending on the specific situation and on the neural network model and parameters used, the probability threshold for judging whether the pixel corresponding to each value in the numerical matrix belongs to a liquid dark area may be any value between 0 and 1.
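The per-pixel thresholding and area computation can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the sigmoid, the 0.5 threshold, and the per-pixel spatial scale `pixel_area_mm2` are assumptions drawn from the description above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def segment_dark_area(score_matrix, threshold=0.5):
    # Compress each raw score into (0, 1) and mark pixels at or above the
    # threshold as belonging to the liquid dark area.
    return [[1 if sigmoid(v) >= threshold else 0 for v in row]
            for row in score_matrix]

def dark_area_mm2(mask, pixel_area_mm2):
    # Combine the pixel count with the spatial scale of the probe to get area.
    return sum(sum(row) for row in mask) * pixel_area_mm2

# Toy 3x3 matrix of raw network scores for one frame.
scores = [[-3.0, 0.2, 1.5],
          [-2.0, 0.9, 2.4],
          [-4.0, -0.1, 0.0]]
mask = segment_dark_area(scores)
area = dark_area_mm2(mask, pixel_area_mm2=0.04)
```

Fluid volume could be estimated the same way by multiplying with a per-pixel volume scale instead of an area scale.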
In another embodiment, the liquid dark area segmentation module 633 may reuse the results of the multiple convolution operations that the liquid dark area judgment module 631 performs on each ultrasound video image frame through the multiple convolutional layers, and, on those results, segment the liquid dark area and calculate its orientation, shape, and/or size.
The display module 64 is used to receive the ultrasound image video stream and display the ultrasound video image on a display unit 80, and to receive the analysis result of the artificial intelligence model 63 and display the analysis result on the display unit 80.
In this embodiment, the analysis result of the artificial intelligence model 63 may be "no liquid dark area exists" or the distribution, shape, and/or size of the liquid dark area. When the analysis result of the artificial intelligence model is the distribution, shape, and/or size of the liquid dark area, the display module 64 may display the analysis result in any one of, or a combination of two or more of, the following ways: outputting the name of the part where effusion exists, outputting the ultrasound video image frame superimposed with a liquid dark area segmentation mask, and/or outputting the size and/or shape of each effusion.
The trauma ultrasound diagnosis system 60 provided in this embodiment applies artificial-intelligence-enabled medicine to trauma ultrasound diagnosis, and can, quickly and accurately during the ultrasound scan performed by medical staff, judge in real time whether hemorrhage or effusion exists at the detected part of the subject, as well as its specific location and size. Therefore, with the assistance of artificial intelligence, even medical staff without extensive experience in using ultrasound can quickly interpret trauma ultrasound video images and take corresponding treatment measures. The trauma ultrasound diagnosis system provided in this embodiment thus lowers the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis. Applying the trauma ultrasound diagnosis system 60 on a mobile device avoids dependence on a mobile network.
Embodiment 5
Please refer to FIG. 9, which is a system framework diagram of the trauma ultrasound diagnosis system in the fifth embodiment of the present invention. Unlike the trauma ultrasound diagnosis system 60 of the fourth embodiment, the trauma ultrasound diagnosis system 90 of the fifth embodiment further includes an auxiliary diagnosis report generation module 91 and an auxiliary diagnosis report storage module 93. The auxiliary diagnosis report generation module 91 is used to generate an artificial-intelligence-assisted diagnosis report according to the analysis result of the artificial intelligence model 63. The auxiliary diagnosis report storage module 93 is used to store the artificial-intelligence-assisted diagnosis report. In this embodiment, the auxiliary diagnosis report storage module 93 may store the artificial-intelligence-assisted diagnosis report in the storage unit 82 of the mobile device, or may also upload and save the artificial-intelligence-assisted diagnosis report to a cloud server 84.
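As a hedged illustration of what such a report generation module might assemble, the sketch below builds a minimal report from a list of per-site analysis results. The field names and the text format are invented for the example; the patent does not specify the report's structure.

```python
def build_report(results):
    # results: list of (site, has_dark_area, area_mm2) tuples from the AI model.
    lines = []
    for site, has_dark_area, area in results:
        if has_dark_area:
            lines.append(f"{site}: liquid dark area, approx. {area} mm^2")
        else:
            lines.append(f"{site}: no liquid dark area")
    return {"title": "AI-assisted trauma ultrasound report", "findings": lines}

report = build_report([("perihepatic", True, 120.5),
                       ("pericardium", False, 0.0)])
```

Such a dictionary could then be rendered on the display unit 80, written to local storage, or serialized for upload to a cloud server.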
Unlike the display module 64 of the fourth embodiment, the display module 95 in this embodiment is used, in addition to displaying the ultrasound video image and the analysis result of the artificial intelligence model 63, to display the artificial-intelligence-assisted diagnosis report.
For the beneficial effects of this embodiment, reference may be made to those of the fourth embodiment. In addition to the beneficial effects of the fourth embodiment, the trauma ultrasound diagnosis system 90 of this embodiment, by generating and storing artificial-intelligence-assisted diagnosis reports, allows the reports to be reused and used for subsequent further verification of the artificial intelligence model.
Embodiment 6
Please refer to FIG. 10, which is a system framework diagram of the trauma ultrasound diagnosis system in the sixth embodiment of the present invention. Unlike the trauma ultrasound diagnosis system 90 of the fifth embodiment, the artificial intelligence model 101 of the trauma ultrasound diagnosis system 100 of the sixth embodiment further includes a diagnosis part judgment module 102, which is used to analyze which part is being diagnosed. In this embodiment, the parts that the artificial intelligence model 101 can diagnose may be the perihepatic region, the perisplenic region, the pericardium, the peripelvic region, or the lungs. The diagnosis part judgment module 102 may judge which part is being diagnosed according to user input: for example, the user selects the part to be diagnosed through the computer apparatus, and the diagnosis part judgment module 102 judges which part it is according to the user's selection. The diagnosis part judgment module 102 may also judge which part is being diagnosed based on the ultrasound image video stream, according to different parameters or images of the different parts.
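One plausible way a diagnosis part judgment module could map network outputs to the five candidate sites is a softmax over per-site scores. The sketch below is an assumption for illustration only: the patent does not disclose the module's internals, the English site labels are illustrative, and the logit values are made up.

```python
import math

SITES = ["perihepatic", "perisplenic", "pericardium", "peripelvic", "lung"]

def softmax(logits):
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_site(logits):
    # Pick the site with the highest softmax probability.
    probs = softmax(logits)
    return SITES[probs.index(max(probs))]

site = predict_site([0.1, 2.3, -1.0, 0.4, 0.0])
```

A user-selection path, as also described above, would simply bypass this prediction and take the site name directly from the input.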
For the beneficial effects of this embodiment, reference may be made to those of the fifth embodiment. In addition to the beneficial effects of the fifth embodiment, the trauma ultrasound diagnosis system 100 of this embodiment, by judging through the artificial intelligence model 101 which part is being diagnosed, can further improve diagnostic accuracy.
Embodiment 7
Please refer to FIG. 11, which is a system framework diagram of the trauma ultrasound diagnosis system in the seventh embodiment of the present invention. Unlike the trauma ultrasound diagnosis system 100 of the sixth embodiment, the artificial intelligence model 111 of the trauma ultrasound diagnosis system 110 of the seventh embodiment further includes a diagnosis completion judgment module 112, which is used to judge whether all diagnosis parts that need to be diagnosed have been diagnosed. Unlike the sixth embodiment, the auxiliary diagnosis report generation module 113 in this embodiment is used to generate the auxiliary diagnosis report when the diagnosis completion judgment module 112 judges that all diagnosis parts that need to be diagnosed have been diagnosed. Alternatively, in another implementation, unlike the sixth embodiment, the display module 64 is used to display the analysis result of the artificial intelligence model on the display unit 80 when the diagnosis completion judgment module judges that all diagnosis parts that need to be diagnosed have been diagnosed.
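The completeness check can be thought of as bookkeeping over a set of required sites. A minimal sketch, assuming a five-site checklist and illustrative English site labels (the function names are not from the patent):

```python
REQUIRED_SITES = {"perihepatic", "perisplenic", "pericardium", "peripelvic", "lung"}

def missing_sites(diagnosed):
    # Return the required sites that have not yet been diagnosed.
    return REQUIRED_SITES - set(diagnosed)

def diagnosis_complete(diagnosed):
    # The report (or final result display) is triggered only when nothing is missing.
    return not missing_sites(diagnosed)
```

Under this sketch, the system would keep acquiring echo signals while `diagnosis_complete` is false, matching the loop described in the method embodiments.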
For the beneficial effects of this embodiment, reference may be made to those of the sixth embodiment. In addition to the beneficial effects of the sixth embodiment, the trauma ultrasound diagnosis system 110 of this embodiment, by judging through the artificial intelligence model 111 whether the diagnosis of all parts to be diagnosed has been completed, can avoid missing the examination of key parts.
Embodiment 8
FIG. 12 is a schematic diagram of the functional modules of the mobile device in the eighth embodiment of the present invention. The mobile device 120 includes a processing unit 121, a storage unit 122, a communication unit 123, a display unit 124, and a built-in ultrasonic detection program 125 and artificial intelligence model 126. The ultrasonic detection program 125 and the artificial intelligence model 126 may be installed on the mobile device 120 as an application, which completes the ultrasound diagnosis of trauma using the processing unit 121, the storage unit 122, the communication unit 123, and the display unit 124 of the mobile device 120.
In this embodiment, the mobile device 120 may be, but is not limited to, a smartphone, a tablet computer, or the like. In other embodiments, the electronic apparatus that completes the ultrasound diagnosis of trauma by installing the ultrasonic detection program 125 and the artificial intelligence model 126 is not limited to the mobile device 120, and may also be another terminal device with computing capability, such as a desktop computer.
The communication unit 123 is used to communicate with an ultrasonic probe 70, to acquire from the ultrasonic probe 70 the electrical signal corresponding to the ultrasonic echo sent by the ultrasonic probe 70, and to transmit the electrical signal to the ultrasonic detection program 125 for use. The connection between the communication unit 123 and the ultrasonic probe 70 may be wired, using wired communication technology, or wireless, using wireless communication technology. The communication unit 123 may be a signal transceiving unit matching the corresponding communication mode.
The storage unit 122 is used to store the ultrasonic detection program 125 and the artificial intelligence model 126, together with the data they need to use (such as parameters) or the data they generate (such as analysis results and diagnosis reports).
The processing unit 121 is used to execute the ultrasonic detection program 125 and the artificial intelligence model 126 to complete the ultrasound diagnosis of trauma. When the processing unit 121 executes the ultrasonic detection program 125 and the artificial intelligence model 126, the steps of the trauma ultrasound diagnosis method in the above method embodiments are implemented; alternatively, when the processing unit 121 executes the ultrasonic detection program 125 and the artificial intelligence model 126, the functions of the modules of the trauma ultrasound diagnosis system in the above embodiments are realized.
Exemplarily, each of the ultrasonic detection program 125 and the artificial intelligence model 126 may be divided into one or more modules, which are stored in the storage unit 122 and executed by the processing unit 121 to complete the various embodiments of the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the ultrasonic detection program 125 or the artificial intelligence model 126 in the electronic apparatus. For example, the ultrasonic detection program 125 may be divided into the modules 61, 62, and 64 of the fourth embodiment and FIG. 6, and the artificial intelligence model 126 may be divided into the modules 631 and 633 of the fourth embodiment and FIG. 6; alternatively, the ultrasonic detection program 125 may be divided into the modules 61, 62, 91, 93, and 95 of the fifth embodiment and FIG. 9, and the artificial intelligence model 126 may be divided into the modules 631 and 633; alternatively, the ultrasonic detection program 125 may be divided into the modules 61, 62, 91, 93, and 95 of the sixth embodiment and FIG. 10, and the artificial intelligence model 126 may be divided into the modules 631, 633, and 102 of the sixth embodiment and FIG. 10; alternatively, the ultrasonic detection program 125 may be divided into the modules 61, 62, 91, 93, and 95 of the seventh embodiment and FIG. 11, and the artificial intelligence model 126 may be divided into the modules 631, 633, 102, and 112 of the seventh embodiment and FIG. 11.
Those skilled in the art can understand that the schematic FIG. 12 is merely an example and does not constitute a limitation on the mobile device 120. The mobile device 120 may include more or fewer components than shown, or combine certain components, or use different components; for example, the mobile device 120 may also include input and output devices.
The processing unit 121 may be any type of processor capable of running the artificial intelligence model 126, such as a digital signal processor (DSP) or an application-specific integrated circuit (ASIC). The processing unit 121 connects the various parts of the mobile device 120 through various interfaces and lines.
If the integrated modules of the mobile device 120 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes of the above method embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program can realize the steps of each of the above method embodiments. It should be noted that the content contained in the computer-readable medium may be appropriately added or removed according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The mobile device 120 provided in this embodiment applies artificial-intelligence-enabled medicine on a mobile device, so that the mobile device, paired with the ultrasonic probe 70, can complete trauma ultrasound diagnosis, quickly and accurately judging in real time, during the ultrasound scan performed by medical staff, whether hemorrhage or effusion exists at the detected part of the subject, as well as its specific location and size. Therefore, with the assistance of artificial intelligence, even medical staff without extensive experience in using ultrasound can quickly interpret trauma ultrasound video images and take corresponding treatment measures. This embodiment thus lowers the skill requirements on medical staff and greatly improves the accuracy and efficiency of ultrasound diagnosis. Applying trauma ultrasound diagnosis on a mobile device also avoids dependence on a mobile network.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present invention.

Claims (24)

  1. A trauma ultrasound diagnosis method, characterized by comprising:
    acquiring an electrical signal converted from an ultrasonic echo and generating an ultrasound image video stream, the ultrasonic echo being the ultrasonic echo received when an ultrasonic probe detects a subject;
    displaying an ultrasound video image;
    using an artificial intelligence model to analyze the detected part based on the ultrasound image video stream and judge whether a liquid dark area exists; and
    displaying the analysis result of the artificial intelligence model.
  2. The trauma ultrasound diagnosis method according to claim 1, characterized by further comprising: if a liquid dark area exists, using the artificial intelligence model to segment the liquid dark area and analyze the orientation, shape, and/or size of the liquid dark area.
  3. The trauma ultrasound diagnosis method according to claim 2, characterized in that the analysis result is that no liquid dark area exists, or is the distribution, shape, and/or size of the liquid dark area.
  4. The trauma ultrasound diagnosis method according to claim 2, characterized in that, if a liquid dark area exists, displaying the analysis result of the artificial intelligence model comprises: outputting the name of the part where the liquid dark area exists, outputting the ultrasound video image frame superimposed with a liquid dark area segmentation mask, and/or outputting the orientation, size, and/or shape of each liquid dark area.
  5. The trauma ultrasound diagnosis method according to claim 1 or 2, characterized in that the artificial intelligence model uses a lightweight deep convolutional neural network as the model, and the trauma ultrasound diagnosis method is implemented using a mobile device.
  6. The trauma ultrasound diagnosis method according to claim 1, characterized in that said "using an artificial intelligence model to analyze the detected part based on the ultrasound image video stream and judge whether a liquid dark area exists" comprises: using a fully convolutional neural network to perform multiple convolution operations on each ultrasound video image frame to obtain one or more feature images, the one or more feature images yielding a probability after passing through a global pooling layer, a fully connected layer, and an activation function, and judging whether a liquid dark area exists according to the probability.
  7. The trauma ultrasound diagnosis method according to claim 2, characterized in that said "using the artificial intelligence model to segment the liquid dark area and analyze the orientation, shape, and/or size of the liquid dark area" comprises: using a fully convolutional neural network to perform multiple convolution operations on each ultrasound video image frame to obtain one or more feature images; subjecting each of the one or more feature images to a 1*1 convolution operation to obtain a numerical matrix of the same size as the frame of the ultrasound video image, each value in the numerical matrix corresponding to a pixel of the frame; segmenting out, via an activation function, the numerical distribution corresponding to the liquid dark area from the numerical matrix; and calculating the area and/or fluid volume of each liquid dark area according to the numerical distribution.
  8. The trauma ultrasound diagnosis method according to claim 1 or 2, characterized by further comprising: forming an artificial-intelligence-assisted diagnosis report, and displaying and/or storing the artificial-intelligence-assisted diagnosis report.
  9. The trauma ultrasound diagnosis method according to claim 8, characterized by further comprising: using the artificial intelligence model to analyze which part is being diagnosed.
  10. The trauma ultrasound diagnosis method according to claim 9, characterized by further comprising: using the artificial intelligence model to judge whether all diagnosis parts that need to be diagnosed have been diagnosed; if all diagnosis parts that need to be diagnosed have been diagnosed, displaying the analysis result of the artificial intelligence model and/or forming the artificial-intelligence-assisted diagnosis report; and if any diagnosis part that needs to be diagnosed has not been diagnosed, continuing to acquire the electrical signal converted from the ultrasonic echo.
  11. A trauma ultrasound diagnosis system, characterized in that the system comprises:
    a signal acquisition module, configured to acquire electrical signals converted from ultrasonic echoes, the ultrasonic echoes being those received by an ultrasonic probe while examining a subject;
    a video image signal generation module, configured to generate an ultrasound image video stream from the electrical signals;
    an artificial intelligence model, comprising:
    a liquid dark area determination module, configured to receive the ultrasound image video stream and analyze, based on the ultrasound image video stream, whether a liquid dark area exists at the examined part; and
    a display module, configured to receive the ultrasound image video stream and display the ultrasound video images on a display unit, and to receive the analysis results of the artificial intelligence model and display the analysis results on the display unit.
  12. The system according to claim 11, wherein the artificial intelligence model further comprises a liquid dark area segmentation module, configured to segment the liquid dark area and analyze the orientation, shape, and/or size of the liquid dark area.
  13. The system according to claim 12, wherein the analysis result of the artificial intelligence model is either that no liquid dark area exists or the distribution, shape, and/or size of the liquid dark areas.
  14. The system according to claim 12, wherein the display module is configured to output the name of the part where a liquid dark area exists, the ultrasound video image frames overlaid with a liquid dark area segmentation mask, and/or the orientation, size, and/or shape of each liquid dark area.
  15. The system according to claim 11 or 12, wherein the artificial intelligence model adopts a lightweight deep convolutional neural network, and the system is installed on a mobile device.
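One way to realize the display output of claim 14, overlaying a segmentation mask on an ultrasound frame, is simple alpha blending. The red tint and the blend factor below are arbitrary choices for illustration, not taken from the patent.

```python
import numpy as np

def overlay_mask(frame_gray, mask, alpha=0.4):
    # Blend a colored segmentation mask over a grayscale ultrasound frame:
    # masked pixels are tinted, all other pixels pass through unchanged.
    rgb = np.repeat(frame_gray[..., None], 3, axis=-1).astype(np.float32)
    tint = np.array([255.0, 0.0, 0.0])   # assumed red highlight color
    rgb[mask] = (1 - alpha) * rgb[mask] + alpha * tint
    return rgb.astype(np.uint8)
```

On a uniform gray frame, a masked pixel becomes a red-shifted blend of the original intensity and the tint, while unmasked pixels keep their original gray value in all three channels.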
  16. The system according to claim 11, wherein the liquid dark area determination module is configured to perform multiple convolution operations on each frame of the ultrasound video using a fully convolutional neural network to obtain one or more feature maps, pass the one or more feature maps through a global pooling layer, a fully connected layer, and an activation function to obtain a probability, and determine from the probability whether a liquid dark area exists.
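For a single linear layer and a sigmoid activation, the classification head of claim 16 (global pooling, a fully connected layer, and an activation yielding a probability) reduces to the sketch below. The weights, bias, and decision threshold are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dark_area_probability(features, fc_w, fc_b):
    # Global average pooling collapses each of the C feature maps
    # (C, H, W) to one scalar; a fully connected layer plus sigmoid
    # then yields the probability that a liquid dark area is present.
    pooled = features.mean(axis=(1, 2))          # shape (C,)
    return sigmoid(pooled @ fc_w + fc_b)

def has_dark_area(features, fc_w, fc_b, threshold=0.5):
    # The existence decision follows from thresholding the probability.
    return dark_area_probability(features, fc_w, fc_b) > threshold
```

With all-ones feature maps and weights [2, 0, 0], bias -1, the pooled vector is [1, 1, 1], the logit is 1, and the probability sigmoid(1) clears the 0.5 threshold.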
  17. The system according to claim 12, wherein the liquid dark area segmentation module is configured to perform multiple convolution operations on each frame of the ultrasound video using a fully convolutional neural network to obtain one or more feature maps, and to subject each of the one or more feature maps to a 1*1 convolution operation to obtain a numerical matrix of the same size as that frame of the ultrasound video, each value in the numerical matrix corresponding to one pixel of that frame; the liquid dark area segmentation module is further configured to segment the numerical distribution corresponding to the liquid dark areas from the numerical matrix and to calculate the area and/or effusion volume of each liquid dark area from the numerical distribution.
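Claim 17 leaves the effusion-volume calculation itself unspecified. One plausible estimate, shown below purely as an assumption, sums the segmented area over consecutive frames and multiplies by the distance the probe sweeps between frames, converting cubic millimetres to millilitres.

```python
import numpy as np

def effusion_volume_ml(masks, pixel_area_mm2, slice_spacing_mm):
    # Slice-summation estimate (an assumed method, not the patent's):
    # each frame's mask area acts as a cross-section, and stacking the
    # cross-sections at the inter-frame spacing approximates a volume.
    area_mm2 = sum(m.sum() * pixel_area_mm2 for m in masks)
    return area_mm2 * slice_spacing_mm / 1000.0   # 1 mL = 1000 mm^3
```

For a 4-pixel mask in one frame and an empty mask in the next, with 1 mm^2 pixels and 10 mm spacing, the estimate is 4 mm^2 * 10 mm = 40 mm^3 = 0.04 mL.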
  18. The system according to claim 11 or 12, further comprising an assisted diagnosis and treatment report generation module, configured to generate an artificial-intelligence-assisted diagnosis and treatment report from the analysis results of the artificial intelligence model.
  19. The system according to claim 18, further comprising an assisted diagnosis and treatment report storage module, configured to store the artificial-intelligence-assisted diagnosis and treatment report.
  20. The system according to claim 18 or 19, wherein the display module is further configured to display the artificial-intelligence-assisted diagnosis and treatment report.
  21. The system according to claim 18, further comprising a diagnosed part determination module, configured to determine which body part is being diagnosed.
  22. The system according to claim 21, further comprising a diagnosis completion determination module, configured to determine whether all body parts requiring diagnosis have been diagnosed; the assisted diagnosis and treatment report generation module is configured to generate the assisted diagnosis and treatment report when the diagnosis completion determination module determines that all body parts requiring diagnosis have been diagnosed, and/or the display module is configured to display the analysis results of the artificial intelligence model on the display unit when the diagnosis completion determination module determines that all body parts requiring diagnosis have been diagnosed.
  23. A mobile device, characterized in that it comprises:
    a communication unit;
    a display unit;
    a processing unit; and
    a storage unit storing a plurality of program modules, the plurality of program modules being loaded by the processing unit to execute the trauma ultrasound diagnosis method according to any one of claims 1 to 10.
  24. A computer-readable storage medium having a computer program stored thereon, characterized in that, when executed by a processing unit, the computer program implements the trauma ultrasound diagnosis method according to any one of claims 1 to 10.
PCT/CN2019/127644 2019-12-23 2019-12-23 Trauma ultrasonic detection method and system, mobile device, and storage medium WO2021127930A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980100817.7A CN114599291A (en) 2019-12-23 2019-12-23 Wound ultrasonic detection method, system, mobile device and storage medium
PCT/CN2019/127644 WO2021127930A1 (en) 2019-12-23 2019-12-23 Trauma ultrasonic detection method and system, mobile device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/127644 WO2021127930A1 (en) 2019-12-23 2019-12-23 Trauma ultrasonic detection method and system, mobile device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021127930A1 true

Family

ID=76573414

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/127644 WO2021127930A1 (en) 2019-12-23 2019-12-23 Trauma ultrasonic detection method and system, mobile device, and storage medium

Country Status (2)

Country Link
CN (1) CN114599291A (en)
WO (1) WO2021127930A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140288424A1 (en) * 2013-03-09 2014-09-25 West Virginia University System and Device for Tumor Characterization Using Nonlinear Elastography Imaging
CN108463174A (en) * 2015-12-18 2018-08-28 皇家飞利浦有限公司 Device and method for characterizing tissue of an object
CN108652672A (en) * 2018-04-02 2018-10-16 中国科学院深圳先进技术研究院 Ultrasonic imaging system, method, and device
CN110327016A (en) * 2019-06-11 2019-10-15 清华大学 Integrated intelligent minimally invasive diagnosis and treatment system based on optical imaging and optical therapy
US20190336107A1 (en) * 2017-01-05 2019-11-07 Koninklijke Philips N.V. Ultrasound imaging system with a neural network for image formation and tissue characterization
CN110556183A (en) * 2019-09-20 2019-12-10 林于慧 Rapid diagnosis device and method applied to traditional Chinese medicine equipment

Also Published As

Publication number Publication date
CN114599291A (en) 2022-06-07

Similar Documents

Publication Publication Date Title
US11960571B2 (en) Method and apparatus for training image recognition model, and image recognition method and apparatus
US11129591B2 (en) Echocardiographic image analysis
US9314225B2 (en) Method and apparatus for performing ultrasound imaging
US8861824B2 (en) Ultrasonic diagnostic device that provides enhanced display of diagnostic data on a tomographic image
EP3776353A1 (en) Ultrasound system with artificial neural network for retrieval of imaging parameter settings for recurring patient
JP2002253552A (en) Method and device for connecting image and report in remote browsing station
US20210219922A1 (en) A method and apparatus for analysing echocardiograms
WO2023138619A1 (en) Endoscope image processing method and apparatus, readable medium, and electronic device
CN114565577A (en) Carotid vulnerability classification method and system based on multi-modal imaging omics
WO2021127930A1 (en) Trauma ultrasonic detection method and system, mobile device, and storage medium
US11510656B2 (en) Ultrasound imaging method and ultrasound imaging system therefor
US20240115245A1 (en) Method and system for expanding function of ultrasonic imaging device
US20230137369A1 (en) Aiding a user to perform a medical ultrasound examination
CN115813433A (en) Follicle measuring method based on two-dimensional ultrasonic imaging and ultrasonic imaging system
JP2019118694A (en) Medical image generation apparatus
CN113792740A (en) Arteriovenous segmentation method, system, equipment and medium for fundus color photography
KR20150107515A (en) medical image processor and method thereof for medical diagnosis
CN111062935B (en) Mammary gland tumor detection method, storage medium and terminal equipment
CN115311188A (en) Image identification method and device, electronic equipment and storage medium
CN112515705A (en) Method and system for projection contour enabled Computer Aided Detection (CAD)
CN111696085B (en) Rapid ultrasonic evaluation method and equipment for lung impact injury condition on site
TWI494549B (en) A luminance inspecting method for backlight modules based on multiple kernel support vector regression and apparatus thereof
Ibrahim et al. Inexpensive 1024-channel 3D telesonography system on FPGA
JP6883536B2 (en) Ultrasonic signal processing device, ultrasonic diagnostic device and ultrasonic signal calculation processing method
US20220101518A1 (en) System and method for stylizing a medical image

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 19957632; country of ref document: EP; kind code of ref document: A1)
NENP: non-entry into the national phase (ref country code: DE)
122 Ep: PCT application non-entry in the European phase (ref document number: 19957632; country of ref document: EP; kind code of ref document: A1)