CN113842166A - Ultrasonic image acquisition method based on ultrasonic imaging equipment and related device

Ultrasonic image acquisition method based on ultrasonic imaging equipment and related device

Info

Publication number
CN113842166A
CN113842166A (application number CN202111246132.9A)
Authority
CN
China
Prior art keywords
image
ultrasonic
neural network
network model
thyroid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111246132.9A
Other languages
Chinese (zh)
Inventor
张维拓
钱碧云
徐天宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University School of Medicine
Original Assignee
Shanghai Jiaotong University School of Medicine
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University School of Medicine
Priority to CN202111246132.9A
Publication of CN113842166A
Legal status: Pending

Classifications

    • A61B 8/085: Diagnosis using ultrasonic waves; detecting organic movements or changes (e.g. tumours, cysts, swellings) for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B 8/44: Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • A61B 8/5215: Devices using data or image processing specially adapted for diagnosis; processing of medical diagnostic data
    • A61B 8/565: Details of data transmission or power supply; data transmission via a network
    • G06N 3/04: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06T 2207/10024: Image acquisition modality; color image
    • G06T 2207/20081: Special algorithmic details; training; learning
    • G06T 2207/30096: Subject of image; biomedical image processing; tumor; lesion

Abstract

The application discloses an ultrasound image acquisition method based on an ultrasound imaging device, and a related apparatus. The method comprises: acquiring continuous ultrasound images through the ultrasound imaging device; inputting the Nth frame of the continuous ultrasound images, together with the continuous ultrasound images, into a preset neural network model; judging, according to the preset neural network model, whether the Nth frame meets the acquisition requirement; if not, guiding the acquisition of the continuous ultrasound images through a reminding module in the ultrasound imaging device so that the continuous ultrasound images are acquired again; and if so, saving the Nth frame and then judging whether the (N+1)th frame also meets the acquisition requirement. The application solves the technical problem that ultrasound image acquisition requires a professional, which makes remote or at-home follow-up impossible: acquisition can be operated by personnel who are not professional sonographers, and the quality of the ultrasound images they acquire with the ultrasound device is improved.

Description

Ultrasonic image acquisition method based on ultrasonic imaging equipment and related device
Technical Field
The application relates to the field of artificial intelligence and medicine, in particular to an ultrasonic image acquisition method based on ultrasonic imaging equipment and a related device.
Background
When a patient is examined with an ultrasound imaging device, ultrasound images of the patient's lesion area are acquired and a sonographer makes a medical diagnosis from those images.
In the related art, examination is usually performed during a hospital visit, which is costly. In some schemes, AI assists in identifying ultrasound images and judging whether nodules are benign or malignant, but a professional is still required for acquisition. In other schemes, remote diagnosis and treatment are performed over a 5G communication network, with a sonographer operating remotely and capturing images for diagnosis, but this places high demands on infrastructure.
For the problem in the related art that ultrasound image acquisition requires a professional, so that remote or at-home follow-up cannot be realized, no effective solution has yet been proposed.
Disclosure of Invention
The main objective of the present application is to provide an ultrasound image acquisition method and a related apparatus based on an ultrasound imaging device, so as to solve the problem that ultrasound image acquisition requires a professional, which prevents remote or at-home follow-up.
In order to achieve the above object, according to one aspect of the present application, an ultrasound image acquisition method based on an ultrasound imaging device is provided.
The ultrasound image acquisition method based on the ultrasound imaging device according to the application comprises: acquiring continuous ultrasound images through the ultrasound imaging device; inputting the Nth frame of the continuous ultrasound images, together with the continuous ultrasound images, into a preset neural network model, wherein the preset neural network model is obtained through machine learning training using multiple groups of image data, and each group of data comprises: a sample ultrasound image and a category label of the sample ultrasound image, wherein N is an integer; judging, according to the preset neural network model, whether the Nth frame meets the acquisition requirement; if not, guiding the acquisition of the continuous ultrasound images through a reminding module in the ultrasound imaging device so that the continuous ultrasound images are acquired again; and if so, saving the Nth frame and then continuing to judge whether the (N+1)th frame also meets the acquisition requirement.
Further, the preset neural network model includes a plurality of individual deep neural network models, each trained for a preset medical scenario and the corresponding medical information, and each providing at least one of the following functions: background template matching, nodule detection on a single frame, benign/malignant judgment of the lesion in a single frame, and confidence estimation over continuous images.
Further, the preset neural network model includes a first deep neural network model. The ultrasound images include thyroid ultrasound images; different parts of the thyroid, and preset angles for each part, are acquired so that the thyroid ultrasound images cover at least the preset positions and preset angles. The first deep neural network model identifies whether the background in the thyroid ultrasound image matches a preset background; it is obtained through machine learning training using multiple groups of image data, each group comprising neck ultrasound image samples acquired at different positions and different angles. If the background matches, the placement requirement for the ultrasound probe of the ultrasound imaging device is met; if not, the placement requirement is not met.
Further, the preset neural network model includes a second deep neural network model. The ultrasound images include thyroid ultrasound images, and the second deep neural network model identifies whether a thyroid nodule is present in a single frame of the thyroid ultrasound images. It is obtained through machine learning training using multiple groups of image data, each group comprising thyroid ultrasound image samples with and without thyroid nodules, where the images containing thyroid nodules are pre-labeled with nodule position information and/or nodule size information. If a thyroid nodule is detected, a local image of the nodule is cropped and passed as input to a third deep neural network model.
Further, the third deep neural network model receives the local thyroid nodule image output by the second deep neural network model and identifies whether the thyroid nodule in the single frame of the thyroid ultrasound image is benign or malignant. The third deep neural network model is obtained through machine learning training using multiple groups of image data, each group comprising pre-labeled local image samples of benign and malignant thyroid nodules.
Further, the preset neural network model also includes a fourth deep neural network model. The ultrasound images include continuous thyroid ultrasound images within a preset threshold time; these images, together with the background template matching results, the single-frame nodule detection results and the single-frame benign/malignant judgment results produced within the preset threshold time, are input into the fourth deep neural network model, which outputs a confidence score for each frame within the preset threshold time and an overall confidence for the continuous thyroid ultrasound images.
Further, if the acquisition requirement is met, saving the Nth frame and continuing to judge whether the (N+1)th frame also meets the acquisition requirement includes: selecting, from the Nth frame through the (N+M)th frame, a target frame whose image quality meets the requirement, storing it, and transmitting it over the network through a communication module of the ultrasound imaging device within a preset time, wherein M is an integer.
In order to achieve the above object, according to another aspect of the present application, an ultrasound image acquisition apparatus based on an ultrasound imaging device is provided.
The ultrasound image acquisition apparatus based on the ultrasound imaging device according to the application comprises: an image acquisition module for acquiring continuous ultrasound images through the ultrasound imaging device; a neural network module for inputting the Nth frame of the continuous ultrasound images, together with the continuous ultrasound images, into a preset neural network model, the model being obtained through machine learning training using multiple groups of image data, each group comprising a sample ultrasound image and a category label of the sample ultrasound image, wherein N is an integer; a judging module for judging, according to the preset neural network model, whether the Nth frame meets the acquisition requirement; a guiding module for guiding, when the requirement is not met, the acquisition of the continuous ultrasound images through a reminding module in the ultrasound imaging device so that the continuous ultrasound images are acquired again; and an execution module for saving the Nth frame when the requirement is met and then continuing to judge whether the (N+1)th frame also meets the acquisition requirement.
In order to achieve the above object, according to still another aspect of the present application, there is provided an ultrasonic imaging apparatus.
The ultrasound imaging device according to the application includes a device body, the device body comprising: a deep learning processor for inputting an ultrasound image into a pre-trained neural network model and judging, through the neural network model, whether the acquisition of the ultrasound image meets the requirement; and a controller which receives the judgment result output by the deep learning processor and controls a display or a memory to perform a preset operation. When the judgment result is that the acquisition of the ultrasound images does not meet the requirement, the controller controls the display to guide the acquisition of the continuous ultrasound images; when the judgment result is that the acquisition meets the requirement, the controller controls the memory to store the ultrasound image.
Further, the device also comprises: a wireless transmission module for transmitting the ultrasound images over a network; a voice prompt module for guiding, in cooperation with the display, the acquisition of the continuous ultrasound images; an ultrasound probe for transmitting ultrasonic waves to the user's neck region and converting the returned ultrasonic waves into analog signals; and a signal processor for converting the analog signals from the ultrasound probe into digital image signals and transmitting them to the deep learning processor.
In the embodiments of the application, the ultrasound image acquisition method and related apparatus acquire continuous ultrasound images through the ultrasound imaging device and input the Nth frame, together with the continuous ultrasound images, into a preset neural network model, so that whether the Nth frame meets the acquisition requirement can be judged according to the preset neural network model. As a result, acquisition can be operated by personnel who are not professional sonographers; patients no longer need to visit a specialized hospital and can have ultrasound images acquired at a community hospital or even at home, and the quality of the ultrasound images acquired by non-professional operators of the ultrasound device is greatly improved. This solves the technical problem that ultrasound image acquisition requires a professional, which makes remote or at-home follow-up impossible.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
fig. 1 is a schematic hardware configuration diagram of an ultrasound image acquisition method based on an ultrasound imaging device according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an ultrasound image acquisition method based on an ultrasound imaging device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an ultrasound image detection apparatus based on an ultrasound imaging device according to an embodiment of the present application;
fig. 4 is a schematic flowchart of an ultrasound image acquisition method based on an ultrasound imaging device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an ultrasound imaging apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a preset neural network model in an ultrasound image acquisition method based on an ultrasound imaging device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
The inventors have found that the quality of images acquired by an ultrasound operator is often poor, mainly for the following reasons:
First, the ultrasound operator needs to hold the ultrasound probe and probe the patient's neck from different positions and angles, and, depending on the patient's condition, the operator's handling may not be standard. Second, the ultrasound device acquires real-time dynamic images; a professional physician operating the device can judge from the real-time images on the screen whether nodules are present and which images best reflect the nodules' characteristics, and can capture and save the most critical frames. A non-professional operator lacks this professional judgment of the images, so the captured images may not contain the key information of the nodule.
Based on the above, the ultrasound image acquisition method based on the ultrasound imaging device in this application can improve the quality of the ultrasound images acquired by the ultrasound operator, so that it reaches or approaches the level of images acquired directly by a professional sonographer.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The method provided by the first embodiment of the application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking execution on a computer device as an example, fig. 1 is a block diagram of the hardware structure of a computer device for the ultrasound image acquisition method based on an ultrasound imaging device according to an embodiment of the application.
The embodiment of the application also provides a computer device. As shown in fig. 1, the computer device 10 may include: at least one processor 101, e.g., a CPU, at least one network interface 104, a user interface 103, a memory 105, at least one communication bus 102, and optionally a display 106. The communication bus 102 is used for enabling connection and communication between these components. The user interface 103 may include a touch screen, a keyboard or a mouse, among others. The network interface 104 may optionally include a standard wired interface or a wireless interface (e.g., a WI-FI interface), and a communication connection may be established with a server via the network interface 104. The memory 105 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory; in this embodiment the memory 105 includes flash memory. The memory 105 may optionally be at least one storage system located remotely from the processor 101. As shown in fig. 1, the memory 105, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and program instructions.
It should be noted that the network interface 104 may be connected to a receiver, a transmitter or other communication module, and the other communication module may include, but is not limited to, a WiFi module, a bluetooth module, etc., and it is understood that the computer device in the embodiment of the present invention may also include a receiver, a transmitter, other communication module, etc.
Processor 101 may be configured to call program instructions stored in memory 105 and cause computer device 10 to perform the following operations:
acquiring continuous ultrasonic images through the ultrasonic imaging equipment;
inputting the Nth frame of the continuous ultrasound images, together with the continuous ultrasound images, into a preset neural network model, wherein the preset neural network model is obtained through machine learning training using multiple groups of image data, and each group of data comprises: a sample ultrasound image and a category label of the sample ultrasound image, wherein N is an integer;
judging whether the Nth frame of image meets the acquisition requirement or not according to the preset neural network model;
if the condition is not met, guiding the acquisition mode of the continuous ultrasonic images through a reminding module in the ultrasonic imaging equipment so as to obtain the continuous ultrasonic images again;
if yes, continuing to judge whether the (N + 1) th frame image also meets the acquisition requirement after storing the (N) th frame image.
As shown in fig. 2, the method includes the following steps S201 to S205 (a minimal code sketch of this loop is given after the steps):
Step S201, acquiring continuous ultrasonic images through the ultrasonic imaging equipment;
step S202, inputting the Nth frame of the continuous ultrasound images, together with the continuous ultrasound images, into a preset neural network model, wherein the preset neural network model is obtained through machine learning training using multiple groups of image data, and each group of data comprises: a sample ultrasound image and a category label of the sample ultrasound image, wherein N is an integer;
step S203, judging whether the Nth frame of image meets the acquisition requirement according to the preset neural network model;
step S204, if the condition is not met, guiding the acquisition mode of the continuous ultrasonic images through a reminding module in the ultrasonic imaging equipment so as to obtain the continuous ultrasonic images again;
and step S205, if yes, continuing to judge whether the (N + 1) th frame image also meets the acquisition requirement after storing the (N) th frame image.
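For illustration only, the following is a minimal Python sketch of how steps S201 to S205 could be wired together on the device; the callables acquire_frame, judge_frame, prompt_operator and save_frame are hypothetical interfaces assumed for this sketch and are not specified by the application.

```python
# Illustrative sketch only; model and I/O interfaces are assumed, not specified by the application.
from collections import deque

def acquisition_loop(acquire_frame, judge_frame, prompt_operator, save_frame,
                     history_len=50):
    """Run the S201-S205 loop: acquire, judge the Nth frame against the
    preset neural network model, prompt the operator on failure, save on success."""
    history = deque(maxlen=history_len)   # recent continuous ultrasound frames
    while True:
        frame = acquire_frame()           # S201: one frame of the continuous image stream
        if frame is None:                 # stream ended
            break
        history.append(frame)
        # S202/S203: the Nth frame plus the surrounding continuous images go to the model
        meets_requirement = judge_frame(frame, list(history))
        if not meets_requirement:
            # S204: guide the operator via the reminding module and re-acquire
            prompt_operator("Adjust probe position/angle and rescan")
            continue
        # S205: keep the qualifying frame and move on to frame N+1
        save_frame(frame)
```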
From the above description, it can be seen that the following technical effects are achieved by the present application:
the adoption is passed through ultrasound imaging equipment acquires the mode of continuous ultrasound image, through with N frame image in the continuous ultrasound image and continuous ultrasound image inputs to predetermineeing the neural network model, has reached the basis predetermineeing the neural network model, judge whether N frame image satisfies the purpose of gathering the requirement to realized can having carried out the operation by non professional supersound doctor's personnel, let the patient need not to go to the specialty hospital and see a doctor, can carry out the ultrasound image collection and promote the ultrasound image quality of gathering when non professional supersound doctor operates the ultrasonic equipment by a wide margin at community hospital even at home, and then solved and needed professional to the ultrasound image collection of patient, and then can't realize long-range or the technical problem of long-range follow-up visit at home.
Based on this method, a professional sonographer does not need to operate the ultrasound device; community doctors, nurses or other personnel can operate it after some training, and the final medical judgment of the ultrasound images is still made remotely by a sonographer.
In step S201, an ultrasound operator acquires continuous ultrasound images through the ultrasound imaging device. The ultrasound imaging device may include an ultrasound probe and a signal processor, and acquires continuous ultrasound images after conversion by the signal processor.
In an alternative embodiment, the ultrasound operator may operate the device after receiving standardized operation training before using the ultrasound imaging device. Further, an input module is used to enter the patient's basic information and start a preset examination program.
As an alternative embodiment, the ultrasound operator places the ultrasound probe of the ultrasound imaging device on the neck of the subject for probing; the probe emits ultrasonic waves, receives the ultrasonic waves returned by the body and converts them into analog signals; and the signal processor converts the analog signals from the ultrasound probe into digital image signals and transmits them to the ultrasound imaging device.
In step S202, after frame extraction, the Nth frame of the continuous ultrasound images, together with the continuous ultrasound images, is input into the preset neural network model. It will be understood that the preset neural network model is stored as program instructions in a memory of the ultrasound imaging device.
As an alternative embodiment, the preset neural network model is obtained by machine learning training using a plurality of sets of image data. Preferably, the preset neural network model comprises a plurality of individual deep neural network models.
As an optional implementation, each of the multiple groups of training data includes: a sample ultrasound image and a category label of the sample ultrasound image, wherein N is an integer. The sample ultrasound images are obtained from historical data or are sampled in advance. The category labels of the sample ultrasound images are labeled in advance by a professional sonographer and include, but are not limited to, lesion position and lesion size.
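As an illustration of what one group of training data described above might look like, the following sketch uses assumed field names (lesion position as a bounding box, lesion size in millimeters); the application does not prescribe this representation.

```python
# Illustrative sketch; field names are assumed.
from dataclasses import dataclass
from typing import Optional, Tuple
import numpy as np

@dataclass
class UltrasoundSample:
    image: np.ndarray                         # sample ultrasound image (H x W, grayscale)
    category_label: str                       # e.g. "nodule" / "no_nodule", labeled by a sonographer
    lesion_position: Optional[Tuple[int, int, int, int]] = None  # optional bounding box (x, y, w, h)
    lesion_size_mm: Optional[float] = None    # optional measured lesion size
```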
In step S203, whether the Nth frame meets the acquisition requirement is judged according to the preset neural network model; that is, whether the acquisition of each single frame meets the preset acquisition requirement is judged.
The program instructions of the preset neural network model are carried on the ultrasound imaging device, so that the images on the ultrasound device are judged in real time, the acquisition process of a professional sonographer is simulated, feedback guidance is given to the ultrasound operator according to the judgment results, and good-quality key image frames are captured and saved.
As an optional implementation mode, the preset neural network model program instruction uses an off-line processing mode and is not influenced by the network quality.
As an optional implementation manner, the ultrasound imaging device is an all-in-one machine device or the ultrasound imaging device may be connected to an intelligent terminal.
In the step S204, if it is determined that the nth frame image does not meet the acquisition requirement according to the preset neural network model, the acquisition mode of the continuous ultrasound image is guided by a prompting module in the ultrasound imaging device, so as to obtain the continuous ultrasound image again.
As an optional implementation manner, if the picture acquisition does not meet the requirement, the prompt module of the ultrasound imaging device is controlled to provide synchronous operation guidance information.
As an alternative embodiment, the prompting module can be a video guidance or voice guidance mode.
In the above step S205, it is determined that the nth frame of image meets the acquisition requirement according to the preset neural network model, and then it is continuously determined whether the (N + 1) th frame of image also meets the acquisition requirement after the nth frame of image is saved.
As an optional implementation manner, if the nth frame image meets the acquisition requirement, the nth frame image is saved and then it is continuously determined whether the next frame image also meets the acquisition requirement.
As an optional implementation, if the picture acquisition meets the requirement, the picture with the highest quality (for example, the highest confidence) is selected and stored in the memory, and the Bluetooth or 5G module in the ultrasound imaging device performs network transmission when the network is good or idle.
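A minimal sketch of the "store the best picture, transmit when the network is good or idle" behaviour described above; the confidence scores, the memory object and the network-idle check are assumed interfaces.

```python
# Illustrative sketch; confidence scoring and the network check are assumed interfaces.
import queue

pending = queue.Queue()  # frames waiting for transmission

def store_best(frames, confidences, memory):
    """Keep only the highest-confidence frame of a qualifying group (S205)."""
    best_idx = max(range(len(frames)), key=lambda i: confidences[i])
    memory.append(frames[best_idx])
    pending.put(frames[best_idx])

def flush_when_idle(network_is_idle, transmit):
    """Send stored frames over Bluetooth/5G only when the link is good or idle."""
    while network_is_idle() and not pending.empty():
        transmit(pending.get())
```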
As a preferable example in this embodiment, the preset neural network model includes a plurality of individual deep neural network models, each of which is trained for a preset medical scenario and the corresponding medical information, and each of which provides at least one of the following functions: background template matching, nodule detection on a single frame, benign/malignant judgment of the lesion in a single frame, and confidence estimation over continuous images.
In specific implementation, each individual deep neural network model is trained for a preset medical scenario and medical information, and the models are used to realize functions such as background template matching, nodule detection on a single frame, benign/malignant judgment of the lesion in a single frame, and confidence estimation over continuous images.
Preferably, matching the background template is realized based on the deep neural network, and whether the background template is matched with the background template is judged.
Preferably, thyroid nodule detection of a single frame image is implemented based on a deep neural network.
Preferably, the lesion benign and malignant judgment of the thyroid nodule detection result in the single-frame image is realized based on the deep neural network.
Preferably, the confidence estimation of the continuous images is realized based on the deep neural network, so as to determine the optimal image frame.
Based on the above steps, a single background template matching model in the preset neural network model judges whether the area and angle of the current ultrasound probe meet the requirements; a nodule detection model detects whether a thyroid nodule is present in the current ultrasound field of view; a benign/malignant discrimination model further predicts the risk that a detected nodule is malignant; and finally a confidence estimation model judges, from a segment of continuous video together with the model results on template matching, nodule presence and malignancy risk, how reliable the nodule detection and benign/malignant discrimination are.
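Read as a pipeline, the four sub-models could be orchestrated roughly as in the sketch below, with each model abstracted as a callable; the interfaces and dictionary keys are assumptions for illustration, not the application's concrete design.

```python
# Illustrative orchestration of the four sub-models; all interfaces are assumed.
def run_pipeline(frame, recent_frames,
                 template_match, detect_nodule, classify_nodule, estimate_confidence):
    result = {"template_ok": template_match(frame)}       # probe area/angle check
    result["nodule_box"] = detect_nodule(frame) if result["template_ok"] else None
    if result["nodule_box"] is not None:
        x, y, w, h = result["nodule_box"]
        crop = frame[y:y + h, x:x + w]                     # local nodule image
        result["malignancy_risk"] = classify_nodule(crop)  # benign/malignant risk
    else:
        result["malignancy_risk"] = None
    # Confidence model sees the continuous images plus the per-frame results
    result["confidence"] = estimate_confidence(recent_frames, result)
    return result
```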
Preferably, in the embodiment of the present application, when the acquired image and the patient information are transmitted, the relevant detection results of the preset neural network model, including but not limited to the nodule position labeling, the size measurement, and the benign and malignant risk prediction, may be attached at the same time for the reference of the doctor.
As a preferable example in this embodiment, the preset neural network model includes a first deep neural network model. The ultrasound images include thyroid ultrasound images; different parts of the thyroid, and preset angles for each part, are acquired so that the thyroid ultrasound images cover at least the preset positions and preset angles. The first deep neural network model identifies whether the background in the thyroid ultrasound image matches a preset background; it is obtained through machine learning training using multiple groups of image data, each group comprising neck ultrasound image samples acquired at different positions and different angles. If the background matches, the placement requirement for the ultrasound probe of the ultrasound imaging device is met; if not, the placement requirement is not met.
In specific implementation, the first deep neural network model serves as the background template matching model. Typically, a complete thyroid ultrasound acquisition needs to cover different parts of the thyroid, such as the left lobe, right lobe and isthmus, as well as different angles, such as transverse and longitudinal sections. The ultrasound imaging device prompts the ultrasound operator to acquire images of the specified parts and angles in a preset order.
The first deep neural network model is built to identify the background in the ultrasound image, such as the thyroid edge and the trachea, match it against the preset background, and judge whether the ultrasound operator has placed the ultrasound probe as required. The first deep neural network model is obtained by training a classification model with neck ultrasound images acquired at different positions and angles.
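A minimal PyTorch sketch of such a position/angle classification model, assuming the preset positions and angles are enumerated as discrete classes; the architecture, class list and threshold are illustrative only.

```python
# Illustrative classifier; architecture and class list are assumptions, not the application's design.
import torch
import torch.nn as nn

POSITION_ANGLE_CLASSES = ["left_transverse", "left_longitudinal",
                          "right_transverse", "right_longitudinal",
                          "isthmus_transverse", "other"]

class BackgroundTemplateNet(nn.Module):
    def __init__(self, num_classes=len(POSITION_ANGLE_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                 # x: (B, 1, H, W) grayscale ultrasound frames
        h = self.features(x).flatten(1)
        return self.classifier(h)         # logits over position/angle classes

def matches_expected(model, frame, expected_class, threshold=0.5):
    """True if the predicted probability of the expected position/angle exceeds threshold."""
    with torch.no_grad():
        probs = torch.softmax(model(frame.unsqueeze(0)), dim=1)[0]
    return probs[POSITION_ANGLE_CLASSES.index(expected_class)].item() >= threshold
```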
As a preferable example in this embodiment, the preset neural network model includes a second deep neural network model. The ultrasound images include thyroid ultrasound images, and the second deep neural network model identifies whether a thyroid nodule is present in a single frame of the thyroid ultrasound images. It is obtained through machine learning training using multiple groups of image data, each group comprising thyroid ultrasound image samples with and without thyroid nodules, where the images containing thyroid nodules are pre-labeled with nodule position information and/or nodule size information. If a thyroid nodule is detected, a local image of the nodule is cropped and passed as input to the third deep neural network model.
In specific implementation, the second deep neural network model is a nodule detection model based on single frames. It detects, from a single frame of the real-time ultrasound images, whether a thyroid nodule is present. If one is detected, the local image of the nodule is cropped and passed to the third deep neural network model. The model is obtained by training a detection model with thyroid ultrasound images that contain nodules and thyroid ultrasound images that do not, where the images containing nodules have nodule position information labeled by a physician.
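A sketch of how the single-frame detection output could be consumed to produce the local nodule image for the third model; the detector is abstracted as a callable returning boxes and scores, since the application does not fix a particular detection architecture.

```python
# Illustrative sketch; the detector interface (boxes + scores) is an assumption.
import numpy as np

def detect_and_crop(frame, detector, score_threshold=0.5):
    """Run the single-frame nodule detector and crop the best nodule region,
    which is then handed to the benign/malignant model (third model)."""
    boxes, scores = detector(frame)            # e.g. [(x, y, w, h), ...], [0.91, ...]
    if len(scores) == 0 or max(scores) < score_threshold:
        return None                            # no thyroid nodule in this frame
    x, y, w, h = boxes[int(np.argmax(scores))]
    return frame[y:y + h, x:x + w]             # local image of the detected nodule
```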
Preferably, in this embodiment, the third deep neural network model receives the local thyroid nodule image output by the second deep neural network model and identifies whether the thyroid nodule in the single frame of the thyroid ultrasound image is benign or malignant. The third deep neural network model is obtained through machine learning training using multiple groups of image data, each group comprising pre-labeled local image samples of benign and malignant thyroid nodules.
In specific implementation, the third deep neural network model is a benign/malignant discrimination model based on single frames: it receives the local nodule image passed from the second deep neural network model and discriminates, from that single nodule image, whether the nodule is benign or malignant. The third deep neural network model is obtained by training a classification model with physician-labeled local images of benign and malignant thyroid nodules.
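For illustration, a PyTorch training sketch for such a benign/malignant classifier over physician-labeled nodule crops; the network itself, the data loader and the hyperparameters are assumed.

```python
# Illustrative training sketch for the benign/malignant model; dataset and network are assumed.
import torch
import torch.nn as nn

def train_malignancy_classifier(model, loader, epochs=10, lr=1e-4, device="cpu"):
    """Train a binary classifier on physician-labeled nodule crops
    (label 1 = malignant, 0 = benign)."""
    model.to(device)
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for crops, labels in loader:           # crops: (B, 1, H, W); labels: (B,)
            crops, labels = crops.to(device), labels.float().to(device)
            logits = model(crops).squeeze(1)   # model outputs a single logit per crop
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```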
As a preference in this embodiment, the preset neural network model also includes a fourth deep neural network model. The ultrasound images include continuous thyroid ultrasound images within a preset threshold time; these images, together with the background template matching results, the single-frame nodule detection results and the single-frame benign/malignant judgment results produced within the preset threshold time, are input into the fourth deep neural network model, which outputs a confidence score for each frame within the preset threshold time and an overall confidence for the continuous thyroid ultrasound images.
In specific implementation, the fourth deep neural network model is a confidence estimation model for continuous images. Its inputs are the continuous images within a short time, such as 2 to 5 seconds, and the output values of the first, second and third deep neural network models within the same period; its outputs are the confidence of each frame within that period and the overall confidence of the image segment. It should be noted that a high confidence means that the images contain the key medical information of the thyroid nodule.
The fourth deep neural network model is obtained by training a prediction model with continuous ultrasound image segments collected by different operators, using the image segments collected by professional sonographers and the key frames they captured as the standard.
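One possible shape for such a sequence-level confidence model, sketched with a small recurrent network over per-frame features plus the outputs of the first three models; the feature sizes, the architecture and the auxiliary input layout are assumptions.

```python
# Illustrative confidence-estimation sketch; feature sizes and architecture are assumed.
import torch
import torch.nn as nn

class ConfidenceNet(nn.Module):
    """Scores a short clip (e.g. 2-5 s of frames) plus the per-frame outputs of the
    template-matching, nodule-detection and benign/malignant models."""
    def __init__(self, frame_feat_dim=64, aux_dim=3, hidden=64):
        super().__init__()
        self.frame_encoder = nn.Sequential(     # very small per-frame encoder
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, frame_feat_dim), nn.ReLU(),
        )
        self.rnn = nn.GRU(frame_feat_dim + aux_dim, hidden, batch_first=True)
        self.per_frame_head = nn.Linear(hidden, 1)   # confidence of each frame
        self.overall_head = nn.Linear(hidden, 1)     # confidence of the whole clip

    def forward(self, frames, aux):
        # frames: (B, T, 1, H, W); aux: (B, T, 3) = [template_ok, nodule_prob, malignancy_risk]
        b, t = frames.shape[:2]
        feats = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, last = self.rnn(torch.cat([feats, aux], dim=-1))
        per_frame = torch.sigmoid(self.per_frame_head(out)).squeeze(-1)   # (B, T)
        overall = torch.sigmoid(self.overall_head(last[-1])).squeeze(-1)  # (B,)
        return per_frame, overall
```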
As a preference in this embodiment, if the overall confidence of the continuous thyroid ultrasound images is lower than a threshold, a prompting module in the ultrasound imaging device guides an acquisition mode of the continuous thyroid ultrasound images to reacquire the continuous thyroid ultrasound images; and if the overall confidence of the continuous thyroid ultrasound images is higher than a threshold value, selecting a picture with the highest confidence according to the confidence of each frame of image.
In particular, for a group of continuous images, if the confidence is below the threshold, re-acquisition is required. If the overall confidence of the group is high and all frames are above the threshold, the acquired images are all of high quality and the frames with the highest confidence are selected.
As a preference in this embodiment, if the acquisition requirement is met, saving the Nth frame and continuing to judge whether the (N+1)th frame also meets the acquisition requirement includes: selecting, from the Nth frame through the (N+M)th frame, a target frame whose image quality meets the requirement, storing it, and transmitting it over the network through the Bluetooth module or 5G module of the ultrasound imaging device within a preset time, wherein M is an integer.
In specific implementation, if the background template of the first deep neural network model does not match, or the overall confidence of the continuous images does not meet the requirement, operation guidance information for the corresponding situation is fed back according to the output of the preset neural network model. If the background template matches and the overall confidence of the continuous images meets the requirement, the preset neural network model captures and stores the several key frames with the highest confidence and, at the same time, annotates part of the model results on the images, including the nodule positions and the computed nodule sizes.
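A sketch of this decision logic; the guidance messages, thresholds and annotation fields are placeholders assumed for illustration.

```python
# Illustrative decision logic; messages, thresholds and annotation fields are assumed.
def feedback_or_save(results, frames, confidence_threshold=0.7, top_k=3):
    """results: per-frame dicts carrying the model outputs and confidence scores;
    frames: the matching images."""
    if not all(r["template_ok"] for r in results):
        return ("guide", "Probe not on the required position/angle - please adjust")
    if results[-1]["overall_confidence"] < confidence_threshold:
        return ("guide", "Image quality too low - slow down and rescan this region")
    # Keep the top-k most confident key frames, annotated with model results
    ranked = sorted(range(len(frames)),
                    key=lambda i: results[i]["frame_confidence"], reverse=True)[:top_k]
    saved = [{"image": frames[i],
              "nodule_box": results[i]["nodule_box"],
              "malignancy_risk": results[i]["malignancy_risk"]} for i in ranked]
    return ("save", saved)
```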
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present application, there is also provided an apparatus for implementing the above ultrasound image acquisition method based on an ultrasound imaging device, as shown in fig. 3, the apparatus includes:
an image acquisition module 301, configured to acquire a continuous ultrasound image through the ultrasound imaging device;
a neural network module 302, configured to input the nth frame image in the continuous ultrasound images and the continuous ultrasound images into a preset neural network model, where the preset neural network model is obtained by using multiple sets of image data through machine learning training, and each set of data in the multiple sets of data includes: a sample ultrasound image and a category label of the sample ultrasound image, wherein N is an integer;
a judging module 303, configured to judge whether the nth frame of image meets an acquisition requirement according to the preset neural network model;
a guidance module 304, configured to guide, by a prompting module in the ultrasound imaging apparatus, an acquisition mode of the continuous ultrasound image to reacquire the continuous ultrasound image if the condition is not met;
and the execution module 305 is configured to, if the requirement is met, continue to determine whether the N +1 th frame of image also meets the acquisition requirement after the nth frame of image is saved.
In the image acquisition module 301 of the embodiment of the present application, an ultrasound operator obtains continuous ultrasound images through the ultrasound imaging device. The ultrasound imaging device may include an ultrasound probe and a signal processor, and acquires continuous ultrasound images after conversion by the signal processor.
In an alternative embodiment, the ultrasound operator may operate the device after receiving standardized operation training before using the ultrasound imaging device. Further, an input module is used to enter the patient's basic information and start a preset examination program.
As an alternative embodiment, the ultrasound operator places an ultrasound probe in the ultrasound imaging device on the neck of the examination object for probing, the ultrasound probe emits ultrasonic waves, and the ultrasonic waves returned by the human body are received and converted into analog signals; and the signal processor processes the analog signals transmitted by the ultrasonic probe into digital image signals and transmits the digital image signals to the ultrasonic image equipment.
In the neural network module 302 of the embodiment of the application, after the frame extraction, the nth frame image in the continuous ultrasonic images and the continuous ultrasonic images are input to a preset neural network model. It is understood that the predetermined neural network model is program instructions in a memory of the ultrasound imaging device.
As an alternative embodiment, the preset neural network model is obtained by machine learning training using a plurality of sets of image data. Preferably, the preset neural network model comprises a plurality of individual deep neural network models.
As an optional implementation, each of the plurality of sets of training data includes: a sample ultrasound image and a class label of the sample ultrasound image, wherein N is an integer. The sample ultrasound image is obtained by historical data or sampling in advance. The category label of the sample ultrasound image is labeled in advance by a professional sonographer, and the category label includes, but is not limited to, a lesion position and a lesion size.
In the determining module 303 of the embodiment of the application, whether the nth frame image meets the acquisition requirement is determined according to the preset neural network model. And judging whether the acquisition of each single-frame image meets the preset acquisition requirement.
The method comprises the steps of carrying a preset neural network model program instruction on an ultrasonic imaging device, judging images on the ultrasonic device in real time, simulating the acquisition process of a professional ultrasonic doctor, giving feedback guidance to an ultrasonic operator according to the judgment result, and intercepting and storing key image frames with good quality.
As an optional implementation mode, the preset neural network model program instruction uses an off-line processing mode and is not influenced by the network quality.
As an optional implementation manner, the ultrasound imaging device is an all-in-one machine device or the ultrasound imaging device may be connected to an intelligent terminal.
In the guidance module 304 of the embodiment of the application, if it is determined that the nth frame image does not meet the acquisition requirement according to the preset neural network model, the guidance module in the ultrasound imaging device guides the acquisition mode of the continuous ultrasound image so as to reacquire the continuous ultrasound image.
As an optional implementation manner, if the picture acquisition does not meet the requirement, the prompt module of the ultrasound imaging device is controlled to provide synchronous operation guidance information.
As an alternative embodiment, the prompting module can be a video guidance or voice guidance mode.
In the execution module 305 of the embodiment of the application, if it is determined according to the preset neural network model that the Nth frame meets the acquisition requirement, the Nth frame is saved and it is then judged whether the (N+1)th frame also meets the acquisition requirement.
As an optional implementation manner, if the nth frame image meets the acquisition requirement, the nth frame image is saved and then it is continuously determined whether the next frame image also meets the acquisition requirement.
As an optional implementation manner, if the picture acquisition meets the requirement, the picture with the highest quality (for example, the confidence coefficient is highest) is selected and stored in the memory, and the bluetooth or the 5G module in the ultrasound imaging device is used for network transmission when the network is good or idle.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present application is not limited to any specific combination of hardware and software.
In order to better understand the above flow, the following explains the above technical solutions with reference to preferred embodiments, but the technical solutions of the embodiments of the present invention are not limited.
According to the ultrasound image acquisition method based on the ultrasound imaging device in the application, the device can be operated by personnel who are not professional sonographers, so that the patient does not need to visit a specialized hospital; ultrasound images can be acquired at a community hospital or even at home, greatly reducing the cost of seeing a doctor, and the quality of the ultrasound images acquired by non-professional sonographers operating the ultrasound device is greatly improved.
Fig. 4 is a schematic flowchart of an ultrasound image acquisition method based on an ultrasound imaging device according to an embodiment of the application, involving: patient 1, ultrasound operator 2 and professional sonographer 3. In specific implementation, a person other than a professional sonographer, i.e. the ultrasound operator 2, can operate the ultrasound imaging device to acquire ultrasound images of patient 1. The preset neural network model built into the ultrasound imaging device provides and feeds back operation guidance and performs image screening and annotation; the screened and annotated high-confidence thyroid nodule images are transmitted over the network to the professional sonographer 3, who finally makes the medical judgment.
Fig. 5 is a schematic structural diagram of an ultrasound imaging apparatus according to an embodiment of the present application, in which a host (including a power supply) includes: an ultrasonic probe 501, a signal processor 502, a deep learning processor 503, a loudspeaker 504, a display screen 505, a control chip 506, a storage unit 507 and a Bluetooth/5G module 508.
The ultrasound probe 501 serves as the generating and receiving device for ultrasound. The host (including the power supply) is the main body of the ultrasound device and is powered by a battery or an external power supply. The signal processor converts the analog signals transmitted by the ultrasound probe into digital image data. One of the display screens 505 is used for displaying real-time ultrasound images: the ultrasound images processed by the signal processor are displayed on the screen in real time.
In addition to the conventional modules described above, the present invention further includes:
The deep learning processor 503 is configured to input the ultrasound image processed by the signal processor into a preset neural network model, which judges whether the current image meets the acquisition requirement, and to output the model judgment result to the control chip 506.
The control chip 506 is used to control the system hardware according to the model judgment result. If the image acquisition does not meet the requirement, it controls the display screen and the speaker to provide synchronous operation guidance. If the image acquisition meets the requirement, the image with the highest quality is selected and stored in the memory, and network transmission is performed via the Bluetooth or 5G module at a suitable time, as sketched below.
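The following minimal sketch shows how such a dispatch might route the model judgment to the hardware modules. The `display`, `speaker`, `storage` and `radio` wrappers and their method names are hypothetical; this is an illustration of the control logic, not the device firmware.

```python
# Illustrative controller dispatch, assuming hypothetical hardware wrappers
# (display, speaker, storage, radio); a sketch, not the device firmware.

from typing import List, Tuple


def dispatch(meets_requirement: bool,
             scored_frames: List[Tuple[float, bytes]],
             display, speaker, storage, radio) -> None:
    """Route the model judgment result to the hardware modules."""
    if not meets_requirement:
        # Synchronous guidance on the screen and the speaker.
        display.show_guidance_video()
        speaker.play_prompt("Adjust the probe position as shown on screen.")
        return
    # Keep the frame with the highest confidence score.
    best_confidence, best_frame = max(scored_frames, key=lambda t: t[0])
    storage.save(best_frame)
    # Transmit later, when the Bluetooth/5G link is idle or in good condition.
    if radio.link_is_idle():
        radio.send(best_frame)
    else:
        radio.queue_for_later(best_frame)
```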
The other display screen 505 is used for synchronous operation guidance and demonstrates, in video form, the operation that the operator needs to perform. It can be understood that an additional display screen may be added to the ultrasound imaging device for this purpose.
The speaker 504 is used to play voice prompts that guide the operator.
The storage unit 507 is configured to temporarily store the acquired ultrasound image.
The Bluetooth/5G module 508 is the network transmission module and is configured to connect to a 5G network, or to other network devices via Bluetooth, and to send the acquired images and the related patient information to a professional sonographer.
As a preferred feature of this embodiment, the deep learning processor 503, the display screen 505, the speaker 504, the storage unit 507 and the Bluetooth/5G module 508 may be integrated directly in the ultrasound device host, or may be implemented by connecting a conventional ultrasound device to other electronic devices such as a mobile phone or a computer and using their hardware, for example the chip and the display screen of a mobile phone.
Fig. 6 is a schematic diagram illustrating a preset neural network model in an ultrasound image acquisition method based on an ultrasound imaging device according to an embodiment of the present application.
Taking thyroid ultrasound examination as an example, the method in the present application improves the quality of the ultrasound images acquired by an ultrasound operator, so that the image quality reaches or approaches the level of images acquired directly by a professional sonographer.
Single-frame images are extracted from the continuous thyroid ultrasound images and processed by the preset neural network models as follows.
(1) The first neural network model: background template matching model
A complete thyroid ultrasound acquisition needs to cover different parts of the thyroid (left lobe, right lobe, isthmus) from different angles (transverse and longitudinal sections). The device of the invention prompts the operator to acquire images of the specified positions and angles in sequence. This model identifies the background in the ultrasound image (such as the thyroid edge and the trachea), matches it against a preset background, and judges whether the operator has placed the ultrasound probe as required; see the sketch below. The model is obtained by training a classification model on neck ultrasound images acquired at different positions and angles.
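A minimal PyTorch sketch of the background-matching idea follows: classify the frame into one of the expected (part, angle) acquisition positions and compare the prediction with the position the operator was asked to scan. The class layout, network architecture and names are assumptions for illustration; the patent itself only specifies a classification model trained on labeled neck images.

```python
# Minimal sketch of background template matching as position classification.
# Class layout and architecture are assumed, not taken from the patent.

import torch
import torch.nn as nn

POSITIONS = [  # assumed enumeration of acquisition positions
    "left_lobe_transverse", "left_lobe_longitudinal",
    "right_lobe_transverse", "right_lobe_longitudinal",
    "isthmus_transverse", "isthmus_longitudinal",
]


class BackgroundClassifier(nn.Module):
    def __init__(self, num_classes: int = len(POSITIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


def probe_placed_correctly(model: nn.Module, frame: torch.Tensor,
                           expected_position: str) -> bool:
    """True when the predicted background matches the requested position."""
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))   # frame: 1 x H x W grayscale
        predicted = POSITIONS[int(logits.argmax(1))]
    return predicted == expected_position


if __name__ == "__main__":
    # Random grayscale frame; in practice the model would be trained on
    # labeled neck ultrasound images from different positions and angles.
    model = BackgroundClassifier().eval()
    print(probe_placed_correctly(model, torch.rand(1, 224, 224),
                                 "left_lobe_transverse"))
```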
(2) The second neural network model: nodule detection model based on a single-frame image
Based on single frames from the real-time ultrasound images, this model detects whether thyroid nodules are present. If a nodule is detected, the local image of the nodule is cropped and passed to the third neural network model, as sketched below. The model is obtained by training a detection model on thyroid ultrasound images that contain nodules and thyroid ultrasound images that do not, where the nodule position information in the images containing nodules is annotated by a doctor in advance.
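For clarity, the crop-and-forward step can be sketched as follows. The detector itself is abstracted as a callable returning bounding boxes with scores (an assumption), since the patent does not fix a particular detection architecture; the function and type names are illustrative.

```python
# Sketch of the single-frame nodule detection step: run a trained detector
# on one frame, crop each detected nodule, and hand the crops to the third
# model. The detector is abstracted as a callable (assumption).

from typing import Callable, List, Tuple

import torch

Box = Tuple[int, int, int, int]  # x1, y1, x2, y2 in pixel coordinates


def crop_nodules(frame: torch.Tensor,
                 detector: Callable[[torch.Tensor], List[Tuple[Box, float]]],
                 score_threshold: float = 0.5) -> List[torch.Tensor]:
    """Return local nodule images cropped from a 1 x H x W frame."""
    crops = []
    for (x1, y1, x2, y2), score in detector(frame):
        if score < score_threshold:
            continue
        crops.append(frame[:, y1:y2, x1:x2])  # local image of the nodule
    return crops
```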
(3) The third neural network model: this model receives the local nodule image passed from the second neural network model and judges, from the single nodule image, whether the nodule is benign or malignant; see the sketch below. It is obtained by training a classification model on local thyroid nodule images whose benign or malignant nature has been annotated by a doctor.
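A minimal sketch of the benign/malignant judgment on a cropped nodule image is given below, using a small two-class PyTorch classifier. The architecture and names are assumptions; the patent only specifies a classification model trained on doctor-labeled nodule crops.

```python
# Minimal sketch of benign/malignant classification on a nodule crop.
# Architecture and names are assumed for illustration only.

import torch
import torch.nn as nn


class NoduleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two classes: benign / malignant

    def forward(self, crop: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(crop).flatten(1))


def judge_nodule(model: nn.Module, crop: torch.Tensor) -> str:
    """crop: 1 x H x W local nodule image, resized to a fixed size."""
    with torch.no_grad():
        probs = torch.softmax(model(crop.unsqueeze(0)), dim=1)[0]
    return "malignant" if probs[1] > probs[0] else "benign"
```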
(4) The fourth neural network model: a confidence estimation model based on continuous images. Its inputs are the continuous images within a 2-5 s window together with the output values of the first, second and third neural network models over the same period, and its outputs are the confidence of each frame within that period and the overall confidence of the image sequence; a sketch follows below. A high confidence indicates that the images contain the key medical information of the thyroid nodule. The model is obtained by training a prediction model on continuous ultrasound image segments collected by different operators, with the segments collected by a professional sonographer and the key frames extracted from them used as the gold standard.
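One possible shape of such a model is sketched below: a recurrent network consumes, for each frame in the window, a feature vector that fuses the outputs of the first three models, and predicts a per-frame confidence plus an overall confidence for the segment. The GRU-based layout and the feature dimension are assumptions; the patent does not prescribe a specific sequence architecture.

```python
# Sketch of confidence estimation over a 2-5 s window of frames.
# GRU layout and per-frame feature dimension are assumptions.

import torch
import torch.nn as nn


class SegmentConfidence(nn.Module):
    def __init__(self, per_frame_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(per_frame_dim, hidden, batch_first=True)
        self.frame_head = nn.Linear(hidden, 1)    # confidence of each frame
        self.segment_head = nn.Linear(hidden, 1)  # confidence of the segment

    def forward(self, x: torch.Tensor):
        """x: batch x T x per_frame_dim (T frames in the 2-5 s window)."""
        states, last = self.rnn(x)
        frame_conf = torch.sigmoid(self.frame_head(states)).squeeze(-1)
        segment_conf = torch.sigmoid(self.segment_head(last[-1])).squeeze(-1)
        return frame_conf, segment_conf


if __name__ == "__main__":
    model = SegmentConfidence().eval()
    window = torch.rand(1, 50, 8)   # e.g. 50 frames of fused model outputs
    per_frame, overall = model(window)
    print(per_frame.shape, float(overall))
```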
If the background templates do not match or the overall confidence of the continuous images does not meet the requirement, operation guidance for the corresponding situation is fed back according to the output of the fourth neural network model.
If the background template matches and the overall confidence of the continuous images meets the requirement, the fourth neural network model extracts the several key frames with the highest confidence for storage, and part of the model results are marked on the images, including but not limited to the position of the nodule and its calculated size; see the sketch below.
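The key-frame selection and annotation step can be illustrated as follows: keep the K frames with the highest confidence and attach the nodule position and an estimated physical size (pixel extent times an assumed pixel spacing). The names and the flat metadata format are hypothetical.

```python
# Sketch of key-frame selection and annotation: keep the top-k frames by
# confidence and attach nodule position and estimated size. Names are
# illustrative assumptions.

from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # x1, y1, x2, y2


def select_and_annotate(frames: List[bytes],
                        confidences: List[float],
                        boxes: List[Box],
                        pixel_spacing_mm: float,
                        k: int = 3) -> List[Dict]:
    """Return the top-k frames annotated with nodule box and estimated size."""
    order = sorted(range(len(frames)), key=lambda i: confidences[i],
                   reverse=True)[:k]
    keyframes = []
    for i in order:
        x1, y1, x2, y2 = boxes[i]
        keyframes.append({
            "frame": frames[i],
            "confidence": confidences[i],
            "nodule_box": boxes[i],
            "nodule_size_mm": ((x2 - x1) * pixel_spacing_mm,
                               (y2 - y1) * pixel_spacing_mm),
        })
    return keyframes
```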
The ultrasound imaging device in the present application can be operated by personnel who are not professional sonographers, so the patient does not need to visit a specialized hospital; ultrasound images can be acquired in a community hospital or even at home (for example, an ultrasound operator carries a portable ultrasound device to the patient's home), which greatly reduces the cost of seeking medical care. Professional sonographers are freed from the time spent acquiring ultrasound images and can focus on medical judgment, which greatly improves their working efficiency and the utilization of medical resources. In addition, the quality of the ultrasound images acquired by a non-professional operator can be greatly improved.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An ultrasonic image acquisition method based on an ultrasonic imaging device is characterized by comprising the following steps:
acquiring continuous ultrasonic images through the ultrasonic imaging equipment;
inputting the Nth frame image in the continuous ultrasonic images, together with the continuous ultrasonic images, into a preset neural network model, wherein the preset neural network model is obtained through machine learning training using multiple groups of image data, and each group of data in the multiple groups of data comprises: a sample ultrasound image and a category label of the sample ultrasound image, wherein N is an integer;
judging whether the Nth frame of image meets the acquisition requirement or not according to the preset neural network model;
if the condition is not met, guiding the acquisition mode of the continuous ultrasonic images through a reminding module in the ultrasonic imaging equipment so as to obtain the continuous ultrasonic images again;
if yes, continuing to judge whether the (N+1)th frame image also meets the acquisition requirement after storing the Nth frame image.
2. The method of claim 1, wherein the preset neural network model comprises: a plurality of individual deep neural network models, each of which is trained for a preset medical scenario and preset medical information, and the individual deep neural network models provide at least one of the following functions:
background template matching, nodule detection on a single-frame image, judgment of whether the lesion in a single-frame image is benign or malignant, and confidence estimation on continuous images.
3. The method of claim 2, wherein the preset neural network model comprises: a first deep neural network model, the ultrasound images including thyroid ultrasound images,
acquiring ultrasound images of different parts of the thyroid at the preset angles corresponding to each part, so as to obtain thyroid ultrasound images that at least cover the preset parts and preset angles;
identifying, through the first deep neural network model, whether the background in the thyroid ultrasound image matches a preset background, wherein the first deep neural network model is obtained through machine learning training using multiple groups of image data, and each group of data in the multiple groups of data comprises: neck ultrasound image samples acquired at different positions and different angles;
if they match, the placement requirement of the ultrasonic probe of the ultrasonic imaging equipment is met;
and if they do not match, the placement requirement of the ultrasonic probe of the ultrasonic imaging equipment is not met.
4. The method of claim 2, wherein the preset neural network model comprises: a second deep neural network model, the ultrasound images including thyroid ultrasound images,
identifying whether thyroid nodules exist in a single-frame image of the thyroid ultrasound image through the second deep neural network model, wherein the second deep neural network model is obtained through machine learning training using multiple groups of image data, and each group of data in the multiple groups of data comprises: thyroid ultrasound image samples containing thyroid nodules and thyroid ultrasound image samples not containing thyroid nodules, wherein nodule position information and/or nodule size information is marked in advance in the thyroid ultrasound images containing thyroid nodules;
if a thyroid nodule is detected, a local image of the thyroid nodule is cropped and transmitted to the third deep neural network model as an input.
5. The method of claim 4,
receiving, by the third deep neural network model, the local image of the thyroid nodule input by the second deep neural network model, and judging whether the nodule identified by the second deep neural network model is a benign nodule or a malignant nodule, wherein the third deep neural network model is obtained through machine learning training using multiple groups of image data, and each group of data in the multiple groups of data comprises: local thyroid nodule image samples pre-labeled as benign or malignant.
6. The method of claim 1, wherein the preset neural network model further comprises: a fourth deep neural network model, the ultrasound images including thyroid ultrasound images,
inputting, into the fourth deep neural network model, the continuous thyroid ultrasound images within a preset threshold time, together with the background template matching results, the nodule detection results and the lesion benign/malignant judgment results output for the single-frame thyroid ultrasound images within the preset threshold time;
and outputting the confidence score of each frame of thyroid ultrasound images in the preset threshold time and the overall confidence of the continuous thyroid ultrasound images based on the fourth deep neural network model.
7. The method of claim 6, further comprising:
if the overall confidence of the continuous thyroid ultrasound images is lower than a threshold value, guiding the acquisition mode of the continuous thyroid ultrasound images through a reminding module in the ultrasound imaging equipment so as to reacquire the continuous thyroid ultrasound images;
and if the overall confidence of the continuous thyroid ultrasound images is higher than a threshold value, selecting a picture with the highest confidence according to the confidence of each frame of image.
8. The method of claim 1, wherein if yes, continuing to determine whether the (N+1)th frame image also meets the acquisition requirement after saving the Nth frame image comprises:
and if the Nth frame image meets the acquisition requirement, selecting a target frame image whose image quality meets the requirement from the Nth frame image, the (N+1)th frame image, …, and the (N+M)th frame image for storage, and performing network transmission through a communication module of the ultrasonic imaging equipment within a preset time, wherein M is an integer.
9. An ultrasonic image acquisition apparatus based on ultrasonic imaging equipment, characterized by comprising:
the image acquisition module is used for acquiring continuous ultrasonic images through the ultrasonic imaging equipment;
a neural network module, configured to input the Nth frame image in the continuous ultrasound images, together with the continuous ultrasound images, into a preset neural network model, wherein the preset neural network model is obtained through machine learning training using multiple sets of image data, and each set of data in the multiple sets of data comprises: a sample ultrasound image and a category label of the sample ultrasound image, wherein N is an integer;
the judging module is used for judging whether the Nth frame of image meets the acquisition requirement or not according to the preset neural network model;
the guiding module is used for guiding the acquisition mode of the continuous ultrasonic images through a reminding module in the ultrasonic imaging equipment when the condition is not met so as to obtain the continuous ultrasonic images again;
and the execution module is used for, when the Nth frame image meets the acquisition requirement, storing the Nth frame image and then continuing to judge whether the (N+1)th frame image also meets the acquisition requirement.
10. An ultrasound imaging apparatus comprising: an apparatus body, characterized in that the apparatus body comprises:
the deep learning processor is used for inputting an ultrasonic image into a pre-trained neural network model and judging whether the acquisition of the ultrasonic image meets the requirement or not through the neural network model;
the controller is configured to receive the judgment result output by the deep learning processor and to control a display or a memory to execute a preset operation;
when the judgment result received by the controller is that the acquisition of the ultrasonic image does not meet the requirement, the display is controlled to guide the acquisition mode of the continuous ultrasonic images;
when the judgment result received by the controller is that the acquisition of the ultrasonic image meets the requirement, the memory is controlled to store the ultrasonic image;
the wireless transmission module is used for performing network transmission of the ultrasonic image;
the voice prompt module is used for guiding the acquisition mode of the continuous ultrasonic images in cooperation with the display;
the ultrasonic probe is used for transmitting ultrasonic waves to the neck region of the user and converting the received returned ultrasonic waves to obtain analog signals;
and the signal processor is used for converting the analog signals transmitted by the ultrasonic probe into digital image signals and transmitting the digital image signals to the deep learning processor.
CN202111246132.9A 2021-10-25 2021-10-25 Ultrasonic image acquisition method based on ultrasonic imaging equipment and related device Pending CN113842166A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111246132.9A CN113842166A (en) 2021-10-25 2021-10-25 Ultrasonic image acquisition method based on ultrasonic imaging equipment and related device

Publications (1)

Publication Number Publication Date
CN113842166A true CN113842166A (en) 2021-12-28

Family

ID=78982998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111246132.9A Pending CN113842166A (en) 2021-10-25 2021-10-25 Ultrasonic image acquisition method based on ultrasonic imaging equipment and related device

Country Status (1)

Country Link
CN (1) CN113842166A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180144466A1 (en) * 2016-11-23 2018-05-24 General Electric Company Deep learning medical systems and methods for image acquisition
CN109069119A (en) * 2016-04-26 2018-12-21 皇家飞利浦有限公司 3D rendering synthesis for ultrasonic fetal imaging
CN109447940A (en) * 2018-08-28 2019-03-08 天津医科大学肿瘤医院 Convolutional neural networks training method, ultrasound image recognition positioning method and system
CN109671062A (en) * 2018-12-11 2019-04-23 成都智能迭迦科技合伙企业(有限合伙) Ultrasound image detection method, device, electronic equipment and readable storage medium storing program for executing
CN109674494A (en) * 2019-01-29 2019-04-26 深圳瀚维智能医疗科技有限公司 Ultrasonic scan real-time control method, device, storage medium and computer equipment
CN111598875A (en) * 2020-05-18 2020-08-28 北京小白世纪网络科技有限公司 Method, system and device for building thyroid nodule automatic detection model
CN112215842A (en) * 2020-11-04 2021-01-12 上海市瑞金康复医院 Malignant nodule edge detection image processing method based on benign thyroid template
CN112294360A (en) * 2019-07-23 2021-02-02 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method and device
CN112381006A (en) * 2020-11-17 2021-02-19 深圳度影医疗科技有限公司 Ultrasonic image analysis method, storage medium and terminal equipment
CN112971844A (en) * 2019-12-16 2021-06-18 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic image acquisition quality evaluation method and ultrasonic imaging equipment
CN113116390A (en) * 2019-12-31 2021-07-16 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic image detection method and ultrasonic imaging equipment
CN113299391A (en) * 2021-05-25 2021-08-24 李玉宏 Risk assessment method for remote thyroid nodule ultrasonic image
CN113344864A (en) * 2021-05-21 2021-09-03 江苏乾君坤君智能网络科技有限公司 Ultrasonic thyroid nodule benign and malignant prediction method based on deep learning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116269507A (en) * 2023-05-23 2023-06-23 中日友好医院(中日友好临床医学研究所) Method and device for classifying hyperuricemia and gouty nephropathy and electronic equipment
CN116269507B (en) * 2023-05-23 2023-07-25 中日友好医院(中日友好临床医学研究所) Method and device for classifying hyperuricemia and gouty nephropathy and electronic equipment

Similar Documents

Publication Publication Date Title
CN110738263B (en) Image recognition model training method, image recognition method and image recognition device
CN101179997B (en) Stylus-aided touchscreen control of ultrasound imaging devices
WO2011126860A2 (en) Medical diagnosis using biometric sensor protocols
EP2508116B1 (en) Image-display device and capsule-type endoscope system
US20060064321A1 (en) Medical image management system
CN110610181A (en) Medical image identification method and device, electronic equipment and storage medium
US10610202B2 (en) Ultrasonic imaging system and controlling method thereof
JP2014150804A (en) Ultrasonic image diagnostic apparatus
KR102545008B1 (en) Ultrasound imaging apparatus and control method for the same
EP3477655A1 (en) Method of transmitting a medical image, and a medical imaging apparatus performing the method
CN116058864A (en) Classification display method of ultrasonic data and ultrasonic imaging system
CN113842166A (en) Ultrasonic image acquisition method based on ultrasonic imaging equipment and related device
CN111067569A (en) Portable ultrasonic image intelligent processing system and auxiliary analysis method thereof
US20220172840A1 (en) Information processing device, information processing method, and information processing system
US20040225476A1 (en) Inspection apparatus for diagnosis
EP3485818B1 (en) Ultrasound diagnosis apparatus and method of controlling the same
WO2017010810A1 (en) Apparatus and method for recording ultrasonic image
CN113053494B (en) PACS system based on artificial intelligence and design method thereof
CN212816305U (en) Portable ultrasonic image intelligent processing system
CA2960847A1 (en) Body scanning device
CN113223693B (en) Electrocardiogram machine online interaction method, electrocardiograph machine and storage medium
EP4368114A1 (en) Systems and methods for transforming ultrasound images
CN104573293B (en) A kind of method of adjustment of medical application, device and system
CN114464289B (en) ERCP report generation method, ERCP report generation device, electronic equipment and computer readable storage medium
US20220304575A1 (en) Mobile-Powered Desktop ECG System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination