CN112488982A - Ultrasonic image detection method and device - Google Patents

Ultrasonic image detection method and device

Info

Publication number
CN112488982A
CN112488982A (application CN201910857268.XA)
Authority
CN
China
Prior art keywords
reference frame
target object
frame
target
subsequent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910857268.XA
Other languages
Chinese (zh)
Inventor
张兆东
范镒
乔徽
高强
王博
陈波
龚倩
Current Assignee
Bokece Shanghai Robot Co ltd
Original Assignee
Bokece Shanghai Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Bokece Shanghai Robot Co ltd filed Critical Bokece Shanghai Robot Co ltd
Priority to CN201910857268.XA
Publication of CN112488982A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

The application provides an ultrasonic image detection method and device, relating to the technical field of ultrasonic image processing. The method comprises the following steps: acquiring an ultrasonic video stream, and determining, from the ultrasonic video stream through a pre-trained neural network model, a reference frame comprising a target object and the position of the target object in the reference frame; performing target tracking on the frames subsequent to the reference frame by adopting a target tracking algorithm, according to the reference frame and the position of the target object in the reference frame; and, when target tracking succeeds on a frame subsequent to the reference frame, obtaining the position of the target object in that frame. The neural network model can accurately acquire the position of the target object in the reference frame, and the target object in the subsequent frames is then tracked accurately and quickly by the target tracking algorithm, so the position of the target object in each subsequent frame can be obtained rapidly. This improves the target object detection speed and thereby ensures the real-time performance of ultrasonic image detection.

Description

Ultrasonic image detection method and device
Technical Field
The application relates to the technical field of ultrasonic image processing, in particular to an ultrasonic image detection method and device.
Background
Ultrasonic imaging scans the human body with an ultrasonic beam and receives and processes the reflected signals to obtain images of internal organs. An ultrasonic image of living tissue can be obtained without staining, which is advantageous for examining living tissue. Ultrasonic images can also guide procedures such as the puncture and drainage of effusion, hematoma, and empyema, and the injection of therapeutic drugs. Current ultrasonic image processing generally requires extracting features from the ultrasonic image and then detecting a target in the image with an image feature matching algorithm.
Disclosure of Invention
An object of the embodiments of the present application is to provide an ultrasound image detection method and apparatus, so as to address the low robustness of ultrasound image detection results in the prior art.
In a first aspect, an embodiment of the present application provides an ultrasound image detection method, where the method includes: acquiring an ultrasonic video stream, and determining a reference frame comprising a target object and the position of the target object in the reference frame from the ultrasonic video stream through a pre-trained neural network model; performing target tracking on subsequent frames of the reference frame by adopting a target tracking algorithm according to the reference frame and the position of the target object in the reference frame; and when the target tracking is successfully carried out on the subsequent frame of the reference frame, acquiring the position of the target object in the subsequent frame.
In the method, the ultrasonic video stream is detected, and the pre-trained neural network model is used to determine a reference frame comprising the target object from the ultrasonic video stream; at the same time, the position of the target object in the reference frame can be accurately obtained through the neural network model. According to the reference frame and the position of the target object in it, the target object in the frames subsequent to the reference frame can then be tracked, so that the position of the target object in each subsequent frame is obtained rapidly by target tracking. This improves the target object detection speed and thereby ensures the real-time performance of ultrasonic image detection.
Optionally, after performing target tracking on a subsequent frame of the reference frame by using a target tracking algorithm according to the reference frame and the position of the target object in the reference frame, the method further includes: if the target object is not tracked in the current processing frame in the subsequent frames after the reference frame, determining a new reference frame comprising the target object and the position of the target object in the new reference frame from the subsequent frames of the current processing frame through the neural network model again; performing target tracking on subsequent frames of the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame; and when the target tracking is successfully carried out on the subsequent frame of the new reference frame, acquiring the position of the target object in the subsequent frame of the new reference frame.
In this implementation, when the target object is not tracked in the currently processed frame among the frames subsequent to the reference frame (that is, when target tracking fails), a new reference frame including the target object, and the position of the target object in the new reference frame, can be determined again from the frames subsequent to the currently processed frame through the neural network model, so that target tracking can continue afterwards; this ensures the accuracy of target tracking. Moreover, because the exact position of the target object is obtained through the neural network model only when target tracking fails, the amount of computation performed through the neural network model is reduced, which further improves the target object detection speed.
Optionally, performing target tracking on the frames subsequent to the reference frame by adopting a target tracking algorithm according to the reference frame and the position of the target object in the reference frame includes: judging whether the number of frames between the currently processed frame and the reference frame exceeds a preset number of frames; if so, determining again, through the neural network model, a new reference frame comprising the target object, and the position of the target object in the new reference frame, from the frames subsequent to the currently processed frame; performing target tracking on the frames subsequent to the new reference frame by adopting the target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame; and, when target tracking succeeds on a frame subsequent to the new reference frame, acquiring the position of the target object in that frame.
In this implementation, it is judged whether the number of frames between the currently processed frame and the reference frame exceeds a preset number of frames. If so, target tracking has already run for some time and may no longer be accurate if continued, so a new reference frame is determined whenever the interval exceeds the preset number of frames, and the target object is then tracked according to the new reference frame and the position of the target object in it. This ensures the accuracy of target tracking.
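As a sketch of this interval rule, the check can be written as a small helper function; the function name and the 50-frame threshold are illustrative assumptions rather than values taken from this application:

```python
def needs_redetection(current_frame_idx: int, reference_frame_idx: int,
                      max_tracked_frames: int = 50) -> bool:
    """Return True when more than a preset number of frames separates the
    currently processed frame from the reference frame, meaning the neural
    network detector should be re-run to establish a new reference frame.
    Hypothetical helper; the 50-frame default is an assumed example value."""
    return current_frame_idx - reference_frame_idx > max_tracked_frames
```

With the default threshold, frame 60 tracked against reference frame 0 would trigger re-detection, while frame 30 would not.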
Optionally, before the reference frame comprising the target object and the position of the target object in the reference frame are determined from the ultrasound video stream through the pre-trained neural network model, the method includes: acquiring a sample data set, where the sample data set comprises a training set and a test set, and each sample in the data set comprises an ultrasonic image and the position information of the actual target object in that image; training a pre-established neural network model on the training set; inputting the ultrasonic images in the test set into the trained neural network model and obtaining the output position information of the predicted target object for each ultrasonic image; and, when the deviation between the position information of the predicted target object and that of the actual target object is smaller than a preset value, taking the trained neural network model as the pre-trained neural network model.
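The acceptance step above (keep the trained model only when the deviation between predicted and actual positions on the test set is below a preset value) can be sketched as follows; the choice of mean Euclidean distance as the deviation metric, and all names, are our assumptions, since the application does not specify them:

```python
import numpy as np

def mean_position_deviation(predicted, actual):
    """Mean Euclidean distance between predicted and actual target positions
    (an assumed deviation metric; the application only requires 'deviation')."""
    predicted, actual = np.asarray(predicted, float), np.asarray(actual, float)
    return float(np.mean(np.linalg.norm(predicted - actual, axis=1)))

def accept_model(predict_fn, test_images, actual_positions, max_deviation):
    """Run the trained model (predict_fn) on the test set; accept it only when
    the mean deviation from the actual positions is below the preset value."""
    predicted = [predict_fn(img) for img in test_images]
    return mean_position_deviation(predicted, actual_positions) < max_deviation
```

In practice `predict_fn` would wrap the trained network's forward pass; here any callable returning an (x, y) position works.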
In this implementation, before the reference frame including the target object is determined from the ultrasonic video stream, the neural network needs to be established and trained on the sample data. The trained neural network has high recognition accuracy and can accurately determine the reference frame including the target object in the ultrasonic video stream, which improves the accuracy of ultrasonic image detection.
Optionally, the hidden layers of the pre-established neural network model include a convolutional layer, a linear rectification (ReLU) layer, a local response normalization layer, a pooling layer, and a fully connected layer.
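For illustration only, a single-channel numpy forward pass through this layer stack might look as follows. The layer sizes, the simplified single-channel form of local response normalization, and the 4-value fully connected output (e.g. a bounding box) are assumptions for the sketch, not the architecture actually used by this application:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution (single channel, no padding)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Linear rectification layer."""
    return np.maximum(x, 0.0)

def local_response_norm(x, alpha=1e-4, beta=0.75, k=2.0):
    """Single-channel simplification of LRN (real LRN sums over channels)."""
    return x / (k + alpha * x ** 2) ** beta

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def fully_connected(x, weights, bias):
    """Fully connected layer on the flattened feature map."""
    return x.ravel() @ weights + bias

rng = np.random.default_rng(0)
image = rng.random((16, 16))                 # toy 16x16 'ultrasound' patch
kernel = rng.random((3, 3))
feat = max_pool(local_response_norm(relu(conv2d(image, kernel))))
w = rng.random((feat.size, 4))
b = np.zeros(4)
out = fully_connected(feat, w, b)            # e.g. a 4-value position output
```

The 16x16 input gives a 14x14 convolution output and a 7x7 pooled feature map; the final layer maps it to 4 values.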
Optionally, a correlation filter is used for target tracking of a subsequent frame of the reference frame.
In a second aspect, an embodiment of the present application provides an ultrasound image detection apparatus, including: a reference frame and target object position acquisition module, configured to acquire an ultrasonic video stream and determine, from the ultrasonic video stream through a pre-trained neural network model, a reference frame comprising a target object and the position of the target object in the reference frame; a target tracking module, configured to perform target tracking on the frames subsequent to the reference frame by adopting a target tracking algorithm according to the reference frame and the position of the target object in the reference frame; and a target object position acquisition module, configured to acquire the position of the target object in a frame subsequent to the reference frame when target tracking succeeds on that frame.
Optionally, in the apparatus: the reference frame and target object position acquisition module is further configured to determine again, through the neural network model, when the target object is not tracked in the currently processed frame among the frames subsequent to the reference frame, a new reference frame including the target object and the position of the target object in the new reference frame from the frames subsequent to the currently processed frame; the target tracking module is further configured to perform target tracking on the frames subsequent to the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame; and the target object position acquisition module is further configured to acquire the position of the target object in a frame subsequent to the new reference frame when target tracking succeeds on that frame.
Optionally, the target tracking module comprises: an interval judgment unit, configured to judge whether the number of frames between the currently processed frame and the reference frame exceeds a preset number of frames; a reference frame and target object position acquisition unit, configured to determine again, through the neural network model, when that interval exceeds the preset number of frames, a new reference frame comprising the target object and the position of the target object in the new reference frame from the frames subsequent to the currently processed frame; a target tracking unit, configured to perform target tracking on the frames subsequent to the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame; and a target object position acquisition unit, configured to acquire the position of the target object in a frame subsequent to the new reference frame when target tracking succeeds on that frame.
Optionally, the apparatus further comprises: a sample data set acquisition module, configured to acquire a sample data set, where the sample data set comprises a training set and a test set, and each sample comprises an ultrasonic image and the position information of the actual target object in that image; a training module, configured to train a pre-established neural network model on the training set; a predicted target object position acquisition module, configured to input the ultrasonic images in the test set into the trained neural network model and acquire the output position information of the predicted target object for each ultrasonic image; and a neural network model determining module, configured to take the trained neural network model as the pre-trained neural network model when the deviation between the position information of the predicted target object and that of the actual target object is smaller than a preset value.
In a third aspect, embodiments of the present application provide a computing device, including a processor and a memory, where the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the steps in the method as provided in the first aspect are executed.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps in the method as provided in the first aspect.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a block diagram of a computing device according to an embodiment of the present application;
fig. 2 is a flowchart of an ultrasound image detection method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a convolutional neural network model provided in an embodiment of the present application;
fig. 4 is a schematic diagram of an ultrasound image detection apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The present application relates to the technical field of ultrasonic image processing, a field that combines acoustics, medicine, optics, and electronics. Ultrasonic imaging scans the human body with an ultrasonic beam and receives and processes the reflected signals to obtain images of internal organs. Ultrasound resolves human soft tissue well, which helps to identify small lesions in biological tissue, and an ultrasonic image of living tissue can be obtained without staining, which is advantageous for examining living tissue. The ultrasonic image detection method provided by this application can inspect ultrasonic images accurately and quickly, so as to ensure the real-time performance of ultrasonic image detection and thereby address the low robustness of ultrasound image detection results in the prior art.
The ultrasound image detection method provided by the present application may be executed by a computing device, which is briefly described below. Referring to fig. 1, fig. 1 is a schematic structural diagram of a computing device provided in an embodiment of the present application. The computing device may include: at least one processor 110, such as a CPU, at least one communication interface 120, at least one memory 130, and at least one communication bus 140. The communication bus 140 is used to realize direct connection and communication among these components. The communication interface 120 of the device in the embodiment of the present application is used for signaling or data communication with other node devices. The memory 130 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory; the memory 130 may optionally also be at least one storage device located remotely from the aforementioned processor. The memory 130 stores computer-readable instructions that, when executed by the processor 110, cause the computing device to perform the method process of fig. 2, described below.
It will be appreciated that the configuration shown in FIG. 1 is merely illustrative, and that computing device 100 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof. In this embodiment of the application, the computing device 100 may be, but is not limited to, a dedicated detection device, a desktop, a notebook computer, a smart phone, an intelligent wearable device, an in-vehicle device, and other physical devices, and may also be a virtual device such as a virtual machine. In addition, the computing device 100 need not be a single device, but may be a combination of multiple devices, such as a server cluster, and so forth.
Referring to fig. 2, fig. 2 is a flowchart of an ultrasound image detection method according to an embodiment of the present application, where the method includes the following steps:
step S110: acquiring an ultrasonic video stream, and determining a reference frame comprising a target object and the position of the target object in the reference frame from the ultrasonic video stream through a pre-trained neural network model.
The ultrasonic video stream can generally be obtained through an ultrasonic probe and comprises continuous images of a certain tissue captured by the probe over a period of time; these continuous images are also called a sequence of ultrasonic image frames. After the ultrasonic video stream is obtained, the ultrasonic image frames in it can be recognized through the pre-trained neural network model: an ultrasonic image frame in which the target object is recognized is determined as the reference frame, and at the same time the position of the target object in the reference frame is determined.
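The scan for a reference frame can be sketched as follows; `detect` stands in for the pre-trained neural network model, and all names are illustrative:

```python
def find_reference_frame(frames, detect):
    """Scan an ultrasound frame sequence with a detector until a frame
    containing the target object is found; return (index, frame, position).

    `detect(frame)` stands in for the pre-trained neural network model and
    should return the target's position, or None when no target is present."""
    for idx, frame in enumerate(frames):
        position = detect(frame)
        if position is not None:
            return idx, frame, position
    return None  # no frame in the stream contained the target object
```

A real detector would consume image arrays; any callable with the same contract works for the sketch.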
The neural network model may be a convolutional neural network model, a long short-term memory (LSTM) neural network model, a BP neural network model, or the like; different neural networks can be adopted according to actual requirements. Taking the convolutional neural network model as an example for recognizing the ultrasonic video stream and determining the reference frame, this part is explained in detail in the following contents.
Step S120: and performing target tracking on the subsequent frames of the reference frame by adopting a target tracking algorithm according to the reference frame and the position of the target object in the reference frame.
Tracking means locating a certain target in consecutive video frames. When tracking an object, its information in the previous frame is known, including its position and its speed and direction of motion, so the object's position in the next frame can be predicted from this known information. That is, a tracking algorithm locates the target by exploiting all the known information, whereas a detection algorithm starts from scratch on every frame; tracking the target in the image with a target tracking algorithm is therefore faster than detecting the target in the image through the neural network model.
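As a minimal sketch of this prediction step, assuming a constant per-frame velocity:

```python
def predict_next_position(prev_position, velocity):
    """Predict the target's position in the next frame from its position in
    the previous frame and its known per-frame velocity (speed + direction).
    Constant-velocity sketch of the idea described above; names are ours."""
    x, y = prev_position
    vx, vy = velocity
    return (x + vx, y + vy)
```

A real tracker would search a window around the predicted position rather than trust the prediction outright.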
Therefore, after the reference frame is determined from the ultrasound video stream through the neural network model, a target tracking algorithm can be applied, according to the reference frame and the position of the target object in the reference frame, to rapidly detect the frames subsequent to the reference frame. For example, if ultrasound image frame t is determined to be the reference frame, ultrasound image frame t+1 can be checked by the target tracking algorithm.
Correlation filtering is an algorithm for tracking a target in an image based on correlation: it computes the correlation between the target and the pixels of the processed image and takes the region whose correlation is sufficiently high as the target, thereby finding the target's position.
As an embodiment, a correlation filter may be used to perform target tracking on the frames subsequent to the reference frame; the algorithm is briefly described through the following example. Let the reference frame be f and the filter be h. The two-dimensional Fourier transform of the reference frame f is

F = ℱ(f)

and the Fourier transform of the filter h is

H = ℱ(h).

The correlation G is then defined as G = F ⊙ H*, where G, F, and H are all matrices, ⊙ denotes element-wise multiplication, and H* is the complex conjugate of H.
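As an illustration of how this correlation is evaluated in practice, the sketch below computes the correlation surface in the Fourier domain with numpy and reads off the target position from the response peak; the synthetic data and all names are our assumptions:

```python
import numpy as np

def correlation_peak(image, template):
    """Locate `template` in `image` by circular cross-correlation computed in
    the Fourier domain: G = F ⊙ conj(T), response = IFFT(G).  The template is
    zero-padded to the image size with its origin at the top-left, so the
    peak of the response gives the template's offset inside the image."""
    padded = np.zeros_like(image)
    th, tw = template.shape
    padded[:th, :tw] = template
    F = np.fft.fft2(image)
    T = np.fft.fft2(padded)
    response = np.real(np.fft.ifft2(F * np.conj(T)))
    return np.unravel_index(np.argmax(response), response.shape)

# Synthetic check: plant a random 8x8 patch at a known offset (5, 9).
rng = np.random.default_rng(1)
template = rng.random((8, 8))
image = np.zeros((32, 32))
image[5:13, 9:17] = template
```

Here `correlation_peak(image, template)` recovers the (row, column) offset where the patch was planted.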
Obtaining the filter requires an image training set. Denote an image in the training set by f_i and its Fourier transform by F_i. The target in each image of the training set is known; the target corresponding to image f_i is g_i, and the Fourier transform of g_i is G_i. The filter is thus obtained by solving the problem shown in the following equation:

min over H* of Σ_i |F_i ⊙ H* − G_i|²
The above formula has a closed-form solution: taking the derivative of the objective with respect to H* and setting it to 0 yields

H* = (Σ_i G_i ⊙ F_i*) / (Σ_i F_i ⊙ F_i*)

where H* denotes the filter and the division is element-wise.
In addition, since the filter needs to adapt quickly to environmental changes in order to follow the target, the update rule of the filter H is set as:

H_i* = A_i / B_i

where A_i = η G_i ⊙ F_i* + (1 − η) A_{i−1}, B_i = η F_i ⊙ F_i* + (1 − η) B_{i−1}, and η is a parameter controlling the update rate, whose specific value can be determined experimentally. For the first update of the filter, A_1 = η G_1 ⊙ F_1* and B_1 = η F_1 ⊙ F_1*. With this update method, more weight is given to recent frames, and the influence of earlier frames on the current frame decays exponentially.
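A minimal numpy sketch of this filter, combining the closed-form solution with the running update of A_i and B_i, is given below. The single-channel form, the Gaussian desired response, and the regularizer `eps` (guarding against division by zero) are illustrative assumptions; a practical correlation-filter tracker also adds windowing and preprocessing, which are omitted here:

```python
import numpy as np

def gaussian_target(shape, center, sigma=2.0):
    """Desired response g_i: a Gaussian peaked at the target's position."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))

class CorrelationFilter:
    """Keeps the running numerator A_i and denominator B_i of H* = A_i / B_i:
    A_i = eta*G_i⊙conj(F_i) + (1-eta)*A_{i-1},
    B_i = eta*F_i⊙conj(F_i) + (1-eta)*B_{i-1}."""

    def __init__(self, eta=0.125, eps=1e-5):
        self.eta, self.eps = eta, eps
        self.A = None
        self.B = None

    def update(self, frame, target_response):
        F = np.fft.fft2(frame)
        G = np.fft.fft2(target_response)
        if self.A is None:                      # first update of the filter
            self.A = self.eta * G * np.conj(F)
            self.B = self.eta * F * np.conj(F)
        else:
            self.A = self.eta * G * np.conj(F) + (1 - self.eta) * self.A
            self.B = self.eta * F * np.conj(F) + (1 - self.eta) * self.B

    def locate(self, frame):
        """Apply G = F ⊙ H*; the peak of the response is the target position."""
        H_conj = self.A / (self.B + self.eps)
        response = np.real(np.fft.ifft2(np.fft.fft2(frame) * H_conj))
        return np.unravel_index(np.argmax(response), response.shape)
```

Training on a frame whose target sits at (10, 12) with a Gaussian desired response centered there, `locate` on that frame peaks at (or within a pixel of) that position.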
Step S130: and judging whether the target tracking is successfully carried out on the subsequent frames of the reference frame.
Because the operator may move or change the position of the ultrasonic probe so that the ultrasonic video stream acquired by the probe no longer contains the target object, it needs to be determined whether target tracking succeeds on the frames subsequent to the reference frame. If target tracking succeeds on a subsequent frame, step S140 may be executed: acquiring the position of the target object in that subsequent frame.
In the method, the ultrasonic video stream is detected, and the pre-trained neural network model is used to determine a reference frame comprising the target object from the ultrasonic video stream; at the same time, the position of the target object in the reference frame can be accurately obtained through the neural network model. According to the reference frame and the position of the target object in it, the target object in the frames subsequent to the reference frame can then be tracked, so that the position of the target object in each subsequent frame is obtained rapidly by target tracking. This improves the target object detection speed and thereby ensures the real-time performance of ultrasonic image detection.
Alternatively, if the target object is not tracked in the currently processed frame among the frames subsequent to the reference frame, step S150 is executed: determining again, through the neural network model, a new reference frame including the target object and the position of the target object in the new reference frame from the frames subsequent to the currently processed frame.
Step S160: and performing target tracking on the subsequent frames of the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame.
Step S170: and when the target tracking is successfully carried out on the subsequent frame of the new reference frame, acquiring the position of the target object in the subsequent frame of the new reference frame.
For example, if the reference frame is determined to be ultrasound image frame t, ultrasound image frame t+1 may be checked by the target tracking algorithm. If frame t+1 is tracked successfully, tracking continues with ultrasound image frame t+2. If tracking fails on frame t+1, a new reference frame including the target object must be determined by the neural network model among the frames subsequent to frame t+1: for example, if the neural network model detects no target object in ultrasound image frame t+2, it continues to detect whether ultrasound image frame t+3 includes the target object; if frame t+3 does include the target object, it may be determined as the new reference frame, and target detection may then be performed on the frames subsequent to frame t+3 in the target tracking manner.
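The detect-then-track fallback flow in the worked example above can be sketched as a control loop. The `detect` and `track` callables below are hypothetical stand-ins for the neural network model and the target tracking algorithm; the patent does not fix their interfaces:

```python
def process_stream(frames, detect, track):
    """Detect-then-track loop over an ultrasound video stream.

    detect(frame) stands in for the neural network model and returns the
    target position or None; track(ref_frame, ref_pos, frame) stands in for
    the tracking algorithm and returns the position or None on failure.
    """
    positions = {}
    ref_frame, ref_pos = None, None
    for i, frame in enumerate(frames):
        if ref_frame is None:
            pos = detect(frame)                      # S110 / S150: find a reference frame
            if pos is not None:
                ref_frame, ref_pos = frame, pos
                positions[i] = pos
        else:
            pos = track(ref_frame, ref_pos, frame)   # S130 / S160: cheap tracking
            if pos is not None:
                positions[i] = pos                   # S140 / S170: tracking succeeded
            else:
                ref_frame, ref_pos = None, None      # failure: re-detect in later frames
    return positions
```

With frame t+1 failing as in the example, the loop drops back to detection and picks the next frame the model finds the target in as the new reference frame.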
In the implementation process, when the target object is not tracked in the current processing frame among the frames subsequent to the reference frame, that is, when target tracking fails, a new reference frame including the target object can be determined again from the frames subsequent to the current processing frame through the neural network model, and the position of the target object in the new reference frame is obtained so that tracking can continue. This ensures the accuracy of target tracking. Moreover, because the accurate position of the target object is recomputed through the neural network model only when tracking fails, the amount of neural network computation is reduced, which further improves the detection speed.
Since the accuracy of the target tracking algorithm decreases as the tracking time grows, a rule can be set to prevent the filter's tracking result from drifting and becoming inaccurate over long tracking periods.
As an implementation manner, it may be determined whether the number of frames between the current processing frame and the reference frame, among the frames subsequent to the reference frame, exceeds a preset frame count. If it does, a new reference frame including the target object, and the position of the target object in the new reference frame, are determined again from the frames subsequent to the current processing frame through the neural network model. A target tracking algorithm is then adopted to track the frames subsequent to the new reference frame according to the new reference frame and the position of the target object in it, and when tracking succeeds on a subsequent frame, the position of the target object in that frame is obtained.
For example, suppose the reference frame is determined to be ultrasound image frame t, and the target tracking algorithm has successfully tracked up to ultrasound image frame t+n. It can then be judged whether n exceeds a preset value N. If n > N, the target tracking algorithm may have become inaccurate, so the reference frame can be re-determined among the frames subsequent to frame t+n, ensuring tracking accuracy.
It is to be understood that, when target tracking fails, a new reference frame including the target object may be determined again from the frames subsequent to the current processing frame through the neural network model. After the new reference frame is determined, it may likewise be judged whether the number of frames between the current processing frame and the new reference frame exceeds the preset frame count; if so, yet another new reference frame including the target object is determined again from the frames subsequent to the current processing frame through the neural network model.
In the implementation process, it is judged whether the number of frames between the current processing frame and the reference frame exceeds the preset frame count. If it does, target tracking has already run for some time and continuing it may be inaccurate, so a new reference frame is determined, and tracking proceeds according to the new reference frame and the position of the target object in it, thereby ensuring tracking accuracy.
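The interval rule above reduces to a small guard checked on each tracked frame. `preset_frames` corresponds to the preset value N, whose concrete value the patent leaves open:

```python
def needs_redetection(current_index, reference_index, preset_frames):
    """True when the gap between the current processing frame and the
    reference frame exceeds the preset frame count, i.e. when tracking has
    run long enough that its result may have drifted and a new reference
    frame should be obtained from the neural network model."""
    return current_index - reference_index > preset_frames
```

A gap exactly equal to the preset count does not yet trigger re-detection; only exceeding it does, matching the "exceeds a preset number of frames" wording above.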
Optionally, before the ultrasound video stream is identified according to the convolutional neural network model and the reference frame is determined, the convolutional neural network model is obtained as follows:
the method comprises the steps of obtaining a sample data set, wherein the sample data set comprises a training set and a test set, and each sample data in the sample data set comprises an ultrasonic image and position information of an actual target object corresponding to the ultrasonic image.
And training the pre-established neural network model according to the training set.
And inputting a plurality of ultrasonic images in the test set into the trained neural network model, and acquiring the output position information of the prediction target object corresponding to each ultrasonic image.
And when the deviation between the position information of the predicted target object and the position information of the actual target object is smaller than a preset value, determining the trained neural network model as a pre-trained neural network model.
The neural network model trained in this way can obtain the accurate position of the target vein in the image. The model is end-to-end: the input is the original image and the output is the target parameters. After the model is established, it is trained repeatedly on the training set, and the internal parameters of the network are adjusted adaptively until the output target parameters reach acceptable precision. After training is completed, the parameters in the model can be considered appropriate, and for images of the same scene outside the training set, the network can also output target parameters with high precision.
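The train-then-test acceptance procedure above can be sketched as follows. A deliberately trivial linear predictor on synthetic data stands in for the convolutional network, and the data split, deviation metric, and acceptance threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the sample data set: each "image" is a feature
# vector, each label is the actual target-object position (x, y).
X = rng.standard_normal((200, 8))
true_w = rng.standard_normal((8, 2))
Y = X @ true_w
X_train, Y_train = X[:160], Y[:160]   # training set
X_test, Y_test = X[160:], Y[160:]     # test set

# "Train" the pre-established model (least squares in place of the
# iterative training the patent describes).
w = np.linalg.lstsq(X_train, Y_train, rcond=None)[0]

# Feed the test set through the trained model and compare the predicted
# positions against the actual positions.
Y_pred = X_test @ w
deviation = np.abs(Y_pred - Y_test).mean()

PRESET_DEVIATION = 1e-6               # assumed acceptance threshold
model_accepted = deviation < PRESET_DEVIATION
```

Only when the mean deviation between predicted and actual positions falls below the preset value is the trained model accepted as the "pre-trained neural network model" used in the detection step.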
Fig. 3 is a schematic diagram of a convolutional neural network model provided in an embodiment of the present application, where the hidden layers of the neural network model include a convolutional layer, a linear rectification (ReLU) layer, a local response normalization layer, a pooling layer, and a fully connected layer.
In the implementation process, before the reference frame including the target object is determined in the ultrasound video stream, the neural network must be established and trained on sample data. The trained network has high recognition accuracy and can accurately determine the reference frame including the target object in the ultrasound video stream, thereby improving the accuracy of ultrasound image detection.
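The hidden-layer stack of Fig. 3 can be illustrated by propagating feature-map sizes through it. The input size, kernel sizes, strides, and channel count below are assumptions for illustration only; the patent does not specify them:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical instantiation of the Fig. 3 stack:
# conv -> ReLU -> local response normalization -> pool -> fully connected.
size = 128                                          # assumed square input
size = conv_out(size, kernel=5, stride=1, pad=2)    # conv keeps 128 x 128
# The ReLU and local response normalization layers are element-wise /
# per-neighbourhood operations and leave the spatial size unchanged.
size = conv_out(size, kernel=2, stride=2)           # 2x2 pool: 128 -> 64
fc_inputs = size * size * 16                        # assumed 16 channels feed the FC layer
```

The fully connected layer then maps `fc_inputs` activations to the output target parameters (e.g. the position of the target object).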
Based on the same inventive concept, an ultrasound image detection apparatus 200 is further provided in the embodiment of the present application, and fig. 4 is a structural block diagram of the ultrasound image detection apparatus 200. The apparatus may be a module, a program segment, or code on a computing device. It should be understood that the ultrasound image detection apparatus 200 corresponds to the method embodiment of fig. 2 above and can perform the steps involved in that embodiment; for its specific functions, reference may be made to the description above, and a detailed description is omitted here to avoid redundancy.
Optionally, the ultrasound image detection apparatus 200 includes:
a reference frame and target position acquiring module 210, configured to acquire an ultrasound video stream, and determine a reference frame including a target and a position of the target in the reference frame from the ultrasound video stream through a pre-trained neural network model.
And the target tracking module 220 is configured to perform target tracking on subsequent frames of the reference frame by using a target tracking algorithm according to the reference frame and the position of the target object in the reference frame.
And the target position obtaining module 230 is configured to obtain a position of the target in the subsequent frame when the subsequent frame of the reference frame is successfully subjected to target tracking.
Optionally, the ultrasound image detection apparatus 200 further comprises:
the reference frame and target object position obtaining module is further configured to determine, when the target object is not tracked in the current processing frame in the subsequent frames after the reference frame, a new reference frame including the target object and a position of the target object in the new reference frame from the subsequent frames of the current processing frame through the neural network model again.
And the target tracking module is also used for carrying out target tracking on the subsequent frames of the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame.
And the target position acquisition module is further used for acquiring the position of the target in the subsequent frame of the new reference frame when the subsequent frame of the new reference frame is successfully subjected to target tracking.
Optionally, the target tracking module 220 comprises:
and the interval judgment unit is used for judging whether the number of frames of the interval between the current processing frame and the reference frame in the subsequent frame after the reference frame exceeds the preset number of frames.
And the reference frame and target object position acquisition unit is used for determining a new reference frame comprising the target object and the position of the target object in the new reference frame from the subsequent frames of the current processing frame through the neural network model again when the frame number of the interval between the current processing frame and the reference frame in the subsequent frames after the reference frame exceeds the preset frame number.
And the target tracking unit is used for tracking the target of the subsequent frame of the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame.
And the target object position acquisition unit is used for acquiring the position of the target object in the subsequent frame of the new reference frame when the subsequent frame of the new reference frame is successfully subjected to target tracking.
Optionally, the ultrasound image detection apparatus 200 further comprises:
the system comprises a sample data set acquisition module and a sample data set acquisition module, wherein the sample data set comprises a training set and a test set, and each sample data in the sample data set comprises an ultrasonic image and position information of an actual target object corresponding to the ultrasonic image.
And the training module is used for training the pre-established neural network model according to the training set.
And the position information acquisition module of the prediction target object is used for inputting a plurality of ultrasonic images in the test set into the trained neural network model and acquiring the output position information of the prediction target object corresponding to each ultrasonic image.
And the neural network model determining module is used for determining the trained neural network model as a pre-trained neural network model when the deviation between the position information of the predicted target object and the position information of the actual target object is less than a preset value.
The embodiment of the present application provides a readable storage medium having a computer program stored thereon; when executed by a processor, the computer program performs the method processes performed by the computing device in the method embodiment shown in fig. 2.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method and will not be repeated here.
In summary, the present application provides an ultrasound image detection method and apparatus. In the method, an ultrasound video stream is detected first, and a pre-trained neural network model is used to determine a reference frame including the target object from the stream; at the same time, the position of the target object in the reference frame is accurately obtained through the neural network model. The target object is then tracked in the frames subsequent to the reference frame according to the reference frame and the position of the target object in it. Because target tracking obtains the position of the target object in the subsequent frames quickly, the detection speed is improved and the real-time performance of ultrasound image detection is ensured.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a division of one logic function, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above embodiments are merely examples of the present application and are not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An ultrasound image detection method, the method comprising:
acquiring an ultrasonic video stream, and determining a reference frame comprising a target object and the position of the target object in the reference frame from the ultrasonic video stream through a pre-trained neural network model;
performing target tracking on subsequent frames of the reference frame by adopting a target tracking algorithm according to the reference frame and the position of the target object in the reference frame;
and when the target tracking is successfully carried out on the subsequent frame of the reference frame, acquiring the position of the target object in the subsequent frame.
2. The method of claim 1, wherein after performing target tracking on a frame subsequent to the reference frame by using a target tracking algorithm according to the reference frame and the position of the target object in the reference frame, the method further comprises:
if the target object is not tracked in the current processing frame in the subsequent frames after the reference frame, determining a new reference frame comprising the target object and the position of the target object in the new reference frame from the subsequent frames of the current processing frame through the neural network model again;
performing target tracking on subsequent frames of the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame;
and when the target tracking is successfully carried out on the subsequent frame of the new reference frame, acquiring the position of the target object in the subsequent frame of the new reference frame.
3. The method according to claim 1, wherein the performing target tracking on the subsequent frame of the reference frame by using a target tracking algorithm according to the reference frame and the position of the target object in the reference frame comprises:
judging whether the number of frames at intervals between a current processing frame and the reference frame in a subsequent frame after the reference frame exceeds a preset number of frames;
if yes, determining a new reference frame comprising the target object from the subsequent frames of the current processing frame through the neural network model again, and determining the position of the target object in the new reference frame;
performing target tracking on subsequent frames of the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame;
and when the target tracking is successfully carried out on the subsequent frame of the new reference frame, acquiring the position of the target object in the subsequent frame of the new reference frame.
4. The method according to claim 1, wherein the acquiring an ultrasound video stream and determining a reference frame including a target object and a position of the target object in the reference frame from the ultrasound video stream by a pre-trained neural network model comprises:
acquiring a sample data set, wherein the sample data set comprises a training set and a test set, and each sample data in the sample data set comprises an ultrasonic image and position information of an actual target object corresponding to the ultrasonic image;
training a pre-established neural network model according to the training set;
inputting a plurality of ultrasonic images in the test set into the trained neural network model, and acquiring the output position information of a prediction target object corresponding to each ultrasonic image;
and when the deviation between the position information of the predicted target object and the position information of the actual target object is smaller than a preset value, determining the trained neural network model as a pre-trained neural network model.
5. The method of claim 4, wherein the hidden layers of the pre-established neural network model comprise a convolutional layer, a linear rectification layer, a local response normalization layer, a pooling layer and a fully connected layer.
6. The method of claim 1, wherein a correlation filter is used for target tracking of subsequent frames of the reference frame.
7. An ultrasound image detection apparatus, the apparatus comprising:
the device comprises a reference frame and target object position acquisition module, a reference frame and target object position acquisition module and a target object position acquisition module, wherein the reference frame and the target object position acquisition module are used for acquiring an ultrasonic video stream and determining a reference frame comprising a target object and a position of the target object in the reference frame from the ultrasonic video stream through a pre-trained neural network model;
the target tracking module is used for tracking the target of the subsequent frames of the reference frame by adopting a target tracking algorithm according to the reference frame and the position of the target object in the reference frame;
and the target object position acquisition module is used for acquiring the position of the target object in the subsequent frame when the subsequent frame of the reference frame is successfully subjected to target tracking.
8. The ultrasound image detection apparatus of claim 7, wherein the apparatus further comprises:
the reference frame and target object position acquisition module is further configured to determine, when the target object is not tracked in a current processing frame in subsequent frames after the reference frame, a new reference frame including the target object and a position of the target object in the new reference frame from the subsequent frames of the current processing frame through the neural network model again;
the target tracking module is further used for performing target tracking on a subsequent frame of the new reference frame by adopting a target tracking algorithm according to the new reference frame and the position of the target object in the new reference frame;
and the target object position acquisition module is further used for acquiring the position of the target object in the subsequent frame of the new reference frame when the subsequent frame of the new reference frame is successfully subjected to target tracking.
9. A computing device comprising a processor and a memory, the memory storing computer readable instructions which, when executed by the processor, perform the steps of the method of any one of claims 1 to 6.
10. A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201910857268.XA 2019-09-11 2019-09-11 Ultrasonic image detection method and device Pending CN112488982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910857268.XA CN112488982A (en) 2019-09-11 2019-09-11 Ultrasonic image detection method and device


Publications (1)

Publication Number Publication Date
CN112488982A true CN112488982A (en) 2021-03-12

Family

ID=74920482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910857268.XA Pending CN112488982A (en) 2019-09-11 2019-09-11 Ultrasonic image detection method and device

Country Status (1)

Country Link
CN (1) CN112488982A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052116A (en) * 2021-04-06 2021-06-29 深圳华声医疗技术股份有限公司 Ultrasonic video data processing method and device, ultrasonic equipment and storage medium
CN113052116B (en) * 2021-04-06 2022-02-22 深圳华声医疗技术股份有限公司 Ultrasonic video data processing method and device, ultrasonic equipment and storage medium
CN115578676A (en) * 2022-10-27 2023-01-06 浙江宇鑫纺织印染有限公司 Green energy-saving intelligent dyeing and finishing process and system thereof
CN115578676B (en) * 2022-10-27 2023-05-23 浙江宇鑫纺织印染有限公司 Green energy-saving intelligent dyeing and finishing process and system thereof

Similar Documents

Publication Publication Date Title
US11842487B2 (en) Detection model training method and apparatus, computer device and storage medium
US20210158533A1 (en) Image processing method and apparatus, and storage medium
CN111161311A (en) Visual multi-target tracking method and device based on deep learning
KR20210048523A (en) Image processing method, apparatus, electronic device and computer-readable storage medium
US10891473B2 (en) Method and device for use in hand gesture recognition
CN110660102B (en) Speaker recognition method, device and system based on artificial intelligence
US10832032B2 (en) Facial recognition method, facial recognition system, and non-transitory recording medium
US20220383661A1 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
CN111104925B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN111242933B (en) Retinal image artery and vein classification device, apparatus, and storage medium
EP3404513A1 (en) Information processing apparatus, method, and program
CN112488982A (en) Ultrasonic image detection method and device
CN112446911A (en) Centerline extraction, interface interaction and model training method, system and equipment
CN114022614A (en) Method and system for estimating confidence of three-dimensional reconstruction target position
CN113570594A (en) Method and device for monitoring target tissue in ultrasonic image and storage medium
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN116993812A (en) Coronary vessel centerline extraction method, device, equipment and storage medium
CN110414562B (en) X-ray film classification method, device, terminal and storage medium
CN111968160A (en) Image matching method and storage medium
CN116958724A (en) Training method and related device for product classification model
CN110934565B (en) Method and device for measuring pupil diameter and computer readable storage medium
CN112991414A (en) Vslam feature point depth determination device
CN108520237B (en) Risk behavior identification method
CN116823829B (en) Medical image analysis method, medical image analysis device, computer equipment and storage medium
CN111260692A (en) Face tracking method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination