CN113052116B - Ultrasonic video data processing method and device, ultrasonic equipment and storage medium


Info

Publication number: CN113052116B (granted publication of application CN202110366289.9A)
Other versions: CN113052116A (Chinese-language application publication)
Authority: CN (China)
Legal status: Active (granted)
Inventors: 董振鑫 (Dong Zhenxin), 姚斌 (Yao Bin), 刘远兮 (Liu Yuanxi)
Applicant and assignee: Shenzhen Wisonic Medical Technology Co., Ltd.

Classifications

    • G06V 20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 - Classification techniques
    • G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Abstract

The invention discloses an ultrasonic video data processing method and device, an ultrasonic device, and a storage medium. The method comprises the following steps: acquiring an image cleaning request, wherein the image cleaning request comprises an original ultrasonic video; extracting N frames of original ultrasonic images from the original ultrasonic video, wherein N is greater than or equal to 2; performing feature extraction on each original ultrasonic image to obtain the image feature value corresponding to that image; inputting the image feature value into a pre-trained no-load image classification model to obtain the image classification result corresponding to the image; and deleting the no-load ultrasonic images from the original ultrasonic video based on the image classification results corresponding to the N frames, to obtain an effective ultrasonic video. The method saves storage resources, improves the efficiency of analyzing and identifying ultrasonic video, and avoids wasted viewing and analysis time.

Description

Ultrasonic video data processing method and device, ultrasonic equipment and storage medium
Technical Field
The present invention relates to the field of ultrasound imaging technologies, and in particular, to a method and an apparatus for processing ultrasound video data, an ultrasound device, and a storage medium.
Background
While an ultrasonic device scans human tissue, it records an ultrasonic video comprising multiple frames of ultrasonic images ordered in time sequence. With existing ultrasonic devices, no-load ultrasonic images are inevitably mixed in among the effective ultrasonic images during recording. As a result, a doctor analyzing the ultrasonic video wastes time viewing the no-load images, and the video occupies more storage space than necessary, wasting storage resources. A no-load ultrasonic image is an ultrasonic image that contains no characteristic information corresponding to human tissue, i.e., an ultrasonic image formed by an unloaded ultrasonic probe. An effective ultrasonic image is the opposite concept: an ultrasonic image that does contain characteristic information corresponding to human tissue.
Disclosure of Invention
The embodiments of the invention provide an ultrasonic video data processing method and device, an ultrasonic device, and a storage medium, to solve the problems of wasted storage resources and wasted viewing time caused by no-load ultrasonic images mixed into conventional ultrasonic video.
An ultrasound video data processing method comprising:
acquiring an image cleaning request, wherein the image cleaning request comprises an original ultrasonic video;
extracting N frames of original ultrasonic images from the original ultrasonic video, wherein N is greater than or equal to 2;
extracting the characteristics of the original ultrasonic image to obtain an image characteristic value corresponding to the original ultrasonic image;
inputting the image characteristic value corresponding to the original ultrasonic image into a pre-trained no-load image classification model, and obtaining an image classification result corresponding to the original ultrasonic image;
and deleting the no-load ultrasonic images in the original ultrasonic video based on the image classification results corresponding to the N frames of original ultrasonic images to obtain the effective ultrasonic video.
An ultrasound video data processing apparatus comprising:
an image cleaning request acquisition module, configured to acquire an image cleaning request, wherein the image cleaning request comprises an original ultrasonic video;
an original ultrasonic image extraction module, configured to extract N frames of original ultrasonic images from the original ultrasonic video, wherein N is greater than or equal to 2;
an original feature value acquisition module, configured to perform feature extraction on the original ultrasonic image and acquire an image feature value corresponding to the original ultrasonic image;
an image classification result acquisition module, configured to input the image feature value corresponding to the original ultrasonic image into a pre-trained no-load image classification model and obtain an image classification result corresponding to the original ultrasonic image;
and an effective ultrasonic video acquisition module, configured to delete the no-load ultrasonic images from the original ultrasonic video based on the image classification results corresponding to the N frames of original ultrasonic images, to obtain an effective ultrasonic video.
According to the above ultrasonic video data processing method and device, ultrasonic device, and storage medium, feature extraction is performed on the original ultrasonic image to obtain its image feature value, so that the image feature value can serve as the input of the subsequent classification model; this guarantees the feasibility of classification model identification and improves the reliability and efficiency of image cleaning. By inputting the image feature values into the no-load image classification model, an image classification result can be obtained rapidly to determine whether each original ultrasonic image is a no-load ultrasonic image, and all no-load ultrasonic images in the original ultrasonic video are then deleted. The resulting effective ultrasonic video contains no no-load ultrasonic images, which saves storage resources, improves the efficiency of analyzing and identifying the effective ultrasonic video, and avoids wasted viewing and analysis time.
An ultrasound video data processing method comprising:
acquiring a target tracking request, wherein the target tracking request comprises an ultrasonic video to be tracked, and the ultrasonic video to be tracked is the original ultrasonic video or the effective ultrasonic video;
extracting Q frames of ultrasound images to be tracked which are sequenced according to a time sequence from the ultrasound video to be tracked;
receiving an image selection request, and determining, from the Q frames of ultrasonic images to be tracked, a starting ultrasonic image and the ultrasonic images to be processed that are ordered after the starting ultrasonic image;
receiving a region selection request, and determining a target tissue region from the starting ultrasonic image;
and based on the target tissue region, performing target tracking on the ultrasonic image to be processed by adopting a target tracking algorithm to obtain a target ultrasonic video.
An ultrasound video data processing apparatus comprising:
a target tracking request acquisition module, configured to acquire a target tracking request, wherein the target tracking request comprises an ultrasonic video to be tracked, and the ultrasonic video to be tracked is the original ultrasonic video or the effective ultrasonic video;
an ultrasonic image to be tracked extraction module, configured to extract, from the ultrasonic video to be tracked, Q frames of ultrasonic images to be tracked ordered in time sequence;
a starting ultrasonic image determining module, configured to receive an image selection request and determine, from the Q frames of ultrasonic images to be tracked, a starting ultrasonic image and the ultrasonic images to be processed that are ordered after the starting ultrasonic image;
a target tissue region determining module, configured to receive a region selection request and determine a target tissue region from the starting ultrasonic image;
and a target ultrasonic video acquisition module, configured to perform, based on the target tissue region, target tracking on the ultrasonic images to be processed by using a target tracking algorithm, to obtain a target ultrasonic video.
An ultrasound device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, said processor implementing the above ultrasound video data processing method when executing said computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the above-mentioned ultrasound video data processing method.
According to the above ultrasonic video data processing method and device, ultrasonic device, and storage medium, after Q frames of ultrasonic images to be tracked ordered in time sequence are extracted from the ultrasonic video to be tracked, the starting ultrasonic image and the ultrasonic images to be processed are determined based on an image selection request triggered by the user, and the target tissue region is determined based on a region selection request triggered by the user. The target tissue region is thus determined through human-computer interaction, meeting the user's need to view, in a targeted way, the human tissue corresponding to that region. Then, based on the target tissue region, target tracking is performed on the ultrasonic images to be processed by using a target tracking algorithm to obtain a target ultrasonic video, realizing targeted screening and extraction of the target ultrasonic video, saving the storage resources corresponding to the target ultrasonic video, and improving the efficiency of viewing and analyzing ultrasonic video of specific human tissue.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an ultrasound device according to an embodiment of the present invention;
FIG. 2 is a flowchart of an ultrasound video data processing method according to an embodiment of the present invention;
FIG. 3 is another flowchart of an ultrasound video data processing method according to an embodiment of the present invention;
FIG. 4 is another flowchart of an ultrasound video data processing method according to an embodiment of the present invention;
FIG. 5 is another flowchart of an ultrasound video data processing method according to an embodiment of the present invention;
FIG. 6 is another flowchart of an ultrasound video data processing method according to an embodiment of the present invention;
FIG. 7 is another flowchart of an ultrasound video data processing method according to an embodiment of the present invention;
FIG. 8 is another flowchart of an ultrasound video data processing method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an ultrasound video data processing apparatus according to an embodiment of the present invention;
FIG. 10 is another schematic diagram of an ultrasound video data processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without any creative work belong to the protection scope of the present invention.
The ultrasonic video data processing method provided by the embodiment of the invention can be applied to the ultrasonic equipment shown in fig. 1, and the ultrasonic equipment comprises a main controller, an ultrasonic probe connected with the main controller, a beam forming processor, an image processor and a display screen.
The main controller is a controller of the ultrasonic equipment, and the main controller is connected with other functional modules in the ultrasonic equipment, including but not limited to an ultrasonic probe, a beam forming processor, an image processor, a display screen and the like, and is used for controlling the work of each functional module.
An ultrasound probe is a device that transmits and receives ultrasonic waves. In this example, in order to ensure that the original ultrasound images at different angles have a larger transverse scanning coverage, and thus a larger overlapping range, a conventional ultrasound probe generally comprises multiple strip-shaped piezoelectric transducers of the same size arranged at equal intervals (each single piezoelectric transducer is called an array element); alternatively, the piezoelectric transducers may be arranged in a two-dimensional array, i.e., the array elements are arranged in a two-dimensional matrix. A piezoelectric transducer in the ultrasonic probe converts the voltage pulses applied to it into mechanical vibration, thereby emitting ultrasonic waves outwards. As the ultrasonic waves propagate through media such as human tissue, echo analog signals such as reflected and scattered waves are generated. Each piezoelectric transducer converts the echo analog signals into echo electric signals, which are amplified and converted from analog to digital form into echo digital signals, and the echo digital signals are then sent to the beam forming processor.
The beam forming processor is connected with the ultrasonic probe and used for receiving the echo digital signals sent by the ultrasonic probe, carrying out beam forming on the echo digital signals of one or more channels, acquiring one or more paths of echo forming signals and sending the echo forming signals to the image processor.
The image processor is connected with the beam forming processor and used for receiving the echo synthesis signals sent by the beam forming processor, carrying out image processing processes such as image synthesis and space compounding on the echo synthesis signals, forming a target composite ultrasonic image, and sending the target composite ultrasonic image to the display screen so that the display screen displays the target composite ultrasonic image.
As an example, the image processor may be a graphics processing unit (GPU), a processor designed to perform the mathematical and geometric calculations necessary for rendering complex graphics, which helps improve the efficiency of generating ultrasound images. In this example, because the image processor is dedicated to image processing, the main controller is freed from image processing tasks and can execute more system tasks, improving the overall performance of the ultrasonic device.
An embodiment of the present invention provides an ultrasound video data processing method, which may be applied to a device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor. The method is described below, by way of example, as applied to the image processor in FIG. 1. As shown in FIG. 2, the ultrasound video data processing method includes the following steps:
s201: an image cleaning request is obtained, wherein the image cleaning request comprises an original ultrasonic video.
S202: extracting N frames of original ultrasonic images from the original ultrasonic video, wherein N is more than or equal to 2.
S203: and extracting the characteristics of the original ultrasonic image to obtain an image characteristic value corresponding to the original ultrasonic image.
S204: and inputting the image characteristic value corresponding to the original ultrasonic image into a pre-trained no-load image classification model, and obtaining an image classification result corresponding to the original ultrasonic image.
S205: and deleting the no-load ultrasonic images in the original ultrasonic video based on the image classification result corresponding to the N frames of original ultrasonic images to obtain the effective ultrasonic video.
Here, the image cleaning request is a request for cleaning the no-load ultrasonic images out of an ultrasonic video. The original ultrasonic video is the ultrasonic video from which the no-load ultrasonic images need to be cleaned. A no-load ultrasonic image is an ultrasonic image that contains no characteristic information corresponding to human tissue, i.e., an ultrasonic image formed by an unloaded ultrasonic probe.
As an example, in step S201, the image processor may receive an image cleaning request triggered by a user, for example a request triggered by clicking a cleaning button on the display interface of the ultrasound device after the user operates the device to determine the original ultrasound video, or a request entered on a command line or via another shortcut, so that the image cleaning request contains the original ultrasound video to be cleaned.
The original ultrasonic images refer to ultrasonic images in the original ultrasonic video, and N is the number of the original ultrasonic images in the original ultrasonic video.
As an example, in step S202, after receiving the image cleaning request, the image processor needs to perform image extraction on the original ultrasound video, and extract N frames of original ultrasound images, specifically, N frames of original ultrasound images sorted according to the time sequence, so as to analyze whether each frame of original ultrasound image is an empty ultrasound image one by one. In this example, each frame of original ultrasound image carries a timestamp, and a time sequence corresponding to the N frames of original ultrasound images can be determined according to the timestamp, that is, a sequence of the N frames of original ultrasound images in the original ultrasound video is determined.
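As a concrete illustration of this extraction step, the sketch below reads frames and timestamps with OpenCV; it assumes the original ultrasound video is stored in a container OpenCV can decode, and the function name is illustrative rather than taken from the patent.

```python
import cv2

def extract_frames(video_path: str):
    """Return the N time-ordered frames of a video together with their timestamps."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Approximate timestamp (ms) of the frame just decoded.
        frames.append((cap.get(cv2.CAP_PROP_POS_MSEC), frame))
    cap.release()
    return frames
```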
The image feature value is a numerical value determined by feature extraction of an image.
As an example, in step S203, since the image data generally cannot be used as an input of the classification model, after the image processor extracts N original ultrasound images from the original ultrasound video, it needs to use a feature extraction algorithm, including but not limited to a SIFT (Scale-invariant feature transform) algorithm, to perform feature extraction on each original ultrasound image, and obtain an image feature value corresponding to each original ultrasound image, so as to use the image feature value as an input of a subsequent classification model, thereby ensuring feasibility of classification model identification and improving reliability and efficiency of image cleaning.
The no-load image classification model is a classification model trained in advance to identify whether an ultrasonic image is a no-load ultrasonic image; that is, it is a classification model obtained by training a target classification model on training ultrasonic images in advance. The image classification result is the output obtained when the no-load image classification model identifies the image feature value corresponding to an original ultrasonic image. Generally, the image classification result can be represented as a classification label that identifies whether the image is a no-load ultrasonic image. For example, the classification label "1" or "+" indicates a no-load ultrasonic image, and the classification label "0" or "-" indicates an effective rather than no-load ultrasonic image.
As an example, in step S204, the image processor inputs the image feature value corresponding to each original ultrasonic image into the pre-trained no-load image classification model and obtains the image classification result output by the model, so as to perform the image cleaning operation according to that result. The image classification result indicates whether the image is a no-load ultrasonic image or an effective ultrasonic image.
As an example, in step S205, after obtaining the image classification result corresponding to the N frames of original ultrasound images, the image processor may delete the empty ultrasound image in the original ultrasound video based on the image classification result corresponding to the N frames of original ultrasound images, so as to obtain an effective ultrasound video. For example, if M idle ultrasound images exist in the image classification results corresponding to the N original ultrasound images, the M idle ultrasound images in the original ultrasound video are deleted, and the remaining N-M effective ultrasound images are integrated according to the time sequence to form an effective ultrasound video. The active ultrasound video can be understood as an ultrasound video formed after deleting all the empty ultrasound images in the original ultrasound video. Understandably, the N-M effective ultrasonic videos in the effective ultrasonic videos are sequenced according to the sequence of the timestamps carried by the effective ultrasonic videos, and the continuity of all the effective ultrasonic images in the effective ultrasonic videos is guaranteed.
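To make the flow of steps S201 to S205 concrete, the following is a minimal sketch in Python; `extract_frames` is the helper sketched earlier, while `extract_features` and `clf` stand in for whichever feature extractor and trained classifier an implementation uses. All names are assumptions, and the 0/1 label convention follows the example given above.

```python
# A hedged sketch of the cleaning flow S201-S205, not the patent's exact code.
def clean_video(video_path, clf, extract_frames, extract_features):
    frames = extract_frames(video_path)          # S202: N >= 2 time-ordered frames
    valid = []
    for ts, frame in frames:
        feat = extract_features(frame)           # S203: image feature value
        label = clf.predict([feat])[0]           # S204: image classification result
        if label == 0:                           # S205: 1/"+" = no-load, 0/"-" = effective
            valid.append((ts, frame))
    return valid                                 # the N - M frames of the effective video
```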
In the ultrasonic video data processing method provided by this embodiment, feature extraction is performed on the original ultrasonic image to obtain its image feature value, so that the image feature value can serve as the input of the subsequent classification model; this guarantees the feasibility of classification model identification and improves the reliability and efficiency of image cleaning. By inputting the image feature values into the no-load image classification model, an image classification result can be obtained rapidly to determine whether each original ultrasonic image is a no-load ultrasonic image, and all no-load ultrasonic images in the original ultrasonic video are then deleted. The resulting effective ultrasonic video contains no no-load ultrasonic images, which saves storage resources, improves the efficiency of analyzing and identifying the effective ultrasonic video, and avoids wasted viewing and analysis time.
In an embodiment, before step S203, that is, before performing feature extraction on the original ultrasound image and acquiring an image feature value corresponding to the original ultrasound image, the ultrasound video data processing method further includes: and carrying out image preprocessing on the original ultrasonic image to obtain an updated original ultrasonic image.
As an example, before performing feature extraction on the original ultrasound image, the image processor may first apply an image preprocessing algorithm to the original ultrasound image to obtain an updated original ultrasound image, so that feature extraction is subsequently performed on the updated image to obtain its image feature value. In this example, the image preprocessing algorithms include, but are not limited to, image filtering and smoothing, edge sharpening, and contrast enhancement. Understandably, image preprocessing can eliminate or weaken background noise and other factors that would interfere with image processing, and can highlight key factors such as human tissue or the bright stripes of a no-load image, improving the efficiency of subsequent image processing.
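As one possible illustration of such preprocessing, and not the patent's prescribed parameters, the sketch below chains Gaussian smoothing, unsharp-mask edge sharpening, and CLAHE contrast enhancement with OpenCV; every kernel size and limit here is an assumption.

```python
import cv2
import numpy as np

def preprocess(img_gray: np.ndarray) -> np.ndarray:
    """Smooth speckle noise, sharpen edges, then stretch local contrast."""
    smoothed = cv2.GaussianBlur(img_gray, (5, 5), 0)
    blurred = cv2.GaussianBlur(smoothed, (0, 0), 3)
    sharpened = cv2.addWeighted(smoothed, 1.5, blurred, -0.5, 0)   # unsharp masking
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(sharpened)
```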
In an embodiment, as shown in fig. 3, step S203, performing feature extraction on the original ultrasound image to obtain an image feature value corresponding to the original ultrasound image, includes:
S301: Divide the original ultrasound image into K sampling sub-regions based on the target sampling rule.
S302: Acquire a target sampling window, wherein the target sampling window comprises a center point and H neighborhood points.
S303: Traverse the current sampling points of each sampling sub-region with the target sampling window to obtain the sampling value corresponding to each current sampling point.
S304: Count the occurrences of the sampling values over all current sampling points in a sampling sub-region and splice the counts to obtain the feature sub-vector corresponding to that sampling sub-region.
S305: Splice the feature sub-vectors corresponding to the K sampling sub-regions to obtain the image feature value corresponding to the original ultrasound image.
The target sampling rule is a rule for realizing sampling region division of the original ultrasonic image. The sampling sub-region refers to a region formed by dividing the original ultrasound image based on a target sampling rule.
As an example, in step S301, the image processor may divide the original ultrasound image into K sampling sub-regions block_i (1 ≤ i ≤ K) according to the target sampling rule determined during feature extraction when the no-load image classification model was trained. The texture feature of each sampling sub-region block_i is then computed, and the texture features of the K sampling sub-regions are spliced in the specific order determined by the target sampling rule to obtain the image feature value corresponding to the original ultrasound image. In this way, the length of the image feature value matches the input length expected by the no-load image classification model, which guarantees the feasibility of its identification.
Wherein the target sampling window is a window for implementing feature sampling.
As an example, in step S302, the image processor may adopt a target sampling window determined when feature extraction is performed in the process of training the no-load image classification model, where the target sampling window is composed of 1 central point and H neighborhood points, so as to perform subsequent sampling based on the target sampling window, thereby obtaining an image feature value.
As an example, in step S303, the image processor traverses the current sampling points of each sampling sub-region with the target sampling window, and for each current sampling point compares the gray value of the 1 center point with the gray values of the H neighborhood points at the current window position. In this example, when sampling with a target sampling window formed by a center point and H neighborhood points, each neighborhood point's gray value is compared with the center point's: if the neighborhood point's gray value is smaller than the center point's, the point value of that neighborhood point is recorded as "0"; otherwise, if it is not smaller than the center point's, the point value is recorded as "1". The point values of the H neighborhood points are then concatenated in a specific order to obtain the sampling value of the target sampling window at the current sampling point.
In this example, when the target sampling window comprises the center point and H neighborhood points, the number of pattern types corresponding to the formed sampling values is 2^H. For example, when the number of neighborhood points H is 4 and the point values of the 4 neighborhood points are ordered top, bottom, left, right, all the possible sampling values are as shown in Table 1 below:

Table 1: the 2^4 = 16 sampling-value modes for H = 4

Mode01: 0000   Mode05: 0100   Mode09: 1000   Mode13: 1100
Mode02: 0001   Mode06: 0101   Mode10: 1001   Mode14: 1101
Mode03: 0010   Mode07: 0110   Mode11: 1010   Mode15: 1110
Mode04: 0011   Mode08: 0111   Mode12: 1011   Mode16: 1111
As an example, in step S304, the image processor counts the number of occurrences of each sampling value over all current sampling points in the sampling sub-region; the count for the j-th mode is denoted X_Modej (1 ≤ j ≤ 2^H). The image processor then splices the 2^H occurrence counts X_Modej in a specific order to obtain the feature sub-vector corresponding to the i-th sampling sub-region block_i (1 ≤ i ≤ K):

V_blocki = [X_Mode1, X_Mode2, ..., X_Mode(2^H)]

As an example, in step S305, the image processor splices the feature sub-vectors V_blocki corresponding to the K sampling sub-regions to form the image feature value corresponding to the original ultrasonic image:

V_Total = [V_block1, V_block2, V_block3, ..., V_blockK]

The image feature value thus serves as the input of the subsequent classification model, which guarantees the feasibility of classification model identification and improves the reliability and efficiency of image cleaning.
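Steps S301 to S305 amount to a local-binary-pattern-style block histogram. The following sketch illustrates them for H = 4 neighborhood points (top, bottom, left, right) and a 4 x 4 grid of sampling sub-regions; the grid size and the bit order are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def block_lbp_feature(img: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """Image feature value V_Total for a grayscale image, per steps S301-S305."""
    c = img[1:-1, 1:-1].astype(np.int32)              # center points
    neighbours = [img[:-2, 1:-1], img[2:, 1:-1],      # top, bottom
                  img[1:-1, :-2], img[1:-1, 2:]]      # left, right
    # Point value is 1 when the neighbour's gray value is >= the center's (S303).
    code = sum((n.astype(np.int32) >= c).astype(np.int32) << (3 - k)
               for k, n in enumerate(neighbours))     # sampling values 0..15
    feats = []
    bh, bw = code.shape[0] // grid[0], code.shape[1] // grid[1]
    for i in range(grid[0]):                          # K = 16 sampling sub-regions (S301)
        for j in range(grid[1]):
            block = code[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feats.append(np.bincount(block.ravel(), minlength=16))  # X_Mode1..X_Mode16 (S304)
    return np.concatenate(feats)                      # V_Total = [V_block1, ..., V_blockK] (S305)
```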
In an embodiment, as shown in fig. 4, step S203, performing feature extraction on the original ultrasound image to obtain an image feature value corresponding to the original ultrasound image, includes:
S401: Perform feature extraction on the original ultrasound image by using feature extraction algorithms corresponding to at least two feature descriptors, to obtain the feature vector components corresponding to the at least two feature descriptors.
S402: Splice the feature vector components corresponding to the at least two feature descriptors according to the splicing order of the feature descriptors, to obtain the image feature value corresponding to the original ultrasound image.
The feature descriptors include, but are not limited to, the gray scale range, connectivity, edge shape, area size, and the like of the image mentioned in the present embodiment. The feature extraction algorithm is an algorithm for realizing feature descriptor extraction. The feature vector component refers to a vector determined by each feature extraction algorithm for extracting features of the original ultrasound image.
As an example, in step S401, the image processor performs feature extraction on the original ultrasound image by using a feature extraction algorithm corresponding to at least two feature descriptors, so as to obtain feature vector components corresponding to the at least two feature descriptors.
As an example, in step S402, the image processor splices the acquired feature vector components corresponding to the at least two feature descriptors to obtain the image feature value corresponding to the original ultrasound image. The image feature value then reflects the information of at least two feature descriptors and thus describes the original ultrasound image more fully and effectively, ensuring the accuracy and effectiveness of subsequent processing.
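A minimal sketch of this multi-descriptor splicing follows; the two descriptors chosen here (a gray-level histogram and an edge-density measure) merely stand in for whichever descriptors an implementation uses, and the Canny thresholds are assumptions.

```python
import cv2
import numpy as np

def grey_histogram(img: np.ndarray) -> np.ndarray:
    """Normalized 256-bin gray-level histogram (a gray-scale-range descriptor)."""
    return np.bincount(img.ravel(), minlength=256) / img.size

def edge_density(img: np.ndarray) -> np.ndarray:
    """Fraction of edge pixels (an edge-shape descriptor)."""
    edges = cv2.Canny(img, 50, 150)
    return np.array([np.count_nonzero(edges) / edges.size])

def combined_feature(img: np.ndarray) -> np.ndarray:
    # S402: splice the components in a fixed order so every image
    # yields a feature vector of the same length.
    return np.concatenate([grey_histogram(img), edge_density(img)])
```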
In an embodiment, as shown in fig. 5, before step S201, that is, before acquiring an image cleaning request, the image cleaning request includes the original ultrasound video, the ultrasound video data processing method further includes:
S501: Acquire a training ultrasonic image, perform feature extraction on the training ultrasonic image, and obtain the image feature value corresponding to the training ultrasonic image.
S502: Label the training ultrasonic image to obtain the classification label corresponding to the training ultrasonic image.
S503: Form model training samples based on the image feature values and classification labels corresponding to the training ultrasonic images, and divide the model training samples into a training set and a test set.
S504: Input the model training samples in the training set into a target classification model for training, to obtain an initial image classification model.
S505: Test the initial image classification model with the model training samples in the test set to obtain a test accuracy; if the test accuracy reaches the accuracy threshold, determine the initial image classification model to be the no-load image classification model.
The training ultrasound image is an ultrasound image used for training an unloaded image classification model.
As an example, in step S501, the image processor may obtain a training ultrasound image, and then perform feature extraction on the training ultrasound image by using a feature extraction algorithm, including but not limited to a Scale-invariant feature transform (SIFT) algorithm, to obtain an image feature value corresponding to the training ultrasound image. In this example, the process of extracting the features of the training ultrasound image is the same as the process of extracting the features in step S203, and is not repeated here to avoid repetition.
The classification label is a label which is labeled to the training ultrasonic image in advance and is used for indicating whether the training ultrasonic image is an idle ultrasonic image.
As an example, in step S502, the image processor may receive a labeling instruction input by a user and label the training ultrasound image to obtain its classification label. For example, the classification label "1" or "+" indicates a no-load ultrasonic image, and the classification label "0" or "-" indicates an effective rather than no-load ultrasonic image.
The model training samples are samples for training the no-load image classification model, and specifically are samples which can be input into the target classification model for model training. The target classification model is a model or algorithm employed for training the empty-load image classification model, for example, the target classification model may be, but is not limited to, a Support Vector Machine (SVM) or a decision tree algorithm.
As an example, in step S503, the image processor may combine the image feature values and the classification label corresponding to a training ultrasonic image to form a model training sample, for example V_train = (v1, v2, ..., vn, label), where V_train is the model training sample, v1, v2, ..., vn are the image feature values, and label is the classification label. The image processor may then divide all model training samples into a training set and a test set at a specific ratio (e.g., 9:1).
As an example, in step S504, the image processor may input all the model training samples in the training set into the target classification model for model training, so as to update the model parameters in the target classification model, and form an initial image classification model, which may process the received image feature values to output the corresponding classification labels.
The accuracy threshold is a preset threshold corresponding to the accuracy for evaluating the model convergence criterion.
As an example, in step S505, the image processor may input all the model training samples in the test set into the initial image classification model for testing and obtain the test accuracy. In this example, the test accuracy S = A/B, where A is the number of test samples for which the classification label output by the initial image classification model matches the classification label carried by the sample, and B is the total number of model training samples in the test set. The image processor then compares the test accuracy with the accuracy threshold; if the test accuracy is greater than the accuracy threshold, the initial image classification model is determined to have reached the model convergence standard and can be determined to be the trained no-load image classification model, so that whether any ultrasonic image is a no-load ultrasonic image can be rapidly determined based on the model, improving the cleaning efficiency of no-load ultrasonic images.
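The training flow S501 to S505 maps naturally onto a scikit-learn pipeline. The sketch below uses the support vector machine named above as the target classification model; the 9:1 split follows the example in step S503, while the 0.95 accuracy threshold is an illustrative assumption.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_no_load_classifier(features, labels, threshold=0.95):
    # labels: 1/"+" = no-load ultrasonic image, 0/"-" = effective ultrasonic image
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.1, stratify=labels)     # 9:1 split (S503)
    clf = SVC(kernel="rbf").fit(X_train, y_train)             # S504: initial model
    accuracy = clf.score(X_test, y_test)                      # S505: S = A / B
    if accuracy < threshold:
        raise ValueError(f"test accuracy {accuracy:.3f} below threshold")
    return clf                                                # no-load image classification model
```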
An embodiment of the present invention provides an ultrasound video data processing method, which is described by taking an example that the method is applied to an image processor in fig. 1, as shown in fig. 6, the ultrasound video data processing method includes the following steps:
S601: Acquire a target tracking request, wherein the target tracking request comprises an ultrasonic video to be tracked, and the ultrasonic video to be tracked is the original ultrasonic video or the effective ultrasonic video of the above embodiments.
S602: Extract, from the ultrasonic video to be tracked, Q frames of ultrasonic images to be tracked ordered in time sequence.
S603: Receive an image selection request, and determine, from the Q frames of ultrasonic images to be tracked, a starting ultrasonic image and the ultrasonic images to be processed that are ordered after the starting ultrasonic image.
S604: Receive a region selection request, and determine a target tissue region from the starting ultrasonic image.
S605: Perform, based on the target tissue region, target tracking on the ultrasonic images to be processed by using a target tracking algorithm, to obtain a target ultrasonic video.
The target tracking request is a request for realizing tracking whether an ultrasonic image in the ultrasonic video contains specific human tissues. The ultrasonic video to be tracked refers to an object to which a target tracking request is directed, that is, a video on which target tracking processing needs to be performed.
As an example, in step S601, the image processor may receive a target tracking request triggered by a user, for example, after the user operates the ultrasound device to determine the ultrasound video to be tracked, the target tracking request triggered by clicking a tracking button on a display interface of the ultrasound device is received, so that the image processor may determine the ultrasound video to be tracked, which needs to be tracked according to the target tracking request. In this example, the ultrasound video to be tracked may be an original ultrasound video that is not cleaned in the above embodiment, or may also be an effective ultrasound video that is obtained after the original ultrasound video is cleaned by a no-load ultrasound image, and may be autonomously determined by a user according to actual needs.
The ultrasonic images to be tracked refer to ultrasonic images in the ultrasonic video to be tracked, and Q is the number of the ultrasonic images to be tracked in the ultrasonic video to be tracked.
As an example, in step S602, after receiving the target tracking request, the image processor needs to perform image extraction on the ultrasound video to be tracked, and extract Q frames of ultrasound images to be tracked, specifically, Q frames of ultrasound images to be tracked, which are sorted according to a time sequence. In this example, each frame of ultrasound images to be tracked carries a timestamp, and a time sequence corresponding to the Q frames of ultrasound images to be tracked can be determined according to the timestamp, so as to determine a sequence of the Q frames of ultrasound images to be tracked in the ultrasound video to be tracked.
Here, the image selection request is a user-triggered request for selecting the starting ultrasound image. The starting ultrasound image is the ultrasound image to be tracked that the user selects as the starting frame for target tracking. The ultrasound images to be processed are the ultrasound images to be tracked that are ordered after the starting ultrasound image in the time sequence of the Q frames.
As an example, in step S603, the image processor may receive an image selection request triggered by a click selection or other manner when the user views the ultrasound image to be tracked currently displayed on the ultrasound device, so as to determine the currently displayed ultrasound image to be tracked as a start ultrasound image, and accordingly, determine the ultrasound image to be tracked, which is ordered after the start ultrasound image according to the time sequence, as the ultrasound image to be processed.
Wherein the area selection request is a request triggered by a user for selecting a target tissue area. The target tissue region refers to a tissue region which needs to be tracked and is selected by a user, and is a region formed by scanning specific human tissues by the ultrasonic probe.
As an example, in step S604, the image processor may receive a region selection request triggered by the user during the process of viewing the starting ultrasound image, so as to determine a target tissue region to be subjected to target tracking from the starting ultrasound image, where the target tissue region is a region of the human tissue in the starting ultrasound image that is selected by the user.
The target tracking algorithm is an algorithm for achieving target tracking, and the algorithm includes, but is not limited to, Struck, SCM, ASLA, and KCF algorithms.
As an example, in step S605, the image processor may perform target tracking on the ultrasound images to be processed by using a target tracking algorithm, based on the target tissue region autonomously selected by the user. The ultrasound images found to contain the target tissue region are determined to be target ultrasound images, and all target ultrasound images are then ordered by the timestamps they carry to form the target ultrasound video. In this way, the target ultrasound video containing the target tissue region is screened out of the ultrasound video to be tracked, realizing targeted screening and extraction, saving the storage resources corresponding to the target ultrasound video, and improving the efficiency of viewing and analyzing ultrasound video of specific human tissue.
Further, after the target ultrasound video is acquired, the ultrasound video data processing method further includes: acquiring a tissue identifier corresponding to the target tissue region, and storing the tissue identifier and the target ultrasound video in a system database in association with each other. The tissue identifier uniquely identifies a specific human tissue; for example, the tissue identifier S001 may uniquely identify a target tissue that is the human liver. Understandably, storing the tissue identifier in association with the target ultrasound video allows the target ultrasound videos to be managed uniformly and effectively, improving the efficiency of viewing and analyzing ultrasound video of specific human tissue.
As an example, if the user triggers the region selection request and determines at least two target tissue regions from the starting ultrasound image, in step S605, the image processor may execute at least two target tracking threads in parallel, where each target tracking thread may perform target tracking on the ultrasound image to be processed by using a target tracking algorithm based on one target tissue region, and acquire a target ultrasound video, so as to improve efficiency of performing target tracking on human tissues corresponding to the at least two target tissue regions.
In the ultrasound video data processing method provided by this embodiment, after Q frames of ultrasound images to be tracked ordered in time sequence are extracted from the ultrasound video to be tracked, the starting ultrasound image and the ultrasound images to be processed are determined based on an image selection request triggered by the user, and the target tissue region is determined based on a region selection request triggered by the user; the target tissue region is thus determined through human-computer interaction, meeting the user's need to view, in a targeted way, the human tissue corresponding to that region. Then, based on the target tissue region, target tracking is performed on the ultrasound images to be processed by using a target tracking algorithm to obtain a target ultrasound video, realizing targeted screening and extraction of the target ultrasound video, saving the storage resources corresponding to the target ultrasound video, and improving the efficiency of viewing and analyzing ultrasound video of specific human tissue.
In one embodiment, as shown in fig. 7, step S604 of receiving a region selection request to determine a target tissue region from the starting ultrasound image includes:
S701: Receive a region selection request, wherein the region selection request comprises a region labeling type.
S702: Display the region labeling interface corresponding to the region labeling type, receive the region labeling parameters corresponding to the region labeling type input by the user, and determine an initial tissue region from the starting ultrasonic image.
S703: If the region labeling type is the full-wrapping labeling type, determine the initial tissue region to be the target tissue region.
S704: If the region labeling type is the foreground labeling type, identify and segment the initial tissue region to determine the target tissue region.
The region labeling type is the type of labeling used to mark the target tissue region, and comprises the full-wrapping labeling type and the foreground labeling type. In full-wrapping labeling, the user manually marks all the labeling points of the target tissue region to form a closed area. In foreground labeling, a machine algorithm (including but not limited to the GrabCut algorithm) identifies and segments the tissue regions in the image in advance, so that the user can select from the identified tissue regions. The region labeling parameter is the parameter used to label the target tissue region.
As an example, in step S701, the image processor may receive a region selection request triggered by a user, so as to determine a region labeling type corresponding to a target tissue region selected by the user, so as to jump to a corresponding region labeling interface for labeling according to the region labeling type.
As an example, in step S702, after acquiring the region labeling type, the image processor displays the region labeling interface corresponding to that type, receives the region labeling parameters input by the user, and determines an initial tissue region from the starting ultrasound image. For example, in the region labeling interface of the full-wrapping labeling type, the coordinate information of all labeling points selected by the user can be received as the region labeling parameters, and the initial tissue region is determined from the starting ultrasound image based on them. For another example, in the region labeling interface of the foreground labeling type, the starting ultrasound image is first identified with a machine algorithm to determine all the tissue regions it contains; the user then selects one of these tissue regions, whose region identifier is received as the region labeling parameter, and the initial tissue region is determined from the starting ultrasound image accordingly.
As an example, in step S703, when the region labeling type is the full-wrapping labeling type, the initial tissue region is the tissue region the user manually selected; it accurately contains the characteristic information of the human tissue the user needs to view and excludes other interfering information. The image processor therefore determines the initial tissue region directly to be the target tissue region, which helps ensure the validity of the characteristic information in the target tissue region.
As an example, in step S704, when the region labeling type is the foreground labeling type, the image processor may use automatic segmentation algorithms including, but not limited to, GrabCut, the water diffusion method, and level-set methods to automatically identify and segment the initial tissue region in the starting ultrasound image and obtain the target tissue region, thereby eliminating the interference of information in the starting ultrasound image other than the target tissue region and helping ensure the validity of the characteristic information in the target tissue region.
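For the foreground labeling path of step S704, the sketch below applies OpenCV's GrabCut implementation, one of the algorithms named above; initializing from a user-drawn rectangle and the iteration count of 5 are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_initial_region(img_bgr: np.ndarray, rect) -> np.ndarray:
    """rect = (x, y, w, h): a coarse box around the initial tissue region."""
    mask = np.zeros(img_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)   # background model buffer
    fgd = np.zeros((1, 65), np.float64)   # foreground model buffer
    cv2.grabCut(img_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Definite or probable foreground pixels form the target tissue region.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)
```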
In an embodiment, as shown in fig. 8, in step S605, performing target tracking on the ultrasound image to be processed by using a target tracking algorithm based on the target tissue region to obtain a target ultrasound video, the method includes:
S801: Store the start frame identifier corresponding to the starting ultrasonic image into the target tissue frame sequence corresponding to the target tissue region.
S802: Determine the current ultrasonic image in turn according to the time sequence corresponding to the ultrasonic images to be processed.
S803: Perform target tracking on the current ultrasonic image by using the target tracking algorithm, and judge whether the current ultrasonic image contains the target tissue region.
S804: If the current ultrasonic image contains the target tissue region, store the current frame identifier corresponding to the current ultrasonic image into the target tissue frame sequence corresponding to the target tissue region, and return to step S802 to determine the next current ultrasonic image in time sequence.
S805: If the current ultrasonic image does not contain the target tissue region, obtain the target ultrasonic video based on the target tissue frame sequence.
The starting frame mark is a frame mark for uniquely identifying the starting ultrasonic image. The target tissue frame sequence is used for storing frame identifications corresponding to all ultrasound images to be tracked, which correspond to the target tissue region. The frame identification here is an identification for uniquely identifying the ultrasound image. The current ultrasonic image refers to an ultrasonic image which is subjected to target tracking processing by the image processor at the current moment.
As an example, after the starting ultrasound image, the ultrasound images to be processed, and the target tissue region are determined, in step S801 the image processor may first create a target tissue frame sequence, implemented as, but not limited to, a queue or a stack. The sequence may be associated with a tissue identifier and is used to store the frame identifiers of all target ultrasound images containing the target tissue region. The image processor then stores the start frame identifier P0 corresponding to the starting ultrasound image into the target tissue frame sequence, so that the start frame identifier is the first frame identifier stored in the sequence.
As an example, in step S802, the image processor may sequentially determine, as the current ultrasound image, the ultrasound images to be processed that need to be subjected to the target tracking processing according to the time sequence of all the ultrasound videos to be tracked extracted and determined from the ultrasound videos to be tracked. For example, the 1 st ultrasound image to be processed after the starting ultrasound image may be determined as the current ultrasound image.
As an example, in step S803, the image processor may perform target tracking on the current ultrasound image using the target tracking algorithm and judge whether the current ultrasound image contains the target tissue region. For example, the image processor may initialize the target tracking algorithm with the target tissue region determined in the starting ultrasound image, and then use the algorithm to decide whether the current ultrasound image contains that region.
As an example, in step S804, when the image processor performs target tracking on the current ultrasound image and determines that it contains the target tissue region, the current ultrasound image is a target ultrasound image, and the current frame identifier corresponding to it may be stored in the target tissue frame sequence corresponding to the target tissue region. Understandably, once the current frame identifier has been stored, step S802 is executed again to determine the next current ultrasound image in time order. For example, after the 1st ultrasound image to be processed after the starting ultrasound image has served as the current ultrasound image and its current frame identifier P1 has been stored in the target tissue frame sequence, the 2nd ultrasound image to be processed after the starting ultrasound image becomes the current ultrasound image.
As an example, in step S805, when the image processor performs target tracking on the current ultrasound image and determines that it does not contain the target tissue region, the current ultrasound image is not a target ultrasound image, and tracking of the human tissue corresponding to the target tissue region in the ultrasound video to be tracked ends. At this point, the ultrasound images corresponding to all frame identifiers recorded in the target tissue frame sequence are determined as target ultrasound images, and these images are assembled in the order of the sequence to form the target ultrasound video. Because every frame of the target ultrasound video contains the target tissue region, the video is screened and extracted in a targeted manner, which saves the storage resources occupied by the target ultrasound video and improves the efficiency of viewing and analyzing ultrasound video of a specific human tissue.
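For illustration only, the following is a minimal Python sketch of the S801-S805 loop. It assumes frames are indexed by position (the index serving as the frame identifier), uses an OpenCV CSRT tracker as a stand-in for the unspecified target tracking algorithm, and treats a tracker failure as "the current ultrasound image does not contain the target tissue region"; none of these choices are fixed by the patent.

    import cv2

    def extract_target_ultrasound_video(frames, start_idx, target_bbox):
        """Sketch of S801-S805: collect the run of frames, starting from the
        user-selected starting frame, that still contain the target tissue region.

        frames:      ultrasound images to be tracked, in time order.
        start_idx:   index of the starting ultrasound image (its identifier P0).
        target_bbox: (x, y, w, h) target tissue region in the starting frame.
        """
        # Stand-in target tracking algorithm; requires opencv-contrib-python.
        tracker = cv2.TrackerCSRT_create()
        tracker.init(frames[start_idx], target_bbox)

        # S801: the starting frame identifier is the 1st entry of the sequence.
        target_tissue_frame_seq = [start_idx]

        # S802: walk the pending frames in time order.
        for idx in range(start_idx + 1, len(frames)):
            ok, _bbox = tracker.update(frames[idx])  # S803: track current frame
            if not ok:
                break  # S805: region no longer found, tracking ends here
            target_tissue_frame_seq.append(idx)      # S804: record the identifier

        # S805: assemble the target ultrasound video from the recorded identifiers.
        target_video = [frames[i] for i in target_tissue_frame_seq]
        return target_video, target_tissue_frame_seq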
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process is determined by its function and internal logic, and the numbering does not constitute any limitation on the implementation of the embodiments of the present invention.
In an embodiment, an ultrasound video data processing apparatus is provided that corresponds one-to-one to the ultrasound video data processing method in the above embodiment. As shown in fig. 9, the ultrasound video data processing apparatus includes an image cleaning request acquisition module 901, an original ultrasound image extraction module 902, an original feature value acquisition module 903, an image classification result acquisition module 904, and an effective ultrasound video acquisition module 905. The functional modules are explained in detail as follows:
an image cleaning request obtaining module 901, configured to obtain an image cleaning request, where the image cleaning request includes an original ultrasound video.
The original ultrasound image extraction module 902 is configured to extract N frames of original ultrasound images from an original ultrasound video, where N is greater than or equal to 2.
An original feature value obtaining module 903, configured to perform feature extraction on the original ultrasound image, and obtain an image feature value corresponding to the original ultrasound image.
An image classification result obtaining module 904, configured to input the image feature value corresponding to the original ultrasound image into the pre-trained no-load image classification model and obtain the image classification result corresponding to the original ultrasound image.
An effective ultrasound video obtaining module 905, configured to delete the no-load ultrasound images in the original ultrasound video based on the image classification results corresponding to the N frames of original ultrasound images, so as to obtain an effective ultrasound video.
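For illustration only, the following Python sketch strings modules 901-905 together as one cleaning pass. The feature extractor and classifier are injected as callables (the extractor could be the LBP-style sketch given after the 903 sub-units below); the sklearn-style predict() interface and the label convention "1 = no-load" are assumptions of this sketch, not specified by the patent.

    import cv2

    def clean_ultrasound_video(video_path, extract_features, classifier):
        """Sketch of modules 901-905: delete no-load frames from an original
        ultrasound video and return the remaining effective frames.

        extract_features: callable mapping a frame to its image feature value
                          (it may convert to grayscale internally).
        classifier:       pre-trained no-load image classification model; assumed
                          to expose predict() and to return 1 for no-load frames.
        """
        cap = cv2.VideoCapture(video_path)   # 901/902: open the original video
        effective_frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            feature = extract_features(frame)                      # 903
            label = classifier.predict(feature.reshape(1, -1))[0]  # 904
            if label != 1:                   # 905: keep only loaded frames
                effective_frames.append(frame)
        cap.release()
        return effective_frames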
Preferably, the ultrasound video data processing apparatus further comprises an image preprocessing module, configured to perform image preprocessing on the original ultrasound image to obtain an updated original ultrasound image.
Preferably, the original feature value obtaining module 903 includes:
a sampling sub-region dividing unit, configured to divide the original ultrasound image into K sampling sub-regions based on a target sampling rule;
a target sampling window obtaining unit, configured to obtain a target sampling window, the target sampling window including a central point and H neighborhood points;
a target sampling window traversing unit, configured to traverse the current sampling points of each sampling sub-region with the target sampling window to obtain the sampling value corresponding to each current sampling point;
a feature sub-vector obtaining unit, configured to count the occurrence times of the sampling values corresponding to all current sampling points in a sampling sub-region and splice them into the feature sub-vector corresponding to that sampling sub-region;
a first feature value obtaining unit, configured to splice the feature sub-vectors corresponding to the K sampling sub-regions to obtain the image feature value corresponding to the original ultrasound image.
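For illustration only, this sub-unit pipeline matches a block-wise local binary pattern (LBP) histogram. The sketch below assumes LBP as the sampling scheme (the patent names only a center point and H neighborhood points), divides a grayscale frame into K sub-regions, histograms the sampled codes per sub-region, and concatenates the sub-vectors; the grid size, radius, and normalization are choices of the example.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_feature_value(gray_image, grid=(4, 4), neighbors=8, radius=1):
        """Block-wise LBP histogram: K = grid[0] * grid[1] sampling sub-regions,
        a sampling window of 1 center point and H = neighbors neighborhood points.
        gray_image must be a 2D (grayscale) array.
        """
        h, w = gray_image.shape
        # Sampling value for every current sampling point (uniform LBP codes).
        codes = local_binary_pattern(gray_image, neighbors, radius,
                                     method="uniform")
        n_bins = neighbors + 2       # uniform LBP yields P + 2 distinct codes
        sub_vectors = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                block = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                              j * w // grid[1]:(j + 1) * w // grid[1]]
                # Occurrence count of each sampling value in this sub-region.
                hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
                sub_vectors.append(hist / max(hist.sum(), 1))
        # Splice the K feature sub-vectors into one image feature value.
        return np.concatenate(sub_vectors)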
Preferably, the original feature value obtaining module 903 alternatively includes:
a feature vector component obtaining unit, configured to perform feature extraction on the original ultrasound image using the feature extraction algorithms corresponding to at least two feature descriptors, and obtain the feature vector components corresponding to the at least two feature descriptors;
a second feature value obtaining unit, configured to splice the feature vector components corresponding to the at least two feature descriptors according to the splicing order of the feature descriptors, and obtain the image feature value corresponding to the original ultrasound image.
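For illustration only, a sketch of this two-descriptor variant: HOG and the LBP extractor above serve as stand-in descriptors (the patent does not name specific descriptors), and their components are spliced in a fixed order. Resizing to a fixed input size is an assumption of the example so the spliced vector length stays constant across frames.

    import cv2
    import numpy as np
    from skimage.feature import hog

    def fused_feature_value(gray_image, size=(128, 128)):
        """Splice the feature vector components of two descriptors (HOG, LBP)
        in a fixed descriptor order into one image feature value.
        """
        resized = cv2.resize(gray_image, size)  # fixed size -> fixed vector length
        hog_component = hog(resized, orientations=9,
                            pixels_per_cell=(16, 16), cells_per_block=(2, 2))
        lbp_component = lbp_feature_value(resized)  # sketch given above
        # The splicing order (HOG first, LBP second) must match between
        # model training and inference.
        return np.concatenate([hog_component, lbp_component])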
Preferably, the ultrasound video data processing apparatus further comprises:
and the training characteristic value acquisition module is used for acquiring a training ultrasonic image, extracting the characteristics of the training ultrasonic image and acquiring an image characteristic value corresponding to the training ultrasonic image.
And the classification label acquisition module is used for labeling the training ultrasonic image and acquiring a classification label corresponding to the training ultrasonic image.
And the model training sample acquisition module is used for acquiring model training samples based on the image characteristic values and the classification labels corresponding to the training ultrasonic images and dividing the model training samples into a training set and a test set.
And the initial model acquisition module is used for inputting the model training samples in the training set into the target classification model for training to acquire an initial image classification model.
And the no-load model obtaining module is used for testing the initial image classification model by adopting the model training samples in the test set to obtain the test accuracy, and if the test accuracy reaches the accuracy threshold, determining the initial image classification model as the no-load image classification model.
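For illustration only, a Python sketch of this training flow, assuming scikit-learn, an SVM as the target classification model (the patent does not fix the model family), and an assumed accuracy threshold of 0.95:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def train_no_load_classifier(feature_values, labels, accuracy_threshold=0.95):
        """Split labeled samples into a training set and a test set, train an
        initial image classification model, and accept it as the no-load image
        classification model only if the test accuracy reaches the threshold.
        """
        X = np.asarray(feature_values)  # image feature values of training images
        y = np.asarray(labels)          # classification labels (e.g. 1 = no-load)
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)
        model = SVC(kernel="rbf")       # stand-in target classification model
        model.fit(X_train, y_train)     # initial image classification model
        test_accuracy = model.score(X_test, y_test)
        if test_accuracy < accuracy_threshold:
            raise ValueError(
                f"test accuracy {test_accuracy:.3f} below the accuracy threshold")
        return model                    # the no-load image classification model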
In one embodiment, an ultrasound apparatus is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the ultrasound video data processing method in the foregoing embodiments is implemented, for example, S201 to S205 shown in fig. 2 or the steps shown in figs. 3 to 5, which are not repeated here. Alternatively, when executing the computer program, the processor implements the functions of each module/unit of the ultrasound video data processing apparatus in the embodiment, for example, the functions of the image cleaning request obtaining module 901, the original ultrasound image extracting module 902, the original feature value obtaining module 903, the image classification result obtaining module 904, and the effective ultrasound video obtaining module 905 shown in fig. 9, which are likewise not repeated here.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the ultrasound video data processing method in the foregoing embodiments, for example, S201 to S205 shown in fig. 2 or the steps shown in figs. 3 to 5, which are not repeated here. Alternatively, when executed by the processor, the computer program implements the functions of the modules/units in the embodiment of the ultrasound video data processing apparatus, for example, the functions of the image cleaning request obtaining module 901, the original ultrasound image extracting module 902, the original feature value obtaining module 903, the image classification result obtaining module 904, and the effective ultrasound video obtaining module 905 shown in fig. 9, which are likewise not repeated here.
For specific limitations of the ultrasound video data processing apparatus, reference may be made to the above limitations of the ultrasound video data processing method, which are not repeated here. Each module in the ultrasound video data processing apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the ultrasound device in hardware form, or stored in a memory of the ultrasound device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In an embodiment, another ultrasound video data processing apparatus is provided that corresponds one-to-one to the ultrasound video data processing method in the above embodiment. As shown in fig. 10, the ultrasound video data processing apparatus includes a target tracking request acquisition module 1001, an ultrasound image to be tracked extraction module 1002, a starting ultrasound image determination module 1003, a target tissue region determination module 1004, and a target ultrasound video acquisition module 1005. The functional modules are explained in detail as follows:
a target tracking request obtaining module 1001, configured to obtain a target tracking request, where the target tracking request includes an ultrasound video to be tracked, and the ultrasound video to be tracked is the original ultrasound video or the effective ultrasound video.
The module 1002 for extracting an ultrasound image to be tracked is configured to extract Q frames of ultrasound images to be tracked, which are ordered according to a time sequence, from an ultrasound video to be tracked.
The starting ultrasound image determining module 1003 is configured to receive an image selection request, and determine a starting ultrasound image and ultrasound images to be processed that are ordered behind the starting ultrasound image from the Q frames of ultrasound images to be tracked.
A target tissue region determination module 1004 for receiving the region selection request to determine a target tissue region from the starting ultrasound image.
A target ultrasound video obtaining module 1005, configured to perform target tracking on the ultrasound image to be processed by using a target tracking algorithm based on the target tissue region, and obtain a target ultrasound video.
Preferably, the target tissue region determining module 1004 includes:
and the area selection request receiving unit is used for receiving an area selection request, and the area selection request comprises an area marking type.
And the initial tissue area determining unit is used for displaying an area marking interface corresponding to the area marking type, receiving the area marking parameters corresponding to the area marking type input by the user and determining the initial tissue area from the initial ultrasonic image.
And the first area determining unit is used for determining the initial tissue area as the target tissue area if the area marking type is the full wrapping type.
And the second area determining unit is used for identifying and segmenting the initial tissue area and determining the target tissue area if the area marking type is the foreground marking type.
Preferably, the target ultrasound video acquisition module 1005 includes:
and the starting frame identifier storage unit is used for storing the starting frame identifier corresponding to the starting ultrasonic image into the target tissue frame sequence corresponding to the target tissue area.
And the current ultrasonic image determining unit is used for sequentially determining the current ultrasonic images according to the time sequence corresponding to the ultrasonic images to be processed.
And the target tracking judgment unit is used for tracking the target of the current ultrasonic image by adopting a target tracking algorithm and judging whether the current ultrasonic image contains a target tissue area.
And the current frame identifier storage unit is used for storing the current frame identifier corresponding to the current ultrasonic image into the target tissue frame sequence corresponding to the target tissue area if the current ultrasonic image contains the target tissue area, and repeatedly executing the time sequence corresponding to the ultrasonic image to be processed to sequentially determine the current ultrasonic image.
And the target ultrasonic video acquisition unit is used for acquiring the target ultrasonic video based on the target tissue frame sequence if the current ultrasonic image does not contain the target tissue area.
For specific limitations of the ultrasound video data processing apparatus, reference may be made to the above limitations of the ultrasound video data processing method, which are not repeated here. Each module in the ultrasound video data processing apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the ultrasound device in hardware form, or stored in a memory of the ultrasound device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, an ultrasound apparatus is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the ultrasound video data processing method in the foregoing embodiments is implemented, for example, S601 to S605 shown in fig. 6 or the steps shown in figs. 7 to 8, which are not repeated here. Alternatively, when executing the computer program, the processor implements the functions of each module/unit of the ultrasound video data processing apparatus in this embodiment, for example, the functions of the target tracking request acquisition module 1001, the ultrasound image to be tracked extraction module 1002, the starting ultrasound image determination module 1003, the target tissue region determination module 1004, and the target ultrasound video acquisition module 1005 shown in fig. 10, which are likewise not repeated here.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the ultrasound video data processing method in the foregoing embodiments, for example, S601 to S605 shown in fig. 6 or the steps shown in figs. 7 to 8, which are not repeated here. Alternatively, when executed by the processor, the computer program implements the functions of the modules/units in the embodiment of the ultrasound video data processing apparatus, for example, the functions of the target tracking request acquisition module 1001, the ultrasound image to be tracked extraction module 1002, the starting ultrasound image determination module 1003, the target tissue region determination module 1004, and the target ultrasound video acquisition module 1005 shown in fig. 10, which are likewise not repeated here.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is merely used as an example, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above described functions.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An ultrasound video data processing method, comprising:
acquiring an image cleaning request, wherein the image cleaning request comprises an original ultrasonic video;
extracting N frames of original ultrasonic images which are sequenced according to a time sequence from the original ultrasonic video so as to analyze whether each frame of original ultrasonic image is a no-load ultrasonic image one by one, wherein N is more than or equal to 2;
extracting the characteristics of the original ultrasonic image to obtain an image characteristic value corresponding to the original ultrasonic image;
inputting the image characteristic value corresponding to the original ultrasonic image into a pre-trained no-load image classification model, and obtaining an image classification result corresponding to the original ultrasonic image, wherein the no-load image classification model is a pre-trained classification model for identifying whether an ultrasonic image is a no-load ultrasonic image, and a no-load ultrasonic image is an ultrasonic image that does not contain characteristic information corresponding to human tissue, namely an ultrasonic image formed by an unloaded ultrasonic probe;
deleting the no-load ultrasonic images in the original ultrasonic video based on the image classification results corresponding to the N frames of original ultrasonic images to obtain an effective ultrasonic video,
the extracting the features of the original ultrasound image to obtain the image feature value corresponding to the original ultrasound image includes:
dividing the original ultrasonic image into K sampling sub-regions based on a target sampling rule;
acquiring a target sampling window, wherein the target sampling window comprises a central point and H neighborhood points;
traversing the current sampling point corresponding to each sampling sub-region by adopting the target sampling window to obtain a sampling value corresponding to the current sampling point;
counting and splicing the occurrence times of sampling values corresponding to all the current sampling points in the sampling sub-region, and acquiring a characteristic sub-vector corresponding to the sampling sub-region;
splicing the feature subvectors corresponding to the K sampling subregions to obtain the image feature value corresponding to the original ultrasonic image,
wherein before the acquiring of the image cleaning request comprising the original ultrasonic video, the ultrasonic video data processing method further comprises:
acquiring a training ultrasonic image, performing feature extraction on the training ultrasonic image, and acquiring an image feature value corresponding to the training ultrasonic image;
labeling the training ultrasonic image to obtain a classification label corresponding to the training ultrasonic image;
obtaining model training samples based on the image characteristic values corresponding to the training ultrasonic images and the classification labels, and dividing the model training samples into a training set and a test set;
inputting the model training samples in the training set into a target classification model for training to obtain an initial image classification model;
and testing the initial image classification model by adopting the model training samples in the test set to obtain the test accuracy, and if the test accuracy reaches an accuracy threshold, determining the initial image classification model as a no-load image classification model.
2. The method for processing ultrasound video data according to claim 1, wherein before the feature extraction is performed on the original ultrasound image to obtain the image feature value corresponding to the original ultrasound image, the method for processing ultrasound video data further comprises: and carrying out image preprocessing on the original ultrasonic image to obtain an updated original ultrasonic image.
3. The method for processing ultrasound video data according to claim 1, wherein the extracting the features of the original ultrasound image to obtain the image feature values corresponding to the original ultrasound image comprises:
extracting the features of the original ultrasonic image by adopting a feature extraction algorithm corresponding to at least two feature descriptors to obtain feature vector components corresponding to the at least two feature descriptors;
and splicing the characteristic vector components corresponding to at least two characteristic descriptors according to the splicing sequence of the characteristic descriptors to obtain an image characteristic value corresponding to the original ultrasonic image.
4. An ultrasonic video data processing method, comprising:
acquiring a target tracking request, wherein the target tracking request comprises an ultrasonic video to be tracked, and the ultrasonic video to be tracked is the original ultrasonic video or the effective ultrasonic video in any one of claims 1-3;
extracting Q frames of ultrasound images to be tracked which are sequenced according to a time sequence from the ultrasound video to be tracked;
receiving an image selection request, and determining, from the Q frames of ultrasonic images to be tracked, a starting ultrasonic image and the ultrasonic images to be processed that are ordered after the starting ultrasonic image;
receiving a region selection request, and determining a target tissue region from the starting ultrasonic image;
and based on the target tissue region, performing target tracking on the ultrasonic image to be processed by adopting a target tracking algorithm to obtain a target ultrasonic video.
5. The ultrasound video data processing method according to claim 4, wherein said receiving a region selection request, determining a target tissue region from said starting ultrasound image, comprises:
receiving a region selection request, wherein the region selection request comprises a region marking type;
displaying a region labeling interface corresponding to the region labeling type, receiving a region labeling parameter corresponding to the region labeling type input by a user, and determining an initial tissue region from the starting ultrasonic image;
if the region labeling type is a full wrapping type, determining the initial tissue region as a target tissue region;
and if the region labeling type is a foreground labeling type, identifying and segmenting the initial tissue region, and determining a target tissue region.
6. The ultrasound video data processing method according to claim 4, wherein the performing target tracking on the ultrasound image to be processed based on the target tissue region by using a target tracking algorithm to obtain a target ultrasound video comprises:
storing the starting frame identifier corresponding to the starting ultrasonic image into the target tissue frame sequence corresponding to the target tissue area;
sequentially determining the current ultrasonic images according to the time sequence corresponding to the ultrasonic images to be processed;
performing target tracking on the current ultrasonic image by adopting the target tracking algorithm, and judging whether the current ultrasonic image contains the target tissue area;
if the current ultrasonic image contains the target tissue region, storing a current frame identifier corresponding to the current ultrasonic image into the target tissue frame sequence corresponding to the target tissue region, and returning to the step of sequentially determining the current ultrasonic image according to the time order corresponding to the ultrasonic images to be processed;
and if the current ultrasonic image does not contain the target tissue area, acquiring a target ultrasonic video based on the target tissue frame sequence.
7. An ultrasonic video data processing apparatus for implementing the ultrasonic video data processing method according to any one of claims 1 to 3, comprising:
an image cleaning request acquisition module, configured to acquire an image cleaning request, the image cleaning request comprising an original ultrasonic video;
an original ultrasonic image extraction module, configured to extract the N frames of original ultrasonic images from the original ultrasonic video, where N is greater than or equal to 2;
an original characteristic value acquisition module, configured to perform characteristic extraction on the original ultrasound image, and acquire an image characteristic value corresponding to the original ultrasound image;
an image classification result acquisition module, configured to input the image characteristic value corresponding to the original ultrasonic image into a pre-trained no-load image classification model and acquire the image classification result corresponding to the original ultrasonic image;
an effective ultrasonic video acquisition module, configured to delete the no-load ultrasonic images in the original ultrasonic video based on the image classification results corresponding to the N frames of original ultrasonic images to acquire the effective ultrasonic video.
8. An ultrasonic video data processing apparatus for implementing the ultrasonic video data processing method according to any one of claims 4 to 6, comprising:
a target tracking request obtaining module, configured to obtain a target tracking request, where the target tracking request includes an ultrasound video to be tracked, and the ultrasound video to be tracked is the original ultrasound video or the valid ultrasound video in any one of claims 1 to 3;
the ultrasonic image to be tracked extracting module is used for extracting Q frames of ultrasonic images to be tracked which are sequenced according to a time sequence from the ultrasonic video to be tracked;
the starting ultrasonic image determining module is used for receiving an image selection request, and determining a starting ultrasonic image and an ultrasonic image to be processed which is sequenced behind the starting ultrasonic image from the ultrasonic images to be tracked in Q frames;
a target tissue area determination module, configured to receive an area selection request, and determine a target tissue area from the starting ultrasound image;
and the target ultrasonic video acquisition module is used for tracking the target of the ultrasonic image to be processed by adopting a target tracking algorithm based on the target tissue region to acquire a target ultrasonic video.
9. An ultrasound device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the ultrasound video data processing method according to any of claims 1 to 6 when executing said computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the ultrasound video data processing method according to any one of claims 1 to 6.
CN202110366289.9A 2021-04-06 2021-04-06 Ultrasonic video data processing method and device, ultrasonic equipment and storage medium Active CN113052116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110366289.9A CN113052116B (en) 2021-04-06 2021-04-06 Ultrasonic video data processing method and device, ultrasonic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113052116A CN113052116A (en) 2021-06-29
CN113052116B (en) 2022-02-22

Family

ID=76517504


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486195A (en) * 2021-08-17 2021-10-08 深圳华声医疗技术股份有限公司 Ultrasonic image processing method and device, ultrasonic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2779151A1 (en) * 2013-03-11 2014-09-17 Renesas Electronics Europe Limited Video output checker
CN104156696A (en) * 2014-07-23 2014-11-19 华南理工大学 Bi-directional-image-based construction method for quick local changeless feature descriptor
CN104980681A (en) * 2015-06-15 2015-10-14 联想(北京)有限公司 Video acquisition method and video acquisition device
CN108852268A (en) * 2018-04-23 2018-11-23 浙江大学 A kind of digestive endoscopy image abnormal characteristic real-time mark system and method
CN110087097A (en) * 2019-06-05 2019-08-02 西安邮电大学 It is a kind of that invalid video clipping method is automatically removed based on fujinon electronic video endoscope
CN110414571A (en) * 2019-07-05 2019-11-05 浙江网新数字技术有限公司 A kind of website based on Fusion Features reports an error screenshot classification method
CN111553191A (en) * 2020-03-30 2020-08-18 深圳壹账通智能科技有限公司 Video classification method and device based on face recognition and storage medium
CN111612093A (en) * 2020-05-29 2020-09-01 Oppo广东移动通信有限公司 Video classification method, video classification device, electronic equipment and storage medium
CN111986237A (en) * 2020-09-01 2020-11-24 安徽炬视科技有限公司 Real-time multi-target tracking algorithm irrelevant to number of people
CN112085534A (en) * 2020-09-11 2020-12-15 中德(珠海)人工智能研究院有限公司 Attention analysis method, system and storage medium
CN112488982A (en) * 2019-09-11 2021-03-12 磅客策(上海)机器人有限公司 Ultrasonic image detection method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014240B (en) * 2010-12-01 2013-07-31 深圳市蓝韵实业有限公司 Real-time medical video image denoising method
US10783379B2 (en) * 2017-08-23 2020-09-22 Bossa Nova Robotics Ip, Inc. Method for new package detection
CN111950424B (en) * 2020-08-06 2023-04-07 腾讯科技(深圳)有限公司 Video data processing method and device, computer and readable storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant