CN118285835A - Ultrasonic scanning integrity prediction method, system and device - Google Patents


Publication number
CN118285835A
Authority
CN
China
Prior art keywords: information, image sequence, ultrasonic, probe, ultrasonic scanning
Legal status: Pending
Application number
CN202410416939.XA
Other languages
Chinese (zh)
Inventor
石一磊
曹旭
胡敬良
牟立超
侯雨
陈咏虹
Current Assignee
Maide Intelligent Technology Wuxi Co ltd
Original Assignee
Maide Intelligent Technology Wuxi Co ltd
Application filed by Maide Intelligent Technology Wuxi Co ltd filed Critical Maide Intelligent Technology Wuxi Co ltd
Priority to CN202410416939.XA priority Critical patent/CN118285835A/en
Publication of CN118285835A publication Critical patent/CN118285835A/en
Pending legal-status Critical Current

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The application provides a method, a system, and a device for predicting ultrasound scanning integrity. The method comprises: acquiring motion information of an ultrasound probe from an ultrasound scanning image sequence and its corresponding inertial information; determining, from the motion information, probe spatial pose information corresponding to the ultrasound scanning image sequence; determining a target object detection region from the probe spatial pose information; and obtaining the ultrasound scanning integrity from the proportion of the target object covered by the detection region. On the one hand, compared with approaches in the related art that rely on personal experience and subjective judgment, the method uses the spatial pose information of the ultrasound probe to determine the region actually examined by the probe, so that the scanning integrity is determined objectively and the prediction accuracy is improved; on the other hand, the scheme has a high degree of automation, which helps to improve the prediction efficiency of the ultrasound scanning integrity.

Description

Ultrasonic scanning integrity prediction method, system and device
Technical Field
The application relates to the technical field of ultrasound imaging, and in particular to a method, a system, and a device for predicting ultrasound scanning integrity.
Background
Ultrasound scanning, also called ultrasound examination, is a medical imaging method based on the transmission and echo reflection of ultrasonic waves in the human body. High-frequency ultrasound images the internal tissues of the body, clearly displaying the sectional structures of organs and their surroundings, and presents the structure and movement of internal organ tissues on a screen in image form, so that a physician can make an imaging judgment and diagnosis from the characteristics of the images.
At present, scanning integrity in ultrasound examinations is evaluated manually by the operator, so the probability of missed scanning regions is high.
Disclosure of Invention
The embodiments of the application aim to provide a method, a system, and a device for predicting ultrasound scanning integrity, in order to reduce the probability of missed scanning regions.
In a first aspect, an embodiment of the present application provides an ultrasound scanning integrity prediction method, including: acquiring motion information of an ultrasound probe from an ultrasound scanning image sequence and its corresponding inertial information; determining, from the motion information, probe spatial pose information corresponding to the ultrasound scanning image sequence; determining a target object detection region from the probe spatial pose information; and obtaining the ultrasound scanning integrity from the proportion of the target object covered by the detection region.
In this scheme, the motion information of the ultrasound probe is obtained from the ultrasound scanning image sequence and its corresponding inertial information, the probe spatial pose information is then derived from the motion information, the target object detection region is determined from the probe spatial pose information, and the ultrasound scanning integrity is finally determined from the proportion of the target object covered by the detection region. On the one hand, compared with approaches in the related art that rely on personal experience and subjective judgment, the scheme uses the spatial pose information of the ultrasound probe to determine the region actually examined by the probe, so that the scanning integrity is determined objectively and the prediction accuracy is improved; on the other hand, the scheme has a high degree of automation, which helps to improve the prediction efficiency.
In an implementation of the first aspect, acquiring the motion information of the ultrasound probe from the ultrasound scanning image sequence and its corresponding inertial information includes: taking the ultrasound scanning image sequence and its corresponding inertial information as the input of a backbone network in a spatial pose estimation model, and obtaining the motion information of the ultrasound probe output by the backbone network, wherein the backbone network comprises a feature extraction sub-network and a temporal information processing sub-network connected to it.
Determining the probe spatial pose information corresponding to the ultrasound scanning image sequence from the motion information includes: taking the motion information of the ultrasound probe as the input of a spatial pose estimation module of the spatial pose estimation model, and obtaining the probe spatial pose information output by the module.
In this scheme, the ultrasound scanning image sequence and its corresponding inertial information serve as the input of the spatial pose estimation model, which outputs the probe spatial pose information. The model can both extract key features accurately and process the temporal information in the input data, which helps to improve the accuracy of the probe spatial pose information and hence the prediction accuracy of the ultrasound scanning integrity prediction method.
In an implementation of the first aspect, the method further includes: acquiring a sub-training image sequence that is contextually consistent with the original training image sequence, and sub-training inertial information that is contextually consistent with the original training inertial information; taking the original training image sequence and the original training inertial information as the input of the spatial pose estimation model, and obtaining first spatial pose information output by the model; taking the sub-training image sequence and the sub-training inertial information as the input of the model, and obtaining second spatial pose information output by the model; and training the spatial pose estimation model under a self-consistency constraint, the constraint requiring the difference between the first spatial pose information and the second spatial pose information to be smaller than a preset difference threshold.
In this scheme, a sub-training image sequence contextually consistent with the original training image sequence and sub-training inertial information contextually consistent with the original training inertial information are obtained; first spatial pose information is predicted from the original sequence and second spatial pose information from the sub-sequence, and a self-consistency constraint built from the two is applied during training. On the one hand, this greatly reduces the probability that differences in ultrasound frame rate, scanning method, scanning speed, and the like destabilize the model's estimates, improving the stability and accuracy of the integrity prediction. On the other hand, the sub-training sequence and its inertial information amount to data augmentation of the original training data, and training on the original and augmented data together further improves the prediction accuracy of the spatial pose estimation model, and hence of the scanning integrity prediction method.
In an implementation of the first aspect, acquiring the sub-training image sequence and the sub-training inertial information includes: performing a random-interval sampling operation and/or a flipping operation on the original training image sequence to obtain a sub-training image sequence that is contextually consistent with it; and acquiring the sub-training inertial information corresponding to the sub-training image sequence, the sub-training inertial information being contextually consistent with the original training inertial information.
In this scheme, a sub-sequence contextually consistent with the original sequence is obtained by random-interval sampling and/or flipping the original sequence. On the one hand, these operations increase the diversity of the training data and thus the generalization ability of the spatial pose estimation model; on the other hand, because a sub-sequence can be produced by simple sampling and/or flipping, sub-sequences are obtained efficiently, which improves the training efficiency of the model.
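As a concrete illustration, the random-interval sampling and flipping operations described above can be sketched as follows. This is a minimal sketch, not the patented implementation: the function name `make_subsequence` and the representation of frames and inertial samples as parallel Python lists are assumptions.

```python
import random

def make_subsequence(frames, imu, max_stride=3, allow_flip=True, rng=None):
    """Derive a sub-training sequence that stays contextually consistent
    with the original: sample frames at a random interval and optionally
    reverse the scan direction, keeping each frame paired with its IMU
    sample so the sub-training inertial information remains aligned."""
    assert len(frames) == len(imu)
    rng = rng or random.Random()
    stride = rng.randint(1, max_stride)        # random sampling interval
    idx = list(range(0, len(frames), stride))  # preserves temporal order
    if allow_flip and rng.random() < 0.5:
        idx.reverse()                          # flipping operation
    return [frames[i] for i in idx], [imu[i] for i in idx]
```

Because the sub-sequence is drawn from the same scan, a well-trained model should produce closely matching spatial pose estimates from both inputs, which is the intuition behind the self-consistency constraint.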
In an implementation of the first aspect, acquiring the motion information of the ultrasound probe includes: acquiring, from the ultrasound scanning image sequence and its corresponding inertial information, at least one of the coordinate position change, the attitude angle change, and the orientation information of the ultrasound probe.
In this scheme, at least one of the coordinate position change, the attitude angle change, and the orientation information of the ultrasound probe is obtained from the image sequence and its inertial information, allowing accurate localization of the probe; this improves the accuracy of the target object detection region determined in the subsequent steps, and hence the prediction accuracy of the method.
In an implementation of the first aspect, acquiring the motion information of the ultrasound probe includes: acquiring the motion information from a breast ultrasound scanning image sequence and its corresponding inertial information.
Determining the probe spatial pose information from the motion information includes: determining the probe spatial pose information corresponding to the breast ultrasound scanning image sequence.
Determining the target object detection region from the probe spatial pose information includes: determining a breast detection region.
Obtaining the ultrasound scanning integrity includes: obtaining the breast ultrasound scanning integrity from the proportion of the breast covered by the breast detection region.
In this scheme, the ultrasound scanning integrity prediction method is applied to a breast ultrasound scanning scenario, improving both the prediction accuracy and the prediction efficiency of breast scanning integrity.
In a second aspect, an embodiment of the present application provides an ultrasound scanning integrity prediction system, comprising an ultrasound probe, an inertial measurement device, and a host computer, wherein:
the ultrasound probe is electrically connected to the host computer and is configured to acquire an ultrasound scanning image sequence and send it to the host computer;
the inertial measurement device is mounted on the ultrasound probe, is electrically connected to the host computer, and is configured to acquire the inertial information corresponding to the ultrasound scanning image sequence and send it to the host computer;
the host computer is configured to acquire the motion information of the ultrasound probe from the ultrasound scanning image sequence and its corresponding inertial information; determine, from the motion information, the probe spatial pose information corresponding to the image sequence; determine the target object detection region from the probe spatial pose information; and obtain the ultrasound scanning integrity from the proportion of the target object covered by the detection region.
In a third aspect, an embodiment of the present application provides an ultrasound scanning integrity prediction device, comprising:
a motion information acquisition unit, configured to acquire the motion information of the ultrasound probe from the ultrasound scanning image sequence and its corresponding inertial information;
a spatial pose determination unit, configured to determine, from the motion information, the probe spatial pose information corresponding to the image sequence;
a detection region determination unit, configured to determine the target object detection region from the probe spatial pose information;
an ultrasound scanning integrity acquisition unit, configured to obtain the ultrasound scanning integrity from the proportion of the target object covered by the detection region.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: the device comprises a processor, a memory and a communication bus, wherein the processor and the memory complete communication with each other through the communication bus; the memory has stored therein computer program instructions executable by the processor which, when read and executed by the processor, perform the method of the first aspect or any one of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when read and executed by a processor, perform the method provided by the first aspect or any one of the possible implementations of the first aspect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
For a clearer illustration of the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the application and should not be regarded as limiting its scope; other related drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of an ultrasonic scanning integrity prediction method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a spatial pose estimation model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a breast disk model and a breast probing area according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of an ultrasound scanning integrity prediction system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an ultrasound scanning integrity prediction apparatus according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application. The following examples are only for more clearly illustrating the technical aspects of the present application, and thus are merely examples, and are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion.
In the description of embodiments of the present application, the technical terms "first," "second," and the like are used merely to distinguish between different objects and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated, a particular order or a primary or secondary relationship. In the description of the embodiments of the present application, the meaning of "plurality" is two or more unless explicitly defined otherwise.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In the description of the embodiments of the present application, the term "and/or" is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship.
Ultrasound scanning is widely used in the medical field, for example in chest and abdominal ultrasound diagnosis, obstetric and gynecological ultrasound diagnosis, and cardiovascular ultrasound diagnosis, and has the advantage of being safe and painless. In the era of precision medicine, ultrasound scanning is an important means of providing accurate diagnostic information to medical staff, forming an important basis for scenarios such as emergency treatment, surgical guidance, and evaluation of treatment effect; the integrity of an ultrasound scan determines whether it can provide such accurate diagnostic information.
At present, the related art mostly evaluates whether an ultrasound scan is complete by manual assessment: after the scan is finished, the operator judges its completeness according to his or her own experience. This evaluation depends on the operator's personal experience and subjective judgment, missed regions occur frequently, and the probability of missed scanning is therefore high.
Based on the above, an embodiment of the application provides an ultrasound scanning integrity prediction method that obtains the motion information of the ultrasound probe from the ultrasound scanning image sequence and its corresponding inertial information, derives the probe spatial pose information from the motion information, determines the target object detection region from the probe spatial pose information, and then determines the ultrasound scanning integrity from the proportion of the target object covered by the detection region. On the one hand, compared with approaches relying on personal experience and subjective judgment, the scheme uses the probe's spatial pose information to determine the region actually examined, so the integrity is determined objectively and the prediction accuracy is improved; on the other hand, the scheme has a high degree of automation, which helps to improve the prediction efficiency.
The ultrasound scanning integrity prediction method is described in detail below. Referring to fig. 1, an embodiment of the present application provides an ultrasound scanning integrity prediction method, which includes:
Step S110: acquiring motion information of an ultrasound probe from an ultrasound scanning image sequence and its corresponding inertial information;
Step S120: determining, from the motion information, probe spatial pose information corresponding to the ultrasound scanning image sequence;
Step S130: determining a target object detection region from the probe spatial pose information;
Step S140: obtaining the ultrasound scanning integrity from the proportion of the target object covered by the detection region.
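Step S140 reduces to a set-coverage computation once the target object is discretized. The sketch below assumes (this is not specified in the patent) that the target and each per-frame detection region are represented as sets of grid cells; the function name `scan_integrity` is made up for illustration.

```python
def scan_integrity(target_cells, detected_regions):
    """Integrity = proportion of the target object covered by the union
    of all per-frame detection regions (step S140)."""
    covered = set()
    for region in detected_regions:
        covered |= region & target_cells  # only cells inside the target count
    return len(covered) / len(target_cells)
```

An integrity value below a chosen threshold can then be reported to the operator as a likely missed region.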
Step S110 and step S120 will be described in detail first:
It is understood that the ultrasound scanning image sequence in step S110 refers to a series of images continuously acquired during an ultrasound scan and arranged in temporal order; the sequence may be acquired by an ultrasound probe. The inertial information in step S110, such as the triaxial position information, attitude angle information, and orientation information of the probe, may be acquired by an inertial measurement unit (IMU) mounted on the probe. The inertial information is synchronized with the image sequence, i.e. aligned with it in time: for any image in the sequence, the corresponding inertial information is that acquired by the inertial measurement device at the moment the image was captured.
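The time alignment between frames and IMU samples described above can be made concrete with a nearest-neighbour match on timestamps. This is a sketch under assumptions (sorted timestamp lists, a made-up function name `align_imu_to_frames`); a real system might interpolate between IMU samples instead.

```python
import bisect

def align_imu_to_frames(frame_ts, imu_ts, imu_samples):
    """Pair each ultrasound frame with the IMU sample closest in time,
    so the image sequence and its inertial information stay synchronized.
    Both timestamp lists must be sorted ascending."""
    aligned = []
    for t in frame_ts:
        i = bisect.bisect_left(imu_ts, t)
        if i == 0:
            j = 0
        elif i == len(imu_ts):
            j = len(imu_ts) - 1
        else:
            # pick the nearer of the two neighbouring IMU samples
            j = i if imu_ts[i] - t < t - imu_ts[i - 1] else i - 1
        aligned.append(imu_samples[j])
    return aligned
```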
The motion information acquired in step S110 is described below. As an optional embodiment, step S110 includes: acquiring, from the ultrasound scanning image sequence and its corresponding inertial information, at least one of the coordinate position change, the attitude angle change, and the orientation information of the ultrasound probe, for example at least one of the triaxial coordinate position change (Δx, Δy, Δz), the attitude angle change (Δroll, Δpitch, Δyaw), and the probe orientation.
In this scheme, at least one of the coordinate position change, the attitude angle change, and the orientation information of the ultrasound probe is obtained from the image sequence and its corresponding inertial information, allowing accurate localization of the probe; this improves the accuracy of the detection region determined in the subsequent steps, and hence the prediction accuracy of the method.
Specific embodiments of step S110 are described below:
Embodiment one: process the ultrasound scanning image sequence with image processing techniques such as optical flow or feature tracking to obtain first motion information; analyze the inertial information directly, for example by integration, to obtain second motion information; and then fuse the first and second motion information, for example by averaging, weighted averaging, or taking the median, to obtain more accurate motion information.
Embodiment two: process the ultrasound scanning image sequence with image processing techniques such as optical flow or feature tracking in combination with the inertial information, for example using the inertial information to assist the tracking of image feature points, thereby obtaining the motion information of the ultrasound probe.
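The fusion step of embodiment one (averaging or weighted averaging of the image-derived and IMU-derived estimates) can be sketched as below; the component-wise weighted average and the function name `fuse_motion` are illustrative assumptions, not the patent's exact formulation.

```python
def fuse_motion(image_motion, imu_motion, w_image=0.5):
    """Fuse first motion information (from optical flow / feature
    tracking) with second motion information (integrated from the IMU)
    by a component-wise weighted average, e.g. over (dx, dy, dz)."""
    w_imu = 1.0 - w_image
    return tuple(w_image * a + w_imu * b
                 for a, b in zip(image_motion, imu_motion))
```

With `w_image=0.5` this is a plain average; the weight could instead reflect the relative reliability of the two sources.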
It is understood that the probe spatial pose information in step S120 may include at least one of the probe position information, attitude angle information, and orientation information of the ultrasound probe within a target object model; these quantities were already described with the motion information and are not repeated here. The target object model may be two-dimensional, such as a two-dimensional breast disk model, or three-dimensional; correspondingly, the position, attitude angle, and orientation information is two-dimensional or three-dimensional data.
Two implementations of step S110 have been described above, but step S110 is not limited to them; an implementation based on a network model may also be used. The spatial pose estimation model adopted by the embodiment of the application is described below together with an optional implementation of steps S110 and S120:
As an optional embodiment, step S110 includes:
taking the ultrasound scanning image sequence and its corresponding inertial information as the input of a backbone network in a spatial pose estimation model, and obtaining the motion information of the ultrasound probe output by the backbone network, wherein the backbone network comprises a feature extraction sub-network and a temporal information processing sub-network connected to it.
Step S120 includes: taking the motion information of the ultrasound probe as the input of the spatial pose estimation module of the spatial pose estimation model, and obtaining the probe spatial pose information output by the module.
The spatial pose estimation model in the above embodiment is described below. Referring to fig. 2, the spatial pose estimation model may include:
A backbone network 210 for estimating motion information of the ultrasonic probe according to the ultrasonic scanning image sequence and the corresponding inertial information thereof;
the spatial pose estimation module 220 is configured to determine probe spatial pose information corresponding to the ultrasound scanning image sequence according to the motion information estimated by the backbone network 210.
The backbone network 210 includes:
a feature extraction sub-network 211 for feature extraction, which may adopt a feature extraction network such as the deep residual network ResNet;
a temporal information processing sub-network 212, connected to the feature extraction sub-network 211, for processing the temporal information in the ultrasound scanning image sequence and its corresponding inertial information and assisting the backbone network 210 in motion estimation through temporal context; it may be a network capable of processing sequential information, such as a long short-term memory network (LSTM).
The spatial attitude estimation model described above may use the mean absolute error (MAE) and the Pearson correlation loss to calculate the loss between the model-predicted probe spatial attitude information θ̂ and the real label θ. The loss function may be:

L = ||θ̂ − θ||₁ + (1 − Cov(θ̂, θ) / (σ(θ̂) · σ(θ)))

wherein Cov represents the covariance; σ represents the standard deviation; and ||·||₁ represents the L1 norm.
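A minimal numerical sketch of such a loss — an MAE term plus a Pearson-correlation term — assuming one-dimensional pose vectors and NumPy (the patent does not prescribe an implementation):

```python
import numpy as np

def pose_loss(pred, target):
    """MAE plus Pearson-correlation loss, as described above (sketch only)."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    mae = np.mean(np.abs(pred - target))        # mean absolute error term
    cov = np.cov(pred, target)[0, 1]            # Cov(pred, target)
    corr = cov / (np.std(pred, ddof=1) * np.std(target, ddof=1))
    return mae + (1.0 - corr)                   # perfect correlation -> second term vanishes
```

Note that the correlation term rewards the prediction for following the shape of the true pose trajectory, while the MAE term penalizes its absolute deviation.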
According to the scheme, the ultrasonic scanning image sequence and the corresponding inertial information are used as the input of the spatial attitude estimation model, so that the probe spatial attitude information output by the spatial attitude estimation model is obtained, the spatial attitude estimation model can extract key features accurately, process time sequence information in input data, and is beneficial to improving the accuracy of the probe spatial attitude information, and the prediction accuracy of the ultrasonic scanning integrity prediction method is improved.
In addition, when the spatial attitude estimation model is used to estimate the probe spatial attitude information, each image in the ultrasonic scanning image sequence and each piece of inertial information can be processed separately, so that probe spatial attitude information at every moment is obtained. For example, an ultrasonic scanning image sequence of N images can be divided into several image groups of equal or unequal size. Taking image groups of M consecutive images (M < N) as an example: for the images in each group and their corresponding inertial information, M groups of probe spatial attitude change amounts corresponding to each moment in the current group (which can be understood as the acquisition moment of each image) are obtained; the M groups of probe spatial attitude change amounts are then averaged, and the probe spatial attitude information corresponding to each moment is obtained from the averaged change amounts.
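The averaging-and-integration step above can be sketched as follows, assuming for simplicity that each pose is a scalar and that the per-moment change amounts from each group are already aligned (names and the scalar representation are illustrative):

```python
def poses_from_group_deltas(group_deltas, start_pose=0.0):
    """Average several groups of per-moment pose change amounts, then
    accumulate the averaged changes into a pose for each moment.
    group_deltas: list of M lists, each holding per-moment pose deltas
    estimated from one image group (scalar poses for illustration)."""
    n = len(group_deltas[0])
    avg = [sum(g[i] for g in group_deltas) / len(group_deltas) for i in range(n)]
    poses, pose = [], start_pose
    for d in avg:  # integrate averaged deltas into per-moment poses
        pose += d
        poses.append(pose)
    return poses
```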
In the practical application process of the spatial pose estimation model, significant differences exist among scanned image sequences due to differences in frame rate, scanning method and scanning speed of ultrasonic scanning, and the differences further influence the stability of model estimation. Specifically:
(1) Frame rate differences: the frame rate determines the number of images acquired per unit time. If the frame rates differ, the number of images obtained in the same time period will differ, resulting in differences in the length and detail richness of the image sequences; this affects how easily the model finds stable patterns or features, and thus the accuracy and stability of the model estimation.
(2) Scanning method difference: different scanning methods (e.g., linear scanning, sector scanning, etc.) may produce different image sequences. These methods have specific requirements on the manner of operation, angle and speed of the ultrasound probe, and therefore result in differences in image characteristics and structure. These differences may cause confusion or errors in the model when processing image sequences of different scanning methods, thereby affecting the stability of the model estimation.
(3) Scanning speed difference: the speed of scanning determines the speed of movement of the probe during scanning. If the scanning speed is too high, details in the image sequence may be lost or blurred; if the scanning speed is too slow, although more details may be obtained, the scanning time and discomfort to the user may be increased. Such differences may lead to variations in the quality and characteristics of the image sequence, which in turn affect the stability of the model estimation.
In summary, the differences of the frame rate of the ultrasonic scanning, the scanning method, the scanning speed and the like may cause inconsistency between image sequences, and the inconsistency may interfere with the estimation process of the model, so that the model is difficult to extract stable and effective features from the image sequences, thereby causing instability of model estimation. Therefore, in order to improve stability and accuracy of model estimation, the embodiment of the application provides the following schemes:
As an optional implementation manner of the ultrasound scanning integrity prediction method, the ultrasound scanning integrity prediction method further includes: acquiring a sub-training image sequence having contextual consistency with the original training image sequence and sub-training inertial information having contextual consistency with the original training inertial information; taking the original training image sequence and the original training inertial information as the input of the spatial attitude estimation model, and acquiring first spatial attitude information output by the spatial attitude estimation model; taking the sub-training image sequence and the sub-training inertial information as the input of the spatial attitude estimation model, and acquiring second spatial attitude information output by the spatial attitude estimation model; and training the spatial attitude estimation model in consideration of the self-consistency constraint, which is used for constraining the difference between the first spatial attitude information and the second spatial attitude information to be smaller than a preset difference threshold. An example of this embodiment is as follows:
Acquiring a sub-training image sequence I_sub′ having contextual consistency with the original training image sequence I′, and sub-training inertial information U_sub′ having contextual consistency with the original training inertial information U′;
Taking the original training image sequence I′ and the original training inertial information U′ as the input of the spatial attitude estimation model, and acquiring the first spatial attitude information θ′ output by the spatial attitude estimation model;
Taking the sub-training image sequence I_sub′ and the sub-training inertial information U_sub′ as the input of the spatial attitude estimation model, and acquiring the second spatial attitude information θ_sub′ output by the spatial attitude estimation model;
Training the spatial attitude estimation model taking into account the self-consistency constraint, which may be:

L_consistency = ||θ′ − θ_sub′||₁

A preset difference threshold ε may be set for the self-consistency constraint, such that L_consistency < ε.
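A minimal sketch of this constraint, assuming pose estimates represented as arrays (the function names and the threshold value are illustrative, not prescribed by the patent):

```python
import numpy as np

EPSILON = 0.1  # illustrative preset difference threshold

def consistency_loss(theta_full, theta_sub):
    # L1 distance between poses estimated from the full sequence and its subsequence
    return float(np.sum(np.abs(np.asarray(theta_full) - np.asarray(theta_sub))))

def satisfies_constraint(theta_full, theta_sub, eps=EPSILON):
    # the constraint asks that the two estimates differ by less than eps
    return consistency_loss(theta_full, theta_sub) < eps
```

In training, `consistency_loss` would typically be added to the pose loss as a regularization term rather than checked as a hard condition.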
It is understood that the context consistency refers to at least one of spatial consistency, temporal consistency, and semantic consistency, wherein:
Spatial consistency: the images in the subsequence should be consistent with the images in the original sequence in terms of anatomy, organ position, and features. For example, if the original sequence shows a continuous change of an organ from a starting position to an ending position, the generated subsequence should also reflect this continuous change, rather than a jump or break.
Temporal consistency: the images and inertial information in the subsequence should be arranged in the same temporal order as in the original sequence. This means that the generated subsequence cannot randomly shuffle the temporal order of the original sequence, ensuring temporal consistency.
Semantic consistency: the images in the sub-sequence should contain semantic information similar to the original sequence. For example, if the original sequence is used to detect a particular structure or lesion, then the generated subsequence should also contain sufficient information to support this detection task, rather than irrelevant or misleading information.
In the proposed sequence self-consistency strategy, the subsequence is generated by sampling the original sequence at random intervals and flipping it, which ensures that the generated subsequence maintains the contextual consistency of the original sequence to a certain extent. This strategy helps the model better learn and understand the inherent structure and patterns of the data, thereby improving the stability and accuracy of model estimation. Meanwhile, by comparing the estimated parameters of the processed subsequence with those of the original sequence, the estimation process of the model can be further constrained, improving the robustness of the model to differences and noise.
In the implementation process of the above scheme, a sub-training image sequence having contextual consistency with the original training image sequence and sub-training inertial information having contextual consistency with the original training inertial information are obtained; first spatial attitude information is obtained from the original training image sequence and the original training inertial information, and second spatial attitude information is obtained from the sub-training image sequence and the sub-training inertial information; the self-consistency constraint is then constructed from the first and second spatial attitude information, and the model is trained taking this constraint into account. On the one hand, this greatly reduces the probability of unstable model estimation caused by differences in ultrasonic scanning frame rate, scanning method, scanning speed and the like, improving the prediction stability and prediction accuracy of the ultrasonic scanning integrity prediction method; on the other hand, the sub-training image sequence and the sub-training inertial information realize data augmentation of the original training image sequence and the original training inertial information, and training the spatial attitude estimation model jointly on the original training data and the sub-training data further improves the prediction accuracy of the spatial attitude estimation model, and hence of the scanning integrity prediction method.
The scheme for acquiring the sub-training image sequence and the sub-training inertia information is described as follows:
As an optional implementation manner of the ultrasound scanning integrity prediction method, the acquiring of the sub-training image sequence having contextual consistency with the original training image sequence and the sub-training inertial information having contextual consistency with the original training inertial information includes: performing a random interval sampling operation and/or a flipping operation on the original training image sequence to obtain a sub-training image sequence having contextual consistency with the original training image sequence; and acquiring sub-training inertial information corresponding to the sub-training image sequence, wherein the sub-training inertial information has contextual consistency with the original training inertial information.
It will be appreciated that in ultrasound scanning, due to the nature of continuous scanning, there is often a strong contextual relationship between adjacent image frames or adjacent pieces of inertial information. Even when random interval sampling and flipping are performed, a certain continuity can be maintained in the subsequence as long as appropriate sampling intervals and an appropriate flipping manner are used, thereby maintaining contextual consistency with the original sequence. Thus, the random interval sampling operation and the flipping operation described above are not arbitrary operations; their purpose is to obtain a subsequence having contextual consistency with the original sequence, that is: the original training image sequence is processed according to a certain sampling method and/or a certain flipping strategy, so as to obtain a sub-training image sequence having contextual consistency with the original training image sequence.
In the implementation process of this scheme, a subsequence having contextual consistency with the original sequence can be obtained by performing random interval sampling and/or flipping operations on the original sequence. On the one hand, random interval sampling and/or flipping improves the diversity of the training data, thereby improving the generalization capability of the spatial attitude estimation model; on the other hand, since the subsequence can be obtained simply through random interval sampling and/or flipping operations, the acquisition efficiency of the subsequence is improved, which in turn improves the training efficiency of the spatial attitude estimation model.
The above scheme only introduces the approach of first acquiring the sub-training image sequence and then acquiring the sub-training inertial information corresponding to it. It can be understood that the sub-training inertial information may instead be determined first, by performing random interval sampling and/or flipping operations on the original training inertial information, with the corresponding sub-training image sequence determined afterwards.
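By way of illustration, random interval sampling plus an optional temporal flip, applied jointly to the image and inertial sequences so that they stay aligned, might look like this (all names, parameters, and the uniform-stride simplification are assumptions for the sketch):

```python
import random

def make_subsequence(frames, imu, min_stride=1, max_stride=3, flip_p=0.5, seed=None):
    """Sketch: sample the paired sequences at a random stride and optionally
    reverse them, keeping images and inertial samples aligned throughout."""
    rng = random.Random(seed)
    stride = rng.randint(min_stride, max_stride)     # random sampling interval
    sub_frames, sub_imu = frames[::stride], imu[::stride]
    if rng.random() < flip_p:                        # temporal flip keeps context,
        sub_frames, sub_imu = sub_frames[::-1], sub_imu[::-1]  # only reverses order
    return sub_frames, sub_imu
```

Because both sequences are sampled with the same stride and flipped together, the pairing of each image with its inertial sample is preserved, which is what keeps the subsequence contextually consistent.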
Of course, in addition to employing a random interval sampling operation and a flipping operation to obtain a subsequence having contextual consistency with the original sequence, the subsequence may also be obtained as follows:
(1) Sliding window sampling: setting a sliding window with fixed size or changing according to a certain rule, continuously sliding the window on the original sequence or sliding the window according to a certain step length, and collecting data in the window each time as a subsequence;
(2) Sampling based on feature selection: by analyzing the characteristics of the original sequence, selecting key frames or data segments which can represent the main information or context of the original sequence as subsequences;
(3) Data enhancement techniques: more sub-sequences are generated using data enhancement techniques, such as performing minor rotation, scaling or translation operations on the original sequence to simulate data changes at different perspectives.
(4) Machine learning based sampling: the original sequence is learned and analyzed using a machine learning algorithm (e.g., clustering, classification, or self-encoder) to generate sub-sequences based on the learned data distribution or feature representation.
It will be appreciated that each of the above operations should aim to produce a subsequence having contextual consistency with the original sequence.
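As one concrete illustration of alternative (1), sliding-window sampling over a sequence can be sketched as follows (the window size and step are arbitrary example values):

```python
def sliding_windows(seq, window, step):
    # collect fixed-size windows over the original sequence as subsequences;
    # each window preserves the original order, hence contextual consistency
    return [seq[i:i + window] for i in range(0, len(seq) - window + 1, step)]
```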
The following describes the above steps S130 and S140 in detail:
It can be understood that, after the probe spatial attitude information is determined, a target object detection area can be determined in the target object model according to the probe spatial attitude information. Specifically: the target object detection areas corresponding to all images in the ultrasonic scanning image sequence are determined, and the ultrasonic scanning integrity is then obtained by determining the area or volume proportion of the target object detection area within the target object.
The following describes the above step S130 and step S140 by taking a specific application scenario as an example:
The ultrasonic scanning integrity prediction method can be applied to a breast ultrasonic scanning scene, and in this case the ultrasonic scanning integrity prediction method can comprise the following steps:
Step S110: acquiring motion information of the ultrasonic probe according to the breast ultrasonic scanning image sequence and the corresponding inertial information;
Step S120: determining probe spatial attitude information corresponding to the breast ultrasonic scanning image sequence according to the motion information;
Step S130: determining a breast detection area according to the probe spatial attitude information;
Step S140: acquiring the breast ultrasonic scanning integrity according to the proportion of the breast detection area within the breast.
Referring to fig. 3, in the above application scenario, the target object model may be a breast disc model, and the colored portion in fig. 3 is the breast detection area. Step S140 may determine the integrity of the breast ultrasound scan by calculating the ratio of the area of the breast detection area to the whole area of the breast disc model, which may be:

Integrity = A_scan / A_breast

wherein A_scan is the area of the breast detection area, and A_breast is the overall area of the breast disc model.
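A simplified sketch of the area-ratio computation, assuming the target model is discretised into grid cells and the probed cells are tracked as a set (the discretisation and the names are assumptions for illustration, not the patent's stated method):

```python
def scan_integrity(covered_cells, total_cells):
    """Ratio of probed area to the whole target area on a discretised model grid,
    a stand-in for Integrity = A_scan / A_breast."""
    if not total_cells:
        return 0.0
    # intersect so that probe positions outside the model do not inflate the ratio
    return len(covered_cells & total_cells) / len(total_cells)
```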
In addition, it will be appreciated that when the object model is a two-dimensional model, the probe spatial pose information may include two-axis coordinate information of the probe, i.e., x-axis and y-axis coordinate information shown in fig. 3.
According to the scheme, the ultrasonic scanning integrity prediction method can be applied to a breast ultrasonic scanning scene, thereby improving the prediction accuracy and the prediction efficiency of the breast ultrasonic scanning integrity.
Referring to fig. 4, based on the same inventive concept, an ultrasound scanning integrity prediction system 300 is further provided in an embodiment of the present application, where the system includes: an ultrasonic probe 310, an inertial measurement device 320, and an upper computer 330, wherein:
the ultrasonic probe 310 is electrically connected with the upper computer 330, and is used for acquiring an ultrasonic scanning image sequence and transmitting the ultrasonic scanning image sequence to the upper computer 330;
the inertial measurement device 320 is installed on the ultrasonic probe 310, electrically connected with the upper computer 330, and used for acquiring inertial information corresponding to the ultrasonic scanning image sequence and transmitting the inertial information to the upper computer 330;
The upper computer 330 is configured to obtain motion information of the ultrasound probe according to the ultrasound scanning image sequence and the corresponding inertial information thereof; determining probe space posture information corresponding to the ultrasonic scanning image sequence according to the motion information; determining a target object detection area according to the spatial attitude information of the probe; and acquiring the ultrasonic scanning integrity according to the duty ratio of the target object detection area in the target object.
It will be appreciated that the inertial measurement device 320 may use an accelerometer to obtain the amount of change in the coordinate position of the ultrasonic probe 310, a gyroscope to obtain the amount of change in the attitude angle of the ultrasonic probe 310, and a magnetometer to obtain the orientation information of the ultrasonic probe 310.
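For illustration, one inertial sample combining the three sensors described above could be grouped as follows (the field names and tuple layout are assumptions, not the device's actual data format):

```python
from dataclasses import dataclass

@dataclass
class InertialSample:
    """One reading of the inertial measurement device (layout is illustrative)."""
    accel: tuple  # accelerometer: coordinate-position change of the probe
    gyro: tuple   # gyroscope: attitude-angle change of the probe
    mag: tuple    # magnetometer: orientation information of the probe

    def as_vector(self):
        # flatten to the per-moment inertial vector paired with each image frame
        return list(self.accel) + list(self.gyro) + list(self.mag)
```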
As an optional implementation manner of the ultrasound scanning integrity prediction system, the obtaining, by the upper computer 330, motion information of the ultrasound probe according to the ultrasound scanning image sequence and corresponding inertial information thereof includes:
taking the ultrasonic scanning image sequence and the corresponding inertial information thereof as the input of a backbone network in a spatial attitude estimation model, and acquiring the motion information of the ultrasonic probe output by the backbone network; the backbone network comprises a feature extraction sub-network and a temporal information processing sub-network connected with the feature extraction sub-network;
The determining the probe space posture information corresponding to the ultrasonic scanning image sequence according to the motion information comprises the following steps:
And taking the motion information of the ultrasonic probe as the input of the spatial attitude estimation module of the spatial attitude estimation model, to acquire the probe spatial attitude information output by the spatial attitude estimation module.
As an alternative embodiment of the ultrasound scanning integrity prediction system, the upper computer 330 is further configured to:
acquiring a sub-training image sequence with contextual consistency with the original training image sequence and sub-training inertial information with contextual consistency with the original training inertial information;
Taking the original training image sequence and the original training inertia information as the input of the spatial attitude estimation model, and acquiring first spatial attitude information output by the spatial attitude estimation model;
Taking the sub-training image sequence and the sub-training inertia information as the input of the spatial attitude estimation model, and acquiring second spatial attitude information output by the spatial attitude estimation model;
Training the spatial attitude estimation model in consideration of the self-consistency constraint; the self-consistency constraint is used for constraining the difference between the first spatial attitude information and the second spatial attitude information to be smaller than a preset difference threshold.
As an optional implementation manner of the ultrasound scanning integrity prediction system, the obtaining, by the upper computer 330, a sub-training image sequence having a contextual consistency with an original training image sequence and sub-training inertia information having a contextual consistency with the original training inertia information includes:
Performing a random interval sampling operation and/or a flipping operation on the original training image sequence to obtain a sub-training image sequence having contextual consistency with the original training image sequence;
Acquiring sub-training inertia information corresponding to the sub-training image sequence; wherein the sub-training inertial information has contextual consistency with the original training inertial information.
As an optional implementation manner of the ultrasound scanning integrity prediction system, the obtaining, by the upper computer 330, motion information of the ultrasound probe according to the ultrasound scanning image sequence and corresponding inertial information thereof includes:
And acquiring at least one piece of motion information of coordinate position variation, attitude angle variation and orientation information of the ultrasonic probe according to the ultrasonic scanning image sequence and the corresponding inertial information thereof.
As an optional implementation manner of the ultrasound scanning integrity prediction system, the obtaining, by the upper computer 330, motion information of the ultrasound probe according to the ultrasound scanning image sequence and corresponding inertial information thereof includes:
acquiring motion information of an ultrasonic probe according to the breast ultrasonic scanning image sequence and the corresponding inertial information;
The determining the probe space posture information corresponding to the ultrasonic scanning image sequence according to the motion information comprises the following steps:
according to the motion information, determining probe space posture information corresponding to the breast ultrasonic scanning image sequence;
the determining the target object detection area according to the probe space attitude information comprises the following steps:
Determining a mammary gland exploration area according to the spatial posture information of the probe;
the obtaining the ultrasonic scanning integrity according to the duty ratio of the target object exploration area in the target object comprises the following steps:
and acquiring the integrity of the ultrasonic scanning of the mammary gland according to the duty ratio of the mammary gland exploration area in the mammary gland.
Referring to fig. 5, based on the same inventive concept, an ultrasound scanning integrity prediction apparatus 400 is further provided in an embodiment of the present application, where the apparatus includes:
A motion information obtaining unit 410, configured to obtain motion information of the ultrasound probe according to the ultrasound scanning image sequence and the corresponding inertial information thereof;
a spatial pose determining unit 420, configured to determine probe spatial pose information corresponding to the ultrasound scanning image sequence according to the motion information;
A target object detection area determining unit 430, configured to determine a target object detection area according to the spatial pose information of the probe;
The ultrasonic scanning integrity obtaining unit 440 is configured to obtain the ultrasonic scanning integrity according to the proportion of the target object detection area within the target object.
As an optional embodiment of the ultrasound scanning integrity prediction apparatus, the motion information obtaining unit 410 is specifically configured to:
Taking the ultrasonic scanning image sequence and the corresponding inertial information thereof as the input of a backbone network in a spatial attitude estimation model, and acquiring the motion information of the ultrasonic probe output by the backbone network; the backbone network comprises a feature extraction sub-network and a temporal information processing sub-network connected with the feature extraction sub-network.
The above-described spatial pose determination unit 420 specifically functions to:
And taking the motion information of the ultrasonic probe as the input of the spatial attitude estimation module of the spatial attitude estimation model, to acquire the probe spatial attitude information output by the spatial attitude estimation module.
As an alternative embodiment of the ultrasound scanning integrity prediction apparatus, the ultrasound scanning integrity prediction apparatus 400 further includes:
the model training unit is used for acquiring a sub-training image sequence having contextual consistency with the original training image sequence and sub-training inertial information having contextual consistency with the original training inertial information; taking the original training image sequence and the original training inertial information as the input of the spatial attitude estimation model, and acquiring the first spatial attitude information output by the spatial attitude estimation model; taking the sub-training image sequence and the sub-training inertial information as the input of the spatial attitude estimation model, and acquiring the second spatial attitude information output by the spatial attitude estimation model; and training the spatial attitude estimation model in consideration of the self-consistency constraint, wherein the self-consistency constraint is used for constraining the difference between the first spatial attitude information and the second spatial attitude information to be smaller than a preset difference threshold.
As an optional implementation manner of the ultrasound scanning integrity prediction apparatus, the model training unit is specifically configured to:
Performing a random interval sampling operation and/or a flipping operation on the original training image sequence to obtain a sub-training image sequence having contextual consistency with the original training image sequence; and acquiring sub-training inertial information corresponding to the sub-training image sequence, wherein the sub-training inertial information has contextual consistency with the original training inertial information.
As an optional embodiment of the ultrasound scanning integrity prediction apparatus, the motion information obtaining unit 410 is specifically configured to:
And acquiring at least one piece of motion information of coordinate position variation, attitude angle variation and orientation information of the ultrasonic probe according to the ultrasonic scanning image sequence and the corresponding inertial information thereof.
As an optional embodiment of the ultrasound scanning integrity prediction apparatus, the motion information obtaining unit 410 is specifically configured to: acquiring motion information of an ultrasonic probe according to the breast ultrasonic scanning image sequence and the corresponding inertial information;
The above-described spatial pose determination unit 420 specifically functions to: according to the motion information, determining probe space posture information corresponding to the breast ultrasonic scanning image sequence;
the object exploration area determination unit 430 specifically is configured to: determining a mammary gland exploration area according to the spatial posture information of the probe;
the ultrasound scanning integrity acquisition unit 440 is specifically configured to: and acquiring the integrity of the ultrasonic scanning of the mammary gland according to the duty ratio of the mammary gland exploration area in the mammary gland.
Fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application. Referring to fig. 6, the electronic device 500 includes: processor 510, memory 520, and communication interface 530, which are interconnected and communicate with each other by a communication bus 540 and/or other forms of connection mechanisms (not shown).
The memory 520 includes one or more memories (only one is shown in the figure), which may be, but are not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like. The processor 510 and other possible components may access the memory 520, and read and/or write data therein.
The processor 510 includes one or more processors (only one is shown), which may be an integrated circuit chip having signal processing capabilities. The processor 510 may be a general-purpose processor, including a central processing unit (CPU), a micro control unit (MCU), a network processor (NP), or other conventional processor; it may also be a special-purpose processor, including a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The communication interface 530 includes one or more interfaces (only one is shown) that may be used to communicate directly or indirectly with other devices for data interaction. For example, the communication interface 530 may be an Ethernet interface; it may be a mobile communication network interface, such as an interface of a 3G, 4G, or 5G network; or it may be another type of interface with data transceiving functionality.
One or more computer program instructions may be stored in memory 520 that may be read and executed by processor 510 to implement the ultrasound scanning integrity prediction method provided by embodiments of the present application, as well as other desired functions.
It is to be understood that the configuration shown in fig. 6 is illustrative only, and that electronic device 500 may also include more or fewer components than shown in fig. 6, or have a different configuration than shown in fig. 6. The components shown in fig. 6 may be implemented in hardware, software, or a combination thereof. For example, the electronic device 500 may be a single server (or other device with computing capabilities), a combination of multiple servers, a cluster of a large number of servers, etc., and may be either a physical device or a virtual device.
The embodiment of the present application also provides a computer-readable storage medium storing computer program instructions which, when read and executed by a processor of a computer, perform the ultrasound scan integrity prediction method provided by the embodiments of the present application. For example, the computer-readable storage medium may be implemented as the memory 520 in the electronic device 500 in fig. 6.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative: for example, the division of the units is merely a logical functional division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other form.
Further, the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, the functional modules in the various embodiments of the present application may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. An ultrasound scan integrity prediction method, the method comprising:
acquiring motion information of an ultrasonic probe according to an ultrasound scan image sequence and its corresponding inertial information;
determining probe spatial attitude information corresponding to the ultrasound scan image sequence according to the motion information;
determining a target object detection area according to the probe spatial attitude information;
and acquiring the ultrasound scan integrity according to the proportion of the target object detection area within the target object.
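The four claimed steps amount to estimating which part of the target the probe has swept and dividing that swept area by the target's total area. A minimal sketch of the final step, with hypothetical names and boolean occupancy grids assumed as the area representation (the claim does not mandate any particular encoding):

```python
import numpy as np

def scan_integrity(detected_mask: np.ndarray, target_mask: np.ndarray) -> float:
    """Integrity = proportion of the target region covered by the probe's
    detection footprint; both regions are given as boolean occupancy grids."""
    covered = np.logical_and(detected_mask, target_mask).sum()
    total = target_mask.sum()
    return float(covered) / float(total) if total else 0.0

# Toy 4x4 target with half of its cells swept by the probe.
target = np.zeros((4, 4), dtype=bool)
target[1:3, 0:4] = True          # 8 target cells
detected = np.zeros((4, 4), dtype=bool)
detected[1:3, 0:2] = True        # 4 of them covered
print(scan_integrity(detected, target))  # -> 0.5
```

In practice the detection footprint would be accumulated frame by frame from the probe spatial attitude information of the preceding steps; the grid here stands in for whatever spatial representation an implementation chooses.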
2. The ultrasound scan integrity prediction method of claim 1, wherein the acquiring motion information of the ultrasonic probe according to the ultrasound scan image sequence and its corresponding inertial information comprises:
taking the ultrasound scan image sequence and its corresponding inertial information as the input of a backbone network in a spatial attitude estimation model, and acquiring the motion information of the ultrasonic probe output by the backbone network; the backbone network comprises a feature extraction sub-network and a temporal information processing sub-network connected with the feature extraction sub-network;
the determining the probe spatial attitude information corresponding to the ultrasound scan image sequence according to the motion information comprises:
taking the motion information of the ultrasonic probe as the input of a spatial attitude estimation module of the spatial attitude estimation model, and acquiring the probe spatial attitude information output by the spatial attitude estimation module.
3. The ultrasound scan integrity prediction method of claim 2, further comprising:
acquiring a sub-training image sequence having contextual consistency with an original training image sequence, and sub-training inertial information having contextual consistency with original training inertial information;
taking the original training image sequence and the original training inertial information as the input of the spatial attitude estimation model, and acquiring first spatial attitude information output by the spatial attitude estimation model;
taking the sub-training image sequence and the sub-training inertial information as the input of the spatial attitude estimation model, and acquiring second spatial attitude information output by the spatial attitude estimation model;
and training the spatial attitude estimation model under a self-consistency constraint, wherein the self-consistency constraint constrains the difference between the first spatial attitude information and the second spatial attitude information to be smaller than a preset difference threshold.
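The self-consistency constraint of claim 3 can be expressed as a loss term that is zero while the two attitude estimates differ by less than the preset threshold and grows once they diverge. A hypothetical NumPy sketch (the patent does not specify the loss form; a thresholded hinge on the element-wise difference is assumed here):

```python
import numpy as np

def self_consistency_loss(pose_full, pose_sub, threshold: float = 0.1) -> float:
    """Penalize disagreement between the attitude estimated from the full
    sequence and from its sub-sequence, but only beyond the preset threshold."""
    diff = np.abs(np.asarray(pose_full, dtype=float)
                  - np.asarray(pose_sub, dtype=float))
    # Hinge: differences inside the tolerance band contribute nothing.
    return float(np.maximum(diff - threshold, 0.0).mean())
```

During training this term would be added to the model's main regression loss, nudging the two forward passes toward agreeing within the tolerance band rather than forcing exact equality.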
4. The ultrasound scan integrity prediction method of claim 3, wherein the acquiring a sub-training image sequence having contextual consistency with an original training image sequence, and sub-training inertial information having contextual consistency with original training inertial information comprises:
performing a random interval sampling operation and/or a flipping operation on the original training image sequence to obtain a sub-training image sequence having contextual consistency with the original training image sequence;
and acquiring sub-training inertial information corresponding to the sub-training image sequence, wherein the sub-training inertial information has contextual consistency with the original training inertial information.
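The augmentation of claim 4 can be sketched as follows. The function name, stride range, and flip probability are illustrative assumptions; the property the claim actually requires is that the inertial samples are taken at the same indices as the image frames, so the two modalities stay aligned:

```python
import random

def make_sub_sequence(frames, inertial, max_stride: int = 3,
                      flip_p: float = 0.5, rng=None):
    """Random interval sampling (stride >= 1) plus an optional temporal flip
    yield a sub-sequence that remains contextually consistent with the
    original; inertial data is indexed identically to preserve alignment."""
    rng = rng or random.Random()
    stride = rng.randint(1, max_stride)          # random sampling interval
    idx = list(range(0, len(frames), stride))
    if rng.random() < flip_p:                    # optional flip operation
        idx.reverse()
    return [frames[i] for i in idx], [inertial[i] for i in idx]
```

Because the sub-sequence is drawn from the same physical sweep, the model's attitude estimates on the original and the sub-sequence should agree, which is exactly what the self-consistency constraint of claim 3 enforces.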
5. The ultrasound scan integrity prediction method according to any one of claims 1 to 4, wherein the acquiring motion information of the ultrasonic probe according to the ultrasound scan image sequence and its corresponding inertial information comprises:
acquiring, according to the ultrasound scan image sequence and its corresponding inertial information, at least one of the following items of motion information of the ultrasonic probe: coordinate position variation, attitude angle variation, and orientation information.
6. The ultrasound scan integrity prediction method according to any one of claims 1 to 4, wherein the acquiring motion information of the ultrasonic probe according to the ultrasound scan image sequence and its corresponding inertial information comprises:
acquiring motion information of the ultrasonic probe according to a breast ultrasound scan image sequence and its corresponding inertial information;
the determining the probe spatial attitude information corresponding to the ultrasound scan image sequence according to the motion information comprises:
determining probe spatial attitude information corresponding to the breast ultrasound scan image sequence according to the motion information;
the determining the target object detection area according to the probe spatial attitude information comprises:
determining a breast detection area according to the probe spatial attitude information;
and the acquiring the ultrasound scan integrity according to the proportion of the target object detection area within the target object comprises:
acquiring the breast ultrasound scan integrity according to the proportion of the breast detection area within the breast.
7. An ultrasound scan integrity prediction system, the system comprising an ultrasonic probe, an inertial measurement device, and an upper computer, wherein:
the ultrasonic probe is electrically connected with the upper computer and is used for acquiring an ultrasound scan image sequence and sending the ultrasound scan image sequence to the upper computer;
the inertial measurement device is arranged on the ultrasonic probe, is electrically connected with the upper computer, and is used for acquiring inertial information corresponding to the ultrasound scan image sequence and sending the inertial information to the upper computer;
and the upper computer is used for acquiring motion information of the ultrasonic probe according to the ultrasound scan image sequence and its corresponding inertial information; determining probe spatial attitude information corresponding to the ultrasound scan image sequence according to the motion information; determining a target object detection area according to the probe spatial attitude information; and acquiring the ultrasound scan integrity according to the proportion of the target object detection area within the target object.
8. An ultrasound scan integrity prediction apparatus, the apparatus comprising:
a motion information acquisition unit, used for acquiring motion information of an ultrasonic probe according to an ultrasound scan image sequence and its corresponding inertial information;
a spatial attitude determination unit, used for determining probe spatial attitude information corresponding to the ultrasound scan image sequence according to the motion information;
a target object detection area determination unit, used for determining a target object detection area according to the probe spatial attitude information;
and an ultrasound scan integrity acquisition unit, used for acquiring the ultrasound scan integrity according to the proportion of the target object detection area within the target object.
9. An electronic device, comprising a processor, a memory, and a communication bus, wherein the processor and the memory communicate with each other through the communication bus; the memory stores program instructions executable by the processor, and the processor invokes the program instructions to perform the method of any one of claims 1-6.
10. A computer readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 6.
CN202410416939.XA 2024-04-08 2024-04-08 Ultrasonic scanning integrity prediction method, system and device Pending CN118285835A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410416939.XA CN118285835A (en) 2024-04-08 2024-04-08 Ultrasonic scanning integrity prediction method, system and device


Publications (1)

Publication Number Publication Date
CN118285835A true CN118285835A (en) 2024-07-05

Family

ID=91680597


Country Status (1)

Country Link
CN (1) CN118285835A (en)

Similar Documents

Publication Publication Date Title
Droste et al. Automatic probe movement guidance for freehand obstetric ultrasound
US20210192758A1 (en) Image processing method and apparatus, electronic device, and computer readable storage medium
US11074732B2 (en) Computer-aided diagnostic apparatus and method based on diagnostic intention of user
CN110599421B (en) Model training method, video fuzzy frame conversion method, device and storage medium
WO2010052929A1 (en) Image processing apparatus, image processing method, program, and program recording medium
JP2019530490A (en) Computer-aided detection using multiple images from different views of the region of interest to improve detection accuracy
CN110400298B (en) Method, device, equipment and medium for detecting heart clinical index
KR20160032586A (en) Computer aided diagnosis apparatus and method based on size model of region of interest
US10178941B2 (en) Image processing apparatus, image processing method, and computer-readable recording device
US20120289833A1 (en) Image processing device, image processing method, program, recording medium, image processing system, and probe
CN112149615B (en) Face living body detection method, device, medium and electronic equipment
CN112435341B (en) Training method and device for three-dimensional reconstruction network, and three-dimensional reconstruction method and device
JP7296171B2 (en) Data processing method, apparatus, equipment and storage medium
CN111091127A (en) Image detection method, network model training method and related device
WO2020027228A1 (en) Diagnostic support system and diagnostic support method
US20150223901A1 (en) Method and system for displaying a timing signal for surgical instrument insertion in surgical procedures
Patra et al. Multi-anatomy localization in fetal echocardiography videos
JP7266599B2 (en) Devices, systems and methods for sensing patient body movement
CN116324897A (en) Method and system for reconstructing a three-dimensional surface of a tubular organ
CN118285835A (en) Ultrasonic scanning integrity prediction method, system and device
CN112488982A (en) Ultrasonic image detection method and device
Luo et al. Externally navigated bronchoscopy using 2-D motion sensors: Dynamic phantom validation
CN112885435A (en) Method, device and system for determining image target area
JP2024140988A (en) Information processing device, information processing method, and information processing program
CN116077093A (en) Early myocardial infarction detection device, method and medium based on echocardiography

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination