EP4008261A1 - Data processing method, apparatus, device and storage medium - Google Patents

Data processing method, apparatus, device and storage medium

Info

Publication number
EP4008261A1
Authority
EP
European Patent Office
Prior art keywords
image
reconstruction
target
result data
processing
Prior art date
Legal status
Pending
Application number
EP20846593.0A
Other languages
German (de)
English (en)
Other versions
EP4008261A4 (fr)
Inventor
Qiong HE
Xiaochen Xu
Jinhua Shao
Jin Sun
Houli Duan
Current Assignee
Wuxi Hisky Medical Technologies Co Ltd
Original Assignee
Wuxi Hisky Medical Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuxi Hisky Medical Technologies Co Ltd filed Critical Wuxi Hisky Medical Technologies Co Ltd
Publication of EP4008261A1
Publication of EP4008261A4

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/48 Diagnostic techniques
    • A61B8/485 Diagnostic techniques involving measuring strain or elastic properties
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/06 Measuring blood flow
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5207 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5223 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5269 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving detection or reduction of artifacts
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52023 Details of receivers
    • G01S7/52025 Details of receivers for pulse systems
    • G01S7/52026 Extracting wanted echo signals
    • G01S7/52028 Extracting wanted echo signals using digital techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52023 Details of receivers
    • G01S7/52036 Details of receivers using analysis of echo signal for target characterisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/008 Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image

Definitions

  • the present application relates to the technical field of ultrasound image processing and, in particular, to a data processing method and apparatus, a device and a storage medium.
  • the present application provides a data processing method and apparatus, a device, and a storage medium to overcome the low judgment accuracy of the prior art.
  • a first aspect of the present application provides a data processing method, including:
  • a second aspect of the present application provides a data processing apparatus, including:
  • a third aspect of the present application provides a computer device, including: at least one processor and a memory; where the memory stores a computer program; and the at least one processor executes the computer program stored in the memory to implement the method provided in the first aspect.
  • a fourth aspect of the present application provides a computer-readable storage medium in which a computer program is stored, and the method provided in the first aspect is implemented when the computer program is executed.
  • According to the data processing method and apparatus, device, and storage medium provided in the present application, feature extraction is performed on a related parameter of a detected object using a pre-trained feature extraction model to obtain second target result data, and corresponding processing is then performed on the detected object based on the second target result data, so that the accuracy of judging the state of the detected object can be improved effectively.
  • Image reconstruction refers to the technology of obtaining shape information of a three-dimensional object through digital processing of data measured outside the object.
  • Image reconstruction technology may be used in radiological medical equipment to display images of various parts of a human body, that is, computed tomography (CT) technology; it may also be applied in other fields.
  • Image processing refers to the technology of analyzing an image with a computer to achieve a desired result. In the embodiments of the present application, it refers to performing image post-processing and signal extraction on a reconstructed result image to improve image clarity and highlight image features, and to obtain a related parameter of a detected object, such as a velocity, a direction, an acceleration, a strain, a strain rate, an elastic modulus and other quantitative parameters of the detected object.
  • the data processing method provided by the embodiments of the present application is applicable to the following data processing system.
  • FIG. 1 is a schematic structural diagram of a data processing system to which an embodiment of the present application is applicable.
  • the data processing system includes a cloud computing platform, a data collecting system and a display system.
  • the data collecting system is responsible for collecting data to be processed, where the data to be processed may include a collected original ultrasonic echo signal.
  • the cloud computing platform is responsible for performing corresponding processing on the data to be processed to obtain a required result.
  • the display system is responsible for displaying related data or the result obtained during the processing of the cloud computing platform.
  • the data processing system may also include a local computing platform for sharing part of processing tasks of the cloud computing platform.
  • This embodiment provides a data processing method for processing an ultrasonic echo signal to obtain required result data.
  • An execution subject of this embodiment is a data processing apparatus, which may be set in a cloud computing platform, or which may be partly set in a local computing platform with the other parts set in the cloud computing platform.
  • FIG. 2 is a schematic flowchart of the data processing method provided by this embodiment, and the method includes: step 101: obtaining first target result data according to an original ultrasonic echo signal, where the first target result data includes a related parameter of a detected object.
  • the original ultrasonic echo signal may be obtained from a data collecting terminal, or may be collected and stored in advance, such as stored in a cloud computing platform, or stored in a local computing platform and sent to the cloud computing platform when needed for processing, or processed by the local computing platform, etc., and the specific obtaining method is not limited.
  • the first target result data may be obtained according to the original ultrasonic echo signal, where the first target result data includes related parameters of the detected object, such as related parameters representing a moving velocity (such as a velocity of a blood flow), a moving direction (such as a direction of the blood flow), an elasticity (such as a strain, a strain rate, etc.) of the detected object, which may specifically include a displacement, a velocity, an acceleration, a strain, a strain rate, an elastic modulus and other quantitative parameters, etc.
  • the first target result data may also include parameters related to image features, such as a contrast, a texture feature, and other quantitative parameters, and may also include information such as a distribution feature of scatterers, a density of the scatterers, and a size of the scatterers. There are no specific restrictions.
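Among the quantitative parameters listed above, strain can be illustrated concretely: it is the spatial gradient of tissue displacement. A minimal finite-difference sketch follows; the function name, sample spacing and units are illustrative assumptions, not taken from the patent:

```python
def strain_profile(displacements, dx):
    """Axial strain as the finite-difference gradient of displacement.

    `displacements` are axial tissue displacements (e.g. in mm) at sample
    points spaced `dx` apart; names and units are illustrative only.
    """
    return [(displacements[i + 1] - displacements[i]) / dx
            for i in range(len(displacements) - 1)]
```

A strain rate could be obtained analogously by differencing strain over time between successive frames.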
  • the first target result data may be in a form of data or in a form of an image, such as a pseudo-color image.
  • the detected object may be human or animal tissues such as a liver, a kidney, a spleen, or other objects in the air or geology, which may be determined according to actual needs, and is not limited in the embodiment of the present application.
  • processing such as image reconstruction and image processing may be performed on the original ultrasonic echo signal to obtain the first target result data.
  • the specific processing method may be the prior art, which is not limited in this embodiment.
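The two-stage flow just described (image reconstruction followed by image processing) can be sketched as a simple composition. All function names and the placeholder operations below are hypothetical stand-ins, not the patent's actual algorithms:

```python
def reconstruct_image(echo_signal):
    # Hypothetical stand-in for image reconstruction: each echo sample
    # simply becomes one pixel magnitude; a real system would beamform.
    return [abs(s) for s in echo_signal]

def process_image(image):
    # Hypothetical stand-in for image processing: normalize pixels to
    # [0, 1]; a real system would also extract quantitative parameters.
    peak = max(image) or 1
    return [p / peak for p in image]

def obtain_first_target_result_data(echo_signal):
    """Step 101 as a composition of the two stages."""
    return process_image(reconstruct_image(echo_signal))
```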
  • Step 102: performing feature extraction on the first target result data using a pre-trained feature extraction model to obtain second target result data.
  • the pre-trained feature extraction model may be a machine learning model or an artificial intelligence model, where the training of the feature extraction model may be performed using a large amount of pre-collected training data and labeled data obtained from labeling the training data.
  • the specific training process is consistent with the training process of an existing neural network model, which will not be repeated here.
  • Types of parameters included in the training data are consistent with those in the first target result data, such as different velocities of a blood flow, directions of a blood flow, and elasticity information.
  • the labeled data may be texture features, uniformity, etc., or may be a state of the detected object corresponding to the training data, such as whether there is liver fibrosis or cirrhosis and its specific staging, whether there is fatty liver and its specific staging, or whether there is a tumor and whether it is benign or malignant.
  • the detail may be set according to actual needs.
  • the trained feature extraction model may perform feature extraction and result prediction based on the first target result data to obtain the second target result data, where the second target result data may be an image texture feature, uniformity and other features of the detected object, or may be a state feature of the detected object obtained after feature analysis and weighting of these features, such as whether the detected object has liver fibrosis or cirrhosis and its specific staging, fatty liver and its specific staging, or a tumor that is benign or malignant, etc.
  • the state features output by the model may be labels corresponding to different states, for example, 0 means "normal", 1 means "fatty liver", etc., and the detail may be set according to actual needs, which is not limited in this embodiment.
  • At least two models such as a machine learning model and an artificial intelligence model may be used in parallel for feature extraction, and the results of each model are synthesized to obtain the second target result data.
  • For example, if three different models are used for feature extraction to acquire the state features of the detected object, and the results of two models are "1" while the result of one model is "0", then the result should be "1" following the principle that the minority is subordinate to the majority; however, this is only an exemplary description rather than a limitation.
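The majority-based combination of model results described above amounts to a simple vote. A sketch follows; the tie-breaking behavior (first label seen wins) is an illustrative choice, not specified by the patent:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine state labels from several models by simple majority.

    `predictions` holds one label per model, e.g. [1, 1, 0].
    On a tie, the label encountered first wins (illustrative choice).
    """
    return Counter(predictions).most_common(1)[0][0]
```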
  • Step 103: performing corresponding processing on the detected object based on the second target result data.
  • corresponding processing may be performed on the detected object based on the second target result data, for example, judging the state of the detected object based on the state feature of the detected object, or displaying the state of the detected object or the second target result data of the detected object.
  • the second target result data may assist a related person, such as a doctor making a diagnosis, in understanding the state of the detected object.
  • the method provided in this embodiment may be executed by a cloud computing platform, or may be executed by a local computing platform, or partly executed by a local computing platform and partly executed by a cloud computing platform, and the detail may be set according to actual needs, which is not limited in this embodiment.
  • In the data processing method provided in this embodiment, feature extraction is performed on the related parameter of the detected object using the pre-trained feature extraction model to obtain the second target result data, and corresponding processing is then performed on the detected object based on the second target result data; combining the detection with a neural network in this way effectively improves the accuracy of judging the state of the detected object.
  • This embodiment further supplements the method provided in Embodiment I.
  • FIG. 3 is a schematic flowchart of the data processing method provided by this embodiment.
  • step 101 specifically includes: step 1011: performing image reconstruction on the original ultrasonic echo signal to obtain a target reconstruction result image.
  • the target reconstruction result image is, for example, an ultrasound image, such as a B-mode ultrasonic image.
  • the target reconstruction result image may be in the form of radio frequency, envelope, grayscale, etc.
  • Step 1012: performing image processing on the target reconstruction result image to obtain the first target result data.
  • image processing needs to be performed on the target reconstruction result image to improve image clarity and highlight image features; for example, grayscale correction, grayscale expansion and compression, gamma correction, histogram equalization, electronic amplification, interpolation processing, etc., are performed. Finally, the related parameter of the detected object, that is, the first target result data, is obtained.
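Of the post-processing operations listed, histogram equalization is simple to illustrate. A minimal sketch over a flat list of gray levels follows; the function and parameter names are illustrative:

```python
def equalize(gray, levels=256):
    """Histogram equalization for a flat list of integer gray levels.

    Maps each level through the normalized cumulative histogram so the
    output occupies the full dynamic range more evenly.
    """
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    cdf, total = [], 0
    for h in hist:            # cumulative histogram
        total += h
        cdf.append(total)
    n = len(gray)
    return [round((levels - 1) * cdf[g] / n) for g in gray]
```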
  • the specific image processing method may be set according to actual needs, which is not limited here.
  • step 1011 may specifically include: step 10111: performing image reconstruction on the original ultrasonic echo signal using a spatial point-based image reconstruction algorithm to obtain a first reconstruction result image, where the spatial point-based image reconstruction algorithm is an image reconstruction algorithm compatible with multiple types of probes; and taking the first reconstruction result image as the target reconstruction result image.
  • the performing image reconstruction on the original ultrasonic echo signal using the spatial point-based image reconstruction algorithm to obtain the first reconstruction result image includes: performing, according to pre-configured parameters of a probe and a display parameter, image reconstruction on the original ultrasonic echo signal using the spatial point-based image reconstruction algorithm to obtain the first reconstruction result image, where the parameters of the probe include an identifier of the probe, a Cartesian coordinate zero point of the probe, and a first coordinate of each array element of the probe, and the display parameter includes a second coordinate of the first reconstruction result image.
  • the spatial point-based image reconstruction algorithm includes: predefined parameters of a probe, that is, a probe is defined in a unified format according to physical parameters of the probe to form a probe parameter index table, where the probe parameter index table is composed of an identification code of the probe model (i.e., the identifier of the probe), a Cartesian coordinate zero point of the probe and a coordinate position of each element of the probe (i.e., the first coordinate), a type of the probe currently used may be identified by the identification code, and the parameters of the probe may be searched in the probe parameter index table.
  • a probe defining module may be set to manage the parameters of the probe.
  • the display parameter is composed of a definition of a coordinate range, and a coordinate position (Xi, Yi, Zi) or a pixel size (ΔXi, ΔYi, ΔZi) of a target image (that is, the target reconstruction result image).
  • an image defining module may be set to manage the display parameter.
  • a probe identifying module may also be set to identify the probe.
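The probe parameter index table and the identifying lookup described above might be organized as follows. The field names, the probe code "C5-2" and all coordinate values are purely hypothetical illustrations of the structure, not data from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ProbeParams:
    """One entry of the probe parameter index table (illustrative fields)."""
    probe_id: str                                # identification code of the model
    zero_point: Tuple[float, float, float]       # Cartesian zero point (X0, Y0, Z0)
    elements: List[Tuple[float, float, float]]   # (Xt, Yt, Zt) of each array element

PROBE_INDEX = {
    # Hypothetical entry keyed by the identification code.
    "C5-2": ProbeParams("C5-2", (0.0, 0.0, 0.0),
                        [(-0.5, 0.0, 0.02), (0.5, 0.0, 0.02)]),
}

def identify_probe(code):
    """Look up the connected probe's parameters by its identification code."""
    return PROBE_INDEX[code]
```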
  • the types of probe include linear array, convex array, phased array, two-dimensional area array and other types.
  • the probe is composed of multiple array elements, and the arrangement and size of the array elements have an impact on the image reconstruction algorithm.
  • adaptive beam combination is realized (adaptation here refers to performing beam combination according to different coordinate requirements; the specific method may be an existing technology, such as delay superposition (delay-and-sum), etc.).
  • the coordinate zero point of the probe is a middle position of the probe (X0, Y0, Z0), the coordinate position of each array element of the probe is (Xt, Yt, Zt), the center plane of the imaging plane of the probe is the XZ plane, and the plane which is perpendicular to the imaging plane of the probe and parallel to a tangent plane at the zero position of the probe is the XY plane.
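Under this coordinate convention, the element coordinates (Xt, Yt, Zt) of a convex array can be derived from the arc geometry: the zero point is the middle of the probe face and the imaging plane is XZ, so Yt is zero for every element. The radius and inter-element angle below are assumed physical parameters, not values from the patent:

```python
import math

def convex_element_coords(n_elements, radius, pitch_angle):
    """(Xt, Yt, Zt) for a convex array whose elements lie on an arc.

    `radius` is the arc radius and `pitch_angle` the angle (radians)
    between neighbouring elements; both are assumed probe parameters.
    The middle of the probe face is the coordinate zero point.
    """
    mid = (n_elements - 1) / 2.0
    coords = []
    for t in range(n_elements):
        theta = (t - mid) * pitch_angle
        coords.append((radius * math.sin(theta),           # Xt
                       0.0,                                # Yt (imaging plane is XZ)
                       radius * (1.0 - math.cos(theta))))  # Zt, depth from the face
    return coords
```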
  • Take the convex array probe as an example (not limited to the convex array probe): a position, a center frequency, a bandwidth and other parameters of the convex array probe are written into the probe defining module; a specific probe code is programmed by using several pins of the convex array probe; the probe identifying module may identify the probe code when the probe is connected to the data processing system, and may further search the related parameters in the probe defining module; and the display mode of the image (that is, the display parameter) is defined in the image defining module, and image reconstruction is performed according to such mode.
  • This image reconstruction method is suitable for any probe, that is, it realizes ultrasound image reconstruction compatible with multiple types of probes, thereby improving the flexibility and efficiency of image reconstruction.
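A spatial point-based reconstruction of this kind typically computes, for each target point, the distance from the point to every element, converts it to a sample delay, and sums the delayed channel data (delay superposition). The receive-only sketch below is an assumed simplification; a full implementation would also account for the transmit delay and apodization:

```python
import math

def delay_and_sum(point, elements, rf_channels, fs, c=1540.0):
    """Reconstruct one spatial point by delaying and summing channel data.

    `point` and the per-element positions are (x, y, z) in metres,
    `rf_channels[t]` is the sampled echo of element t, `fs` the sampling
    rate (Hz) and `c` the speed of sound (m/s). Receive-only delays are
    used here for brevity.
    """
    total = 0.0
    for pos, channel in zip(elements, rf_channels):
        dist = math.dist(point, pos)          # element-to-point distance
        idx = int(round(dist / c * fs))       # delay expressed in samples
        if idx < len(channel):
            total += channel[idx]
    return total
```

Because the target points can be chosen on any Cartesian grid, the same routine serves linear, convex, phased or area arrays, which is what makes the approach probe-independent.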
  • step 1012 may specifically include: step 10121: performing image post-processing and signal extraction on the target reconstruction result image to obtain the first target result data, where the first target result data includes at least one of a displacement, a velocity, an acceleration, a strain, a strain rate, an elastic modulus, a contrast, a texture feature, a distribution feature of scatterers, a density of scatterers, and a size of scatterers.
  • the image processing may also be compatible with multiple types of probes, and the probe defining module, the probe identifying module, and the image defining module are still used.
  • the probe identifying module identifies the type of the probe currently used by designing an identification code of the probe, and searches the parameters of the probe in the index table; the display parameter is defined in the image defining module, and image reconstruction is performed based on this parameter; and the image defining module performs image processing to obtain a data processing result (that is, the first target result data) that does not depend on the type of the probe, thereby realizing the compatibility of multiple types of probes.
  • image post-processing and signal extraction together constitute the process of image processing; that is, the image processing in this embodiment includes the whole process of image post-processing and signal extraction.
  • For example, when a convex array is used for Doppler signal processing (a signal extraction method in the step of image processing), a signal obtained by using a traditional image reconstruction algorithm is along an emission direction of the convex array (fan beam), so when Doppler signal extraction is performed, the obtained direction of the blood flow is also along the emission direction of the convex array.
  • If the distribution of the velocity of the blood flow in the horizontal or vertical direction in a Cartesian coordinate system is required, it can only be obtained by taking a component along a corresponding angle.
  • With the spatial point-based image reconstruction in this embodiment, by contrast, it is possible to directly obtain the distribution of the velocity of the blood flow in the horizontal or vertical direction in the Cartesian coordinate system (specifically, the distribution can be obtained by using autocorrelation, short-time Fourier transform and other existing technologies based on the first target result data).
  • the method is also applicable to array elements of other types of probe, such as phased array and area array.
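The autocorrelation technique mentioned above for obtaining the blood-flow velocity is commonly realized as a lag-one (Kasai) estimator over an ensemble of pulses. A minimal sketch follows, assuming complex IQ samples per pixel; all parameter values in the usage are illustrative:

```python
import cmath
import math

def kasai_velocity(iq_ensemble, prf, f0, c=1540.0):
    """Mean axial velocity from the lag-one autocorrelation of IQ data.

    `iq_ensemble` is a list of complex IQ samples of the same pixel over
    successive pulses, `prf` the pulse repetition frequency (Hz), `f0`
    the transmit centre frequency (Hz) and `c` the speed of sound (m/s).
    The sign convention depends on the demodulation used.
    """
    # Lag-one autocorrelation across the pulse ensemble.
    r1 = sum(iq_ensemble[i + 1] * iq_ensemble[i].conjugate()
             for i in range(len(iq_ensemble) - 1))
    # Phase shift per pulse maps to velocity via the Doppler equation.
    return c * prf * cmath.phase(r1) / (4 * math.pi * f0)
```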
  • the performing image reconstruction on the original ultrasonic echo signal to obtain the target reconstruction result image includes: in step 2011, for each probe, performing, according to an image reconstruction algorithm corresponding to a type of the probe, image reconstruction on the original ultrasonic echo signal to obtain a second reconstruction result image.
  • for each probe, image reconstruction is performed according to the respective configured image reconstruction algorithm to obtain the second reconstruction result image.
  • this provides a solution for when the image reconstruction algorithms of multiple types of probes are not compatible: different types of probes may need to adopt different image reconstruction algorithms, so a corresponding image reconstruction algorithm may be configured for each type of probe, and after the original ultrasonic echo signal is collected with a probe, the image reconstruction algorithm corresponding to that probe is determined according to the type of the probe to perform image reconstruction.
  • the specific reconstruction method is the existing technology, which will not be repeated here.
  • Step 2012: performing spatial interpolation processing on the second reconstruction result image to obtain a third reconstruction result image, and taking the third reconstruction result image as the target reconstruction result image.
  • the third reconstruction result image obtained through spatial interpolation processing is substantially equivalent to the first reconstruction result image obtained by the above spatial point-based image reconstruction algorithm.
  • the difference is that the effects are slightly different, where the first reconstruction result image is obtained by direct reconstruction, and the third reconstruction result image is obtained by interpolating the traditional reconstruction result.
  • Spatial interpolation processing may be implemented in a variety of ways, such as linear interpolation, non-linear interpolation, etc.
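As one concrete choice among the interpolation methods mentioned, bilinear interpolation of the second reconstruction result image at a fractional target-grid position might look like the following sketch (indexing convention is an assumption):

```python
def bilinear(img, x, y):
    """Bilinearly interpolate a 2-D grid `img` at fractional (x, y).

    `img` is a list of rows indexed as img[y][x]; samples outside the
    grid are clamped to the border.
    """
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    return (img[y0][x0] * (1 - dx) * (1 - dy)
            + img[y0][x1] * dx * (1 - dy)
            + img[y1][x0] * (1 - dx) * dy
            + img[y1][x1] * dx * dy)
```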
  • the method may further include:
  • the obtained first target result data may also be used to assist in diagnosis and has certain reference significance, so it may be displayed; however, it needs to undergo digital scan conversion before display. Therefore, digital scan conversion is performed on the first target result data to obtain converted result data, and display processing is then performed on the converted result data.
  • step 103 may specifically include: step 1031: judging a state of the detected object based on the second target result data.
  • the method may further include: step 104: performing display processing on the state of the detected object.
  • the method further includes: step 203: judging a state of the detected object based on the first target result data.
  • the obtained first target result data may also be used to assist in diagnosis and has certain reference significance; therefore, the state of the detected object may be judged based on the first target result data. For example, thresholds of different parameters and levels of the parameters may be set, where different levels correspond to different states of the detected object, etc., and the details will not be repeated here.
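The threshold-and-level judgment described above might be sketched as follows. The cut-off values and state names are illustrative placeholders only, not clinical thresholds or values from the patent:

```python
def stage_by_threshold(elastic_modulus_kpa):
    """Map a stiffness-like parameter to a coarse state label.

    The thresholds below are purely illustrative placeholders; a real
    system would configure thresholds and levels per parameter as the
    text describes.
    """
    if elastic_modulus_kpa < 7.0:
        return "normal"
    if elastic_modulus_kpa < 12.5:
        return "fibrosis"
    return "cirrhosis"
```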
  • the method in the embodiment of the present application is executed by the cloud computing platform.
  • the local computing platform obtains the first target result data according to the original ultrasonic echo signal, and sends the first target result data to the cloud computing platform; the cloud computing platform performs feature extraction on the first target result data using the pre-trained feature extraction model to obtain the second target result data, and performs corresponding processing on the detected object based on the second target result data. That is, step 101 is executed by the local computing platform, and steps 102-103 are processed by the cloud computing platform.
  • In the data processing method, feature extraction is performed on a related parameter of a detected object using a pre-trained feature extraction model to obtain the second target result data, and corresponding processing is then performed on the detected object based on the second target result data, thereby effectively improving the accuracy of judging the state of the detected object. Furthermore, image reconstruction is performed using a spatial point-based image reconstruction algorithm that is compatible with multiple types of probes, thereby improving the flexibility and efficiency of image reconstruction. Furthermore, image processing is performed based on a target reconstruction result image compatible with multiple types of probes, thereby improving the accuracy of the related parameter of the detected object. Both the obtained first target result data and the second target result data may be used to assist a related person in diagnosing the detected object, thereby improving the diagnosis efficiency.
  • This embodiment provides a data processing apparatus for executing the method in the above Embodiment I.
  • the data processing apparatus 30 includes a first processing module 31, a second processing module 32 and a third processing module 33.
  • the first processing module 31 is configured to obtain first target result data according to an original ultrasonic echo signal, where the first target result data includes a related parameter of a detected object;
  • the second processing module 32 is configured to perform feature extraction on the first target result data using a pre-trained feature extraction model to obtain second target result data;
  • the third processing module 33 is configured to perform corresponding processing on the detected object based on the second target result data.
  • the data processing apparatus, by performing feature extraction on a related parameter of a detected object using a pre-trained feature extraction model to obtain second target result data and then performing corresponding processing on the detected object based on the second target result data, effectively improves the accuracy of judging the state of the detected object.
  • This embodiment further supplements the apparatus provided in the above Embodiment III.
  • the first processing module is specifically configured to:
  • the first processing module is specifically configured to:
  • the first processing module is specifically configured to: perform, according to pre-configured parameters of a probe and a display parameter, image reconstruction on the original ultrasonic echo signal using the spatial point-based image reconstruction algorithm to obtain the first reconstruction result image; where the parameters of the probe include an identifier of the probe, a Cartesian coordinate zero point of the probe, and a first coordinate of each array element of the probe, and the display parameter includes a second coordinate of the first reconstruction result image.
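Because every pixel is treated as an independent spatial point whose delays are computed from the element coordinates, a point-based reconstruction is agnostic to probe geometry (linear, convex, or phased). The following is a minimal delay-and-sum sketch under that interpretation; the function name, the sampling frequency, and the sound speed defaults are assumptions for illustration, not values from the patent.

```python
# Hedged sketch of spatial point-based reconstruction (delay-and-sum):
# the round-trip delay from each probe element to each output grid point
# is derived purely from coordinates, so any probe geometry is supported.
import numpy as np


def reconstruct(rf: np.ndarray, elem_xy: np.ndarray, grid_xy: np.ndarray,
                fs: float = 40e6, c: float = 1540.0) -> np.ndarray:
    """rf: (n_elements, n_samples) echo data; elem_xy: (n_elements, 2)
    element coordinates; grid_xy: (n_points, 2) spatial points to image."""
    n_elem, n_samp = rf.shape
    out = np.zeros(len(grid_xy))
    for i, p in enumerate(grid_xy):
        # distance from each element to the spatial point
        dist = np.linalg.norm(elem_xy - p, axis=1)
        # round-trip delay converted to a sample index
        idx = np.round(2 * dist / c * fs).astype(int)
        valid = idx < n_samp
        out[i] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return out
```

Changing the probe only changes `elem_xy` (and the coordinate zero point folded into it), which is consistent with the compatibility claim above.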
  • the first processing module is specifically configured to: perform image post-processing and signal extraction on the target reconstruction result image to obtain the first target result data, where the first target result data includes at least one of a displacement, a velocity, an acceleration, a strain, a strain rate, an elastic modulus, a contrast, a texture feature, a distribution feature of scatterers, a density of scatterers, and a size of scatterers.
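To make one of the listed quantities concrete: in elastography, axial strain is commonly estimated as the spatial gradient of a displacement field derived from successive reconstruction result images. The sketch below shows only that single step with a synthetic displacement field; the function name and grid spacing are illustrative assumptions.

```python
# Illustrative signal-extraction step: strain as the depth-wise gradient
# of a displacement field (displacement itself would come from comparing
# successive reconstructed frames in a real pipeline).
import numpy as np


def axial_strain(displacement: np.ndarray, dz: float) -> np.ndarray:
    """Strain = d(displacement)/dz along the axial (depth) axis 0."""
    return np.gradient(displacement, dz, axis=0)
```

Quantities such as strain rate would follow analogously as a derivative over frame time rather than depth.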
  • the first processing module is specifically configured to:
  • the first processing module is further configured to:
  • the third processing module is specifically configured to: judge a state of the detected object based on the second target result data.
  • the third processing module is further configured to: perform display processing on the state of the detected object.
  • the first processing module is further configured to judge a state of the detected object based on the first target result data.
  • The data processing apparatus of this embodiment obtains second target result data by performing feature extraction on a related parameter of a detected object using a pre-trained feature extraction model, and then performs corresponding processing on the detected object based on the second target result data, thereby effectively improving the accuracy of judging the state of the detected object. Furthermore, image reconstruction is performed using a spatial point-based image reconstruction algorithm that is compatible with multiple types of probes, which improves the flexibility and efficiency of image reconstruction. Furthermore, image processing is performed based on a target reconstruction result image compatible with multiple types of probes, which improves the accuracy of the related parameter of the detected object. Both the obtained first target result data and the second target result data may be used to assist a related person in diagnosing the detected object, thereby improving diagnosis efficiency.
  • the data processing system may include a data collecting system, a local computing platform, a cloud computing platform, and a display system.
  • FIG. 5 is a schematic structural diagram of a data processing system provided by this embodiment.
  • The first processing module in the data processing apparatus is deployed in the local computing platform, and the second processing module and the third processing module in the data processing apparatus are deployed in the cloud computing platform.
  • The computer device may be the above cloud computing platform, or may include the above cloud computing platform and a local computing platform. Specifically, it may be a desktop computer, a notebook computer, a server, or another computer device.
  • the computer device 50 includes: at least one processor 51 and a memory 52; where the memory stores a computer program; and the at least one processor executes the computer program stored in the memory to implement the method provided in the above embodiments.
  • The computer device of this embodiment obtains second target result data by performing feature extraction on a related parameter of a detected object using a pre-trained feature extraction model, and then performs corresponding processing on the detected object based on the second target result data, thereby effectively improving the accuracy of judging the state of the detected object. Furthermore, image reconstruction is performed using a spatial point-based image reconstruction algorithm that is compatible with multiple types of probes, which improves the flexibility and efficiency of image reconstruction. Furthermore, image processing is performed based on a target reconstruction result image compatible with multiple types of probes, which improves the accuracy of the related parameter of the detected object. Both the obtained first target result data and the second target result data may be used to assist a related person in diagnosing the detected object, thereby improving diagnosis efficiency.
  • This embodiment provides a computer-readable storage medium in which a computer program is stored, and the method provided in any of the above embodiments is implemented when the computer program is executed.
  • When the computer program stored in the computer-readable storage medium of this embodiment is executed, second target result data is obtained by performing feature extraction on a related parameter of a detected object using a pre-trained feature extraction model, and corresponding processing is then performed on the detected object based on the second target result data, thereby effectively improving the accuracy of judging the state of the detected object. Furthermore, image reconstruction is performed using a spatial point-based image reconstruction algorithm that is compatible with multiple types of probes, which improves the flexibility and efficiency of image reconstruction. Furthermore, image processing is performed based on a target reconstruction result image compatible with multiple types of probes, which improves the accuracy of the related parameter of the detected object. Both the obtained first target result data and the second target result data may be used to assist a related person in diagnosing the detected object, thereby improving diagnosis efficiency.
  • the disclosed apparatus and method may be implemented in other ways.
  • The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and other divisions are possible in actual implementation.
  • multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, apparatus or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware with a software functional unit.
  • the above integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium.
  • the above software functional unit is stored in the storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute a part of the steps of the method described in each embodiment of the present application.
  • The above storage medium includes: a USB flash drive, a portable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, and other media that can store program codes.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physiology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Hematology (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
  • Image Analysis (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Image Processing (AREA)
EP20846593.0A 2019-08-01 2020-07-28 Procédé de traitement de données, appareil, dispositif et support d'informations Pending EP4008261A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910706620.XA CN110313941B (zh) 2019-08-01 2019-08-01 数据处理方法、装置、设备及存储介质
PCT/CN2020/105006 WO2021018101A1 (fr) 2019-08-01 2020-07-28 Procédé de traitement de données, appareil, dispositif et support d'informations

Publications (2)

Publication Number Publication Date
EP4008261A1 true EP4008261A1 (fr) 2022-06-08
EP4008261A4 EP4008261A4 (fr) 2022-09-07

Family

ID=68125188

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20846593.0A Pending EP4008261A4 (fr) 2019-08-01 2020-07-28 Procédé de traitement de données, appareil, dispositif et support d'informations

Country Status (10)

Country Link
US (1) US20220148238A1 (fr)
EP (1) EP4008261A4 (fr)
JP (1) JP7296171B2 (fr)
KR (1) KR20220032067A (fr)
CN (1) CN110313941B (fr)
AU (1) AU2020322893B2 (fr)
BR (1) BR112022001790A2 (fr)
CA (1) CA3149335C (fr)
MX (1) MX2022001384A (fr)
WO (1) WO2021018101A1 (fr)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110313941B (zh) * 2019-08-01 2021-02-19 无锡海斯凯尔医学技术有限公司 数据处理方法、装置、设备及存储介质
US11651496B2 (en) * 2021-03-11 2023-05-16 Ping An Technology (Shenzhen) Co., Ltd. Liver fibrosis recognition method based on medical images and computing device using thereof
CN113647976B (zh) * 2021-08-17 2023-08-15 逸超科技(武汉)有限公司 回波数据封装方法、装置、设备及可读存储介质
US12089997B2 (en) * 2021-12-15 2024-09-17 GE Precision Healthcare LLC System and methods for image fusion
CN115761713B (zh) * 2022-07-05 2023-05-23 广西北投信创科技投资集团有限公司 一种车牌识别方法、系统、电子设备和可读存储介质
CN117218433B (zh) * 2023-09-13 2024-08-27 珠海圣美生物诊断技术有限公司 居家多癌种检测装置和多模态融合模型构建方法及装置

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6468216B1 (en) * 2000-08-24 2002-10-22 Koninklijke Philips Electronics N.V. Ultrasonic diagnostic imaging of the coronary arteries
JP5726081B2 (ja) * 2009-09-16 2015-05-27 株式会社日立メディコ 超音波診断装置及び弾性画像の分類プログラム
CN102579071B (zh) * 2011-01-14 2015-11-25 深圳迈瑞生物医疗电子股份有限公司 一种三维超声成像的方法及系统
JP5680703B2 (ja) * 2013-05-08 2015-03-04 株式会社日立メディコ 超音波診断装置
JP2015136467A (ja) * 2014-01-22 2015-07-30 日立アロカメディカル株式会社 超音波診断装置
CN103784166B (zh) * 2014-03-03 2015-08-19 哈尔滨工业大学 多功能一体化数字超声诊断系统
KR102408440B1 (ko) * 2015-03-09 2022-06-13 삼성메디슨 주식회사 프리셋 선택 방법 및 이를 위한 초음파 영상 장치
CN104983442B (zh) * 2015-05-14 2017-11-14 常州迪正雅合电子科技有限公司 一种三维/四维超声成像系统中三维探头的驱动方法
US10959702B2 (en) * 2016-06-20 2021-03-30 Butterfly Network, Inc. Automated image acquisition for assisting a user to operate an ultrasound device
CN106361376A (zh) * 2016-09-23 2017-02-01 华南理工大学 一种用于脊柱侧弯的超声宽景成像方法
CN107582097A (zh) * 2017-07-18 2018-01-16 中山大学附属第医院 一种基于多模态超声组学的智能辅助决策系统
CN107661121A (zh) * 2017-10-23 2018-02-06 深圳开立生物医疗科技股份有限公司 一种超声探头自适应匹配方法及系统
EP3703572A1 (fr) * 2017-11-02 2020-09-09 Koninklijke Philips N.V. Système ultrasonore intelligent pour détecter des artefacts d'image
WO2019100212A1 (fr) * 2017-11-21 2019-05-31 深圳迈瑞生物医疗电子股份有限公司 Système ultrasonore et procédé de planification d'ablation
CN108095763A (zh) * 2018-01-18 2018-06-01 北京索瑞特医学技术有限公司 复合探头及测量系统
CN108268897A (zh) * 2018-01-19 2018-07-10 北京工业大学 一种乳腺肿瘤多模态超声多层次计算机辅助诊断方法
CN110313941B (zh) * 2019-08-01 2021-02-19 无锡海斯凯尔医学技术有限公司 数据处理方法、装置、设备及存储介质

Also Published As

Publication number Publication date
AU2020322893B2 (en) 2024-03-28
AU2020322893A1 (en) 2022-02-24
KR20220032067A (ko) 2022-03-15
EP4008261A4 (fr) 2022-09-07
JP2022542442A (ja) 2022-10-03
MX2022001384A (es) 2022-06-02
BR112022001790A2 (pt) 2022-03-22
CA3149335A1 (fr) 2021-02-04
CN110313941B (zh) 2021-02-19
JP7296171B2 (ja) 2023-06-22
CN110313941A (zh) 2019-10-11
CA3149335C (fr) 2024-01-09
WO2021018101A1 (fr) 2021-02-04
US20220148238A1 (en) 2022-05-12

Similar Documents

Publication Publication Date Title
EP4008261A1 (fr) Procédé de traitement de données, appareil, dispositif et support d'informations
AU2017316625B2 (en) Computer-aided detection using multiple images from different views of a region of interest to improve detection accuracy
US9035941B2 (en) Image processing apparatus and image processing method
JP5484444B2 (ja) 医用画像診断装置、容積計算方法
RU2677055C2 (ru) Автоматическая сегментация трехплоскостных изображений для ультразвуковой визуализации в реальном времени
KR101805624B1 (ko) 장기 모델 영상 생성 방법 및 장치
CN112955934B (zh) 识别医学图像中的介入设备
CN113260314B (zh) 用于对比增强成像的系统和方法
US11534133B2 (en) Ultrasonic detection method and ultrasonic imaging system for fetal heart
EP2601637B1 (fr) Système et procédé de segmentation multi-modalité d'un tissu interne avec rétroaction en direct
KR20130010732A (ko) 복수의 3차원 볼륨 영상들을 이용하여 3차원 볼륨 파노라마 영상 생성 방법 및 장치
EP2272427A1 (fr) Dispositif, procédé et programme de traitement d'image
KR20130016942A (ko) 3차원 볼륨 파노라마 영상 생성 방법 및 장치
CN114332120A (zh) 图像分割方法、装置、设备及存储介质
WO2024126468A1 (fr) Classification d'échocardiogramme par apprentissage automatique
US10417764B2 (en) System and methods for diagnostic image analysis and image quality assessment
EP3533031B1 (fr) Procédé et appareil permettant de segmenter une image bidimensionnelle d'une structure anatomique
CN112669450B (zh) 人体模型构建方法和个性化人体模型构建方法
RU2798711C1 (ru) Способ обработки данных, аппаратное устройство, устройство и носитель информации
US20210128113A1 (en) Method and apparatus for displaying ultrasound image of target object
CN110520933B (zh) 超声心动图上下文测量工具
WO2021099171A1 (fr) Systèmes et procédés de criblage par imagerie
Zhao et al. Dual Generative Adversarial Network For Ultrasound Localization Microscopy
WO2024013114A1 (fr) Systèmes et procédés de criblage d'imagerie
Lai et al. Designs and Implementation of Three Dimensional Nuchal Translucency

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220201

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

A4 Supplementary search report drawn up and despatched

Effective date: 20220808

RIC1 Information provided on ipc code assigned before grant

Ipc: G01S 7/52 20060101ALI20220802BHEP

Ipc: A61B 8/08 20060101ALI20220802BHEP

Ipc: A61B 8/06 20060101AFI20220802BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)