CN113616235B - Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe - Google Patents

Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe

Info

Publication number
CN113616235B
CN113616235B CN202010376559.XA
Authority
CN
China
Prior art keywords
information
target
current
ultrasonic
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010376559.XA
Other languages
Chinese (zh)
Other versions
CN113616235A (en)
Inventor
李楠
胡冉杰
陈长龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Chengdu ICT Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Chengdu ICT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Chengdu ICT Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202010376559.XA priority Critical patent/CN113616235B/en
Publication of CN113616235A publication Critical patent/CN113616235A/en
Application granted granted Critical
Publication of CN113616235B publication Critical patent/CN113616235B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/54Control of the diagnostic device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5207Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/58Testing, adjusting or calibrating the diagnostic device
    • A61B8/585Automatic set-up of the device

Abstract

An embodiment of the invention provides an ultrasonic detection method, apparatus, system, device, storage medium, and ultrasonic probe. The ultrasonic detection method includes: acquiring current target information and current ultrasonic image information associated with the user's current operation of the ultrasonic probe while the probe is detecting a target object; sending the current target information and the current ultrasonic image information to a server; receiving target adjustment information returned by the server; and outputting the target adjustment information in a target output mode. The invention can guide the user in adjusting the next step of the ultrasonic detection procedure according to the current operation, improving the efficiency of ultrasonic detection.

Description

Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe
Technical Field
The present invention relates to the field of medical detection technologies, and in particular, to an ultrasonic detection method, apparatus, system, device, storage medium, and ultrasonic probe.
Background
Ultrasonic detection is a common medical detection technique. Its principle is that the ultrasonic transducer of an ultrasonic probe forms ultrasonic waves under electrical excitation; as the waves propagate into the human body they are reflected and refracted, and the echoes returning to the transducer are converted into analog voltage signals, which after a series of processing steps yield an ultrasonic image.
Because the morphology and structure of different human tissues differ, so do the degrees to which they reflect, refract, and absorb ultrasonic waves, and doctors diagnose patients from the characteristics of the waveforms, curves, or images shown in the ultrasonic image. In actual practice, a professional sonographer selects an appropriate ultrasonic probe according to the specific examination item and detection position. Because the position, direction, and pressure of the probe all influence imaging quality, the sonographer must adjust the spatial posture and applied force of the probe in real time, according to the current imaging result, to capture the optimal ultrasonic image.
At present, when a sonographer performs an ultrasonic examination of a patient, the spatial posture and force of the ultrasonic probe can be adjusted in real time only through the sonographer's own skill and experience. This places high technical and experience demands on the sonographer, and because the probe adjustments are made by the doctor alone, with no auxiliary measures available, ultrasonic detection efficiency is low.
Disclosure of Invention
The embodiment of the invention provides an ultrasonic detection method, an ultrasonic detection device, an ultrasonic detection system, ultrasonic detection equipment, a storage medium and an ultrasonic probe.
In a first aspect, an ultrasonic detection method for a terminal device is provided, including: acquiring current target information and current ultrasonic image information associated with the user's current operation of the ultrasonic probe while the probe is detecting a target object; sending the current target information and the current ultrasonic image information to a server, so that the server determines target adjustment information according to the current target information, the current ultrasonic image information, and a neural network model, the target adjustment information being the adjustment from the current target information to the target information corresponding to the user's next operation; receiving the target adjustment information returned by the server; and outputting the target adjustment information in a target output mode.
In some implementations of the first aspect, the current target information includes at least one of pose information, pressure information, and motion information; the pose information includes at least one of probe direction information and probe coordinate information of the ultrasonic probe.
In some implementations of the first aspect, outputting the target adjustment information in a target output mode specifically includes: outputting the target adjustment information as at least one of a visual signal, an audio signal, and a tactile signal.
In some implementations of the first aspect, after acquiring the current target information and the current ultrasound image information associated with the user's current operation of the ultrasound probe, and before sending them to the server, the method further includes: encoding the current target information and the current ultrasonic image information to obtain encoded information. Sending the current target information and the current ultrasonic image information to the server then specifically includes: sending the encoded information to the server, for the server to determine the target adjustment information based on the encoded information and the neural network model.
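The patent does not fix an encoding format. As a purely hypothetical illustration, the terminal-side encoding step might pack the probe's target information and the raw image bytes into a single text-safe message (JSON with base64 here — assumed formats, not specified by the patent):

```python
import base64
import json

def encode_probe_message(target_info: dict, image_bytes: bytes) -> str:
    """Pack current probe target info and raw ultrasound image bytes into a
    single JSON string for transmission to the server (hypothetical format)."""
    return json.dumps({
        "target_info": target_info,                       # pose / pressure / motion fields
        "image": base64.b64encode(image_bytes).decode(),  # binary image made text-safe
    })

def decode_probe_message(message: str) -> tuple:
    """Inverse of encode_probe_message, as the server would apply it."""
    payload = json.loads(message)
    return payload["target_info"], base64.b64decode(payload["image"])

# Round trip: the server recovers exactly what the terminal sent.
info = {"pressure": 2.4, "direction": [0.0, 0.0, 1.0], "coords": [12.5, 3.0, 0.8]}
msg = encode_probe_message(info, b"\x00\x01\x02")
recovered_info, recovered_image = decode_probe_message(msg)
```

Any serialization with a lossless inverse on the server side would serve the same role; the field names above are illustrative.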
In a second aspect, an ultrasonic detection method for a server is provided, including: acquiring current target information and current ultrasonic image information, where the current target information is the target information associated with the user's current operation of the ultrasonic probe while the probe is detecting a target object, and the current ultrasonic image information is the ultrasonic image information associated with that operation; inputting the current ultrasonic image information into a neural network model to obtain the target information corresponding to the next operation, where the neural network model includes a first neural network model trained on a plurality of training samples, each training sample containing ultrasonic image information of a first operation and the target information corresponding to a second operation, the second operation being the operation following the first; comparing the current target information with the target information corresponding to the next operation to obtain target adjustment information; and sending the target adjustment information to the terminal device for output, the target adjustment information guiding the user in adjusting from the current target information to the target information corresponding to the next operation.
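The server-side comparison step — differencing the target information predicted for the next operation against the current target information — can be sketched as follows. The field names (`pressure`, `coords`) are illustrative assumptions, not names taken from the patent:

```python
def compute_adjustment(current: dict, predicted_next: dict) -> dict:
    """Difference between the target info predicted for the next operation and
    the current target info; the sign of each entry tells the user which way
    to adjust the probe."""
    adjustment = {}
    for key in predicted_next:
        cur, nxt = current[key], predicted_next[key]
        if isinstance(nxt, (list, tuple)):                # vector fields, e.g. coordinates
            adjustment[key] = [n - c for n, c in zip(nxt, cur)]
        else:                                             # scalar fields, e.g. pressure
            adjustment[key] = nxt - cur
    return adjustment

current = {"pressure": 2.0, "coords": [10.0, 4.0, 1.0]}
predicted = {"pressure": 2.5, "coords": [11.0, 3.0, 1.0]}
adj = compute_adjustment(current, predicted)   # e.g. increase pressure by 0.5
```

A real system would map such differences onto the visual, audio, or tactile signals described in the first aspect.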
In some implementations of the second aspect, before acquiring the current target information and the current ultrasound image information, the method further includes pre-training the first neural network model, which specifically includes: acquiring a plurality of training samples, where each training sample includes ultrasound image information of a first operation performed while detecting a target object with an ultrasound probe, together with the target information corresponding to the second operation; inputting each training sample into the first neural network model for training to obtain a plurality of first prediction results; judging, from each first prediction result and the target information corresponding to the second operation, whether a first preset condition is satisfied; and, if it is not satisfied, adjusting the model parameters of the first neural network model and training the adjusted model with the plurality of training samples until the first preset condition is satisfied, thereby obtaining the trained first neural network model.
In some implementations of the second aspect, the neural network model further includes a second neural network model, and after the training samples are input into the first neural network model to obtain the plurality of first prediction results, the method further includes pre-training the second neural network model, which specifically includes: acquiring static data of the target object associated with each first prediction result, the static data including at least one of the target object's height, weight, sex, and age; inputting each first prediction result and its corresponding static data into the second neural network model for training to obtain a plurality of second prediction results; judging, from each second prediction result and the target information corresponding to the second operation, whether a second preset condition is satisfied; and, if it is not satisfied, adjusting the model parameters of the second neural network model and training the adjusted model with the first prediction results and their corresponding static data until the second preset condition is satisfied, thereby obtaining the trained second neural network model.
In some implementations of the second aspect, the current target information includes at least one of pose information, pressure information, and motion information; the pose information includes probe direction information and probe coordinate information of the ultrasonic probe.
In a third aspect, there is provided an ultrasonic detection apparatus for a terminal device, comprising: the acquisition module is used for acquiring current target information and current ultrasonic image information related to the current operation of the ultrasonic probe and a user in the process of detecting a target object by using the ultrasonic probe; the sending module is used for sending the current target information and the current ultrasonic image information to the server, and determining target adjustment information according to the current target information, the current ultrasonic image information and the neural network model by the server, wherein the target adjustment information is adjustment information from the current target information to target information corresponding to the next operation of a user; the receiving module is used for receiving target adjustment information returned by the server; and the output module is used for outputting the target adjustment information in a target output mode.
In some implementations of the third aspect, the current target information includes at least one of pose information, pressure information, and motion information; the pose information includes at least one of probe direction information and probe coordinate information of the ultrasonic probe.
In some implementations of the third aspect, the output module is specifically configured to: the target adjustment information is output as at least one signal of a visual signal, an audio signal, and a tactile signal.
In some implementations of the third aspect, the apparatus further includes an encoding module, configured to encode the current target information and the current ultrasonic image information into encoded information after the current target information and current ultrasound image information associated with the user's current operation of the ultrasound probe are acquired, and before they are sent to the server.
In some implementations of the third aspect, the sending module is specifically configured to: the encoded information is sent to a server for the server to determine target adjustment information based on the encoded information and the neural network model.
In a fourth aspect, an ultrasonic detection system is provided, the system including a terminal device and a server, specifically including: the terminal equipment is used for acquiring current target information and current ultrasonic image information related to the current operation of the ultrasonic probe and a user in the process of detecting a target object by using the ultrasonic probe; transmitting the current target information and the current ultrasonic image information to a server; the server is used for acquiring current target information and current ultrasonic image information, inputting the current ultrasonic image information into the neural network model and outputting target information corresponding to the next operation; comparing the difference between the current target information and the target information corresponding to the next operation to obtain target adjustment information; transmitting the target adjustment information to the terminal equipment; the terminal equipment is also used for receiving the target adjustment information returned by the server and outputting the target adjustment information in a target output mode.
In some implementations of the fourth aspect, the neural network model is trained from a plurality of training samples, each training sample including ultrasound image information of a first operation and target information corresponding to a second operation corresponding to the first operation, the second operation being a next operation of the first operation.
In some implementations of the fourth aspect, the target adjustment information is adjustment information that adjusts from current target information to target information corresponding to a next operation by the user.
In a fifth aspect, there is provided an ultrasonic probe comprising: the device comprises a pressure sensor, a gyroscope, an ultrasonic transducer, a pulse excitation circuit, a communication module, a low-noise operational amplifier and an analog-to-digital converter; in the process of detecting a target object by using the ultrasonic probe, the pressure sensor is used for collecting pressure information of the ultrasonic probe in real time; the gyroscope is used for acquiring pose information of the ultrasonic probe in real time, wherein the pose information comprises probe direction information and probe coordinate information of the ultrasonic probe; the communication module is configured to send the pressure information and the pose information to the terminal device for the terminal device to perform the ultrasound detection method of the first aspect or some of the realizations of the first aspect.
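As an illustrative sketch only, the pressure and pose readings that the probe's communication module sends to the terminal device might be modeled as a small nested data structure. The field names, units, and the timestamp field are assumptions for the sketch, not details given by the patent:

```python
from dataclasses import asdict, dataclass
from typing import List

@dataclass
class PoseInfo:
    direction: List[float]   # probe direction from the gyroscope
    coords: List[float]      # probe coordinates relative to the zeroed origin

@dataclass
class ProbeSample:
    pressure: float          # reading from the pressure sensor
    pose: PoseInfo
    timestamp_ms: int        # when the sample was taken (assumed field)

sample = ProbeSample(pressure=2.4,
                     pose=PoseInfo(direction=[0.0, 0.0, 1.0],
                                   coords=[12.5, 3.0, 0.8]),
                     timestamp_ms=1000)
payload = asdict(sample)     # nested dict, ready for serialization and transmission
```

Grouping the sensor readings this way keeps the real-time pressure and pose data from the fifth aspect in one unit per acquisition tick.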
In a sixth aspect, there is provided an ultrasonic detection device, including: a processor and a memory storing computer program instructions; when the processor reads and executes the computer program instructions, it implements the ultrasonic detection method of the first or second aspect, or of some implementations of the first or second aspect.
In a seventh aspect, there is provided a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the ultrasonic detection method of the first or second aspect, or of some implementations of the first or second aspect.
According to the ultrasonic detection method of the embodiments, while a user performs ultrasonic detection with the ultrasonic probe, the current ultrasonic probe information and current ultrasonic image information associated with the user's current operation are acquired in real time, and adjustment information for the current operation is output based on them. Guided by the output adjustment information, the user adjusts from the current operation to the next; even an inexperienced doctor can thereby capture the optimal ultrasonic image in the next operation, effectively improving the user's efficiency in ultrasonic detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a training method of a neural network model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a zeroing operation of an ultrasound probe provided by an embodiment of the present invention;
FIG. 3 is a flowchart of another training method of a neural network model according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of an ultrasonic detection method according to an embodiment of the present invention;
FIG. 5 is a flow chart of another ultrasonic detection method according to an embodiment of the present invention;
FIG. 6 is a flow chart of yet another method for ultrasonic testing according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an ultrasonic detection device according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an ultrasonic detection system according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an ultrasonic probe according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of the hardware structure of an ultrasonic detection device according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention are described in detail below. To make the objects, technical solutions, and advantages of the present invention more apparent, the invention is described in further detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely intended to illustrate the invention, not to limit it. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is intended only to provide a better understanding of the invention by showing examples of it.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The term "and/or" herein merely describes an association between objects, indicating that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B both exist, or B exists alone.
Ultrasonic detection is a common medical detection technique. Its principle is that the ultrasonic transducer of an ultrasonic probe forms ultrasonic waves under electrical excitation, and the waves are transmitted into human tissue; the waves reflected back to the transducer are converted into analog voltage signals, the analog voltage signals are converted into digital signals by a low-noise operational amplifier and an analog-to-digital converter, and finally an imaging algorithm processes the digital signals to obtain the ultrasonic image.
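The analog-to-digital conversion step in the chain above can be illustrated with a highly simplified quantizer. The reference voltage and 12-bit resolution below are assumptions chosen for the sketch, not values given by the patent:

```python
def quantize(voltages, v_ref=1.0, bits=12):
    """Map amplified analog echo voltages in [-v_ref, v_ref] onto signed
    integer ADC codes (12-bit here, an assumed resolution)."""
    full_scale = 2 ** (bits - 1) - 1          # 2047 for 12 bits
    codes = []
    for v in voltages:
        v = max(-v_ref, min(v_ref, v))        # clip to the converter's input range
        codes.append(round(v / v_ref * full_scale))
    return codes

echo = [0.0, 0.5, -0.25, 1.2]                 # last sample clips at +v_ref
codes = quantize(echo)
```

The imaging algorithm then operates on arrays of such digital codes rather than on the raw analog voltages.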
During ultrasonic detection, the position, direction and pressure of an ultrasonic probe can influence the imaging quality of ultrasonic, and the requirements of the position, direction and pressure of the probe during ultrasonic detection are different for different tissues or organs, so that an ultrasonic doctor needs to adjust the spatial posture and the force of the probe in real time according to the imaging result in the detection process to capture the optimal ultrasonic image.
At present, when a sonographer performs ultrasonic detection on a patient, achieving the optimal imaging effect depends entirely on the sonographer continuously adjusting the probe's spatial posture and force. These adjustments rest wholly on the technical experience the sonographer has accumulated; with no reference adjustment procedure to draw on, detection efficiency suffers severely. In addition, because this mode of detection depends completely on the doctor, it is strongly affected by the doctor's subjective factors, which can sometimes compromise the accuracy of the detection procedure and hence the imaging quality of the ultrasonic image.
In order to solve the above problems, an embodiment of the present invention provides an ultrasonic detection method: while the user performs ultrasonic detection with the ultrasonic probe, the current ultrasonic probe information and current ultrasonic image information associated with the user's current operation are acquired in real time; the current ultrasonic image information is input into a neural network model to obtain the ultrasonic probe information corresponding to the next operation; and comparing the probe information of the current operation against that of the next operation yields the adjustment information, which guides the user in adjusting the current operation so that the adjusted next operation captures the optimal ultrasonic image.
It should be noted that, in the ultrasonic detection method provided by the embodiment of the present invention, the current ultrasonic image information associated with the current operation of the user needs to be input into the pre-trained neural network model, so that the neural network model needs to be trained before the current ultrasonic probe information and the current ultrasonic image information are acquired. Accordingly, a specific implementation of the training method for a neural network model provided in the embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a training method of a neural network model according to an embodiment of the present invention, as shown in fig. 1, an execution subject of the method may be a server, and the training method of the neural network model may include the following steps:
S101, acquiring an input training sample and an output training sample.
The training samples are the data basis for training the neural network model, for which a plurality of training samples first need to be acquired.
Specifically, in the process that an ultrasonic expert utilizes an ultrasonic probe to carry out ultrasonic detection on a target object, target information and ultrasonic image information associated with the operation (or ultrasonic detection method) of the ultrasonic expert are acquired in real time, and a plurality of training samples are obtained according to the target information and the ultrasonic image information.
Each training sample comprises an input training sample and an output training sample, the input training sample comprises ultrasonic image information corresponding to a first operation, the output sample comprises ultrasonic probe information corresponding to a second operation, and the second operation is the next operation of the first operation.
In one embodiment, the target information is ultrasound probe information associated with an operation of an ultrasound expert, and may include at least one of pose information, pressure information, and motion information of the ultrasound probe; the pose information may include at least one of probe direction information and probe coordinate information of the ultrasound probe.
S102, respectively inputting each input training sample into a first neural network model for training to obtain a plurality of first prediction results.
And respectively inputting the ultrasonic image information of the plurality of first operations into the first neural network model to obtain a plurality of first prediction results.
S103, judging whether the first prediction result and the output training sample meet a first preset condition. If yes, executing S104; if not, S105 is performed.
And acquiring a first loss function value of the first neural network model according to the first prediction result of the ultrasonic image information of each first operation and the ultrasonic probe information of the second operation.
Judging whether the first loss function value meets a first preset condition, wherein the first preset condition is that the change rate of the first loss function value is in a preset range.
It will be appreciated that the preset range may be specifically defined according to actual needs, and is not specifically defined herein.
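As a minimal sketch of the first preset condition — the rate of change of the loss function value falling within a preset range — one possible check is the following; the tolerance value is an assumed placeholder for the preset range the patent leaves to the implementer:

```python
def loss_converged(losses, tolerance=1e-3):
    """First preset condition, sketched: training may stop once the change in
    the loss between the last two iterations is within the preset tolerance."""
    if len(losses) < 2:
        return False                          # need two values to measure change
    return abs(losses[-1] - losses[-2]) <= tolerance

loss_converged([0.9, 0.5])            # loss still dropping by 0.4: keep training
loss_converged([0.9, 0.5, 0.4995])    # change of 0.0005 is within tolerance: stop
```

Other convergence tests (validation accuracy, fixed iteration budget) could equally implement "a first preset condition"; this is one common choice.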
And S104, stopping iteration, and completing training of the first neural network model to obtain a trained first neural network model.
If the judgment result of S103 is yes, the first prediction results and the output training samples satisfy the first preset condition and meet the requirement for training the first neural network model; at this point the first neural network model is ready for use in the ultrasonic detection method.
S105, adjusting model parameters of the first neural network model, and returning to S102.
If the judgment result of S103 is no, the first prediction results and the output training samples do not yet meet the training requirement of the first neural network model. The relevant model parameters are adjusted, and the method returns to S102 to continue training the adjusted first neural network model with the input training samples, iterating until the first loss function value satisfies the first preset condition, at which point iteration stops and the trained first neural network model is obtained.
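The iterate-adjust-retrain loop of S102 through S105 can be illustrated end to end with a deliberately tiny stand-in model. A one-parameter linear fit replaces the real first neural network model here purely so the control flow is runnable; nothing about the model itself is taken from the patent:

```python
def train_until_converged(samples, lr=0.1, tolerance=1e-6, max_iters=10_000):
    """S102-S105 sketched with a one-parameter model w (a real implementation
    would train a deep network on ultrasound image data)."""
    w = 0.0
    prev_loss = None
    for _ in range(max_iters):
        preds = [w * x for x, _ in samples]                           # S102: predict
        loss = sum((p - y) ** 2
                   for p, (_, y) in zip(preds, samples)) / len(samples)
        if prev_loss is not None and abs(loss - prev_loss) <= tolerance:
            return w                                                  # S103/S104: stop
        grad = sum(2 * (p - y) * x
                   for p, (x, y) in zip(preds, samples)) / len(samples)
        w -= lr * grad                                                # S105: adjust
        prev_loss = loss
    return w

# Samples where output = 2 * input; training should recover w close to 2.
w = train_until_converged([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

The structure mirrors the flow chart: predict, compute the loss, test the preset condition, and adjust the parameters if the condition fails.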
It may be appreciated that the server executing the training method of the neural network model may include a cloud platform or a cloud server.
The specific implementation of each of the above steps will be described in detail below.
Referring to S101, acquiring the plurality of training samples specifically includes the following steps:
S1011, setting a plurality of ultrasonic detection scanning tasks.
In particular, each scanning task may specify an ultrasound examination of an organ or tissue of the human body.
S1012, scanning is performed by the ultrasound specialist according to the designated scanning task.
Specifically, ultrasonic probe information and ultrasonic image information corresponding to each operation performed by the expert in performing the scanning task are recorded.
As a specific embodiment, to ensure that every scan starts from the same position relative to the patient, a zeroing operation must first be performed on the ultrasonic probe before scanning.
Fig. 2 is a schematic diagram of the zeroing operation of an ultrasonic probe according to an embodiment of the present invention. As shown in fig. 2, the ultrasonic probe is placed horizontally below the left side of the examination couch, level with the patient's feet and flush with the patient's right shoulder, its direction consistent with the lengthwise direction of the examination couch; the patient's body likewise lies along the lengthwise direction of the couch.
Then, the expert picks up the ultrasonic probe to scan the patient, and when a target region (an organ or tissue designated by the scanning task) appears in the ultrasonic screen, recording of ultrasonic probe information and ultrasonic image information at each step of operation of the expert is started.
S1013, obtaining a plurality of training samples according to the ultrasonic probe information and the ultrasonic image information under each step of operation of the expert.
Specifically, each training sample includes ultrasound image information corresponding to a first operation and ultrasound probe information corresponding to a second operation, the second operation being a next operation of the first operation.
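The pairing described above (the ultrasound image of one operation with the probe information of the following operation) can be sketched as follows; the record structure and field names are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the sample pairing described above: the ultrasound
# image recorded at one expert operation becomes the input, and the probe
# information recorded at the NEXT operation becomes the training target.
# The record structure ('image'/'probe' fields) is an illustrative assumption.

def build_training_samples(recorded_steps):
    """recorded_steps: ordered list of per-step records, each holding the
    ultrasound image and the probe information captured at that step."""
    samples = []
    for first_op, second_op in zip(recorded_steps, recorded_steps[1:]):
        samples.append({
            "input_image": first_op["image"],    # image at the first operation
            "target_probe": second_op["probe"],  # probe info at the second operation
        })
    return samples
```

With N recorded steps this yields N − 1 samples, since the last operation has no successor to serve as a target.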
Ultrasonic detection scans a human body with an ultrasonic probe, and receives and processes the reflected signals to obtain ultrasonic images of organs in the human body. Because organs or tissues differ obviously among human bodies of different heights, weights, ages, and sexes, in order to further improve the accuracy of the neural network model, the influence of static factors such as height, weight, age, and sex can be taken into consideration during model training.
Fig. 3 is a flow chart of another training method of a neural network model according to an embodiment of the present invention. As shown in fig. 3, the execution subject of the method may be a server; S301-S302 are consistent with S101-S102, and after S302, the training method of the neural network model further includes the following steps:
s303, acquiring static data associated with the first prediction result.
Specifically, static data of a target object associated with the first prediction result is acquired.
Wherein the static data includes at least one of height, weight, gender, age of the target subject.
S304, respectively inputting the first prediction result and static data corresponding to the first prediction result into a second neural network model for training to obtain a plurality of second prediction results.
Specifically, after the first neural network model, a multi-layer fully-connected neural network is established to obtain a second neural network model, a first prediction result output by the first neural network model and static data corresponding to the first prediction result are used as input training samples, and the second neural network model is trained to obtain a plurality of second prediction results.
S305, judging whether the second prediction result and the output training sample meet a second preset condition. If yes, executing S306; if not, S307 is performed.
And acquiring a second loss function value of the second neural network model according to each second prediction result and the ultrasonic probe information of the second operation.
Judging whether the second loss function value meets a second preset condition, wherein the second preset condition is that the change rate of the second loss function value is in a preset range.
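The stopping criterion based on the change rate of the loss function value can be sketched as follows; using the relative change of the last two recorded loss values, and the threshold value itself, are illustrative assumptions.

```python
# Hypothetical sketch of the iteration-stopping rule: training stops once the
# change rate of the loss value falls within a preset range. Measuring that
# rate as the relative change of the last two loss values is an assumption.

def should_stop(loss_history, rate_threshold=0.01):
    """Return True when the latest relative loss change is small enough."""
    if len(loss_history) < 2:
        return False  # need at least two values to measure a change rate
    prev, curr = loss_history[-2], loss_history[-1]
    rate = abs(curr - prev) / max(abs(prev), 1e-12)
    return rate <= rate_threshold
```

The same criterion applies to the first, second, and third models; only the loss function being monitored differs.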
S306, stopping iteration, and completing training of the second neural network model to obtain a trained second neural network model.
If the judgment result of S305 is yes, it indicates that the second prediction result and the output training sample have satisfied the second preset condition and meet the requirement of training the second neural network model; at this point, the second neural network model can already be used in the ultrasonic detection method.
S307, the model parameters of the second neural network model are adjusted, and S304 is executed in a return mode.
If the judgment result in S305 is no, it indicates that the second prediction result and the output training sample fail to meet the training requirement of the second neural network model, the relevant parameters of the current model need to be adjusted, and after adjustment, S304 is returned, and the adjusted second neural network model is continuously trained by using the first prediction result and the static data corresponding to the first prediction result until the second loss function value meets the second preset condition, and the iteration is stopped, so as to obtain the trained second neural network model.
As a specific embodiment, optionally, the first neural network model may be a convolutional neural network (Convolutional Neural Networks, CNN) model, where the CNN model includes a plurality of convolutional layers, a pooling layer, and a full-connection layer, and the training process of the first neural network model specifically includes:
and step 1, inputting an ultrasonic image pic corresponding to the first operation as an input training sample into the CNN model, and outputting a first prediction result.
The ultrasonic image pic has a fixed resolution; the high-dimensional features of the ultrasonic image pic are extracted by the CNN model, and the final fully-connected layer outputs the first prediction result, which includes: probe position coordinates (x_out1, y_out1), probe direction (angle) coordinates (u_out1, v_out1, w_out1), and probe pressure (pressure_out1).
And 2, training a first prediction result according to the formula (1).
loss1 = μ_x1·(x_out1 − x)^2 + μ_y1·(y_out1 − y)^2 + μ_u1·(u_out1 − u)^2 + μ_v1·(v_out1 − v)^2 + μ_w1·(w_out1 − w)^2 + μ_pressure1·(pressure_out1 − pressure)^2 + λ_1·L1 + λ_2·L2   (1)
Wherein loss1 is the loss function of the first neural network model to be minimized; (x, y), (u, v, w), and pressure are respectively the probe position coordinates, probe direction coordinates, and probe pressure corresponding to the second operation in the output training sample; μ_x1, μ_y1, μ_u1, μ_v1, μ_w1, and μ_pressure1 are the weights of the corresponding parameters to be regressed in the first neural network model; λ_1·L1 is L1 regularization, and λ_2·L2 is L2 regularization.
And step 3, when the first loss function value meets the first preset condition, training of the first neural network model is completed, and the trained first neural network model is obtained.
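The weighted squared-error loss of formula (1) can be sketched as follows; the dictionary-based representation of predictions and passing the regularization magnitudes in as precomputed scalars are illustrative assumptions.

```python
# Hypothetical sketch of formula (1): a weighted sum of squared errors over
# the regressed probe parameters plus L1/L2 regularization terms. Supplying
# the regularization magnitudes l1/l2 as precomputed scalars is an assumption.

PARAMS = ("x", "y", "u", "v", "w", "pressure")

def loss1(pred, target, mu, lam1=0.0, lam2=0.0, l1=0.0, l2=0.0):
    """pred/target: dicts keyed by PARAMS; mu: per-parameter weights."""
    data_term = sum(mu[k] * (pred[k] - target[k]) ** 2 for k in PARAMS)
    return data_term + lam1 * l1 + lam2 * l2
```

Formula (2) for the second model has the same structure and differs only in its weights μ_x2, …, μ_pressure2, so the same helper applies.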
As a specific embodiment, optionally, after the first neural network model, a plurality of fully connected layers are built to obtain a second neural network model, and a training process of the second neural network model specifically includes:
step 1, obtaining a first prediction result output by a first neural network model and static data corresponding to the first prediction result.
The first prediction result includes: probe position coordinates (x_out1, y_out1), probe direction (angle) coordinates (u_out1, v_out1, w_out1), and probe pressure (pressure_out1).
Static data corresponding to the first prediction result includes: age, gender, height, shoulder width, weight.
Wherein age, height, shoulder width, weight data are normalized to the [0,1] interval, gender data are converted to one-hot format.
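The preprocessing of the static data (min-max normalization to [0, 1] and one-hot encoding of gender) can be sketched as follows; the value ranges and the two gender categories are illustrative assumptions.

```python
# Hypothetical sketch of the static-data preprocessing: min-max normalization
# of the numeric fields to [0, 1] and one-hot encoding of gender. The per-field
# (lo, hi) ranges and the two gender categories are illustrative assumptions.

def min_max(value, lo, hi):
    return (value - lo) / (hi - lo)

def encode_static(age, gender, height, shoulder_width, weight, ranges):
    """ranges: dict mapping each numeric field to its (lo, hi) bounds."""
    vec = [
        min_max(age, *ranges["age"]),
        min_max(height, *ranges["height"]),
        min_max(shoulder_width, *ranges["shoulder_width"]),
        min_max(weight, *ranges["weight"]),
    ]
    vec += [1.0 if gender == g else 0.0 for g in ("male", "female")]  # one-hot
    return vec
```

The resulting vector can be concatenated with the first prediction result to form the input of the second neural network model.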
And 2, respectively inputting the first predicted result and static data corresponding to the first predicted result into a second neural network model for training to obtain a second predicted result.
The last fully-connected layer of the second neural network model outputs the second prediction result, including: probe position coordinates (x_out2, y_out2), probe direction (angle) coordinates (u_out2, v_out2, w_out2), and probe pressure (pressure_out2).
And 3, training a second prediction result according to the formula (2).
loss2 = μ_x2·(x_out2 − x)^2 + μ_y2·(y_out2 − y)^2 + μ_u2·(u_out2 − u)^2 + μ_v2·(v_out2 − v)^2 + μ_w2·(w_out2 − w)^2 + μ_pressure2·(pressure_out2 − pressure)^2 + λ_1·L1 + λ_2·L2   (2)
Wherein loss2 is the loss function of the second neural network model to be minimized; (x, y), (u, v, w), and pressure are respectively the probe position coordinates, probe direction coordinates, and probe pressure corresponding to the second operation in the output training sample; μ_x2, μ_y2, μ_u2, μ_v2, μ_w2, and μ_pressure2 are the weights of the corresponding parameters to be regressed in the second neural network model; λ_1·L1 is L1 regularization, and λ_2·L2 is L2 regularization.
And step 4, when the second loss function value meets the second preset condition, training of the second neural network model is completed, and the trained second neural network model is obtained.
It can be understood that, in order to improve the accuracy of the neural network model, the first neural network model and the second neural network model can also be continuously trained by using new training samples in practical application, so as to continuously update the neural network model and improve the accuracy of the neural network model, thereby improving the accuracy of the prediction result.
The above is a specific implementation manner of the training method of the neural network model provided by the embodiment of the invention. The neural network model obtained through the training can be applied to an ultrasonic detection method provided in the following embodiment.
Fig. 4 is a schematic flow chart of an ultrasonic detection method according to an embodiment of the present invention, as shown in fig. 4, the ultrasonic detection method may include the following steps:
s401, the terminal equipment acquires current target information and current ultrasonic image information.
S402, the terminal equipment sends the current target information and the current ultrasonic image information to the server.
S403, the server determines target adjustment information according to the current target information, the current ultrasonic image information and the first neural network model.
S404, the server sends the target adjustment information to the terminal equipment.
S405, the terminal equipment outputs the target adjustment information in a target output mode.
After receiving the target adjustment information returned by the server, the terminal equipment outputs the target adjustment information in a target output mode, wherein the target adjustment information is used for adjusting the current target information to target information corresponding to the next operation of the user.
The specific implementation of each of the above steps will be described in detail below.
First, referring to S401, in a process in which a user detects a target object using an ultrasonic probe, a terminal device acquires current target information and current ultrasonic image information associated with a current operation of the user. The current target information is ultrasonic probe information corresponding to current operation of a user, and comprises at least one of pose information, pressure information and motion information of the ultrasonic probe, wherein the pose information comprises probe direction information and probe coordinate information of the ultrasonic probe.
As a specific embodiment, when the ultrasonic probe is used for the first time, the user performs a zeroing operation on the ultrasonic probe, keeping the directions of the examination bed and the patient consistent; the user then picks up the ultrasonic probe and brings it into contact with the human body, and the system records the ultrasonic probe information and the ultrasonic image information under the current operation of the user.
Then, referring to S402, after acquiring the current target information and the current ultrasound image information, the terminal device transmits the current target information and the current ultrasound image information to the server.
Optionally, before the terminal device sends the current target information and the current ultrasonic image information to the server, the terminal device may further encode the current target information and the current ultrasonic image information to obtain encoded information, and send the encoded information to the server, so that the server determines the target adjustment information according to the encoded information and the neural network model.
Next, referring to S403, the server determines target adjustment information according to the current target information, the current ultrasound image information, and the neural network model, and specifically includes:
s4031, the server receives the current target information and the current ultrasound image information sent by the terminal device.
S4032, the server inputs the current ultrasonic image information into the first neural network model to obtain target information corresponding to the next operation.
S4033, comparing the difference between the current target information and the target information corresponding to the next operation to obtain target adjustment information.
As a specific embodiment, the current target information may include the position, direction (angle), and pressure data of the ultrasonic probe under the current operation of the user. After the current ultrasonic image information is input into the first neural network model, the target information corresponding to the next operation is output, which likewise includes position, direction (angle), and pressure data of the ultrasonic probe; the current target information is compared with the target information corresponding to the next operation to obtain the adjustment information of the ultrasonic probe.
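The comparison of the current target information with the predicted next-operation target information can be sketched as a simple element-wise difference; the field names are illustrative assumptions.

```python
# Hypothetical sketch of deriving the target adjustment information: the
# element-wise difference between the predicted next-operation probe state
# and the current probe state. Field names are illustrative assumptions.

def target_adjustment(current, predicted_next):
    """Positive values mean 'increase this quantity' for the next operation."""
    keys = ("x", "y", "u", "v", "w", "pressure")
    return {k: predicted_next[k] - current[k] for k in keys}
```

The terminal device would then render these deltas as a visual, audio, or tactile signal guiding the user's next operation.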
Finally, referring to S405, the terminal device outputs target adjustment information in a target output manner, specifically including: the target adjustment information is output as at least one signal of a visual signal, an audio signal, and a tactile signal.
Optionally, the user adjusts the moving direction (up, down, left, right), the probe angle and the using pressure of the ultrasonic probe according to the audio signal in the next operation, so as to achieve the optimal imaging effect.
According to the ultrasonic detection method, in the process in which a user performs ultrasonic detection with the ultrasonic probe, the current ultrasonic probe information and the current ultrasonic image information associated with the user's current operation are acquired in real time, and the adjustment information for the current operation is output according to them. The user is guided to adjust from the current operation to the next operation according to the output adjustment information, so that even an inexperienced doctor can capture the optimal ultrasonic image in the next operation under the guidance of the adjustment information, effectively improving the detection efficiency of ultrasonic detection.
In addition, as mentioned above, since there are significant differences in organs or tissues in the human body of different heights, weights, ages, sexes, the influence of static factors such as heights, weights, ages, sexes, etc. can be taken into consideration when performing model training, a trained second neural network model is obtained. Therefore, the second neural network model obtained through the training can be applied to the ultrasonic detection method provided in the following embodiment.
Fig. 5 is a schematic flow chart of another ultrasonic detection method according to an embodiment of the present invention, as shown in fig. 5, the ultrasonic detection method may include the following steps:
S501, the terminal equipment acquires current target information, current ultrasonic image information and static data of a target object.
Specifically, in a process that a user detects a target object by using an ultrasonic probe, a terminal device acquires current target information associated with a current operation of the user, current ultrasonic image information, and static data of the target object. The current target information is ultrasonic probe information corresponding to the current operation of the user, and the static data comprises at least one of the height, weight, age and gender of the target object.
S502, the terminal equipment sends the current target information, the current ultrasonic image information and the static data to the server.
S503, the server determines target adjustment information according to the current target information, the current ultrasonic image information, the static data and the second neural network model.
Specifically, the server receives current target information, current ultrasonic image information and static data sent by the terminal equipment; the server inputs the current ultrasonic image information and static data into a second neural network model to obtain target information corresponding to the next operation; and comparing the difference between the current target information and the target information corresponding to the next operation to obtain target adjustment information.
S504, the server sends the target adjustment information to the terminal equipment.
S505, the terminal equipment outputs the target adjustment information in a target output mode.
According to the ultrasonic detection method provided by the embodiment of the invention, considering that the positions and states of organs/tissues differ among human bodies of different heights, weights, ages, and sexes, when the current operation of the user is adjusted, the static data of the patient is acquired and input, together with the current ultrasonic image, into the pre-trained neural network model. More accurate ultrasonic probe information for the user's next operation is thereby determined, further improving the accuracy of the adjustment information.
In the prior art, even for the same organ/tissue, the ultrasonic detection method differs depending on the type of disease occurring on that organ. For example, heart diseases are classified into a plurality of types including arrhythmia, coronary heart disease, and myocardial infarction, and the corresponding ultrasonic detection methods differ by disease type. When performing ultrasonic detection on patients, accurately finding the initial detection position corresponding to each disease is an urgent problem for doctors lacking ultrasonic detection experience.
In order to guide a user to smoothly find an initial detection position corresponding to each disease when the user performs ultrasonic detection by using an ultrasonic probe, the embodiment of the invention provides another ultrasonic detection method, and a specific implementation of the ultrasonic detection method provided by the embodiment of the invention is described below with reference to the accompanying drawings.
Fig. 6 is a schematic flow chart of another ultrasonic detection method according to an embodiment of the present invention, and as shown in fig. 6, an execution subject of the ultrasonic detection method is a server, and the ultrasonic detection method may include the following steps:
s601, acquiring initial position information of an ultrasonic probe when a user performs ultrasonic detection on a first target object, initial ultrasonic image information corresponding to the initial position information, and disease information, height and shoulder width of the first target object.
The initial position information is position information of an ultrasonic probe corresponding to the first step operation when a user utilizes the ultrasonic probe to carry out ultrasonic detection on the first target object, and the initial ultrasonic image information is ultrasonic image information corresponding to the first step operation.
The disease information of the first target object includes a disease category.
S602, acquiring a first sample collection according to disease information of the first target object.
Specifically, a plurality of origin sample sets are pre-established, including:
step 1, acquiring starting point position information of an ultrasonic probe and disease information of a second target object when an expert performs ultrasonic detection on the second target object.
And 2, establishing a starting point sample set according to the plurality of starting point position information, and classifying the starting point sample set according to the disease information to obtain a plurality of starting point sample sets.
Each disease category corresponds to a starting point sample set, and the first sample set is the starting point sample set corresponding to the disease category of the first target object.
S603, calculating relative displacement according to the initial position information, the height and the shoulder width of the first target object and the starting point position information in the first sample collection.
S604, judging whether the relative displacement meets a preset threshold, if so, executing S605, and if not, executing S606.
S605, obtaining moving direction information according to the initial position information and the starting point position information in the first sample set.
And S606, obtaining movement direction information according to the relative displacement, the initial ultrasonic image information and the third neural network model.
S607, the moving direction information is sent to the terminal device, so that the terminal device outputs the moving direction information according to the target output mode.
As a specific embodiment, the ultrasonic detection method includes:
Step 1, obtaining the disease category A of the first target object and the starting point position (x_init, y_init) in the first sample set corresponding to A, as well as the initial position (x, y) of the ultrasonic probe corresponding to the first step operation and the height and shoulder width of the first target object when the user performs ultrasonic detection on the first target object.
Wherein (x_init, y_init) is the average of all starting point position coordinates in the first sample set corresponding to A.
Step 2, calculating the relative displacement relative_drift of the initial position (x, y) of the ultrasonic probe with respect to the starting position (x_init, y_init) according to formula (3):

relative_drift = ((x − x_init)/width, (y − y_init)/height)   (3)

where width and height are the shoulder width and the height of the first target object, respectively.
Step 3, judging whether the relative displacement relative_drift is greater than the preset threshold value.
Step 4, when the relative displacement relative_drift is greater than the preset threshold, it indicates that the initial position (x, y) is far from the starting position (x_init, y_init); at this time, the moving direction of the ultrasonic probe, i.e., −mean(relative_drift), is determined directly from the relative displacement relative_drift of the initial position with respect to the starting position.
Step 5, when the relative displacement relative_drift is less than or equal to the preset threshold value, the movement direction information is obtained according to the relative displacement relative_drift, the initial ultrasonic image information, and the third neural network model.
When the relative displacement relative_drift is less than or equal to the preset threshold, it indicates that the initial position (x, y) is close to the starting position (x_init, y_init). In this case, determining the moving direction of the ultrasonic probe based on −mean(relative_drift) alone is inaccurate, because the average of the coordinates of the plurality of starting points in the first sample set corresponding to A is affected by noise, so that (x_init, y_init) is not accurate enough, which in turn affects the accuracy of −mean(relative_drift).
At this time, the moving direction needs to be determined jointly from the relative displacement relative_drift and the initial ultrasound image information: the relative displacement relative_drift and the current ultrasonic image information are input into the third neural network model, which outputs predicted starting point position coordinates (x_out3, y_out3), giving the suggested moving direction (−x_out3, −y_out3) of the ultrasonic probe.
Step 6, sending the moving direction (−x_out3, −y_out3) to the terminal device, so that the terminal device outputs the moving direction (−x_out3, −y_out3).
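The decision logic of steps 2-5 — move opposite to the relative displacement when far from the starting position, otherwise consult the third neural network model — can be sketched as follows; the Euclidean magnitude used for the threshold comparison and the model call signature are illustrative assumptions.

```python
# Hypothetical sketch of steps 2-5: compute the normalized relative
# displacement, then either move against it directly (far from the starting
# position) or defer to the third neural network model (close to it).
# The Euclidean magnitude used for the threshold test and the model call
# signature are illustrative assumptions.

def relative_drift(pos, start, width, height):
    """Displacement normalized by shoulder width and height (assumed form)."""
    return ((pos[0] - start[0]) / width, (pos[1] - start[1]) / height)

def movement_direction(pos, start, width, height, threshold,
                       model=None, image=None):
    drift = relative_drift(pos, start, width, height)
    magnitude = (drift[0] ** 2 + drift[1] ** 2) ** 0.5
    if magnitude > threshold:
        # Far from the starting position: move against the drift directly.
        return (-drift[0], -drift[1])
    # Close to the starting position: the noisy set-average start point makes
    # the direct estimate unreliable, so consult the third model instead.
    x_out3, y_out3 = model(drift, image)
    return (-x_out3, -y_out3)
```

In the close-to-start branch, `model` stands in for the trained third neural network model applied to the drift and the initial ultrasound image.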
Specific implementations of the above steps will be described in detail below.
Referring to S606, when the relative displacement does not meet the preset threshold, the movement direction information needs to be obtained according to the relative displacement, the initial ultrasound image information and the third neural network model, so that the third neural network model needs to be trained in advance before the movement direction information is obtained according to the relative displacement, the initial ultrasound image information and the third neural network model, specifically including:
S6061, acquiring an input training sample and an output training sample.
The method specifically comprises the following steps:
step 1, obtaining an organ/tissue panorama of an expert when carrying out ultrasonic detection on an organ or tissue of a second target object, and a starting point position coordinate of an ultrasonic probe corresponding to the first operation.
And 2, obtaining a neighborhood of the starting point position coordinate according to the organ/tissue panorama and the starting point position coordinate.
And 3, determining the position coordinates in the neighborhood and the ultrasonic image information corresponding to the position coordinates according to the organ/tissue panorama.
And 4, calculating the relative displacement of each position coordinate in the neighborhood and the starting point position coordinate.
And 5, taking the relative displacement of the position coordinate in each neighborhood and the starting point position coordinate and the ultrasonic image information corresponding to the position coordinate in the neighborhood as input training samples, and taking the starting point position coordinate as output training samples.
And S6062, respectively inputting each input training sample into a third neural network model for training to obtain a plurality of third prediction results.
And respectively inputting the relative displacement of the position coordinates in each neighborhood and the starting point position coordinates and the ultrasonic image information corresponding to the position coordinates in the neighborhood into a third neural network model to obtain a plurality of third prediction results.
S6063, judging whether the third prediction result and the output training sample meet a third preset condition. If yes, executing S6064; if not, S6065 is performed.
And acquiring a third loss function value of the third neural network model according to each third prediction result and the starting point position coordinates corresponding to the first operation.
Judging whether the third loss function value meets a third preset condition, wherein the third preset condition is that the change rate of the third loss function value is in a preset range.
And S6064, stopping iteration, and completing training of the third neural network model to obtain a trained third neural network model.
If the judgment result of S6063 is yes, it indicates that the third prediction result and the output training sample have satisfied a third preset condition, and meet the requirement of training a third neural network model, where the third neural network model may be already used in the method for determining the starting position of ultrasonic detection.
And S6065, adjusting the model parameters of the third neural network model, and returning to S6062.
If the judgment result in S6063 is no, it indicates that the third prediction result and the output training sample fail to meet the training requirement of the third neural network model, the relevant parameters of the current model need to be adjusted, and after adjustment, S6062 is returned, and the adjusted third neural network model is continuously trained by using the input training sample until the third loss function value meets the third preset condition, and the iteration is stopped, so as to obtain the trained third neural network model.
As a specific embodiment, pre-training the third neural network model includes:
Step 1, taking the relative displacement ((x_1 − x_init)/width, (y_1 − y_init)/height) between a position coordinate (x_1, y_1) in the neighborhood and the starting point position coordinate (x_init, y_init), together with the ultrasonic image information pic1 corresponding to (x_1, y_1), as an input training sample, and inputting it into the third neural network model.
Step 2, outputting a third prediction result (x_out3, y_out3).
And 3, training a third prediction result according to the formula (4).
loss3 = ((x_out3 − x_1 + x_init)/width)^2 + ((y_out3 − y_1 + y_init)/height)^2 + λ_1·L1 + λ_2·L2   (4)
Wherein loss3 is the loss function of the third neural network model to be minimized, λ_1·L1 is L1 regularization, and λ_2·L2 is L2 regularization.
In one embodiment, referring to S6061, the organ/tissue panorama is required to be generated in advance, specifically including:
Step 1, a zeroing operation is performed on the ultrasonic probe before scanning: the ultrasonic probe is placed horizontally at the lower left of the examination bed, level with the patient's feet and with the patient's right shoulder, and oriented along the length of the examination bed; the patient's body is likewise oriented along the length of the examination bed.
Step 2, the expert scans the organs and tissues, and only scans one organ or tissue at a time. When an expert picks up the ultrasonic probe to scan a target organ or tissue of a human body, the movement track of the ultrasonic probe can be calculated according to the acceleration data of the gyroscope in the ultrasonic probe and the acceleration time, so that the position range of the target organ or tissue is determined, and at the moment, the system records the position of the target organ/tissue.
And 3, covering the target organ/tissue by using each position, direction and pressure which can be contacted by the ultrasonic probe by the expert, recording the ultrasonic probe information and corresponding ultrasonic image information of each position, direction and pressure of the ultrasonic probe by using the system, and obtaining all possible scanning results of the target organ/tissue according to all ultrasonic image information obtained by scanning the target organ/tissue so as to generate a panoramic image (or a data model) of the target organ/tissue.
Alternatively, the expert can scan organs and tissues of human bodies with different heights, weights, ages and sexes to obtain panoramic images of target organs or tissues of human bodies with different heights, weights, ages and sexes and relative positions of the target organs or tissues in the human bodies.
As a specific embodiment, the process of generating the organ/tissue panorama specifically includes:
Step 1, taking the lowest center of the examination bed as the origin of coordinates (0, 0), with the upper right corner at (x_max, y_max) and the upper left corner at (−x_max, y_max); the height (height) and the shoulder width (the width across both shoulders, width) of the patient are measured in advance.
Step 2, the expert scans the organ/tissue of the patient at different positions and directions by using the ultrasonic probe and uses different pressures, and the system records ultrasonic probe information and ultrasonic image information under each position, direction and pressure of the ultrasonic probe, wherein the ultrasonic probe information comprises probe coordinates (x, y), probe directions (u, v, w) and probe pressure.
And 3, the data values of the ultrasonic probe information and the ultrasonic image information are continuous variables, so that the sampling values are required to be stored after discretization at a certain frequency.
And 4, storing all sampling values as a data set to generate an organ/tissue panorama.
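The discretized sampling of steps 3-4 — snapping continuous probe readings to a grid so that repeated sweeps of effectively the same position, direction, and pressure collapse to a single stored sample — can be sketched as follows; the grid step sizes and the dictionary-based storage are illustrative assumptions.

```python
# Hypothetical sketch of the discretized sampling: continuous probe readings
# are snapped to a grid so that repeated sweeps of effectively the same
# position/direction/pressure collapse to one stored sample. Grid step sizes
# and dict-based storage are illustrative assumptions.

def discretize(value, step):
    return round(value / step) * step

def add_sample(panorama, probe_info, image,
               pos_step=0.5, ang_step=1.0, pressure_step=0.1):
    x, y = probe_info["pos"]
    u, v, w = probe_info["dir"]
    key = (discretize(x, pos_step), discretize(y, pos_step),
           discretize(u, ang_step), discretize(v, ang_step),
           discretize(w, ang_step),
           discretize(probe_info["pressure"], pressure_step))
    panorama[key] = image  # later sweeps of the same cell overwrite earlier ones
    return panorama
```

The complete set of keys and images then forms the stored data set from which the organ/tissue panorama is generated.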
According to the ultrasonic detection method, in the process in which a user performs ultrasonic detection on a target object with the ultrasonic probe, the initial position information of the ultrasonic probe associated with the user's first step operation and the disease information of the target object are acquired; the starting point sample set corresponding to the disease information of the target object is obtained, and the movement direction information for the initial position corresponding to the user's first step operation is obtained from the relatively accurate starting point position information stored in that set. The user is guided to adjust the initial position of the ultrasonic probe according to the obtained movement direction information, so as to accurately find the initial detection position corresponding to the disease of the target object. For doctors lacking ultrasonic detection experience, even if the first operation of ultrasonic detection fails to find the correct initial detection position corresponding to the disease of the target object, the error of the first operation can be effectively corrected in the next operation according to the obtained movement direction information, so that the correct initial detection position for ultrasonic detection of the target object is smoothly found.
Based on the specific implementation manner of the ultrasonic detection method provided by the embodiment of the invention, the embodiment of the invention also provides a structural schematic diagram of the ultrasonic detection device. The structure of the ultrasonic detection apparatus is described below with reference to fig. 7.
Fig. 7 is a schematic structural diagram of an ultrasonic detection apparatus according to an embodiment of the present invention. As shown in fig. 7, the ultrasonic detection apparatus 700 is used for a terminal device and specifically includes: an acquisition module 710, a sending module 720, a receiving module 730 and an output module 740.
The acquiring module 710 is configured to acquire current target information and current ultrasonic image information associated with a current operation of the ultrasonic probe by a user during a process of detecting a target object by using the ultrasonic probe.
The sending module 720 is configured to send the current target information and the current ultrasound image information to the server, so that the server determines target adjustment information according to the current target information, the current ultrasound image information, and the neural network model, where the target adjustment information is adjustment information from the current target information to target information corresponding to a next operation of the user.
The receiving module 730 is configured to receive the target adjustment information returned by the server.
The output module 740 is configured to output the target adjustment information in a target output manner.
In some embodiments, the current target information includes at least one of pose information, pressure information, and motion information; the pose information includes at least one of probe direction information and probe coordinate information of the ultrasonic probe.
In some embodiments, the output module 740 is specifically configured to: the target adjustment information is output as at least one signal of a visual signal, an audio signal, and a tactile signal.
In some embodiments, the apparatus further includes an encoding module 750, which operates after the current target information and the current ultrasonic image information associated with the user's current operation of the ultrasonic probe are acquired and before they are sent to the server, and is configured to encode the current target information and the current ultrasonic image information to obtain encoded information.
In some embodiments, the sending module 720 is specifically configured to: the encoded information is sent to a server for the server to determine target adjustment information based on the encoded information and the neural network model.
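The patent does not specify a wire format for the encoded information; the following sketch uses JSON plus base64 purely as an example of packing the target information and image into a single payload for transmission to the server.

```python
# Illustrative encoding of current target information and the current ultrasound
# image into one payload (format is an assumption, not the patent's scheme).
import base64
import json

def encode_payload(target_info: dict, image_bytes: bytes) -> bytes:
    payload = {
        "target": target_info,  # pose / pressure / motion information
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload).encode("utf-8")

def decode_payload(blob: bytes):
    """Server-side counterpart: recover the target info and raw image bytes."""
    payload = json.loads(blob.decode("utf-8"))
    return payload["target"], base64.b64decode(payload["image"])
```

A round trip through `encode_payload`/`decode_payload` yields the original target information and image bytes unchanged.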
With the ultrasonic detection apparatus provided by the embodiment of the invention, while a user performs ultrasonic detection with the ultrasonic probe, the current ultrasonic probe information and the current ultrasonic image information associated with the user's current operation are obtained in real time, and adjustment information for the current operation is output according to that probe information and image information. Guided by the output adjustment information, the user adjusts from the current operation to the next operation, so that even an inexperienced doctor can capture the optimal ultrasonic image in the next operation, which effectively improves the user's detection efficiency during ultrasonic detection.
Based on the specific implementation manner of the ultrasonic detection method provided by the embodiment of the invention, the embodiment of the invention also provides a structural schematic diagram of an ultrasonic detection system. The structure of the ultrasonic detection system is described below in conjunction with fig. 8.
Fig. 8 is a schematic structural diagram of an ultrasonic detection system according to an embodiment of the present invention, and as shown in fig. 8, the ultrasonic detection system 800 may include: terminal device 810 and server 820.
The terminal device 810 is configured to obtain current target information and current ultrasonic image information associated with a current operation of the ultrasonic probe by using the ultrasonic probe in a process of detecting the target object; and transmitting the current target information and the current ultrasonic image information to a server.
The server 820 is configured to acquire current target information and current ultrasonic image information, input the current ultrasonic image information into the neural network model, and output target information corresponding to a next operation; comparing the difference between the current target information and the target information corresponding to the next operation to obtain target adjustment information; and sending the target adjustment information to the terminal equipment.
The terminal device 810 is further configured to receive the target adjustment information returned by the server, and output the target adjustment information in a target output manner.
In some embodiments, the neural network model is trained from a plurality of training samples, each training sample including ultrasound image information of a first operation and target information corresponding to a second operation corresponding to the first operation, the second operation being a next operation of the first operation.
In some embodiments, the target adjustment information is adjustment information for adjusting from current target information to target information corresponding to a next operation of the user.
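The server's role described above, predicting the target information for the next operation from the current image and comparing it with the current target information, can be sketched as below. The real system uses a trained neural network; `predict_next_target` here is a dummy placeholder, and all names are assumptions.

```python
# Server-side sketch: a stand-in "model" predicts next-operation target info
# from the current image; the adjustment information is the element-wise
# difference from the current target information.
def predict_next_target(image_bytes: bytes) -> dict:
    # Placeholder for the neural network model's output.
    return {"x": 10.0, "y": 5.0, "pressure": 2.0}

def compute_adjustment(current: dict, image_bytes: bytes) -> dict:
    """Difference between predicted next-operation target info and current info."""
    target = predict_next_target(image_bytes)
    return {k: target[k] - current.get(k, 0.0) for k in target}
```

The resulting dictionary is what would be returned to the terminal device as the target adjustment information.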
With the ultrasonic detection system provided by the embodiment of the invention, while a user performs ultrasonic detection with the ultrasonic probe, the current ultrasonic probe information and the current ultrasonic image information associated with the user's current operation are obtained in real time, and adjustment information for the current operation is output according to that probe information and image information. Guided by the output adjustment information, the user adjusts from the current operation to the next operation, so that even an inexperienced doctor can capture the optimal ultrasonic image in the next operation, which effectively improves the user's detection efficiency during ultrasonic detection.
Based on the specific implementation manner of the ultrasonic detection method provided by the embodiment of the invention, the embodiment of the invention also provides a structural schematic diagram of the ultrasonic probe. The structure of the ultrasonic probe is described below with reference to fig. 9.
Fig. 9 is a schematic structural diagram of an ultrasonic probe according to an embodiment of the present invention. As shown in fig. 9, the ultrasonic probe 900 may include: a pressure sensor 910, a gyroscope 920, an ultrasonic transducer 930, a pulse excitation circuit 940, a communication module 950, a low-noise operational amplifier, and an analog-to-digital converter 960.
In some embodiments, the pressure sensor 910 is used to acquire pressure information of the ultrasound probe in real time during detection of a target object with the ultrasound probe.
In some embodiments, gyroscope 920 is used to acquire pose information of the ultrasound probe in real-time, where the pose information includes probe direction information and probe coordinate information of the ultrasound probe.
In some embodiments, the communication module 950 is configured to send the pressure information and the pose information to the terminal device, so that the terminal device performs the method of the corresponding part of the embodiment shown in fig. 4 provided by the embodiment of the present invention.
In some embodiments, the communication module 950 includes an information processing unit therein for processing received information.
According to the ultrasonic probe provided by the embodiment of the invention, the pressure sensor and the gyroscope can be used for acquiring the pose information and the pressure information of the ultrasonic probe associated with the user operation during ultrasonic detection in real time.
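One acquisition cycle on the probe side, polling the pressure sensor and the gyroscope and handing the readings to the communication module, can be sketched as follows. The sensor and transport interfaces are invented for illustration; real firmware would talk to the hardware directly.

```python
# Hedged sketch of a probe-side acquisition cycle: sample both sensors and
# pass the combined reading to the communication module's send function.
from typing import Callable, Tuple

def acquire_and_send(read_pressure: Callable[[], float],
                     read_pose: Callable[[], Tuple[tuple, tuple]],
                     send: Callable[[dict], None]) -> dict:
    """One cycle: sample pressure and pose, then transmit them together."""
    pressure = read_pressure()                 # pressure sensor reading
    direction, coords = read_pose()            # gyroscope-derived pose info
    message = {"pressure": pressure, "direction": direction, "coords": coords}
    send(message)                              # hand off to communication module
    return message
```

In the system described here, such messages would be sent to the terminal device, which forwards them to the server.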
Fig. 10 is a schematic hardware structure of an ultrasonic detection apparatus according to an embodiment of the present invention.
As shown in fig. 10, the ultrasonic detection device 1000 in the present embodiment includes an input device 1001, an input interface 1002, a central processor 1003, a memory 1004, an output interface 1005, and an output device 1006. The input interface 1002, the central processing unit 1003, the memory 1004, and the output interface 1005 are connected to each other through a bus 1010, and the input device 1001 and the output device 1006 are connected to the bus 1010 through the input interface 1002 and the output interface 1005, respectively, and further connected to other components of the ultrasonic detection device 1000.
Specifically, the input device 1001 receives input information from the outside, and transmits the input information to the central processor 1003 through the input interface 1002; the central processor 1003 processes the input information based on computer executable instructions stored in the memory 1004 to generate output information, temporarily or permanently stores the output information in the memory 1004, and then transmits the output information to the output device 1006 through the output interface 1005; the output device 1006 outputs the output information to the outside of the ultrasonic detection device 1000 for use by the user.
In one embodiment, the ultrasonic detection device 1000 shown in fig. 10 includes: a memory 1004 for storing a program; and a processor 1003 for executing the program stored in the memory to perform the methods of the embodiments of fig. 1 to 6 provided by the embodiments of the present invention.
The embodiment of the invention also provides a computer storage medium, and the computer storage medium is stored with computer program instructions; the computer program instructions, when executed by a processor, implement the methods of the embodiments of fig. 1-6 provided by embodiments of the present invention.
It should be understood that the invention is not limited to the particular arrangements and instrumentalities described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order of steps, after appreciating the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor Memory devices, read-Only Memory (ROM), flash Memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In the foregoing, only the specific embodiments of the present invention are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present invention is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present invention, and they should be included in the scope of the present invention.

Claims (13)

1. An ultrasonic detection method for a terminal device, the method comprising:
acquiring current target information and current ultrasonic image information associated with current operation of a user by using an ultrasonic probe in the process of detecting a target object by using the ultrasonic probe;
The current target information and the current ultrasonic image information are sent to a server, so that the server inputs the current ultrasonic image information into a neural network model to obtain target information corresponding to the next operation, and the difference between the current target information and the target information corresponding to the next operation is compared to obtain target adjustment information, wherein the target adjustment information is adjustment information from the current target information to target information corresponding to the next operation of a user;
receiving the target adjustment information returned by the server;
outputting the target adjustment information in a target output mode;
before the acquiring the current target information and the current ultrasonic image information associated with the current operation of the ultrasonic probe by the user, the method further comprises:
receiving the moving direction information returned by the server;
outputting the moving direction information according to the target output mode;
the moving direction information is obtained according to the initial position information and the starting point position information in a first starting point sample set under the condition that the relative displacement meets a preset threshold value, and is obtained according to the initial position information, the initial ultrasonic image information and a third neural network model under the condition that the relative displacement does not meet the preset threshold value;
the initial position information is position information of the ultrasonic probe corresponding to a first step operation, the initial ultrasonic image information is ultrasonic image information corresponding to the first step operation, the relative displacement is obtained according to the initial position information, the height and the shoulder width of the target object and the starting point position information in the first starting point sample set, the first starting point sample set is obtained according to disease information of the target object, and the first starting point sample set corresponds to the disease category of the target object.
2. The method of claim 1, wherein the current target information comprises at least one of pose information, pressure information, and motion information;
the pose information includes at least one of probe direction information and probe coordinate information of the ultrasonic probe.
3. The method according to claim 1 or 2, wherein outputting the target adjustment information in a target output manner specifically comprises:
and outputting the target adjustment information in a mode of at least one of a visual signal, an audio signal and a tactile signal.
4. The method of claim 1, wherein after the acquiring the current target information and the current ultrasound image information associated with the current operation of the ultrasound probe by the user, before the transmitting the current target information and the current ultrasound image information to a server, the method further comprises:
Encoding the current target information and the current ultrasonic image information to obtain encoded information;
the step of sending the current target information and the current ultrasonic image information to a server specifically includes:
and sending the encoded information to the server for the server to determine the target adjustment information according to the encoded information and the neural network model.
5. An ultrasonic testing method for a server, the method comprising:
acquiring current target information and current ultrasonic image information, wherein the current target information is target information associated with the user's current operation of the ultrasonic probe in the process of detecting a target object by using the ultrasonic probe, and the current ultrasonic image information is ultrasonic image information associated with the user's current operation of the ultrasonic probe;
inputting the current ultrasonic image information into a neural network model to obtain target information corresponding to the next operation, wherein the neural network model comprises a first neural network model which is obtained by training according to a plurality of training samples, each training sample comprises ultrasonic image information of a first operation and target information corresponding to a second operation corresponding to the first operation, and the second operation is the next operation of the first operation;
Comparing the difference between the current target information and the target information corresponding to the next operation to obtain target adjustment information;
the target adjustment information is sent to terminal equipment so as to be used for the terminal equipment to output the target adjustment information, and the target adjustment information is used for the user to adjust from the current target information to target information corresponding to the next operation;
prior to the acquiring the current target information and the current ultrasound image information, the method further comprises:
acquiring initial position information of an ultrasonic probe when a user performs ultrasonic detection on the target object, initial ultrasonic image information corresponding to the initial position information, and disease information, height and shoulder width of the target object, wherein the initial position information is the position information of the ultrasonic probe corresponding to a first step operation, and the initial ultrasonic image information is the ultrasonic image information corresponding to the first step operation;
acquiring a first starting point sample set according to the disease information of the target object, wherein the first starting point sample set corresponds to the disease category of the target object;
calculating relative displacement according to the initial position information, the height and the shoulder width of the target object and the starting point position information in the first starting point sample set;
under the condition that the relative displacement meets a preset threshold value, obtaining moving direction information according to the initial position information and the starting point position information in the first starting point sample set;
under the condition that the relative displacement does not meet the preset threshold value, obtaining moving direction information according to the relative displacement, the initial ultrasonic image information and a third neural network model;
and sending the moving direction information to the terminal equipment so as to be used for the terminal equipment to output the moving direction information according to a target output mode.
6. The method of claim 5, wherein prior to the acquiring current target information and current ultrasound image information, the method further comprises: pre-training the first neural network model;
the pre-training the first neural network model specifically comprises the following steps:
acquiring a plurality of training samples, wherein each training sample comprises ultrasonic image information of a first operation in the process of detecting the target object by using the ultrasonic probe and target information corresponding to a second operation corresponding to the ultrasonic image information;
respectively inputting each training sample into the first neural network model for training to obtain a plurality of first prediction results;
Judging whether a first preset condition is met or not according to target information corresponding to the first prediction result and the second operation;
and if the first preset condition is not met, adjusting the model parameters of the first neural network model, and training the adjusted first neural network model by using the plurality of training samples until the first preset condition is met, so as to obtain a trained first neural network model.
7. The method of claim 6, wherein the neural network model further comprises a second neural network model, and wherein after the separately inputting each training sample into the first neural network model for training, the method further comprises: pre-training the second neural network model;
the pre-training the second neural network model specifically comprises:
acquiring static data of the target object associated with the first prediction result, wherein the static data comprises at least one of the height, weight, sex and age of the target object;
respectively inputting the first prediction result and static data corresponding to the first prediction result into the second neural network model for training to obtain a plurality of second prediction results;
Judging whether a second preset condition is met or not according to target information corresponding to the second operation of each second prediction result;
if the second preset condition is not met, the model parameters of the second neural network model are adjusted, and the adjusted second neural network model is trained by utilizing the first prediction result and static data corresponding to the first prediction result until the second preset condition is met, so that the trained second neural network model is obtained.
8. The method of claim 5, wherein the current target information includes at least one of pose information, pressure information, and motion information;
the pose information comprises probe direction information and probe coordinate information of the ultrasonic probe.
9. An ultrasonic testing apparatus for use with a terminal device, the apparatus comprising:
the acquisition module is used for acquiring current target information and current ultrasonic image information associated with the current operation of the ultrasonic probe and a user in the process of detecting a target object by using the ultrasonic probe;
the sending module is used for sending the current target information and the current ultrasonic image information to a server, so that the server inputs the current ultrasonic image information into a neural network model to obtain target information corresponding to the next operation and compares the difference between the current target information and the target information corresponding to the next operation to obtain target adjustment information, wherein the target adjustment information is adjustment information from the current target information to target information corresponding to the next operation of the user;
The receiving module is used for receiving the target adjustment information returned by the server;
the output module is used for outputting the target adjustment information in a target output mode;
the receiving module is further used for receiving moving direction information returned by the server before the current target information and the current ultrasonic image information associated with the user's current operation of the ultrasonic probe are obtained;
the output module is further used for outputting the moving direction information according to the target output mode;
the moving direction information is obtained according to initial position information and starting point position information in a first starting point sample set under the condition that the relative displacement meets a preset threshold value, and is obtained according to the initial position information, initial ultrasonic image information and a third neural network model under the condition that the relative displacement does not meet the preset threshold value;
the initial position information is position information of the ultrasonic probe corresponding to a first step operation, the initial ultrasonic image information is ultrasonic image information corresponding to the first step operation, the relative displacement is obtained according to the initial position information, the height and the shoulder width of the target object and the starting point position information in the first starting point sample set, the first starting point sample set is obtained according to disease information of the target object, and the first starting point sample set corresponds to the disease category of the target object.
10. An ultrasonic detection system, characterized in that the system comprises a terminal device and a server;
the terminal equipment is used for acquiring current target information and current ultrasonic image information related to the current operation of the ultrasonic probe and a user in the process of detecting a target object by using the ultrasonic probe; transmitting the current target information and the current ultrasonic image information to a server;
the server is used for acquiring the current target information and the current ultrasonic image information, inputting the current ultrasonic image information into a neural network model and outputting target information corresponding to the next operation, wherein the neural network model is obtained by training according to a plurality of training samples, each training sample comprises ultrasonic image information of a first operation and target information corresponding to a second operation corresponding to the first operation, and the second operation is the next operation of the first operation; comparing the difference between the current target information and the target information corresponding to the next operation to obtain target adjustment information; the target adjustment information is sent to terminal equipment;
the terminal equipment is also used for receiving the target adjustment information returned by the server and outputting the target adjustment information in a target output mode, wherein the target adjustment information is adjustment information from the current target information to target information corresponding to the next operation of a user;
The server is further configured to:
before the current target information and the current ultrasonic image information are acquired, acquiring initial position information of an ultrasonic probe when a user performs ultrasonic detection on the target object, initial ultrasonic image information corresponding to the initial position information, and disease information, height and shoulder width of the target object, wherein the initial position information is position information of the ultrasonic probe corresponding to a first step operation, and the initial ultrasonic image information is ultrasonic image information corresponding to the first step operation;
acquiring a first starting point sample set according to the disease information of the target object, wherein the first starting point sample set corresponds to the disease category of the target object;
calculating relative displacement according to the initial position information, the height and the shoulder width of the target object and the starting point position information in the first starting point sample set;
under the condition that the relative displacement meets a preset threshold value, obtaining moving direction information according to the initial position information and the starting point position information in the first starting point sample set;
under the condition that the relative displacement does not meet the preset threshold value, obtaining moving direction information according to the relative displacement, the initial ultrasonic image information and a third neural network model;
And sending the moving direction information to the terminal equipment so as to be used for the terminal equipment to output the moving direction information according to a target output mode.
11. An ultrasonic probe is characterized by comprising a pressure sensor, a gyroscope, an ultrasonic transducer, a pulse excitation circuit, a communication module, a low-noise operational amplifier and an analog-to-digital converter;
in the process of detecting a target object by using an ultrasonic probe, the pressure sensor is used for collecting pressure information of the ultrasonic probe in real time;
the gyroscope is used for acquiring pose information of the ultrasonic probe in real time, wherein the pose information comprises probe direction information and probe coordinate information of the ultrasonic probe;
the communication module is configured to send the pressure information and the pose information to a terminal device for the terminal device to perform the ultrasonic detection method according to any one of claims 1-8.
12. An ultrasonic testing apparatus, the apparatus comprising:
a processor and a memory storing computer program instructions;
the processor reads and executes the computer program instructions to implement the ultrasonic detection method according to any one of claims 1-8.
13. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the ultrasound detection method of any of claims 1-8.
CN202010376559.XA 2020-05-07 2020-05-07 Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe Active CN113616235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010376559.XA CN113616235B (en) 2020-05-07 2020-05-07 Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010376559.XA CN113616235B (en) 2020-05-07 2020-05-07 Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe

Publications (2)

Publication Number Publication Date
CN113616235A CN113616235A (en) 2021-11-09
CN113616235B true CN113616235B (en) 2024-01-19

Family

ID=78376916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010376559.XA Active CN113616235B (en) 2020-05-07 2020-05-07 Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe

Country Status (1)

Country Link
CN (1) CN113616235B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008086742A (en) * 2006-01-19 2008-04-17 Toshiba Corp Locus indicating device of ultrasonic probe and ultrasonic diagnostic apparatus
KR20080094452A (en) * 2007-04-20 2008-10-23 Medison Co., Ltd. Ultrasound system
CN103584888A (en) * 2013-12-02 2014-02-19 深圳市恩普电子技术有限公司 Ultrasonic target motion tracking method
CN105813573A (en) * 2013-12-09 2016-07-27 皇家飞利浦有限公司 Imaging view steering using model-based segmentation
CN106571758A (en) * 2016-11-03 2017-04-19 深圳开立生物医疗科技股份有限公司 Stepper motor out-of-step compensation method and device
WO2018195821A1 (en) * 2017-04-26 2018-11-01 深圳迈瑞生物医疗电子股份有限公司 Image data adjustment method and device
CN109452953A (en) * 2018-09-26 2019-03-12 深圳达闼科技控股有限公司 Method and apparatus for adjusting a detection position, ultrasonic probe, and terminal
CN109549667A (en) * 2018-12-29 2019-04-02 无锡祥生医疗科技股份有限公司 Ultrasonic transducer scanning system and method, and ultrasonic imaging apparatus
CN109567865A (en) * 2019-01-23 2019-04-05 上海浅葱网络技术有限公司 Intelligent ultrasonic diagnostic device for non-medical staff
CN109788944A (en) * 2016-09-26 2019-05-21 富士胶片株式会社 Ultrasound diagnostic apparatus and control method therefor
CN109953759A (en) * 2017-12-26 2019-07-02 深圳先进技术研究院 Fetal MR imaging method and device
WO2019174953A1 (en) * 2018-03-12 2019-09-19 Koninklijke Philips N.V. Ultrasound imaging dataset acquisition for neural network training and associated devices, systems, and methods
CN110477950A (en) * 2019-08-29 2019-11-22 浙江衡玖医疗器械有限责任公司 Ultrasonic imaging method and device
CN110517757A (en) * 2018-05-21 2019-11-29 美国西门子医疗系统股份有限公司 Tuned medical ultrasound imaging
CN110584714A (en) * 2019-10-23 2019-12-20 无锡祥生医疗科技股份有限公司 Ultrasonic fusion imaging method, ultrasonic device, and storage medium
CN110753517A (en) * 2017-05-11 2020-02-04 韦拉索恩股份有限公司 Ultrasound scanning based on probability mapping
CN110967730A (en) * 2019-12-09 2020-04-07 深圳开立生物医疗科技股份有限公司 Ultrasonic image processing method, system, equipment and computer storage medium
CN110974294A (en) * 2019-12-19 2020-04-10 上海尽星生物科技有限责任公司 Ultrasonic scanning method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100834577B1 (en) * 2006-12-07 2008-06-02 한국전자통신연구원 Home intelligent service robot and method capable of searching and following moving of target using stereo vision processing
US20200113542A1 (en) * 2018-10-16 2020-04-16 General Electric Company Methods and system for detecting medical imaging scan planes using probe position feedback

Also Published As

Publication number Publication date
CN113616235A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
US8094899B2 (en) Medical image diagnostic device
JP7194691B2 (en) Ultrasound clinical feature detection and related apparatus, systems, and methods
CN110177504B (en) Method for measuring parameters in ultrasonic image and ultrasonic imaging system
US8343053B2 (en) Detection of structure in ultrasound M-mode imaging
US9119556B2 (en) Ultrasonic diagnostic apparatus
JP2016506809A (en) Ultrasound imaging system and method
US10085714B2 (en) Ultrasound diagnostic apparatus and method of producing ultrasound image
US7399278B1 (en) Method and system for measuring amniotic fluid volume and/or assessing fetal weight
CN111513754A (en) Ultrasonic imaging equipment and quality evaluation method of ultrasonic image
CN113870227B (en) Medical positioning method and device based on pressure distribution, electronic equipment and storage medium
CN113616235B (en) Ultrasonic detection method, device, system, equipment, storage medium and ultrasonic probe
JP7093093B2 (en) Ultrasonic urine volume measuring device, learning model generation method, learning model
US20220313214A1 (en) Ultrasonic diagnostic apparatus, image processing apparatus, and image processing method
CN115813434A (en) Method and system for automated assessment of fractional limb volume and fat lean mass from fetal ultrasound scans
JP7215053B2 (en) Ultrasonic image evaluation device, ultrasonic image evaluation method, and ultrasonic image evaluation program
JP2019118694A (en) Medical image generation apparatus
CN115279275A (en) Ultrasonic diagnostic apparatus and method of operating the same
CN113012057A (en) Continuous training of AI networks in ultrasound scanners
CN113662579A (en) Ultrasonic diagnostic apparatus, medical image processing apparatus and method, and storage medium
CN114271850B (en) Ultrasonic detection data processing method and ultrasonic detection data processing device
CN111696085B (en) Rapid ultrasonic evaluation method and equipment for lung impact injury condition on site
CN111938704B (en) Bladder volume detection method and device and electronic equipment
US20230263501A1 (en) Determining heart rate based on a sequence of ultrasound images
CN111050663B (en) Ultrasonic signal processing device, ultrasonic diagnostic device, and ultrasonic signal arithmetic processing method
JP2023075904A (en) Ultrasonic imaging device, ultrasonic imaging system, ultrasonic imaging method, and ultrasonic imaging program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant