CN112465786A - Model training method, data processing method, device, client and storage medium - Google Patents


Info

Publication number
CN112465786A
CN112465786A
Authority
CN
China
Prior art keywords
image data
neural network
ultrasonic image
target
training
Prior art date
Legal status
Pending
Application number
CN202011389787.7A
Other languages
Chinese (zh)
Inventor
王健宗
李泽远
朱星华
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011389787.7A priority Critical patent/CN112465786A/en
Publication of CN112465786A publication Critical patent/CN112465786A/en
Priority to PCT/CN2021/097420 priority patent/WO2022116502A1/en

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0012 Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 2207/10132 Ultrasound image (under G06T 2207/10 Image acquisition modality)
    • G06T 2207/20081 Training; Learning (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084 Artificial neural networks [ANN] (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/30056 Liver; Hepatic (under G06T 2207/30004 Biomedical image processing)
    • G06T 2207/30084 Kidney; Renal (under G06T 2207/30004 Biomedical image processing)

Abstract

The application relates to model construction in artificial intelligence, and provides a model training method, a data processing method, an apparatus, a client and a storage medium. The method comprises the following steps: acquiring ultrasound video data containing a target detection object, and pre-training a preset neural network with the ultrasound video data to obtain a target neural network; extracting a plurality of ultrasound image data from the ultrasound video data, and training the target neural network with the plurality of ultrasound image data to obtain a training result; determining gradient data of the target neural network based on the training result, encrypting the gradient data and sending the encrypted gradient data to a federated learning server; receiving target gradient data sent by the federated learning server; and updating the model parameters of the target neural network according to the target gradient data until the updated target neural network converges, to obtain an ultrasound image data processing model. The method and apparatus can improve the detection effect and accuracy of the ultrasound image data processing model.

Description

Model training method, data processing method, device, client and storage medium
Technical Field
The present application relates to the field of model construction technology in artificial intelligence, and in particular, to a model training method, a data processing method, an apparatus, a client, and a storage medium.
Background
Currently, a doctor performs an ultrasound examination on a target detection object of the human body, such as the liver, spleen or kidney, to obtain an ultrasound image of the target detection object, and then examines the characteristics of key parts in the ultrasound image to reach an ultrasound examination result for the patient's target detection object, such as the liver. With existing deep learning algorithms, an ultrasound image data processing model can be built from ultrasound image data of a target detection object to determine its ultrasound examination result. However, owing to the privacy of medical data, the quality of existing ultrasound image data processing models is often limited by the scale and quality of the sample data set, so the detection effect and accuracy of the resulting model of the target detection object are not high.
Disclosure of Invention
The application mainly aims to provide a model training method, a data processing method, an apparatus, a client and a storage medium, with the aim of improving the detection effect and accuracy of an ultrasound image data processing model of a target detection object. The method can be applied to the smart medical field of smart cities, thereby promoting the construction of smart cities.
In a first aspect, the present application provides a model training method, applied to a client, including:
acquiring ultrasound video data containing a target detection object, and pre-training a preset neural network with the ultrasound video data to obtain a target neural network;
extracting a plurality of ultrasound image data from the ultrasound video data, and training the target neural network with the plurality of ultrasound image data to obtain a training result;
determining gradient data of the target neural network based on the training result, encrypting the gradient data and sending the encrypted gradient data to a federal learning server;
receiving target gradient data sent by the federated learning server, wherein the target gradient data is determined by the federated learning server based on the gradient data sent by a plurality of clients;
and updating the model parameters of the target neural network according to the target gradient data until the updated target neural network converges to obtain an ultrasonic image data processing model of the target detection object.
In a second aspect, the present application further provides an ultrasound image data processing method, including:
acquiring ultrasonic image data of a target detection object;
detecting the ultrasonic image data of the target detection object through an ultrasonic image data processing model to obtain an ultrasonic image data processing result of the target detection object;
the ultrasound image data processing model is obtained by training according to the model training method.
In a third aspect, the present application further provides a model training apparatus, including:
the acquisition module is used for acquiring ultrasound video data containing a target detection object;
the pre-training module is used for pre-training a preset neural network with the ultrasound video data to obtain a target neural network;
the training module is used for extracting a plurality of ultrasound image data from the ultrasound video data and training the target neural network with the plurality of ultrasound image data to obtain a training result;
a determination module for determining gradient data of the target neural network based on the training result;
the sending module is used for encrypting the gradient data and sending the gradient data to a federal learning server;
the receiving module is used for receiving target gradient data sent by the federal learning server, and the target gradient data is determined by the federal learning server through joint learning based on the gradient data sent by a plurality of clients;
and the updating module is used for updating the model parameters of the target neural network according to the target gradient data until the updated target neural network converges so as to obtain the ultrasonic image data processing model of the target detection object.
In a fourth aspect, the present application further provides a client comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the model training method or the ultrasound image data processing method as described above.
In a fifth aspect, the present application further provides a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the model training method or the ultrasound image data processing method as described above.
The application provides a model training method, a data processing method, an apparatus, a client and a storage medium. The model training method comprises: obtaining ultrasound video data containing a target detection object; pre-training a preset neural network with the ultrasound video data to obtain a target neural network; extracting a plurality of ultrasound image data from the ultrasound video data, and training the target neural network with the plurality of ultrasound image data to obtain a training result; determining gradient data of the target neural network based on the training result, encrypting the gradient data and sending it to a federated learning server; receiving target gradient data sent by the federated learning server, the target gradient data being determined by the server through joint learning based on the gradient data sent by a plurality of clients; and updating the model parameters of the target neural network according to the target gradient data until the updated target neural network converges, to obtain an ultrasound image data processing model of the target detection object. By training the neural network jointly on the ultrasound video data and the ultrasound image data, and by greatly improving the utilization of the sample data sets through federated learning, the detection effect and accuracy of the ultrasound image data processing model of the target detection object are improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart illustrating steps of a model training method according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram illustrating sub-steps of the model training method of FIG. 1;
FIG. 3 is a schematic diagram of a scenario for implementing the model training method provided in this embodiment;
fig. 4 is a schematic flowchart illustrating steps of a method for processing ultrasound image data according to an embodiment of the present disclosure;
FIG. 5 is a schematic block diagram of a model training apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic block diagram of an ultrasound image data processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic block diagram of a structure of a client according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation. In addition, although the division of the functional blocks is made in the device diagram, in some cases, it may be divided in blocks different from those in the device diagram.
The embodiments of the present application provide a model training method, a data processing method, an apparatus, a client and a storage medium. The model training method can be applied to a client, where the client comprises a terminal device or a server. The terminal device can be an electronic device such as a mobile phone, tablet computer, notebook computer, desktop computer, personal digital assistant or wearable device; the server may be a single server or a server cluster comprising a plurality of servers.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating steps of a model training method according to an embodiment of the present disclosure.
As shown in fig. 1, the model training method includes steps S101 to S105.
S101, acquire ultrasound video data containing a target detection object, and pre-train a preset neural network with the ultrasound video data to obtain a target neural network.
Conventionally, the detection methods for target detection objects of the human body such as the liver, spleen, kidney and pancreas include invasive and non-invasive methods. The main non-invasive method is ultrasound examination: using an ultrasound device, a doctor examines the internal organs of the patient's abdominal region and obtains ultrasound video data containing the target detection object. From the characteristics of the key parts in the ultrasound images, the doctor can assess the patient's liver fibrosis condition. However, manual observation is inefficient and thus unsuitable for large-scale, accurate screening of liver-fibrosis-related diseases. In addition, the detection effect and accuracy of ultrasound image data processing models constructed in the prior art are not high. A technical solution that improves the detection effect and accuracy of the ultrasound image data processing model of the target detection object is therefore needed.
In one embodiment, the target detection object includes organs such as the liver, spleen, kidney and pancreas. Ultrasound video data of the relevant part of the patient's target detection object is acquired by an ultrasound device, and the acquired data is stored in a memory or in the cloud so that it can be retrieved later.
It should be noted that, to further ensure the privacy and security of related medical data such as the ultrasound image data, this information may also be stored in a node of a blockchain, and the technical solution of the present application is also applicable to other data files stored in the blockchain. The blockchain referred to in the present application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
In one embodiment, owing to the privacy of medical data, conventional ultrasound image detection methods suffer from high repetition in sample data, small model data volume and low model accuracy. Therefore, the present application provides a model training method based on federated learning: the ultrasound image data of a plurality of hospitals is shared in a federated learning manner, and joint modeling is performed with the neural network, so that the training effect of the neural network can be improved while protecting the privacy and security of the medical data, thereby improving the detection effect and accuracy of the ultrasound image data processing model constructed from the neural network.
In one embodiment, the data types of the ultrasound image data owned by the hospitals (clients) are the same, while the user information corresponding to each item of ultrasound image data differs, so joint training can be performed through horizontal federated learning. That is, when the user features of two sample data sets (of ultrasound image data) overlap substantially while the users themselves overlap little, the data sets are split horizontally (i.e., along the user dimension), and the portion of data whose user features are the same but whose users are not identical is extracted for training.
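The horizontal split described above can be illustrated with a toy sketch; the feature names and patient records here are hypothetical, not taken from the patent:

```python
# Toy sketch of horizontal (sample-wise) partitioning: both hospitals record
# the same ultrasound-derived features, but for different patients.
hospital_a = [
    {"patient": "A-001", "liver_echo": 0.81, "spleen_area": 42.0},
    {"patient": "A-002", "liver_echo": 0.55, "spleen_area": 38.5},
]
hospital_b = [
    {"patient": "B-001", "liver_echo": 0.62, "spleen_area": 40.1},
]

FEATURES = {"liver_echo", "spleen_area"}  # user features shared by all clients

def horizontal_rows(dataset):
    """Keep only the shared feature columns; each client contributes samples (rows)."""
    return [{k: row[k] for k in FEATURES} for row in dataset]

# Each client trains locally on its own rows; the joint data set is the
# row-wise union, which federated learning never materializes in one place.
joint_rows = horizontal_rows(hospital_a) + horizontal_rows(hospital_b)
print(len(joint_rows))  # 3
```

Because the clients share columns rather than rows, the gradients they compute locally are directly comparable and can be averaged by the server, which is what makes the horizontal setting suitable here.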
In one embodiment, the preset neural network is pre-trained with the ultrasound video data to obtain the target neural network. It should be noted that pre-training the preset neural network on ultrasound video data containing the target detection object, for example through transfer learning, self-supervised learning or fine-tuning, can optimize the model parameters of the neural network and make the deep network easier to train, so that the model training effect of the neural network is better, which improves the detection effect and accuracy of the subsequent ultrasound image data processing model.
In an embodiment, as shown in fig. 2, pre-training the preset neural network with the ultrasound video data to obtain the target neural network includes sub-steps S1011 to S1012.
Sub-step S1011: extract multiple frames of image data from the ultrasound video data, and acquire the key information in each frame of image data.
The multi-frame image data may be extracted from the ultrasound video data at random, or obtained by extracting frame images from the video data at intervals of a preset number of frames. The key information in each frame of image data is the characteristic information of the key parts in the ultrasound image. Illustratively, the key parts include the left liver, right liver, spleen, portal vein, hepatic vein and gallbladder wall; the key information is the hepatic parenchymal echo and hepatic vein morphology, where the hepatic parenchymal echo includes rough echo, asymmetric echo and fragment echo, and the hepatic vein morphology includes rigid morphology and normal morphology.
In one embodiment, a neural network is preset at the client, and the neural network comprises an image extraction sublayer and a classification-identification sublayer. Multi-frame image data is extracted from the ultrasound video data through the image extraction sublayer, and the multi-frame image data is input into the classification-identification sublayer to identify the key information in each frame of image data. It should be noted that the image extraction sublayer may extract the multi-frame image data from the ultrasound video data either at random or at intervals of a preset number of frames, and the classification-identification sublayer can rapidly and accurately classify and identify the characteristics of the parts in each frame of image data, thereby obtaining the key information.
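The two extraction strategies just mentioned for the image extraction sublayer, fixed-interval sampling and random sampling, can be sketched as follows; `extract_frames` and its parameters are illustrative names, not from the patent:

```python
import random

def extract_frames(frames, interval=None, num_random=4, seed=0):
    """Extract multi-frame image data from an ultrasound video sequence:
    every `interval`-th frame if `interval` is given, otherwise `num_random`
    randomly sampled frames (seeded here so the example is reproducible)."""
    if interval is not None:
        return frames[::interval]
    rng = random.Random(seed)
    indices = sorted(rng.sample(range(len(frames)), num_random))
    return [frames[i] for i in indices]

video = [f"frame_{i}" for i in range(20)]  # stand-in for decoded video frames
print(extract_frames(video, interval=5))   # ['frame_0', 'frame_5', 'frame_10', 'frame_15']
print(len(extract_frames(video)))          # 4 randomly chosen frames
```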
In one embodiment, the key information includes at least one of left liver angle morphology, liver area, right liver excursion, hepatic parenchymal echo, spleen area, portal vein diameter, portal vein blood flow direction, hepatic vein morphology, spleen thickness, spleen length, gallbladder wall thickness, and gallbladder wall morphology. Wherein, the left liver angular morphology comprises an obtuse angle morphology and an acute angle morphology, the hepatic parenchymal echo comprises a rough echo, an asymmetric echo and a fragment echo, the hepatic vein morphology comprises a rigid morphology and a normal morphology, and the gallbladder wall morphology comprises a rough morphology and a smooth morphology. The key information in each frame of image data can be accurately classified and identified through the classification identification sublayer in the neural network.
And a substep S1012, updating preset model parameters of the neural network according to the key information in each frame of image data to obtain a target neural network.
Illustratively, the neural network may be a convolutional neural network (CNN), a recurrent neural network (RNN), a deep neural network (DNN) or the like, comprising a plurality of convolutional layers, pooling layers and fully-connected layers, and the preset neural network is fine-tuned. Specifically, the neural network is trained with the key information in each frame of image data to obtain a training result; model gradient data is determined from the training result and the input key information; and the model parameters, such as the weights of the convolutional layers, pooling layers and fully-connected layers, are updated according to the model gradient data to obtain the target neural network. In this way the model parameters of the target neural network are optimized and its sensitivity to key information is improved, so that the model training effect is better, which improves the detection effect and accuracy of the subsequent ultrasound image data processing model.
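The weight update just described can be sketched as a plain gradient-descent step; the flat weight lists and learning rate below are illustrative stand-ins for the layer parameters, not the patent's actual network:

```python
def update_parameters(layers, grads, lr=0.01):
    """Apply one gradient-descent update, w <- w - lr * g, to every layer's
    weights (flat lists stand in for per-layer weight tensors)."""
    return [
        [w - lr * g for w, g in zip(layer_w, layer_g)]
        for layer_w, layer_g in zip(layers, grads)
    ]

# Two toy layers with their model gradient data (illustrative values).
layers = [[0.5, -0.2], [1.0]]
grads = [[0.1, -0.1], [0.5]]
new_layers = update_parameters(layers, grads, lr=0.1)
print([[round(w, 2) for w in layer] for layer in new_layers])  # [[0.49, -0.19], [0.95]]
```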
In one embodiment, pre-training the preset neural network with the ultrasound video data to obtain the target neural network includes: pre-training the preset neural network with the ultrasound video data to obtain a pre-training result, and determining candidate gradient data of the neural network according to the pre-training result; encrypting the candidate gradient data and sending it to a federated learning server, so that the federated learning server performs joint learning based on the candidate gradient data sent by the plurality of clients to obtain joint gradient data; and updating the model parameters of the neural network according to the joint gradient data returned by the federated learning server, until the updated neural network converges, to obtain the target neural network.
It should be noted that the gradient data of the neural network may be determined from its pre-training result and the input sample data set, and this gradient data is taken as the candidate gradient data. In the federated learning process, the federated learning server sends a public key to the clients of a plurality of hospitals; each hospital's client establishes a secure connection with the federated learning server, encrypts the calculated gradient data with the public key, and sends the encrypted gradient data to the server. The federated learning server performs federated averaging on the gradient data sent by the plurality of clients to obtain updated joint gradient data and returns it to the clients of all the hospitals. Each client then updates the model parameters of its neural network according to the joint gradient data and carries out a new round of model training on the updated neural network, until the updated neural network converges, yielding the target neural network. Because the target neural network has been federally learned and pre-trained, it has higher sensitivity to key information; the pre-training effect of the neural network is greatly improved, which helps improve the detection effect and accuracy of the subsequent ultrasound image data processing model.
Step S102, extract a plurality of ultrasound image data from the ultrasound video data, and train the target neural network with the plurality of ultrasound image data to obtain a training result.
The client may extract the plurality of ultrasound image data from the ultrasound video data at random or at preset intervals, or may extract them through an ultrasound image extraction sublayer in the neural network; the embodiments of the present application are not specifically limited in this respect.
It should be noted that the plurality of ultrasound image data are input into the target neural network to train it, and a plurality of training results can be output; for example, a training result is the classification label of the characteristic information of a key part in the ultrasound image. The training phase is repeated over multiple iterations, whereas the pre-training on the ultrasound video data is usually performed only once. The embodiments of the present application therefore use the ultrasound image data as the primary training data and the ultrasound video data as auxiliary data, training the target neural network with both to obtain the training results, which can effectively improve the detection effect and accuracy of the target neural network on the liver fibrosis of the target detection object to be detected.
And S103, determining gradient data of the target neural network based on the training result, encrypting the gradient data and sending the encrypted gradient data to a federal learning server.
Illustratively, a transfer function of the target neural network is created; the target neural network is trained by forward propagation with the plurality of ultrasound image data as training samples, and the loss functions of all fully-connected layers of the transfer function in forward propagation are determined; the minimum of these loss functions is determined, and the loss function attaining the minimum is taken as the objective function; the objective function is computed with element-wise multiplication, and its error function is determined; the error function is computed with stochastic gradient descent to determine the gradient data of the target neural network; and after the gradient data is encrypted with the public key sent by the federated learning server, the encrypted gradient data is sent to the server.
In one embodiment, the client performs forward-propagation training on the target neural network with the plurality of ultrasound image data as training samples, obtaining all the transfer functions from the first to the last fully-connected layer in forward propagation, and then continues training on the training samples to obtain all the loss functions from the first to the last fully-connected layer. The loss function with the minimum value is then taken as the objective function, which includes a bias parameter. The objective function is computed with element-wise multiplication to obtain its corresponding error function; this includes the error function of the objective function at the last fully-connected layer corresponding to the minimum loss, and the error function of the objective function at a non-last fully-connected layer corresponding to the minimum loss. Finally, the gradient data of the target neural network is obtained with stochastic gradient descent, which helps increase the calculation speed; batch gradient descent may be used instead, and the embodiments of the present application are not specifically limited in this respect.
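As a minimal, self-contained illustration of the stochastic gradient descent mentioned above (using a toy 1-D linear model with squared loss rather than the patent's neural network, so all names and values here are illustrative):

```python
import random

def sgd_step(w, b, sample, lr):
    """One stochastic-gradient-descent update on a single (x, y) sample for
    the 1-D linear model with squared loss L = (w*x + b - y)**2; the updates
    use the analytic derivatives of L with respect to w and b."""
    x, y = sample
    err = w * x + b - y                              # error term of the objective
    return w - lr * (2 * err * x), b - lr * (2 * err)

random.seed(0)
samples = [(x, 3.0 * x + 1.0) for x in (0.0, 0.5, 1.0, 1.5)]  # data on y = 3x + 1
w, b = 0.0, 0.0
for _ in range(500):                  # one randomly chosen sample per step
    w, b = sgd_step(w, b, random.choice(samples), lr=0.05)
print(round(w, 2), round(b, 2))       # close to the true parameters 3 and 1
```

Because each step uses a single sample rather than the whole batch, each update is cheap, which is the speed advantage the paragraph above attributes to stochastic over batch gradient descent.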
And step S104, receiving target gradient data sent by the federal learning server, wherein the target gradient data is determined by the federal learning server through joint learning based on the gradient data sent by a plurality of clients.
In an embodiment, the federal learning server receives gradient data sent by a plurality of clients, performs federal averaging on the gradient data sent by the plurality of clients to obtain target gradient data, and returns the target gradient data to each client.
For example, the federated learning server computes the target gradient data as a weighted average of the gradient data sent by the plurality of clients (the averaging formula appears only as an image, BDA0002811943190000091, in the original publication), where k denotes the iteration number, n denotes the number of clients participating in the joint learning, and n_k denotes the number of gradient data uploaded by the clients in each iteration.
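Assuming the image-only formula is a FedAvg-style weighted average over the clients' uploads (an assumption, since the patent shows the formula only as an image), the server-side aggregation could be sketched as:

```python
def federated_average(client_grads, client_sizes):
    """Server-side weighted average of per-client gradient vectors, each
    weighted by the amount of gradient data that client contributed.
    Names and the weighting scheme are illustrative, FedAvg-style."""
    total = sum(client_sizes)
    dim = len(client_grads[0])
    return [
        sum(n * g[j] for g, n in zip(client_grads, client_sizes)) / total
        for j in range(dim)
    ]

# Three hospital clients, each uploading a 2-D gradient and a sample count.
grads = [[1.0, 2.0], [3.0, 0.0], [1.0, 1.0]]
sizes = [10, 30, 60]
print(federated_average(grads, sizes))  # [1.6, 0.8]
```

Weighting by contribution size keeps a small hospital from pulling the joint gradient as strongly as a large one, which matches the goal of combining unevenly sized data sets.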
In an embodiment, the client receives the target gradient data sent by the federated learning server, the target gradient data having been determined by the server through joint learning based on the gradient data sent by a plurality of clients. The target gradient data is used to update the model parameters of the target neural network, which greatly increases the update frequency of the network and allows it to be updated automatically in idle time. By combining the gradient data of the clients of a plurality of hospitals, the growth ceiling of each client's ultrasound image data processing model is raised, solving the problem of too little data in the model training process.
And S105, updating model parameters of the target neural network according to the target gradient data until the updated target neural network converges to obtain an ultrasonic image data processing model of the target detection object.
The client receives the updated combined gradient data and performs a new round of model training according to it, so as to update the model parameters of the target neural network until the updated target neural network converges, obtaining the ultrasonic image data processing model. Because the ultrasonic image data processing model undergoes both federated learning and pre-training, it is more sensitive to key information, the training effect of the target neural network is greatly improved, and the detection effect and accuracy of the ultrasonic image data processing model can be greatly improved.
In one embodiment, a transfer function of a target neural network is created from target gradient data; adjusting model parameters of the target neural network according to the conversion function to obtain adjusted model parameters; training the target neural network according to the adjusted model parameters, and determining whether the target neural network is in a convergence state; and if the target neural network is determined to be in the convergence state, taking the target neural network as an ultrasonic image data processing model.
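The parameter-adjustment step above can be sketched as an ordinary gradient-descent update that uses the server-aggregated gradient in place of a locally computed one (a minimal sketch; the dictionary layout of the parameters and the learning rate are assumptions, not details from the patent):

```python
import numpy as np

def apply_target_gradient(params, target_grad, lr=0.01):
    """Adjust the local model parameters with the aggregated (target)
    gradient returned by the federated learning server."""
    return {name: p - lr * target_grad[name] for name, p in params.items()}
```

After each such adjustment the client retrains and re-checks convergence, as described in the surrounding paragraphs.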
Determining whether the target neural network is in a converged state includes the following. It may be determined whether the number of iterations of the target neural network has reached a preset iteration count: if the iteration count has reached (i.e., equals) the preset count, the target neural network is determined to be in a convergence state; if it has not reached (i.e., is smaller than) the preset count, the target neural network is determined not to be in a convergence state. Alternatively, it may be determined whether the iteration time of the target neural network is greater than or equal to a preset iteration time: if so, the target neural network is determined to be in a convergence state; if the iteration time is less than the preset iteration time, it is determined not to be in a convergence state. The preset iteration time and the preset iteration count can be flexibly set by the user, and the specification of the present application is not particularly limited in this respect.
Further, if the target neural network is determined not to be in the convergence state, continuing to train the target local model according to the training sample, and determining updated model parameters; and training the target neural network according to the updated model parameters until the updated target neural network converges.
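The two convergence criteria described above — a preset iteration count or a preset training time — can be sketched together as one check (a minimal sketch; the function name and thresholds are illustrative):

```python
import time

def is_converged(iteration, start_time, max_iters=None, max_seconds=None):
    """Treat the network as converged once either the preset iteration
    count or the preset training time has been reached."""
    if max_iters is not None and iteration >= max_iters:
        return True
    if max_seconds is not None and time.time() - start_time >= max_seconds:
        return True
    return False
```

If the check returns False, training continues with the updated model parameters, as the paragraph above describes.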
Referring to fig. 3, fig. 3 is a schematic view of a scene for implementing the model training method provided in this embodiment.
As shown in fig. 3, the client acquires ultrasonic image data including a target detection object and pre-trains a preset neural network through the ultrasonic image data to obtain a target neural network. The client then extracts a plurality of ultrasonic image data from the ultrasonic image data, trains the target neural network through the plurality of ultrasonic image data to obtain a training result, determines gradient data of the target neural network based on the training result, and encrypts the gradient data before sending it to the federal learning server. The client then receives target gradient data sent by the federal learning server, where the target gradient data is determined by the federal learning server through joint learning based on the gradient data sent by a plurality of clients, and updates the model parameters of the target neural network according to the target gradient data until the updated target neural network converges, so that each client obtains the ultrasonic image data processing model of the target detection object.
In the model training method provided by the above embodiment, the client obtains ultrasonic image data including a target detection object and pre-trains a preset neural network through the ultrasonic image data to obtain a target neural network; extracts a plurality of ultrasonic image data from the ultrasonic image data and trains the target neural network through them to obtain a training result; determines gradient data of the target neural network based on the training result, encrypts the gradient data, and sends it to the federal learning server; then receives target gradient data sent by the federal learning server, the target gradient data being determined by the federal learning server through joint learning based on the gradient data sent by a plurality of clients; and updates the model parameters of the target neural network according to the target gradient data until the updated target neural network converges, obtaining the ultrasonic image data processing model of the target detection object. The neural network is trained cooperatively through the full ultrasonic image data and the extracted plurality of ultrasonic image data, the utilization rate of the sample data set is greatly improved through the federated learning mode, and the detection effect and accuracy of the ultrasonic image data processing model are improved.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating steps of a method for processing ultrasound image data according to an embodiment of the present disclosure.
As shown in fig. 4, the ultrasound image data processing method includes steps S201 to S202.
Step S201, acquiring ultrasonic image data of a target detection object.
Conventionally, a doctor performs an ultrasonic examination of a patient's abdominal region with an ultrasonic apparatus, obtaining ultrasonic image data that includes target detection objects such as the liver, spleen, kidney, and pancreas. The doctor can assess the patient's hepatic fibrosis condition from the characteristics of key parts in the ultrasonic images. However, manual observation is inefficient and unsuited to large-scale, accurate screening of liver-fibrosis-related diseases.
In one embodiment, the target detection object includes organs such as a liver, a spleen, a kidney, and a pancreas, and the ultrasound image data of the target detection object of the patient is acquired by the ultrasound device, and the acquired ultrasound image data is stored in the memory or the cloud, so that the ultrasound image data can be called later. It should be noted that, in order to further ensure the privacy and security of the related medical data such as the ultrasound image data, the related information such as the ultrasound image data may also be stored in a node of a block chain.
Step S202, detecting the ultrasonic image data of the target detection object through an ultrasonic image data processing model to obtain an ultrasonic image data processing result of the target detection object.
The ultrasound image data processing model is obtained by training according to the model training method in the embodiment. The target detection object is, for example, an organ such as the liver, and the ultrasound image data processing model is, for example, a convolutional neural network (CNN), a recurrent neural network (RNN), or a deep neural network (DNN).
In one embodiment, detecting the ultrasonic image data of the target detection object through the ultrasonic image data processing model to obtain the ultrasonic image data processing result of the target detection object includes: extracting a plurality of ultrasonic image data from ultrasonic image data of a target detection object; inputting a plurality of ultrasonic image data into a classification identification sublayer of an ultrasonic image data processing model so as to extract key information in each ultrasonic image data; and determining an ultrasonic image data processing result of the target detection object according to the key information in each ultrasonic image data through a normalization sublayer of the ultrasonic image data processing model.
It should be noted that the client may extract a plurality of ultrasound image data from the ultrasound image data randomly or at preset intervals, or may extract them through the ultrasound image extraction sublayer in the neural network. The key information is the characteristic information of key parts in the ultrasonic image, where the key parts include the left liver, right liver, spleen, portal vein, hepatic vein, gallbladder wall, and the like. Through the normalization sublayer of the ultrasonic image data processing model, the ultrasonic image data processing result of the target detection object is determined according to the key information in each ultrasonic image data; the processing result includes normal, hepatic fibrosis, or cirrhosis, so the efficiency and accuracy of hepatic fibrosis detection can be effectively improved.
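The two extraction strategies mentioned above — preset intervals or random positions — can be sketched as follows (a minimal sketch; the function name and defaults are illustrative, and a learned extraction sublayer would replace this logic in the neural-network case):

```python
import numpy as np

def extract_frames(video, interval=None, count=8, seed=None):
    """Pick frames from an ultrasound sequence either at a preset
    interval or at random, non-repeating positions."""
    n = len(video)
    if interval is not None:
        idx = np.arange(0, n, interval)          # preset-interval extraction
    else:
        rng = np.random.default_rng(seed)
        idx = np.sort(rng.choice(n, size=min(count, n), replace=False))  # random extraction
    return [video[i] for i in idx]
```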
In one embodiment, the key information includes at least one of left liver angle morphology, liver area, right liver excursion, hepatic parenchymal echo, spleen area, portal vein diameter, portal vein blood flow direction, hepatic vein morphology, spleen thickness, spleen length, gallbladder wall thickness, and gallbladder wall morphology. Wherein, the left liver angular morphology comprises an obtuse angle morphology and an acute angle morphology, the hepatic parenchymal echo comprises a rough echo, an asymmetric echo and a fragment echo, the hepatic vein morphology comprises a rigid morphology and a normal morphology, and the gallbladder wall morphology comprises a rough morphology and a smooth morphology. The key information in the ultrasonic image data can be accurately classified and identified through the classification and identification sublayer in the neural network.
In one embodiment, the normalization sublayer normalizes the key information in each ultrasonic image data by using a normalization function such as the softmax function, so as to output an ultrasonic image data processing result. The processing result may be normal, liver fibrosis, or cirrhosis; it may instead include the probabilities corresponding to the respective processing results (normal, liver fibrosis, and cirrhosis); or it may include characteristic information of a plurality of key parts of the target object, for example left liver angle morphology, right liver shift, and liver parenchymal echo.
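The softmax normalization described above can be sketched as follows, assuming three per-class scores coming out of the classification identification sublayer (the function names and label strings are illustrative):

```python
import numpy as np

def softmax(logits):
    """Normalize raw scores into probabilities that sum to 1."""
    z = logits - np.max(logits)   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(logits, labels=("normal", "liver fibrosis", "cirrhosis")):
    """Normalization sublayer sketch: turn per-class scores into
    probabilities and report the most likely processing result."""
    probs = softmax(np.asarray(logits, dtype=float))
    return labels[int(np.argmax(probs))], probs
```

Either the winning label alone or the full probability vector can be returned as the processing result, matching the alternatives listed in the paragraph above.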
In one embodiment, a plurality of ultrasonic image data are extracted from the ultrasonic image data of the target detection object; the plurality of ultrasonic image data and the original ultrasonic image data are input together into the classification identification sublayer of the ultrasonic image data processing model so as to extract the key information in each ultrasonic image data; and the ultrasonic image data processing result of the target detection object is determined according to the key information in each ultrasonic image data through the normalization sublayer of the ultrasonic image data processing model. It should be noted that jointly training on and recognizing both the image data and the video data can improve the efficiency and accuracy of processing the ultrasonic image data.
The ultrasonic image data processing method provided by the above embodiment acquires ultrasonic image data of a target detection object and detects it through the ultrasonic image data processing model trained by the model training method in the above embodiment, so as to obtain the ultrasonic image data processing result of the target detection object. Through federated learning, the data-island problem caused by the privacy of medical data is solved: the medical data owned by each hospital can be fully utilized on the premise of protecting patient privacy, and the efficiency and accuracy of processing the ultrasonic image data of the target detection object are improved.
Referring to fig. 5, fig. 5 is a schematic block diagram of a model training apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the model training apparatus 300 includes: an acquisition module 301, a pre-training module 302, a training module 303, a determination module 304, a sending module 305, a receiving module 306, and an updating module 307.
An obtaining module 301, configured to obtain ultrasonic image data including a target detection object;
a pre-training module 302, configured to pre-train a preset neural network through the ultrasound image data to obtain a target neural network;
a training module 303, configured to extract multiple ultrasound image data from the ultrasound image data, train the target neural network through the multiple ultrasound image data, and obtain a training result;
a determining module 304 for determining gradient data of the target neural network based on the training result;
a sending module 305, configured to encrypt the gradient data and send the encrypted gradient data to a federal learning server;
a receiving module 306, configured to receive target gradient data sent by the federated learning server, where the target gradient data is determined by the federated learning server through joint learning based on gradient data sent by multiple clients;
and an updating module 307, configured to update the model parameter of the target neural network according to the target gradient data until the updated target neural network converges, so as to obtain an ultrasound image data processing model of the target detection object.
In one embodiment, the pre-training module 302 includes:
and the extraction submodule is used for extracting multi-frame image data in the ultrasonic image data and acquiring key information in each frame of image data.
And the updating submodule is used for updating preset model parameters of the neural network according to the key information in each frame of image data to obtain the target neural network.
In one embodiment, the neural network includes an image extraction sub-layer and a classification identification sub-layer; the extraction submodule is further configured to:
extracting multi-frame image data from the ultrasonic image data through the image extraction sublayer;
and inputting the multi-frame image data into the classification identification sublayer so as to identify key information in each frame of image data.
In one embodiment, the update module 307 comprises:
creating a transfer function of the target neural network from the target gradient data;
adjusting the model parameters of the target neural network according to the conversion function to obtain adjusted model parameters;
training the target neural network according to the adjusted model parameters, and determining whether the trained target neural network is in a convergence state;
and if the trained target neural network is determined to be in a convergence state, taking the trained target neural network as an ultrasonic image data processing model.
In one embodiment, the pre-training module 302 is further configured to:
pre-training a preset neural network through the ultrasonic image data to obtain a pre-training result, and determining candidate gradient data of the target neural network according to the pre-training result;
encrypting the candidate gradient data and sending the encrypted candidate gradient data to a federated learning server so that the federated learning server performs joint learning based on the candidate gradient data sent by a plurality of clients to obtain joint gradient data;
and updating the model parameters of the neural network according to the combined gradient data returned by the federal learning server until the updated neural network converges to obtain the target neural network.
Referring to fig. 6, fig. 6 is a schematic block diagram of an ultrasound image data processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the ultrasound image data processing apparatus 400 includes:
an obtaining module 401, configured to obtain ultrasonic image data of a target detection object;
a detection module 402, configured to detect, through an ultrasonic image data processing model, ultrasonic image data of the target detection object to obtain an ultrasonic image data processing result of the target detection object;
the ultrasound image data processing model is obtained by training according to the model training method in the embodiment.
In one embodiment, the detection module 402 is further configured to:
extracting a plurality of ultrasonic image data from the ultrasonic image data of the target detection object;
inputting the plurality of ultrasonic image data into a classification identification sublayer of the ultrasonic image data processing model so as to extract key information in each ultrasonic image data;
and determining an ultrasonic image data processing result of the target detection object according to key information in each ultrasonic image data through a normalization sublayer of the ultrasonic image data processing model.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working processes of the above-described apparatus and each module and unit may refer to the corresponding processes in the foregoing embodiment of the model training method or the ultrasound image data processing method, and are not described herein again.
The apparatus provided by the above embodiments may be implemented in the form of a computer program, which may run on a client as shown in fig. 7.
Referring to fig. 7, fig. 7 is a schematic block diagram of a client according to an embodiment of the present disclosure. The client may be a server or a terminal device.
As shown in fig. 7, the client includes a processor, a memory and a network interface connected by a system bus, wherein the memory may include a nonvolatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause a processor to perform any one of a model training method or an ultrasound image data processing method.
The processor is used for providing computing and control capability and supporting the operation of the whole client.
The internal memory provides an environment for running a computer program in a non-volatile storage medium, and the computer program, when executed by the processor, causes the processor to perform any one of a model training method and an ultrasound image data processing method.
The network interface is used for network communication, such as sending assigned tasks and the like. Those skilled in the art will appreciate that the architecture shown in fig. 7 is a block diagram of only a portion of the architecture associated with the subject application, and does not constitute a limitation on the clients to which the subject application applies, and that a particular client may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
It should be understood that the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Wherein, in one embodiment, the processor is configured to execute a computer program stored in the memory to implement the steps of:
acquiring ultrasonic image data containing a target detection object, and pre-training a preset neural network through the ultrasonic image data to obtain a target neural network;
extracting a plurality of ultrasonic image data from the ultrasonic image data, and training the target neural network through the plurality of ultrasonic image data to obtain a training result;
determining gradient data of the target neural network based on the training result, encrypting the gradient data and sending the encrypted gradient data to a federal learning server;
receiving target gradient data sent by the federated learning server, wherein the target gradient data is determined by the federated learning server based on the gradient data sent by a plurality of clients;
and updating the model parameters of the target neural network according to the target gradient data until the updated target neural network converges to obtain an ultrasonic image data processing model of the target detection object.
In one embodiment, when the processor performs the pre-training on the preset neural network through the ultrasound image data to obtain the target neural network, the processor is configured to perform:
extracting multi-frame image data in the ultrasonic image data, and acquiring key information in each frame of image data;
and updating preset model parameters of the neural network according to the key information in each frame of image data to obtain the target neural network.
In one embodiment, the processor, in implementing the neural network, includes an image extraction sub-layer and a classification identification sub-layer; the processor is used for realizing that when the extraction of the multi-frame image data in the ultrasonic image data is realized and the key information in each frame of image data is acquired:
extracting multi-frame image data from the ultrasonic image data through the image extraction sublayer;
and inputting the multi-frame image data into the classification identification sublayer so as to identify key information in each frame of image data.
In one embodiment, the processor, when implementing the updating of the model parameters of the target neural network according to the target gradient data until the updated target neural network converges to obtain the ultrasound image data processing model of the target detection object, is further configured to implement:
creating a transfer function of the target neural network from the target gradient data;
adjusting the model parameters of the target neural network according to the conversion function to obtain adjusted model parameters;
training the target neural network according to the adjusted model parameters, and determining whether the trained target neural network is in a convergence state;
and if the trained target neural network is determined to be in a convergence state, taking the trained target neural network as an ultrasonic image data processing model.
In one embodiment, the processor performs pre-training on a preset neural network through the ultrasound image data to obtain a target neural network, and is configured to perform:
pre-training a preset neural network through the ultrasonic image data to obtain a pre-training result, and determining candidate gradient data of the target neural network according to the pre-training result;
encrypting the candidate gradient data and sending the encrypted candidate gradient data to a federated learning server so that the federated learning server performs joint learning based on the candidate gradient data sent by a plurality of clients to obtain joint gradient data;
and updating the model parameters of the neural network according to the combined gradient data returned by the federal learning server until the updated neural network converges to obtain the target neural network.
Wherein, in one embodiment, the processor is configured to execute a computer program stored in the memory to implement the steps of:
acquiring ultrasonic image data of a target detection object;
detecting the ultrasonic image data of the target detection object through an ultrasonic image data processing model to obtain an ultrasonic image data processing result of the target detection object;
wherein, the ultrasound image data processing model is obtained by training according to the model training method in any one of the previous embodiments.
In one embodiment, when the processor implements the detecting of the ultrasonic image data of the target detection object by the ultrasonic image data processing model to obtain the ultrasonic image data processing result of the target detection object, the processor is configured to implement:
extracting a plurality of ultrasonic image data from the ultrasonic image data of the target detection object;
inputting the plurality of ultrasonic image data into a classification identification sublayer of the ultrasonic image data processing model so as to extract key information in each ultrasonic image data;
and determining an ultrasonic image data processing result of the target detection object according to key information in each ultrasonic image data through a normalization sublayer of the ultrasonic image data processing model.
It should be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the client may refer to the corresponding process in the foregoing embodiment of the model training method or the ultrasound image data processing method, and details are not described herein again.
Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, where the computer program includes program instructions, and when the program instructions are executed, the method implemented by the computer program instructions may refer to various embodiments of the model training method or the ultrasound image data processing method of the present application.
The computer-readable storage medium may be an internal storage unit of the client described in the foregoing embodiment, for example, a hard disk or a memory of the client. The computer readable storage medium may also be an external storage device of the client, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the client.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments. While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A model training method is applied to a client and comprises the following steps:
acquiring ultrasonic image data containing a target detection object, and pre-training a preset neural network through the ultrasonic image data to obtain a target neural network;
extracting a plurality of ultrasonic image data from the ultrasonic image data, and training the target neural network through the plurality of ultrasonic image data to obtain a training result;
determining gradient data of the target neural network based on the training result, encrypting the gradient data and sending the encrypted gradient data to a federal learning server;
receiving target gradient data sent by the federated learning server, wherein the target gradient data is determined by the federated learning server based on the gradient data sent by a plurality of clients;
and updating the model parameters of the target neural network according to the target gradient data until the updated target neural network converges to obtain an ultrasonic image data processing model of the target detection object.
2. The model training method of claim 1, wherein the pre-training of the pre-set neural network with the ultrasound image data to obtain the target neural network comprises:
extracting multi-frame image data in the ultrasonic image data, and acquiring key information in each frame of image data;
and updating preset model parameters of the neural network according to the key information in each frame of image data to obtain the target neural network.
3. The model training method of claim 2, wherein the neural network comprises an image extraction sub-layer and a classification recognition sub-layer; the extracting of the multi-frame image data in the ultrasound image data and the obtaining of the key information in each frame of the image data include:
extracting multi-frame image data from the ultrasonic image data through the image extraction sublayer;
and inputting the multi-frame image data into the classification identification sublayer so as to identify key information in each frame of image data.
4. The model training method of claim 1, wherein the updating the model parameters of the target neural network according to the target gradient data until the updated target neural network converges to obtain the ultrasound image data processing model of the target detection object comprises:
creating a transfer function of the target neural network from the target gradient data;
adjusting the model parameters of the target neural network according to the conversion function to obtain adjusted model parameters;
training the target neural network according to the adjusted model parameters, and determining whether the trained target neural network is in a convergence state;
and if the trained target neural network is determined to be in a convergence state, taking the trained target neural network as an ultrasonic image data processing model.
5. The model training method of claim 1, wherein the pre-training of the pre-set neural network with the ultrasound image data to obtain the target neural network comprises:
pre-training a preset neural network through the ultrasonic image data to obtain a pre-training result, and determining candidate gradient data of the target neural network according to the pre-training result;
encrypting the candidate gradient data and sending the encrypted candidate gradient data to a federated learning server so that the federated learning server performs joint learning based on the candidate gradient data sent by a plurality of clients to obtain joint gradient data;
and updating the model parameters of the neural network according to the combined gradient data returned by the federal learning server until the updated neural network converges to obtain the target neural network.
6. An ultrasound image data processing method, comprising:
acquiring ultrasound image data of a target detection object; and
detecting the ultrasound image data of the target detection object through an ultrasound image data processing model to obtain an ultrasound image data processing result of the target detection object;
wherein the ultrasound image data processing model is trained according to the model training method of any one of claims 1 to 5.
7. The ultrasound image data processing method of claim 6, wherein the detecting the ultrasound image data of the target detection object through the ultrasound image data processing model to obtain the ultrasound image data processing result of the target detection object comprises:
extracting a plurality of frames of ultrasound image data from the ultrasound image data of the target detection object;
inputting the plurality of frames into a classification recognition sub-layer of the ultrasound image data processing model to extract key information in each frame; and
determining the ultrasound image data processing result of the target detection object from the key information in each frame through a normalization sub-layer of the ultrasound image data processing model.
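The inference pipeline of claim 7 can be sketched end-to-end. The normalization sub-layer is assumed here to be a softmax over aggregated per-frame scores, and the two classes and all score values are hypothetical; the patent fixes neither the score representation nor the class labels.

```python
import math

def softmax(scores):
    # Normalization sub-layer stand-in: map aggregated key-information scores
    # to a probability-like distribution over candidate findings.
    m = max(scores)                            # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-frame key-information scores for two candidate findings
# (e.g. benign vs. malignant), one row per extracted frame.
frame_scores = [[1.2, 0.3], [0.8, 0.9], [1.5, 0.2]]
avg_scores = [sum(col) / len(frame_scores) for col in zip(*frame_scores)]
probs = softmax(avg_scores)
result = "benign" if probs[0] > probs[1] else "malignant"
```

Averaging before the softmax is one simple way to combine the per-frame key information; the claim itself only requires that the normalization sub-layer produce the final processing result from that information.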
8. A model training apparatus, wherein the model training apparatus comprises:
an acquisition module, configured to acquire ultrasound image data containing a target detection object;
a pre-training module, configured to pre-train a preset neural network with the ultrasound image data to obtain a target neural network;
a training module, configured to extract a plurality of frames of ultrasound image data from the ultrasound image data and train the target neural network with the plurality of frames to obtain a training result;
a determination module, configured to determine gradient data of the target neural network based on the training result;
a sending module, configured to encrypt the gradient data and send the encrypted gradient data to a federated learning server;
a receiving module, configured to receive target gradient data sent by the federated learning server, the target gradient data being determined by the federated learning server through joint learning based on gradient data sent by a plurality of clients; and
an updating module, configured to update the model parameters of the target neural network according to the target gradient data until the updated target neural network converges, to obtain an ultrasound image data processing model of the target detection object.
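The module structure of claim 8 can be sketched as a client object whose callables mirror the sending and receiving modules. Everything here is illustrative: the claim recites functional modules, not code, and the stub server and placeholder cipher are assumptions made so the round is runnable.

```python
class ModelTrainingClient:
    """Sketch of the claimed apparatus; each callable mirrors one module."""
    def __init__(self, encrypt, send, receive):
        self.encrypt = encrypt      # sending module's encryption step
        self.send = send            # sending module's upload step
        self.receive = receive      # receiving module

    def run_round(self, local_gradients):
        self.send(self.encrypt(local_gradients))
        return self.receive()       # target gradient data from the server

# Wire the client to a stub "federated learning server" that averages uploads.
uploads = []
client = ModelTrainingClient(
    encrypt=lambda g: list(g),     # placeholder cipher, identity for the demo
    send=uploads.append,
    receive=lambda: [sum(col) / len(uploads) for col in zip(*uploads)],
)
target_gradients = client.run_round([0.5, -1.0])
```

With several clients uploading, the same `receive` stub would return the averaged (joint) gradients that the updating module then applies to the target neural network.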
9. A client, comprising a processor, a memory, and a computer program stored in the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the model training method of any one of claims 1 to 5 or of the ultrasound image data processing method of any one of claims 6 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the model training method of any one of claims 1 to 5 or of the ultrasound image data processing method of any one of claims 6 to 7.
CN202011389787.7A 2020-12-01 2020-12-01 Model training method, data processing method, device, client and storage medium Pending CN112465786A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011389787.7A CN112465786A (en) 2020-12-01 2020-12-01 Model training method, data processing method, device, client and storage medium
PCT/CN2021/097420 WO2022116502A1 (en) 2020-12-01 2021-05-31 Model training method and device, data processing method and device, client and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011389787.7A CN112465786A (en) 2020-12-01 2020-12-01 Model training method, data processing method, device, client and storage medium

Publications (1)

Publication Number Publication Date
CN112465786A 2021-03-09

Family

ID=74805218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011389787.7A Pending CN112465786A (en) 2020-12-01 2020-12-01 Model training method, data processing method, device, client and storage medium

Country Status (2)

Country Link
CN (1) CN112465786A (en)
WO (1) WO2022116502A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036910B (en) * 2023-09-28 2024-01-12 合肥千手医疗科技有限责任公司 Medical image training method based on multi-view and information bottleneck

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492099B (en) * 2017-08-28 2021-08-20 京东方科技集团股份有限公司 Medical image analysis method, medical image analysis system, and storage medium
CN110797124B (en) * 2019-10-30 2024-04-12 腾讯科技(深圳)有限公司 Model multiterminal collaborative training method, medical risk prediction method and device
CN111428881B (en) * 2020-03-20 2021-12-07 深圳前海微众银行股份有限公司 Recognition model training method, device, equipment and readable storage medium
CN112465786A (en) * 2020-12-01 2021-03-09 平安科技(深圳)有限公司 Model training method, data processing method, device, client and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017126158A1 (en) * 2017-11-08 2019-05-09 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. An ultrasound imaging system
WO2019100718A1 (en) * 2017-11-24 2019-05-31 无锡祥生医疗科技股份有限公司 Method for optimizing ultrasonic imaging system parameter based on deep learning
CN109063740A (en) * 2018-07-05 2018-12-21 高镜尧 The detection model of ultrasonic image common-denominator target constructs and detection method, device
CN111383207A (en) * 2018-12-11 2020-07-07 深圳开立生物医疗科技股份有限公司 Musculoskeletal ultrasonic image processing method, system and device and readable storage medium
WO2020134704A1 (en) * 2018-12-28 2020-07-02 深圳前海微众银行股份有限公司 Model parameter training method based on federated learning, terminal, system and medium
CN109961009A (en) * 2019-02-15 2019-07-02 平安科技(深圳)有限公司 Pedestrian detection method, system, device and storage medium based on deep learning
CN110874484A (en) * 2019-10-16 2020-03-10 众安信息技术服务有限公司 Data processing method and system based on neural network and federal learning
CN111354005A (en) * 2020-02-28 2020-06-30 浙江德尚韵兴医疗科技有限公司 Full-automatic fetal heart super-image three-blood-vessel segmentation method based on convolutional neural network
CN111553483A (en) * 2020-04-30 2020-08-18 同盾控股有限公司 Gradient compression-based federated learning method, device and system
CN111814985A (en) * 2020-06-30 2020-10-23 平安科技(深圳)有限公司 Model training method under federated learning network and related equipment thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZOU YIXUAN et al.: "Classification of benign and malignant thyroid nodules in ultrasound images based on convolutional neural networks", China Medical Equipment, vol. 17, no. 03, pages 9-13 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022116502A1 (en) * 2020-12-01 2022-06-09 平安科技(深圳)有限公司 Model training method and device, data processing method and device, client and storage medium
CN113239972A (en) * 2021-04-19 2021-08-10 温州医科大学 Artificial intelligence auxiliary diagnosis model construction system for medical images
CN113468133A (en) * 2021-05-23 2021-10-01 杭州医康慧联科技股份有限公司 Online sharing system suitable for data model
CN113468364A (en) * 2021-07-21 2021-10-01 京东数科海益信息科技有限公司 Image processing method and device
CN113468364B (en) * 2021-07-21 2024-04-09 京东科技信息技术有限公司 Image processing method and device
CN115705678A (en) * 2021-08-09 2023-02-17 腾讯科技(深圳)有限公司 Image data processing method, computer equipment and medium

Also Published As

Publication number Publication date
WO2022116502A1 (en) 2022-06-09

Similar Documents

Publication Publication Date Title
CN112465786A (en) Model training method, data processing method, device, client and storage medium
Bradley et al. Nonlinear time-series analysis revisited
Chen et al. Deep feature learning for medical image analysis with convolutional autoencoder neural network
CN111126574B (en) Method, device and storage medium for training machine learning model based on endoscopic image
CN112037912A (en) Triage model training method, device and equipment based on medical knowledge map
CN112259246B (en) Disease prediction method integrating medical concept hierarchy structure and related equipment
CN112259245B (en) Method, device, equipment and computer readable storage medium for determining items to be checked
CN102891751B (en) From the method and apparatus that fingerprint image generates business password
CN110287775B (en) Palm image clipping method, palm image clipping device, computer equipment and storage medium
WO2021151358A1 (en) Triage information recommendation method and apparatus based on interpretation model, and device and medium
CN113379042B (en) Business prediction model training method and device for protecting data privacy
Zhang et al. Joint latent space models for network data with high-dimensional node variables
CN113628726B (en) Traditional Chinese medicine diagnosis and treatment recommendation system and method based on graph neural network and electronic equipment
CN113012803A (en) Computer device, system, readable storage medium and medical data analysis method
CN110414562B (en) X-ray film classification method, device, terminal and storage medium
CN112328879B (en) News recommendation method, device, terminal equipment and storage medium
CN112102939B (en) Cardiovascular and cerebrovascular disease reference information prediction system, method and device and electronic equipment
US11308615B1 (en) Systems and processes for improving medical diagnoses
CN109741833B (en) Data processing method and device
Chakravarthy et al. Convolutional neural network (CNN) for image detection and recognition in medical diagnosis
CN113591458B (en) Medical term processing method, device, equipment and storage medium based on neural network
Wang et al. Compressed sensing using generative models based on fisher information
Kumar et al. Golden Search Optimization based adaptive and diagonal kernel convolution neural network for disease prediction and securing IoT data in cloud
CN117610080B (en) Medical image desensitizing method based on information bottleneck
CN115375934B (en) Method for training analysis of clustered models and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40041544

Country of ref document: HK