CN108399619B - System and device for medical diagnosis - Google Patents

Info

Publication number: CN108399619B
Application number: CN201810171115.5A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN108399619A
Inventor: 田疆
Original Assignee: Lenovo Beijing Ltd
Current Assignee: Lenovo Beijing Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Prior art keywords: neural network, medical image, data, feature data, medical
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application filed by Lenovo Beijing Ltd
Publication of application CN108399619A, followed by grant publication CN108399619B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Abstract

The present disclosure provides a system for medical diagnosis. The system includes one or more processors and a memory storing computer-readable instructions. The instructions, when executed by a processor, cause the processor to: acquire at least one medical image, the at least one medical image being obtained from a single scan; and output diagnostic report data having a mapping relationship with the at least one medical image. The present disclosure also provides a medical diagnosis apparatus and a training method for performing medical diagnosis with a robot.

Description

System and device for medical diagnosis
Technical Field
The present disclosure relates to a system and apparatus for medical diagnosis.
Background
Artificial intelligence is a major trend of future development, has attracted extensive attention across industries, and image recognition technology is now in widespread use. As the generalization capability of artificial intelligence continues to improve, using it for image recognition in place of human interpretation is both more efficient and more accurate. In a medical system, obtaining a diagnostic report from medical images usually requires that a professional doctor interpret them. This manual interpretation of medical images to obtain a diagnostic report is time consuming and consumes a significant amount of the physician's effort.
Disclosure of Invention
A first aspect of the present disclosure provides a method of medical diagnosis, comprising: acquiring at least one medical image, the at least one medical image being obtained from a single scan; and outputting diagnostic report data having a mapping relationship with the at least one medical image.
Optionally, outputting the diagnostic report data having a mapping relationship with the at least one medical image comprises: inputting the at least one medical image to a neural network; and obtaining an output of the neural network, wherein the output of the neural network comprises the diagnostic report data.
Optionally, the neural network comprises at least one first neural network and at least one second neural network, and inputting the at least one medical image to the neural network comprises: inputting the at least one medical image to the first neural network to extract at least one feature data of the at least one medical image; and inputting the at least one feature data to the second neural network to obtain, through the second neural network, the diagnostic report data having a mapping relationship with the at least one medical image; wherein the first and second neural networks are of the same type or of different types.
Optionally, the first neural network comprises a convolutional neural network; and the second neural network comprises a recurrent neural network.
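As an illustrative sketch (not the patented implementation), the convolutional-encoder/recurrent-decoder split described above can be mocked up in plain NumPy; all layer sizes, weights, vocabulary, and function names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_encode(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for the first (convolutional) network:
    one 3x3 convolution followed by global pooling -> feature vector."""
    kernel = rng.standard_normal((3, 3))
    h, w = image.shape
    fmap = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            fmap[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    # Global pooling reduces the feature map to a fixed-length vector.
    return np.array([fmap.mean(), fmap.std(), fmap.max(), fmap.min()])

def rnn_decode(feature: np.ndarray, vocab: list[str], steps: int = 5) -> list[str]:
    """Toy stand-in for the second (recurrent) network: a simple RNN
    that emits one report token per step from the image feature."""
    hidden = np.tanh(feature)
    W_h = rng.standard_normal((4, 4)) * 0.1
    W_o = rng.standard_normal((len(vocab), 4)) * 0.1
    tokens = []
    for _ in range(steps):
        hidden = np.tanh(W_h @ hidden + feature)   # recurrent state update
        logits = W_o @ hidden                      # scores over report tokens
        tokens.append(vocab[int(np.argmax(logits))])
    return tokens

image = rng.random((8, 8))        # stand-in for one scanned medical image
feature = cnn_encode(image)       # first network: image -> feature data
report = rnn_decode(feature, ["normal", "opacity", "lesion", "margin", "."])
print(len(feature), len(report))  # 4-dim feature, 5 report tokens
```

In a real system the encoder would be a trained convolutional network and the decoder a trained recurrent network; the sketch only shows how the feature data flows between the two.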
Optionally, when the at least one feature data includes a plurality of feature data, inputting the at least one feature data into the second neural network to obtain the diagnostic report data having a mapping relationship with the at least one medical image through the second neural network, including: compressing the plurality of feature data into one compressed feature data; inputting the compressed feature data to the second neural network to obtain the diagnostic report data having a mapping relationship with the at least one medical image through the second neural network.
Optionally, compressing the plurality of feature data into one compressed feature data comprises: determining a compression weight for each of the plurality of feature data according to the correlation of that feature data with the current content state data of the second neural network, wherein the higher the correlation, the greater the compression weight; and computing a weighted average of the plurality of feature data according to the compression weights to obtain the compressed feature data.
A second aspect of the present disclosure provides an apparatus for medical diagnosis, comprising: a medical image acquisition module for acquiring at least one medical image, the at least one medical image being obtained from a single scan; and a diagnostic report data output module for outputting diagnostic report data having a mapping relationship with the at least one medical image.
Optionally, the diagnostic report data output module includes: a neural network input sub-module for inputting the at least one medical image to a neural network; and a neural network output acquisition submodule for acquiring an output of the neural network, wherein the output of the neural network includes the diagnostic report data.
Optionally, the neural network comprises at least one first neural network and at least one second neural network, and the neural network input submodule comprises: a first neural network input unit for inputting the at least one medical image to a first neural network to extract at least one feature data of the at least one medical image; and a second neural network input unit for inputting the at least one feature data to the second neural network to obtain the diagnostic report data having a mapping relation with the at least one medical image through the second neural network; wherein: the first and second neural networks are of the same type, or of different types.
Optionally, when the at least one feature data includes a plurality of feature data, inputting the at least one feature data into the second neural network to obtain the diagnostic report data having a mapping relationship with the at least one medical image through the second neural network, including: compressing the plurality of feature data into one compressed feature data; and inputting the compressed feature data to the second neural network to obtain the diagnostic report data having a mapping relationship with the at least one medical image through the second neural network.
Optionally, compressing the plurality of feature data into one compressed feature data comprises: determining a compression weight for each of the plurality of feature data according to the correlation of that feature data with the current content state data of the second neural network, wherein the higher the correlation, the greater the compression weight; and computing a weighted average of the plurality of feature data according to the compression weights to obtain the compressed feature data.
A third aspect of the present disclosure provides a system for medical diagnosis, comprising one or more processors, and a memory. The memory stores computer readable instructions. The instructions, when executed by the processor, cause the processor to implement the method according to the first aspect of the disclosure.
A fourth aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions for implementing the method according to the first aspect of the present disclosure when executed.
A fifth aspect of the present disclosure provides a computer program comprising computer executable instructions for implementing the method according to the first aspect of the present disclosure when executed.
A sixth aspect of the present disclosure provides a training method for medical diagnosis using a robot, comprising: inputting at least one medical image into a neural network to obtain an output of the neural network, wherein the at least one medical image is obtained from a single scan and the output of the neural network comprises character data describing the at least one medical image; when the consistency between the output of the neural network and a standard answer does not meet a preset condition, repeatedly executing the input operation, training being completed once the consistency meets the preset condition, wherein the standard answer comprises diagnosis report data having a mapping relationship with the at least one medical image; and outputting the trained neural network.
A seventh aspect of the present disclosure provides a system for training for medical diagnosis with a robot, comprising one or more processors, and a memory. The memory stores computer readable instructions. The instructions, when executed by the processor, cause the processor to implement a method according to a sixth aspect of the present disclosure.
An eighth aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions for, when executed, implementing the method according to the sixth aspect of the present disclosure.
A ninth aspect of the present disclosure provides a computer program comprising computer executable instructions for implementing the method according to the sixth aspect of the present disclosure when executed.
A tenth aspect of the present disclosure provides a method of processing a medical image. The method comprises: acquiring at least one medical image, the at least one medical image being obtained from a single scan; and outputting descriptive character data having a mapping relationship with the at least one medical image, wherein the descriptive character data comprises data describing the at least one medical image itself in a natural language.
Optionally, outputting the descriptive character data having a mapping relationship with the at least one medical image comprises: inputting the at least one medical image to a neural network; and obtaining an output of the neural network, wherein the output of the neural network includes the descriptive character data.
Optionally, the neural network comprises at least one first neural network and at least one second neural network; inputting at least one medical image to a neural network, comprising: inputting the at least one medical image to a first neural network to extract at least one feature data of the at least one medical image; and inputting the at least one feature data to the second neural network to obtain the descriptive character data having a mapping relationship with the at least one medical image through the second neural network; wherein the first and second neural networks are of the same type, or of different types.
Optionally, the first neural network comprises a convolutional neural network; and the second neural network comprises a recurrent neural network.
Optionally, when the at least one feature data includes a plurality of feature data, inputting the at least one feature data into the second neural network to obtain the descriptive character data having a mapping relation with the at least one medical image through the second neural network, including: compressing the plurality of feature data into one compressed feature data; inputting the compressed feature data into the second neural network to obtain the descriptive character data having a mapping relation with the at least one medical image through the second neural network.
Optionally, compressing the plurality of feature data into one compressed feature data comprises: determining a compression weight for each of the plurality of feature data according to the correlation of that feature data with the current content state data of the second neural network, wherein the higher the correlation, the greater the compression weight; and computing a weighted average of the plurality of feature data according to the compression weights to obtain the compressed feature data.
An eleventh aspect of the present disclosure provides a medical image processing apparatus comprising: a medical image acquisition module for acquiring at least one medical image, the at least one medical image being obtained from a single scan; and a character data output module for outputting descriptive character data having a mapping relationship with the at least one medical image, wherein the descriptive character data comprises data describing the at least one medical image itself in a natural language.
A twelfth aspect of the disclosure provides a system for processing medical images, comprising one or more processors, and a memory. The memory stores computer readable instructions. The instructions, when executed by the processor, cause the processor to implement a method of processing medical images according to a tenth aspect of the present disclosure.
A thirteenth aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions for implementing the method of processing a medical image according to the tenth aspect of the present disclosure when executed.
A fourteenth aspect of the present disclosure provides a computer program comprising computer executable instructions for implementing the method of processing medical images according to the tenth aspect of the present disclosure when executed.
A fifteenth aspect of the present disclosure provides a training method for processing a medical image with a robot, comprising: inputting at least one medical image into a neural network to obtain an output of the neural network, wherein the at least one medical image is obtained from a single scan and the output of the neural network comprises character data describing the at least one medical image; when the consistency between the output of the neural network and a standard answer does not meet a preset condition, repeatedly executing the input operation, training being completed once the consistency meets the preset condition, wherein the standard answer comprises predetermined descriptive character data having a mapping relationship with the at least one medical image, the descriptive character data comprising data describing the at least one medical image in a natural language; and outputting the trained neural network.
A sixteenth aspect of the present disclosure provides a training system for processing medical images with a robot, comprising one or more processors, and a memory. The memory stores computer readable instructions. The instructions, when executed by the processor, cause the processor to implement a training method for robotically processing medical images according to a fifteenth aspect of the present disclosure.
A seventeenth aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions that, when executed, implement a training method for robotically processing medical images in accordance with the fifteenth aspect of the present disclosure.
An eighteenth aspect of the present disclosure provides a computer program comprising computer executable instructions for, when executed, implementing a training method for robotically processing medical images in accordance with the fifteenth aspect of the present disclosure.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIG. 1A schematically illustrates a flow chart of a medical diagnostic method according to an embodiment of the present disclosure;
FIG. 1B schematically shows a flow chart of a method of processing a medical image according to an embodiment of the present disclosure;
FIG. 2A schematically illustrates a flow chart of a medical diagnostic method according to another embodiment of the present disclosure;
FIG. 2B schematically shows a flow chart of a method of processing a medical image according to another embodiment of the present disclosure;
FIG. 3A schematically illustrates a flow diagram of the method of FIG. 2A of inputting at least one medical image to a neural network, according to another embodiment of the present disclosure;
FIG. 3B schematically illustrates a flow chart of a method of inputting at least one medical image to a neural network of FIG. 2B, according to another embodiment of the present disclosure;
fig. 4 schematically illustrates an implementation scenario of a medical diagnostic method or a medical image processing method according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of a medical diagnostic apparatus according to an embodiment of the present disclosure;
fig. 6 schematically shows a block diagram of a diagnostic report data output module in a medical diagnostic apparatus according to an embodiment of the present disclosure;
fig. 7 schematically illustrates a block diagram of a neural network output acquisition sub-module in a medical diagnostic apparatus according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow chart of a training method for medical diagnosis using a robot according to an embodiment of the present disclosure; and
FIG. 9 schematically illustrates a block diagram of a medical diagnostic system according to an embodiment of the present disclosure;
fig. 10 schematically shows a block diagram of a medical image processing apparatus according to an embodiment of the present disclosure;
FIG. 11 schematically illustrates a flow chart of a training method for processing medical images with a robot according to an embodiment of the present disclosure; and
fig. 12 schematically shows a block diagram of a medical image processing system according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a construction is in general intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). Where a convention analogous to "at least one of A, B or C, etc." is used, such a construction is likewise intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). It will further be understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibilities of "A", "B", or "A and B".
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
In a medical system, obtaining a diagnostic report from a medical image usually requires a medical professional to interpret it. If artificial intelligence can be applied to the recognition of medical images, a diagnostic report can be obtained quickly from the medical images, which is not only efficient but also frees doctors to focus on medical procedures that require creative work.
Embodiments of the present disclosure provide a method and apparatus for medical diagnosis. The method includes acquiring at least one medical image, the at least one medical image being obtained from a single scan, and outputting diagnostic report data having a mapping relationship with the at least one medical image.
According to the medical diagnosis method, apparatus, and system of the present disclosure, corresponding diagnostic report data can be output automatically from a medical image obtained by a single scan. The traditional mode of interpretation, which depends on a doctor manually reading the image, can thus be replaced by automatic generation of a diagnostic report from the medical image. This relieves doctors' workload, lets them concentrate on medical procedures that require creative work, and provides great convenience for medical practice.
The embodiment of the disclosure also provides a medical image processing method and a corresponding apparatus. The method comprises acquiring at least one medical image, the at least one medical image being obtained from a single scan, and outputting descriptive character data having a mapping relationship with the at least one medical image, wherein the descriptive character data comprises data describing the at least one medical image itself in a natural language. According to embodiments of the present disclosure, a medical image can be converted into corresponding descriptive character data, i.e. a natural-language interpretation of the medical image, thereby providing another way to read the medical image.
Fig. 1A schematically illustrates a flow chart of a medical diagnostic method according to an embodiment of the present disclosure.
As shown in fig. 1A, the method includes operations S101 and S102A.
In operation S101, at least one medical image is acquired, the at least one medical image being obtained from a single scan.
In operation S102A, diagnostic report data having a mapping relationship with the at least one medical image is output.
According to the embodiment of the disclosure, corresponding diagnostic report data can be output automatically from the medical image obtained by a single scan. The traditional mode of interpretation, which depends on a doctor manually reading the image, can thus be replaced by automatic generation of a diagnostic report from the medical image, relieving doctors' workload, letting them concentrate on medical procedures that require creative work, and providing great convenience for medical practice.
Fig. 1B schematically shows a flow chart of a method of processing a medical image according to an embodiment of the present disclosure.
As shown in fig. 1B, the method of processing a medical image according to an embodiment of the present disclosure includes operations S101 and S102B. Wherein operation S101 may refer to the related description in fig. 1A.
In operation S102B, descriptive character data having a mapping relationship with the at least one medical image is output, wherein the descriptive character data includes data describing the at least one medical image itself in a natural language. For example, for a medical image showing a part of the human body, the descriptive character data may describe, in natural language, the structural shape of that body part (e.g., its thickness or shape), or properties such as the bone density distribution or muscle distribution reflected by image characteristics such as gray scale and/or brightness. The descriptive character data may also point out areas of the medical image that need special attention, such as abrupt changes in image characteristics in certain areas, and/or the size and location of those areas. According to the embodiment of the present disclosure, the descriptive character data describes the image information itself; a disease diagnosis or health condition cannot be derived directly from it.
According to the embodiment of the disclosure, converting the at least one medical image into descriptive character data with a mapping relationship turns the information in the image into information that can be read in natural language, providing a way to read the at least one medical image. On the basis of the descriptive character data, medical staff can reach a medical diagnosis by combining the image information reflected by the descriptive character data with the information obtained by viewing the at least one medical image directly. In some application scenarios this helps medical workers grasp more accurately the information the image itself reflects: for example, the information obtained by reading the image directly and the descriptive character data can be cross-checked against each other. Moreover, a machine can achieve a much higher image-recognition resolution and speed than the human eye, so the precision and efficiency of interpreting medical images can be improved significantly, and outputting medical image information as descriptive character data avoids the omission of fine image details that can occur with manual reading. In other application scenarios, for example for medical interns or workers new to medical practice, cross-checking the information they read from the image themselves against the descriptive text obtained by the method of this embodiment can help improve their medical skills.
Fig. 2A schematically illustrates a flow chart of a medical diagnostic method according to another embodiment of the present disclosure.
As shown in fig. 2A, in the medical diagnosis method according to another embodiment of the present disclosure, operation S102A includes operations S201A and S202A.
A specific medical diagnosis method according to another embodiment of the present disclosure includes operation S101, operation S201A, and operation S202A.
In operation S201A, the at least one medical image is input to the neural network.
In operation S202A, an output of the neural network is obtained, wherein the output of the neural network includes the diagnostic report data.
According to the embodiment of the disclosure, the neural network quickly produces a diagnostic report from the medical image, which can effectively replace the prior-art approach of having a professional doctor read the image, improving the efficiency of medical diagnosis.
Fig. 2B schematically shows a flow chart of a method of processing a medical image according to another embodiment of the present disclosure.
As shown in fig. 2B, a method of processing a medical image according to another embodiment of the present disclosure includes operations S101, S201B, and S202B.
In operation S201B, the at least one medical image is input to the neural network.
In operation S202B, an output of the neural network is obtained, wherein the output of the neural network includes the descriptive character data.
According to the embodiment of the present disclosure, the processing method converts the medical image, via the neural network, into data that describes the medical image in natural language, providing another way of reading the medical image.
Fig. 3A schematically illustrates a flowchart of a method of inputting at least one medical image to a neural network in operation S201A in fig. 2A according to another embodiment of the present disclosure.
As shown in fig. 3A, in a medical diagnosis method according to another embodiment of the present disclosure, the neural network includes at least one first neural network and at least one second neural network, the first neural network and the second neural network being of the same type or of different types. Operation S201A may include operation S211A and operation S212A.
In operation S211A, the at least one medical image is input to a first neural network to extract at least one feature data of the at least one medical image.
For example, one feature data may be extracted for each of the at least one medical image, or one feature data may be extracted jointly for some or all of a plurality of medical images. One feature data may be, for example, a feature vector of a medical image.
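As an illustration of this extraction step (a minimal sketch only — the disclosure leaves the architecture of the first neural network open, so block-averaged pixel intensities stand in here for the features a convolutional network would learn):

```python
# Toy stand-in for the first neural network: reduce each medical image of a
# scan to a fixed-length feature vector by block-averaging pixel intensities.
def extract_feature(image, grid=2):
    """image: 2-D list of pixel rows; returns a grid*grid feature vector."""
    h, w = len(image), len(image[0])
    bh, bw = h // grid, w // grid
    feature = []
    for gy in range(grid):
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            feature.append(sum(block) / len(block))
    return feature

# Three (identical, illustrative) images obtained in one scan.
scan = [[[0, 0, 1, 1],
         [0, 0, 1, 1],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]] * 3
features = [extract_feature(img) for img in scan]  # one feature vector per image
```

Each image thus yields one feature vector, matching the "one feature data per medical image" variant described above.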
In operation S212A, the at least one feature data is input to the second neural network to obtain the diagnostic report data having a mapping relationship with the at least one medical image through the second neural network.
Fig. 3B schematically illustrates a flowchart of a method of inputting at least one medical image to a neural network in operation S201B in fig. 2B according to another embodiment of the present disclosure.
As shown in fig. 3B, in the method for processing a medical image according to another embodiment of the present disclosure, the neural network includes at least one first neural network and at least one second neural network, and the first neural network and the second neural network are of the same type or different types. Operation S201B may include operation S211B and operation S212B.
In operation S211B, the at least one medical image is input to a first neural network to extract at least one feature data of the at least one medical image. For details, reference may be made to the description of operation S211A in fig. 3A.
In operation S212B, the at least one feature data is input to the second neural network to obtain the descriptive character data having a mapping relation with the at least one medical image through the second neural network.
According to an embodiment of the present disclosure, the first neural network may be a neural network that excels in image processing, and the second neural network may be a neural network that excels in natural language processing. According to an embodiment of the present disclosure, the first neural network comprises a convolutional neural network, and the second neural network comprises a recurrent neural network.
In particular, a Convolutional Neural Network (CNN) is relatively powerful at processing images, and a Recurrent Neural Network (RNN) is widely used for processing natural language.
According to embodiments of the present disclosure, the first neural network and the second neural network may also be of the same type, for example both convolutional neural networks, both recurrent neural networks, or both some other kind of neural network.
According to the embodiment of the present disclosure, when the at least one feature data includes a plurality of feature data, inputting the at least one feature data to the second neural network to obtain the diagnosis report data or the descriptive character data having a mapping relationship with the at least one medical image may include, for example, first compressing the plurality of feature data into one compressed feature data, and then inputting the compressed feature data to the second neural network.
According to the embodiment of the present disclosure, compressing the plurality of feature data into one compressed feature data may include, for example, determining a compression weight for each of the plurality of feature data according to its correlation with the current content state data of the second neural network, where a higher correlation yields a larger compression weight, and then taking a weighted average of the plurality of feature data according to the compression weights to obtain the compressed feature data.
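The weighted-average compression step can be sketched as follows (a hypothetical pure-Python illustration; the correlation scores are assumed to be given here, and the weights are simply the scores normalized to sum to one):

```python
def compress_features(features, correlations):
    """Weighted-average a list of feature vectors into one compressed vector.

    `correlations` holds one non-negative score per feature vector; a higher
    correlation with the second network's current content state yields a
    larger compression weight.
    """
    total = sum(correlations)
    weights = [c / total for c in correlations]  # normalize so weights sum to 1
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features))
            for i in range(dim)]

features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
compressed = compress_features(features, correlations=[0.2, 0.3, 0.5])
```

The third feature vector, having the highest correlation score, contributes most to the compressed feature data that is then fed to the second neural network.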
The medical diagnosis method shown in fig. 1A, 2A, and 3A and the medical image processing method shown in fig. 1B, 2B, and 3B will be further described with reference to fig. 4.
Fig. 4 schematically shows an implementation scenario of a medical diagnosis method or a medical image processing method according to an embodiment of the present disclosure.
As shown in fig. 4, in the medical diagnosis method or the medical image processing method, a medical image is processed by a neural network, which finally outputs diagnosis report data or descriptive character data having a mapping relationship with the medical image.
Specifically, feature data of the at least one medical image is first extracted by a first neural network. For example, in the illustration of fig. 4, the feature data of each medical image is extracted separately by a corresponding first neural network. Since, in the schematic of fig. 4, a plurality of medical images are obtained in one scanning process, a plurality of feature data are extracted by the first neural networks.
The plurality of feature data are then processed and input to a second neural network, so that the second neural network matches, according to the input content, diagnostic report data having a mapping relationship with the medical image; alternatively, in the medical image processing method according to the embodiment of the present disclosure, the second neural network matches descriptive character data having a mapping relationship with the medical image according to the input content.
Processing the plurality of feature data may include, for example, compressing the plurality of feature data into one compressed feature data and inputting the compressed feature data to the second neural network.
In some embodiments, the plurality of feature data may be compressed into one feature data by a weighted average. In other embodiments, such as the example of fig. 4, an attention-like mechanism may be introduced to determine the compression weight of each of the plurality of feature data based on its correlation with the current content state data of the second neural network.
Specifically, in the example of fig. 4, the first neural network includes a Fully Convolutional Network (FCN) and a Convolutional Neural Network (CNN). The FCN is a variant of the CNN with stronger image-processing capability.
The second neural network may include a Long Short-Term Memory (LSTM) neural network. The LSTM neural network is a special kind of RNN that can update its internal state according to the input and output of the previous round.
For example, assume that the internal state data of the LSTM neural network can be represented by a matrix, and that the feature data extracted from a medical image by the FCN or CNN can be represented by a feature vector or a matrix. In some embodiments, a correlation analysis (which may take various forms, linear or non-linear) may be performed between each feature data and the internal state data (e.g., a matrix) of the LSTM neural network. For example, a linear correlation analysis yields a linear correlation coefficient between the two data. The absolute value of the linear correlation coefficient lies in the range [0, 1]: the closer it is to 1, the stronger the correlation, and the closer it is to 0, the weaker the correlation. According to the embodiment of the present disclosure, the higher the correlation, the larger the compression weight may be when obtaining the compressed feature data from the plurality of feature data.
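A minimal sketch of such a linear correlation analysis (pure Python, with the LSTM internal state flattened to a vector; the state and feature values are illustrative assumptions only):

```python
def abs_linear_correlation(x, y):
    """Absolute value of the linear (Pearson) correlation of two vectors, in [0, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0  # a constant vector carries no correlation signal
    return abs(cov / (sx * sy))

state = [0.1, 0.4, 0.9, 0.2]           # flattened LSTM internal state
features = [[0.1, 0.5, 0.8, 0.2],      # closely tracks the state
            [0.9, 0.1, 0.1, 0.9]]      # varies against it
scores = [abs_linear_correlation(state, f) for f in features]
```

The first feature data correlates more strongly with the current state and would therefore receive the larger compression weight.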
Fig. 5 schematically illustrates a block diagram of a medical diagnostic apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the apparatus 500 includes a medical image acquisition module 510 and a diagnostic report data output module 520. The apparatus 500 may perform the medical diagnostic method described with reference to fig. 1A, 2A, 3A, and 4.
Specifically, the medical image acquiring module 510 is configured to acquire at least one medical image, where the at least one medical image is a medical image obtained by one scan.
The diagnostic report data output module 520 is used for outputting the diagnostic report data having a mapping relation with the at least one medical image.
Fig. 6 schematically illustrates a block diagram of the diagnostic report data output module 520 in the apparatus 500 according to an embodiment of the present disclosure.
The diagnostic report data output module 520 includes a neural network input submodule 521 and a neural network output acquisition submodule 522.
The neural network input sub-module 521 is used for inputting the at least one medical image to the neural network.
The neural network output obtaining sub-module 522 is used for obtaining the output of the neural network, wherein the output of the neural network includes the diagnosis report data.
Fig. 7 schematically illustrates a block diagram of the neural network output acquisition submodule 522 in the apparatus 500 according to an embodiment of the present disclosure.
As shown in fig. 7, according to an embodiment of the present disclosure, the neural network includes at least one first neural network and at least one second neural network, the first neural network and the second neural network being of the same type, or different types. The neural network output acquisition submodule 522 may include a first neural network input unit 5221 and a second neural network input unit 5222.
The first neural network input unit 5221 is used to input the at least one medical image to the first neural network to extract at least one feature data of the at least one medical image.
The second neural network input unit 5222 is used for inputting the at least one feature data to the second neural network to obtain the diagnostic report data having a mapping relationship with the at least one medical image through the second neural network.
According to the embodiment of the present disclosure, the medical diagnosis apparatus 500 can automatically output corresponding diagnosis report data from the medical images obtained in one scan. The traditional mode of reading medical images, which depends on manual reading by a doctor, can therefore be changed to automatically obtaining a diagnosis report from the medical image, reducing doctors' workload so that they can concentrate on medical processes that require more creative labor, and providing great convenience for medical work.
It is to be understood that the medical image acquisition module 510, the diagnostic report data output module 520, the neural network input sub-module 521, the neural network output acquisition sub-module 522, the first neural network input unit 5221 and the second neural network input unit 5222 may be combined into one module, any one of them may be split into a plurality of modules, or the sub-modules included in any of them may be split or combined. Alternatively, at least part of the functionality of one or more of these modules and sub-modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, each of the medical image acquisition module 510, the diagnostic report data output module 520, the neural network input sub-module 521, the neural network output acquisition sub-module 522, the first neural network input unit 5221 and the second neural network input unit 5222 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware by any other reasonable manner of integrating or packaging a circuit, or in a suitable combination of software, hardware, and firmware. Alternatively, at least one of these modules, sub-modules and units may be implemented at least partially as a computer program module which, when executed by a computer, performs the functions of the corresponding module.
Fig. 8 schematically illustrates a flow chart of a training method for medical diagnosis using a robot according to an embodiment of the present disclosure.
As shown in fig. 8, the training method for medical diagnosis using a robot according to an embodiment of the present disclosure includes operations S801 to S803.
In operation S801, at least one medical image is input to a neural network to obtain an output of the neural network, wherein the at least one medical image is a medical image obtained by one scan, and the output of the neural network includes character data describing the at least one medical image.
In operation S802, when the consistency between the output of the neural network and a standard answer does not satisfy a preset condition, the input operation is repeated; training is completed once the consistency between the output of the neural network and the standard answer satisfies the preset condition. The standard answer includes the diagnosis report data having a mapping relationship with the at least one medical image.
In operation S803, the trained neural network is output.
According to an embodiment of the present disclosure, a neural network is trained to output a diagnosis report from a medical image. The trained neural network can therefore quickly obtain a diagnosis report from a medical image, substituting to some extent for the prior-art approach in which a professional doctor must read the medical image, and improving the efficiency of medical diagnosis.
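The train-until-consistent loop of operations S801 and S802 can be caricatured as follows (a toy stand-in: a lookup table plays the role of the network, a token-overlap ratio plays the role of the consistency measure, and the "update" simply copies one more correct token per round; a real implementation would minimize a loss by gradient descent):

```python
def consistency(output, answer):
    """Fraction of positions where the generated report matches the standard answer."""
    return sum(o == a for o, a in zip(output, answer)) / len(answer)

def train(network, image_id, answer, threshold=1.0, max_rounds=100):
    """Repeat the input operation until consistency meets the preset condition."""
    for _ in range(max_rounds):
        output = network.get(image_id, [])
        if consistency(output, answer) >= threshold:
            break  # preset condition satisfied: training is complete
        fixed = len(output)
        network[image_id] = answer[:fixed + 1]  # toy "update" step
    return network

# Hypothetical standard answer for one scan.
answer = "nodule in left lower lobe".split()
net = train({}, "scan-001", answer)
```

After the loop terminates, the stand-in network reproduces the standard answer exactly, i.e. the consistency condition is satisfied.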
Fig. 9 schematically illustrates a block diagram of a medical diagnostic system according to an embodiment of the present disclosure.
As shown in fig. 9, the medical diagnostic system 900 includes a processor 910 and a computer-readable storage medium 920. In some embodiments, the medical diagnostic system 900 may perform the medical diagnostic method described above with reference to fig. 1A, 2A, 3A, and 4 to enable medical diagnosis from medical images. In other embodiments, the medical diagnostic system 900 may also be used to perform the method described above with reference to fig. 8 to train a robot for medical diagnosis.
In particular, processor 910 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 910 may also include onboard memory for caching purposes. Processor 910 may be a single processing unit or a plurality of processing units for performing the different actions of the medical diagnostic method flows described with reference to fig. 1A, 2A, 3A, and 4.
Computer-readable storage medium 920 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
Computer-readable storage medium 920 may include a computer program 921, which computer program 921 may include code/computer-executable instructions that, when executed by processor 910, cause processor 910 to perform a medical diagnostic method flow, such as described above in connection with fig. 1A, 2A, 3A, and 4, and any variations thereof, or a method flow, such as described above in connection with fig. 8, and any variations thereof.
The computer program 921 may be configured, for example, with computer program code comprising computer program modules. For example, in an example embodiment, the code in the computer program 921 may include one or more program modules, including, for example, module 921A, module 921B, … … . It should be noted that the division and number of modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation. When these program modules are executed by the processor 910, the processor 910 may perform, for example, the medical diagnosis method flow described in connection with fig. 1A, fig. 2A, fig. 3A and fig. 4 and any variation thereof, or the method flow described in connection with fig. 8 and any variation thereof.
According to an embodiment of the present disclosure, at least one of the medical image acquisition module 510, the diagnostic report data output module 520, the neural network input sub-module 521, the neural network output acquisition sub-module 522, the first neural network input unit 5221 and the second neural network input unit 5222 may be implemented as a computer program module described with reference to fig. 9, which, when executed by the processor 910, may implement the respective operations described above.
Fig. 10 schematically shows a block diagram of a medical image processing apparatus 1000 according to an embodiment of the present disclosure.
As shown in fig. 10, the medical image processing apparatus 1000 includes a medical image acquisition module 1010 and a character data output module 1020. The processing device 1000 may perform the processing method of the medical image described with reference to fig. 1B, 2B, 3B, and 4.
The medical image acquiring module 1010 is configured to acquire at least one medical image, where the at least one medical image is a medical image obtained by one scan.
The character data output module 1020 is configured to output descriptive character data having a mapping relationship with the at least one medical image, where the descriptive character data includes data describing the at least one medical image itself through a natural language.
According to an embodiment of the present disclosure, the character data output module 1020 may further include a neural network input sub-module and a neural network output acquisition sub-module. The neural network input sub-module is used for inputting the at least one medical image to the neural network. And the neural network output acquisition sub-module is used for acquiring the output of the neural network, wherein the output of the neural network comprises the descriptive character data.
It is understood that the medical image acquisition module 1010 and the character data output module 1020 may be combined into a single module, any one of them may be split into a plurality of modules, or the sub-modules included in any of them may be split or combined. Alternatively, at least part of the functionality of one or more of these modules and sub-modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, each sub-module in the medical image acquisition module 1010 and the character data output module 1020 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware by any other reasonable manner of integrating or packaging a circuit, or in a suitable combination of software, hardware, and firmware. Alternatively, at least one of the sub-modules included in the medical image acquisition module 1010 and the character data output module 1020 may be implemented at least partially as a computer program module which, when executed by a computer, performs the functions of the corresponding module.
Fig. 11 schematically illustrates a flow chart of a training method for processing medical images with a robot according to an embodiment of the disclosure.
As shown in fig. 11, the training method for processing a medical image using a robot according to an embodiment of the present disclosure includes operations S1101 to S1103.
In operation S1101, at least one medical image is input to a neural network to obtain an output of the neural network, wherein the at least one medical image is a medical image obtained by one scan, and the output of the neural network includes character data describing the at least one medical image.
In operation S1102, when the consistency between the output of the neural network and a standard answer does not satisfy a preset condition, the input operation is repeated; training is completed once the consistency between the output of the neural network and the standard answer satisfies the preset condition. The standard answer includes descriptive character data having a predetermined mapping relationship with the at least one medical image, the descriptive character data including data describing the at least one medical image in natural language.
In operation S1103, the trained neural network is output.
According to an embodiment of the present disclosure, a neural network is trained to process medical images. After training, the neural network can translate a medical image into descriptive character data expressed in natural language, such as text information, thereby providing another way of reading medical images.
Fig. 12 schematically shows a block diagram of a medical image processing system 1200 according to an embodiment of the present disclosure.
As shown in fig. 12, a system 1200 for processing medical images according to an embodiment of the present disclosure includes a processor 1210 and a computer-readable storage medium 1220. In some embodiments, the medical image processing system 1200 may perform the medical image processing methods described above with reference to fig. 1B, 2B, 3B, and 4 to achieve conversion of the medical image into corresponding descriptive character data. In other embodiments, the medical image processing system 1200 may also be used to perform the method described above with reference to fig. 11 to train a robot to process medical images.
In particular, processor 1210 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 1210 may also include onboard memory for caching purposes. The processor 1210 may be a single processing unit or a plurality of processing units for performing the different actions of the process flow of the medical image processing method described with reference to fig. 1B, 2B, 3B and 4.
Computer-readable storage medium 1220, for example, may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The computer-readable storage medium 1220 may include a computer program 1221, which computer program 1221 may include code/computer-executable instructions that, when executed by the processor 1210, cause the processor 1210 to perform a flow of a method of processing medical images, such as described above in connection with fig. 1B, 2B, 3B, and 4, and any variations thereof, or a flow of a method of training with a robot to process medical images, such as described above in connection with fig. 11, and any variations thereof.
The computer program 1221 may be configured, for example, with computer program code comprising computer program modules. For example, in an example embodiment, the code in the computer program 1221 may include one or more program modules, including, for example, module 1221A, module 1221B, … … . It should be noted that the division and number of modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation. When these program modules are executed by the processor 1210, the processor 1210 may perform the flow of the medical image processing method described above in connection with fig. 1B, 2B, 3B and 4 and any variation thereof, or the flow of the method described above in connection with fig. 11 and any variation thereof.
According to an embodiment of the present disclosure, at least one of the medical image acquisition module 1010, the character data output module 1020 and the sub-modules included therein may be implemented as a computer program module described with reference to fig. 12, which, when executed by the processor 1210, may implement the respective operations described above.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (4)

1. A system for medical diagnosis, comprising:
one or more processors; and
a memory storing computer readable instructions; wherein the instructions, when executed by the processor, cause the processor to:
acquiring at least one medical image, wherein the at least one medical image is obtained by one-time scanning; and
outputting diagnostic report data having a mapping relationship with the at least one medical image, comprising:
inputting the at least one medical image to a first neural network to extract at least one feature data of the at least one medical image; and
inputting the at least one feature data into a second neural network to obtain the diagnosis report data having a mapping relation with the at least one medical image through the second neural network, wherein the diagnosis report data includes descriptive character data, the descriptive character data being data information for natural language interpretation of the at least one medical image, including:
when the at least one feature data comprises a plurality of feature data, carrying out correlation analysis on each feature data and the current content state data of the second neural network to determine the correlation of each feature data with the current content state data of the second neural network;
determining a compression weight of each feature data in the plurality of feature data according to the correlation of each feature data and the current content state data of the second neural network, wherein the compression weight is larger when the correlation is higher;
carrying out a weighted average of the plurality of feature data according to the compression weights so as to obtain the compressed feature data; and
inputting the compressed feature data to the second neural network to obtain the diagnostic report data having a mapping relationship with the at least one medical image through the second neural network;
wherein
the first and second neural networks are of the same type, or of different types.
2. The system of claim 1, wherein:
the first neural network comprises a convolutional neural network; and
the second neural network comprises a recurrent neural network.
3. An apparatus for medical diagnosis, comprising:
the medical image acquisition module is used for acquiring at least one medical image, and the at least one medical image is obtained by one-time scanning; and
a diagnostic report data output module for outputting diagnostic report data having a mapping relationship with the at least one medical image, comprising:
a first neural network input unit for inputting the at least one medical image to a first neural network to extract at least one feature data of the at least one medical image; and
a second neural network input unit for inputting the at least one feature data into a second neural network to obtain the diagnosis report data having a mapping relation with the at least one medical image through the second neural network, wherein the diagnosis report data includes descriptive character data, and the descriptive character data is data information for performing natural language interpretation on the at least one medical image;
wherein the second neural network input unit is specifically configured to, when the at least one feature data includes a plurality of feature data, perform correlation analysis between each feature data and the current content state data of the second neural network to determine the correlation of each feature data with the current content state data of the second neural network;
determining a compression weight of each feature data in the plurality of feature data according to the correlation of each feature data and the current content state data of the second neural network, wherein the compression weight is larger when the correlation is higher; carrying out weighted average on the plurality of characteristic data according to the compression weight so as to obtain the compressed characteristic data; and inputting the compressed feature data to the second neural network to obtain the diagnostic report data having a mapping relationship with the at least one medical image through the second neural network;
wherein the first and second neural networks are of the same type, or of different types.
4. A training method for medical diagnosis using a robot, comprising:
inputting at least one medical image into a neural network to obtain an output of the neural network, wherein the at least one medical image is obtained in a single scan, and the output of the neural network comprises character data describing the at least one medical image;
when the consistency between the output of the neural network and a standard answer does not satisfy a preset condition, repeating the input operation until the consistency satisfies the preset condition, at which point training is complete, wherein the standard answer comprises diagnostic report data having a mapping relationship with the at least one medical image, the diagnostic report data including descriptive character data that provides a natural language interpretation of the at least one medical image;
outputting the trained neural network;
wherein the neural network comprises a first neural network and a second neural network;
the first neural network is used for extracting at least one feature data of the at least one medical image;
the second neural network is used for processing the at least one feature data to obtain the diagnostic report data having a mapping relationship with the at least one medical image; wherein the second neural network is specifically configured to, when the at least one feature data includes a plurality of feature data, perform a correlation analysis between each feature data and the current content state data of the second neural network, and determine the correlation of each feature data with the current content state data; determine a compression weight for each of the plurality of feature data according to that correlation, a higher correlation yielding a larger compression weight; perform a weighted average of the plurality of feature data according to the compression weights to obtain compressed feature data; and process the compressed feature data to obtain the diagnostic report data having a mapping relationship with the at least one medical image.
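The training method of claim 4 (input, compare against the standard answer, repeat until a preset consistency condition is met) can be sketched as a generic loop. Here the consistency measure (cosine similarity) and the `forward`/`update` callables are assumptions for illustration; the patent does not fix a particular metric or update rule:

```python
import numpy as np

def train_until_consistent(forward, update, image, standard_answer,
                           threshold=0.95, max_iters=1000):
    """Repeat the input operation until the consistency between the network
    output and the standard answer meets the preset condition
    (here modeled as cosine similarity reaching `threshold`)."""
    for step in range(max_iters):
        output = forward(image)
        # Consistency check against the standard answer.
        consistency = (output @ standard_answer) / (
            np.linalg.norm(output) * np.linalg.norm(standard_answer) + 1e-12)
        if consistency >= threshold:
            return step            # preset condition met: training complete
        update(output, standard_answer)
    raise RuntimeError("preset condition not met within max_iters")
```

Any parameter-update scheme can be plugged in via `update`; the loop only encodes the claim's stopping criterion.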
CN201810171115.5A 2017-12-22 2018-03-01 System and device for medical diagnosis Active CN108399619B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017114165041 2017-12-22
CN201711416504 2017-12-22

Publications (2)

Publication Number Publication Date
CN108399619A CN108399619A (en) 2018-08-14
CN108399619B true CN108399619B (en) 2021-12-24

Family

ID=63091402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810171115.5A Active CN108399619B (en) 2017-12-22 2018-03-01 System and device for medical diagnosis

Country Status (1)

Country Link
CN (1) CN108399619B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021012225A1 (en) * 2019-07-24 2021-01-28 Beijing Didi Infinity Technology And Development Co., Ltd. Artificial intelligence system for medical diagnosis based on machine learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150230773A1 (en) * 2014-02-19 2015-08-20 Samsung Electronics Co., Ltd. Apparatus and method for lesion detection
CN106446782A (en) * 2016-08-29 2017-02-22 北京小米移动软件有限公司 Image identification method and device
CN106709254A (en) * 2016-12-29 2017-05-24 天津中科智能识别产业技术研究院有限公司 Medical diagnostic robot system
CN107045720A (en) * 2017-05-04 2017-08-15 深圳硅基智能科技有限公司 Artificial neural network and system for recognizing eye fundus image lesion
CN107123027A (en) * 2017-04-28 2017-09-01 广东工业大学 A kind of cosmetics based on deep learning recommend method and system
CN107239733A (en) * 2017-04-19 2017-10-10 上海嵩恒网络科技有限公司 Continuous hand-written character recognizing method and system


Also Published As

Publication number Publication date
CN108399619A (en) 2018-08-14

Similar Documents

Publication Publication Date Title
US10810735B2 (en) Method and apparatus for analyzing medical image
CN107909065B (en) Method and device for detecting face occlusion
US20200027210A1 (en) Virtualized computing platform for inferencing, advanced processing, and machine learning applications
US9760990B2 (en) Cloud-based infrastructure for feedback-driven training and image recognition
JP2020522817A (en) Semantic analysis method, device, and storage medium
DE112020003547T5 (en) Transfer learning for neural networks
US20190087647A1 (en) Method and apparatus for facial recognition
CN109784304B (en) Method and apparatus for labeling dental images
US20200364574A1 (en) Neural network model apparatus and compressing method of neural network model
CN107728783B (en) Artificial intelligence processing method and system
CN105512467A (en) Digit visualization mobile terminal medical method
CN112400187A (en) Knockout autoencoder for detecting anomalies in biomedical images
CN108399619B (en) System and device for medical diagnosis
CN112529149A (en) Data processing method and related device
KR102206990B1 (en) Coarse-to-precise hand detection method using deep neural networks
Silva et al. Resource-constrained onboard inference of 3D object detection and localisation in point clouds targeting self-driving applications
US20230108267A1 (en) Source localization of eeg signals
CN111507466A (en) Data processing method and device, electronic equipment and readable medium
CN113592932A (en) Training method and device for deep completion network, electronic equipment and storage medium
JP2022041801A (en) System and method for gaining advanced review understanding using area-specific knowledge base
CN111274813B (en) Language sequence labeling method, device storage medium and computer equipment
CN112101257B (en) Training sample generation method, image processing method, device, equipment and medium
CN115359066A (en) Focus detection method and device for endoscope, electronic device and storage medium
CN109345545A (en) A kind of method, apparatus and computer readable storage medium of segmented image generation
Chiuchisan Implementation of medical image processing algorithm on reconfigurable hardware

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant