CN113705578A - Bile duct form identification method and device

Bile duct form identification method and device

Info

Publication number
CN113705578A
Authority
CN
China
Prior art keywords
bile duct
images
image
target
targets
Prior art date
Legal status
Pending
Application number
CN202111061013.6A
Other languages
Chinese (zh)
Inventor
岳京花
姜楠
周付根
Current Assignee
Beihang University
Beijing Tsinghua Changgeng Hospital
Original Assignee
Beihang University
Beijing Tsinghua Changgeng Hospital
Priority date
Filing date
Publication date
Application filed by Beihang University and Beijing Tsinghua Changgeng Hospital
Priority to CN202111061013.6A
Publication of CN113705578A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a bile duct form identification method comprising the following steps. Image preprocessing: a set S containing a plurality of images is provided and the images in the set S are converted to the same resolution and/or size; each image contains a bile duct target whose form is either primary dilatation or secondary dilatation; the images obtained by applying geometric operations or spatial-domain transformations to the images in the set S form a set T, which is merged into the set S. Model training: with the images in the set S as input and the form of the bile duct target in each image as output, the data are fed into a deep learning neural network for training to obtain a bile duct form recognition network. Target identification: any image containing a bile duct target is fed into the bile duct form recognition network, which outputs a form prediction for the bile duct target of that image, namely the probability that the form is primary dilatation and the probability that it is secondary dilatation. Because the bile duct form is identified by a deep learning neural network, the method alleviates the low efficiency of judging the target form manually.

Description

Bile duct form identification method and device
Technical Field
The invention relates to the technical field of image recognition, in particular to a bile duct shape recognition method and a bile duct shape recognition device.
Background
Image target identification is realized by comparing stored information with current information. In engineering practice, an image represents the relevant features of each object, composed of image pixels, and the relations between objects by means of numbers or symbols. The purpose of image target recognition is to extract target features as well as abstract representations of the relations between targets. Specifically, image target recognition has two tasks: the first is to recognize what a target in the image is, and the second is to recognize what form the target is in. Image target identification technology has been widely used in many fields, such as biomedicine, satellite remote sensing, robot vision, cargo detection, target tracking, autonomous vehicle navigation, public security, banking, transportation, military, electronic commerce and multimedia network communication. In the prior art, the recognition of bile duct morphology in medical images still relies on manual identification.
Disclosure of Invention
In view of this, the invention provides a bile duct shape recognition method that introduces a deep learning neural network and trains it to recognize the shape of a bile duct target in an image, so as to alleviate the defects of the prior art.
In a first aspect, the present invention provides a bile duct shape recognition method, including: image preprocessing, in which a set S containing a plurality of images is provided and the images in the set S are converted to the same resolution and/or size, each image containing a bile duct target whose form is primary dilatation or secondary dilatation, and in which the images obtained by applying geometric operations or spatial-domain transformations to the images in the set S form a set T that is merged into the set S; model training, in which the images in the set S are used as input and the form of the bile duct target in each image is used as output, and the data are fed into a deep learning neural network for training to obtain a bile duct form recognition network; and target identification, in which any image containing a bile duct target is fed into the bile duct form recognition network and a form prediction of the bile duct target of that image is obtained from the network, the form prediction including the probability that the form is primary dilatation and the probability that it is secondary dilatation.
Optionally, the image preprocessing and the target recognition further include: delineating the bile duct target contained in the image by using an image segmentation technique or manually, and cropping the image according to the bile duct target.
Optionally, the method further comprises: generating a set U composed of depth images of the images in the set S, and merging the set U into the set S.
Optionally, the geometric operation or spatial-domain transformation comprises: rotating the image, mirroring the image and reducing the resolution of the image.
Optionally, a first convolution layer, a first convolution component, a global average pooling layer and a softmax layer are connected in sequence from the input end to the output end of the deep learning neural network; the first convolution layer internally comprises a second convolution component and a maximum pooling dropout component.
Optionally, the deep learning neural network further comprises an attention module whose input comes from the first convolutional layer and whose output is combined with the output of the first convolutional layer, the attention module including a third convolution component and a Sigmoid layer.
In a second aspect, the present invention provides a bile duct shape recognition apparatus, including: an image preprocessing module, used to provide a set S containing a plurality of images and to convert the images in the set S to the same resolution and/or size, each image containing a bile duct target whose form is primary dilatation or secondary dilatation, and used to form a set T from the images obtained by applying geometric operations or spatial-domain transformations to the images in the set S and to merge the set T into the set S; a model training module, used to take the images in the set S as input and the form of the bile duct target in each image as output and to feed the data into a deep learning neural network for training to obtain a bile duct form recognition network; and a target identification module, used to feed any image containing a bile duct target into the bile duct form recognition network and to acquire the form prediction of the bile duct target of that image output by the network, the form prediction including the probability of primary dilatation and the probability of secondary dilatation.
In a third aspect, the invention provides a computing device comprising: a processor and a memory storing a program, the processor implementing the method of the first aspect when executing the program.
In a fourth aspect, the present invention provides a computer readable storage medium having stored thereon a program which, when executed, performs the method of the first aspect.
The invention has the following beneficial effect: by identifying the bile duct morphology with a deep learning neural network, the technical scheme provided by the invention alleviates the problem of the low efficiency of manually judging the target morphology.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show one embodiment of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a flow chart illustrating a bile duct shape recognition method according to a first embodiment of the invention;
fig. 2(a) -fig. 2(b) are schematic diagrams of a deep learning neural network structure of a bile duct morphology recognition method according to a first embodiment of the present invention, where fig. 2(a) is a schematic diagram of an overall deep learning neural network structure, and fig. 2(b) is a schematic diagram of a first convolution layer of the deep learning neural network structure;
fig. 3(a) -fig. 3(b) are schematic diagrams of another deep learning neural network structure of the bile duct morphology recognition method according to the first embodiment of the present invention, where fig. 3(a) is a schematic diagram of an overall deep learning neural network structure, and fig. 3(b) is a schematic diagram of an attention module of the deep learning neural network structure;
fig. 4 is a schematic structural diagram of a bile duct shape recognition device according to a second embodiment of the invention;
fig. 5 is a schematic structural diagram of a computing device according to a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings; the described embodiments are some, but not all, of the embodiments of the present invention.
The first embodiment is as follows:
fig. 1 is a flowchart illustrating a bile duct morphology identifying method according to a first embodiment of the present invention, and as shown in fig. 1, the method includes the following three steps.
Step S101: the images used for training are preprocessed. Specifically, a set S containing a plurality of images is provided and the images in the set S are converted to the same resolution and/or size; each image contains a bile duct target whose form is primary dilatation or secondary dilatation. The images obtained after geometric operations or spatial-domain transformations are applied to the images in the set S form a set T, and the set T is merged into the set S.
Before the bile duct shape recognition network is trained, the image data in the set S need to be unified in resolution or size. Because different imaging devices have different resolutions and the acquired images also differ in scale, the images in the set S are converted to the same resolution and/or size before model training.
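A minimal sketch of this resampling step is given below, assuming each image in the set S is a 2D single-channel torch tensor; the target resolution of 256×256 and the function name unify_images are illustrative choices, not taken from the patent.

```python
import torch.nn.functional as F

TARGET_SIZE = (256, 256)          # assumed common resolution for the set S

def unify_images(images):
    """Resample every image in S to the same height and width."""
    unified = []
    for img in images:                                  # img: 2D tensor (H, W)
        x = img.float().unsqueeze(0).unsqueeze(0)       # -> (1, 1, H, W)
        x = F.interpolate(x, size=TARGET_SIZE, mode="bilinear",
                          align_corners=False)
        unified.append(x.squeeze(0).squeeze(0))         # back to (H, W)
    return unified
```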
In some optional embodiments, to reduce the training difficulty of the deep learning neural network, the bile duct target contained in each image is delineated using an image segmentation technique or manually, and the image is cropped according to the bile duct target. Illustratively, in order to determine the type of dilatation of the bile duct target contained in a medical image, the bile duct target in the image is segmented using image segmentation techniques, or the target in the medical image is manually delineated by a physician. Optionally, the physician decides whether the bile duct target of each image in the set S belongs to primary or secondary dilatation.
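The cropping described above could look roughly as follows, assuming a binary segmentation mask of the bile duct target (from an automatic segmentation step or a physician's delineation) is available; the margin parameter and the helper name crop_to_target are assumptions.

```python
import numpy as np

def crop_to_target(image, mask, margin=8):
    """Cut the image down to the bounding box of the bile duct mask."""
    ys, xs = np.nonzero(mask)                          # pixel coordinates of the target
    y0, x0 = max(ys.min() - margin, 0), max(xs.min() - margin, 0)
    y1, x1 = ys.max() + margin + 1, xs.max() + margin + 1
    return image[y0:y1, x0:x1]
```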
The images obtained after geometric operations or spatial-domain transformations are applied to the images in the set S form a set T, and the set T is merged into the set S. This expands the number of image samples in the set S, increases data diversity and improves the generalization capability of the deep learning neural network. The geometric operations on an image include translation, rotation, mirroring, affine transformation and the like. The spatial-domain transformations of an image include filtering and sampling interpolation of pixels, and linear and nonlinear transformations of pixel values.
In some optional embodiments, the depth images of the images in the set S are generated to form a set U, and the set U is merged into the set S. A depth image is an image whose pixel values are the distances from the image acquisition device to each point in the scene; it directly reflects the geometric shape of the bile duct target, and introducing it into the training set increases the recognizable features of the bile duct target.
In some alternative embodiments, the geometric operations or spatial-domain transformations comprise rotating the image, mirroring the image and reducing its resolution. In medical imaging, for example, the acquired images have a certain rotation angle owing to the displacement of the patient's body position in clinical practice; optionally, a value is randomly selected in the range [-0.1π, 0.1π] as the rotation angle of the body position, and the image is spatially rotated. Illustratively, contrast adjustment and mirroring are carried out to account for the different parameters of different imaging devices, where contrast adjustment belongs to the spatial-domain transformations and mirroring belongs to the geometric operations; for example, a value in the range [0.3, 3] is randomly selected as the contrast adjustment factor and the contrast of the image is adjusted, and then a mirroring operation is performed along one randomly selected dimension of the three. The resolution-reduction operation is a sampling of image pixels among the spatial-domain transformations.
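The augmentation used to build the set T might be sketched as below, assuming the images are 3D CT volumes of shape (depth, height, width), since the mirroring is described over three dimensions; the contrast formula (scaling around the volume mean) is an assumption, as the patent only fixes the parameter ranges [-0.1π, 0.1π] and [0.3, 3].

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng()

def augment(volume):
    """Produce one augmented copy of a (depth, height, width) CT volume."""
    # In-plane rotation with an angle drawn from [-0.1*pi, 0.1*pi] radians.
    angle = np.degrees(rng.uniform(-0.1 * np.pi, 0.1 * np.pi))
    out = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)

    # Contrast adjustment with a factor drawn from [0.3, 3] (scaling around the mean).
    factor = rng.uniform(0.3, 3.0)
    out = (out - out.mean()) * factor + out.mean()

    # Mirroring along one randomly selected dimension of the three.
    return np.flip(out, axis=int(rng.integers(0, 3)))

# T = [augment(v) for v in S];  S = S + T   # merge the set T into the set S
```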
Optionally, the spatial-domain transformations further include a tone-scale adjustment of the image pixel values. For example, in a medical image, the CT value range commonly used for conventional abdominal CT is [-1000, 1500] HU; values lower than -1000 in the image are set to -1000, and values higher than 1500 are set to 1500.
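A small sketch of this tone-scale adjustment; rescaling the clipped range to [0, 1] afterwards is an added assumption, not stated in the patent.

```python
import numpy as np

def clip_hu(volume, lo=-1000.0, hi=1500.0):
    """Clip CT values to the abdominal range [-1000, 1500] HU."""
    clipped = np.clip(volume, lo, hi)      # values below lo become lo, above hi become hi
    return (clipped - lo) / (hi - lo)      # added assumption: rescale to [0, 1]
```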
Step S102: train the bile duct shape recognition network. Specifically, the images in the set S are used as input and the form of the bile duct target in each image is used as output, and the data are fed into a deep learning neural network for training to obtain a bile duct form recognition network.
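A minimal training-loop sketch for this step is given below; the negative log-likelihood loss on the softmax probabilities, the Adam optimizer and the hyper-parameters are assumptions, since the patent only states that the images of S are the input and the bile duct form is the output.

```python
import torch
from torch import nn

def train_recognition_network(model, loader, epochs=50, lr=1e-4):
    """Train on a DataLoader yielding (image batch, label batch),
    where label 0 = primary dilatation and 1 = secondary dilatation."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    criterion = nn.NLLLoss()                     # applied to the log of the softmax output
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            probs = model(images)                # (N, 2) probabilities (Pt, Pf)
            loss = criterion(torch.log(probs + 1e-8), labels)
            loss.backward()
            optimizer.step()
    return model
```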
Fig. 2(a) -2 (b) are schematic diagrams of a deep learning neural network structure of a bile duct morphology recognition method according to a first embodiment of the present invention, where fig. 2(a) is a schematic diagram of an overall deep learning neural network structure, and fig. 2(b) is a schematic diagram of a first convolution layer of the deep learning neural network structure.
In some optional embodiments, as shown in fig. 2(a), a first convolution layer 10, a first convolution component 30, a global average pooling layer 40 and a softmax layer 50 are connected in sequence from the input end to the output end of the deep learning neural network. As shown in fig. 2(b), the first convolution layer 10 contains a second convolution component 11 and a maximum pooling dropout component 12.
The first convolution component 30 and the second convolution component 11 are both components that perform convolution operations on the image. In the field of image processing, convolution is used to extract image information, from shallow edge and structure information to deep semantic texture information.
The maximum pooling dropout component 12 first performs a max pooling operation and then a dropout operation. The max pooling operation is used to extract image feature textures. The dropout operation deletes part of the neurons with a certain probability during neural network training, trains only the remaining neurons, and then restores the deleted neurons and continues training; its role is to prevent overfitting.
The global average pooling layer 40 performs global average pooling on the image features in order to retain background information.
The softmax layer 50 uses softmax for classification and normalization. Illustratively, it outputs the probability Pt that the bile duct target shows primary dilation and the probability Pf of secondary dilation, where Pt + Pf = 1.
Optionally, as shown in fig. 2(a), the deep learning neural network comprises 4 first convolutional layers 10.
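Putting the components of fig. 2 together, a hedged PyTorch sketch of this network could look as follows; the channel widths, kernel sizes, activation functions and dropout rate are assumptions, as the patent only fixes the order of the components and the use of 4 first convolution layers.

```python
from torch import nn

class FirstConvLayer(nn.Module):
    """'First convolution layer' 10 of fig. 2(b): second convolution
    component 11 followed by the max-pooling + dropout component 12."""
    def __init__(self, c_in, c_out, p_drop=0.25):
        super().__init__()
        # second convolution component 11 (kernel size and ReLU are assumed)
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # maximum pooling dropout component 12: max pooling first, then dropout
        self.pool_drop = nn.Sequential(nn.MaxPool2d(2), nn.Dropout(p_drop))

    def forward(self, x):
        return self.pool_drop(self.conv(x))

class BileDuctMorphologyNet(nn.Module):
    """Fig. 2(a): four first convolution layers 10, first convolution
    component 30, global average pooling layer 40 and softmax layer 50."""
    def __init__(self, channels=(16, 32, 64, 128)):
        super().__init__()
        stages, c_prev = [], 1                       # single-channel input assumed
        for c in channels:                           # the 4 first convolution layers
            stages.append(FirstConvLayer(c_prev, c))
            c_prev = c
        self.stages = nn.Sequential(*stages)
        self.conv30 = nn.Conv2d(c_prev, 2, kernel_size=1)    # first convolution component 30
        self.gap = nn.AdaptiveAvgPool2d(1)                   # global average pooling layer 40
        self.softmax = nn.Softmax(dim=1)                     # softmax layer 50

    def forward(self, x):                            # x: (N, 1, H, W)
        x = self.stages(x)
        x = self.gap(self.conv30(x)).flatten(1)      # (N, 2)
        return self.softmax(x)                       # (Pt, Pf), Pt + Pf = 1
```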
Fig. 3(a) -3 (b) are schematic diagrams of another deep learning neural network structure of the bile duct morphology recognition method according to the first embodiment of the present invention, where fig. 3(a) is a schematic diagram of an entire deep learning neural network structure, and fig. 3(b) is a schematic diagram of an attention module of the deep learning neural network structure.
In some optional embodiments, as shown in fig. 3(a), the deep learning neural network further comprises an attention module 20. The input of the attention module 20 comes from the first convolutional layer 10, and its output is combined with the output of the first convolutional layer 10; the attention module 20 includes a third convolution component 21 and a Sigmoid layer 22. As shown in fig. 3(b), the third convolution component 21 performs a convolution operation on the image, and the Sigmoid layer 22 is an activation function layer of the neural network whose role is to introduce nonlinearity.
Illustratively, referring to fig. 3(a), the output of the first convolutional layer 10 of each stage is combined with the output of the attention module 20 to serve as the input of the first convolutional layer 10 of the next stage, and the input of the attention module 20 comes from the output of the first convolutional layer 10 of the current stage.
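A sketch of this attention-augmented variant is given below, reusing FirstConvLayer from the previous sketch; how the attention output is "combined" with the convolution output is not specified in the patent, so element-wise multiplication is assumed here.

```python
from torch import nn

class AttentionModule(nn.Module):
    """Attention module 20 of fig. 3(b): third convolution component 21
    followed by the Sigmoid layer 22."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)  # third convolution component 21
        self.sigmoid = nn.Sigmoid()                               # Sigmoid layer 22

    def forward(self, x):
        return self.sigmoid(self.conv(x))         # attention weights in (0, 1)

class AttentiveStage(nn.Module):
    """One stage of fig. 3(a): the output of the first convolution layer 10
    is combined with the attention output before feeding the next stage."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.layer = FirstConvLayer(c_in, c_out)  # defined in the previous sketch
        self.attention = AttentionModule(c_out)

    def forward(self, x):
        feat = self.layer(x)                      # output of first convolution layer 10
        return feat * self.attention(feat)        # assumed combination: element-wise product
```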
Step S103: carry out target identification with the bile duct shape recognition network. Specifically, any image containing a bile duct target is fed into the bile duct form recognition network, and a form prediction of the bile duct target of that image output by the network is obtained; the form prediction includes the probability that the form is primary dilatation and the probability that it is secondary dilatation.
In some optional embodiments, in order to improve the identification accuracy of the bile duct morphology identification network, the bile duct target contained in the image is delineated using an image segmentation technique or manually, and the image is cropped according to the bile duct target. Illustratively, to determine the bile duct dilatation type contained in a medical image, the bile duct target in the image is segmented using image segmentation techniques, or the target in the medical image is manually delineated by a physician.
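Inference in step S103 then reduces to a forward pass through the trained network, assuming the input image has already been cropped, resampled and clipped in the same way as the training data; the function name predict_dilatation is illustrative.

```python
import torch

def predict_dilatation(model, image):
    """Return (Pt, Pf): probabilities of primary and secondary dilatation."""
    model.eval()
    with torch.no_grad():
        x = image.float().unsqueeze(0).unsqueeze(0)   # (H, W) -> (1, 1, H, W)
        pt, pf = model(x)[0].tolist()
    return pt, pf
```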
Example two:
the embodiment of the present invention provides a bile duct shape recognition device, which is mainly used for executing the bile duct shape recognition method provided by the above content of the embodiment of the present invention, and the bile duct shape recognition device provided by the embodiment of the present invention is specifically described below.
Fig. 4 is a schematic structural diagram of a bile duct morphology recognition device according to a second embodiment of the invention. As shown in fig. 4, the bile duct morphology recognition apparatus 200 includes the following modules:
the image preprocessing module 201 is used for providing a set S containing a plurality of images, converting the images in the set S into the same resolution and/or size, wherein the images contain a bile duct target, and the bile duct target is in a form of primary expansion or secondary expansion; and forming a set T by images obtained after geometric operation or spatial domain transformation is carried out on the images in the set S, and merging the set T into the set S.
The model training module 202 is used to take the images in the set S as input and the forms of the bile duct targets in the images as output, and to feed the data into the deep learning neural network for training to obtain the bile duct form recognition network.
The target identification module 203 is used to feed any image containing a bile duct target into the bile duct form recognition network and to acquire the form prediction of the bile duct target of that image output by the network, the form prediction including the probability that the form is primary dilatation and the probability of secondary dilatation.
Example three:
the embodiment of the invention also provides the computing equipment. As shown in fig. 5, the city area correlation calculation apparatus 300 of this embodiment includes: a processor 301, a memory 302, and programs stored in the memory 302 and executable on the processor 301. The processor 301 implements the steps in the bile duct shape recognition method embodiments described above when executing the program, such as steps S101 to S103 shown in fig. 1. Alternatively, the processor 301 executes programs to implement the functions of the modules in the embodiments of the devices, for example, the modules in fig. 4 to implement the bile duct shape recognition device.
Illustratively, the program may be partitioned into one or more modules that are stored in the memory 302 and executed by the processor 301 to implement the present invention. The one or more modules may be a series of program instruction segments capable of performing specific functions, and these segments describe the execution of the program in the computing device. For example, the program may be partitioned into a model training module and a target recognition module.
The specific functions of each module are as follows: the image preprocessing module 201 is used to provide a set S containing a plurality of images and to convert the images in the set S to the same resolution and/or size, where each image contains a bile duct target whose form is primary dilatation or secondary dilatation, and to add the images obtained by applying geometric operations or spatial-domain transformations to the images in the set S back into the set S; the model training module 202 is used to take the images in the set S as input and the forms of the bile duct targets in the images as output and to feed the data into the deep learning neural network for training to obtain a bile duct form recognition network; and the target identification module 203 is used to feed any image containing a bile duct target into the bile duct form recognition network and to acquire the form prediction of the bile duct target of that image output by the network, the form prediction including the probability that the form is primary dilatation and the probability of secondary dilatation.
The computing device can be a single-chip microcomputer system, a desktop computer, a notebook computer, a palmtop computer, a cloud server or another computing device. The computing device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the schematic diagram is merely an example and does not constitute a limitation of the computing device, which may include more or fewer components than those shown, or combine some components, or use different components; for example, the computing device may also include input/output devices and the like.
The processor may be a Micro Control Unit (MCU), a Central Processing Unit (CPU) or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the computing device and connects the various parts of the whole computing device by means of various interfaces and lines.
The memory can be used to store the programs and/or modules, and the processor implements the various functions of the bile duct shape recognition method and device by running or executing the programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function) and the like, and the data storage area may store data created according to the use of the computing device and the like. In addition, the memory may include a high-speed random access memory, and may also include a non-volatile memory such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Example four:
the module integrated with the bile duct morphology recognition device may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, which are used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes, or make equivalent substitutions for some of the technical features within the technical scope disclosed by the present invention; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A bile duct shape recognition method is characterized by comprising the following steps:
image preprocessing, wherein
a set S containing a plurality of images is provided,
the images in the set S are converted to the same resolution and/or size, each image comprising a bile duct target whose form is primary dilatation or secondary dilatation, and
the images obtained after geometric operations or spatial-domain transformations are applied to the images in the set S form a set T, and the set T is merged into the set S;
model training, wherein
the images in the set S are taken as input and the forms of the bile duct targets in the images as output and are fed into a deep learning neural network for training to obtain a bile duct form recognition network; and
target identification, wherein
any image containing a bile duct target is fed into the bile duct form recognition network, and a form prediction of the bile duct target of the image output by the bile duct form recognition network is acquired, the form prediction comprising the probability that the form is primary dilatation and the probability of secondary dilatation.
2. The method of claim 1, wherein the image preprocessing and the target identification further comprise: delineating the bile duct target contained in the image by using an image segmentation technique or manual drawing, and cropping the image according to the bile duct target.
3. The method of claim 1, further comprising: generating a set U composed of depth images of the images in the set S, and merging the set U into the set S.
4. The method of claim 1, wherein the geometric operations or spatial-domain transformations comprise: rotating the image, mirroring the image and reducing the resolution of the image.
5. The method according to claim 1, characterized in that a first convolution layer (10), a first convolution component (30), a global average pooling layer (40) and a softmax layer (50) are connected in sequence from an input end to an output end of the deep learning neural network; the first convolution layer (10) internally comprises a second convolution component (11) and a maximum pooling dropout component (12).
6. The method of claim 5, wherein the deep learning neural network further comprises: an attention module (20), the input of the attention module (20) being from the first convolutional layer (10), the output of the attention module (20) being combined with the output of the first convolutional layer (10), the attention module (20) comprising a third convolutional component (21) and a Sigmoid layer (22).
7. A bile duct morphology recognition device, characterized by comprising:
an image preprocessing module, configured to:
provide a set S containing a plurality of images,
convert the images in the set S to the same resolution and/or size, wherein each image comprises a bile duct target whose form is primary dilatation or secondary dilatation, and
add the images obtained after geometric operations or spatial-domain transformations are applied to the images in the set S into the set S;
a model training module, configured to:
take the images in the set S as input and the forms of the bile duct targets in the images as output, and feed them into a deep learning neural network for training to obtain a bile duct form recognition network; and
a target identification module, configured to:
feed any image containing a bile duct target into the bile duct form recognition network, and acquire a form prediction of the bile duct target of the image output by the bile duct form recognition network, the form prediction comprising the probability of primary dilatation and the probability of secondary dilatation.
8. A computing device, comprising: processor and memory storing a program, wherein the processor implements the method of any one of claims 1 to 6 when executing the program.
9. A computer-readable storage medium having a program stored thereon, wherein the program when executed implements the method of any of claims 1-6.
CN202111061013.6A 2021-09-10 2021-09-10 Bile duct form identification method and device Pending CN113705578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111061013.6A CN113705578A (en) 2021-09-10 2021-09-10 Bile duct form identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111061013.6A CN113705578A (en) 2021-09-10 2021-09-10 Bile duct form identification method and device

Publications (1)

Publication Number Publication Date
CN113705578A true CN113705578A (en) 2021-11-26

Family

ID=78659776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111061013.6A Pending CN113705578A (en) 2021-09-10 2021-09-10 Bile duct form identification method and device

Country Status (1)

Country Link
CN (1) CN113705578A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170032222A1 (en) * 2015-07-30 2017-02-02 Xerox Corporation Cross-trained convolutional neural networks using multimodal images
CN109215021A (en) * 2018-09-06 2019-01-15 中国石油大学(华东) A kind of cholelithiasis CT medical image method for quickly identifying based on deep learning
CN109543697A (en) * 2018-11-16 2019-03-29 西北工业大学 A kind of RGBD images steganalysis method based on deep learning
CN110070527A (en) * 2019-04-18 2019-07-30 成都雷熵科技有限公司 One kind being based on the full Connection Neural Network lesion detection method in region
CN112598613A (en) * 2019-09-16 2021-04-02 苏州速游数据技术有限公司 Determination method based on depth image segmentation and recognition for intelligent lung cancer diagnosis
CN110766051A (en) * 2019-09-20 2020-02-07 四川大学华西医院 Lung nodule morphological classification method based on neural network
CN111292307A (en) * 2020-02-10 2020-06-16 刘肖 Digestive system gallstone recognition method and positioning method
CN112233777A (en) * 2020-11-19 2021-01-15 中国石油大学(华东) Gallstone automatic identification and segmentation system based on deep learning, computer equipment and storage medium
CN112906808A (en) * 2021-03-05 2021-06-04 华南师范大学 Image classification method, system, device and medium based on convolutional neural network
CN112967254A (en) * 2021-03-08 2021-06-15 中国计量大学 Lung disease identification and detection method based on chest CT image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ISAAC RONALD WARD ET AL.: "RGB-D Image-based Object Detection: from Traditional Methods to Deep Learning Techniques", RGB-D Image Analysis and Processing, pages 1-30 *
XUANNANG XU: "Efficient Multiple Organ Localization in CT Image using 3D Region Proposal Network", IEEE Transactions on Medical Imaging *
周尚波: 《深度学习》 (Deep Learning), Xi'an: Xidian University Press, pages 141-142 *

Similar Documents

Publication Publication Date Title
CN108229490B (en) Key point detection method, neural network training method, device and electronic equipment
CN109214366B (en) Local target re-identification method, device and system
CN111814794B (en) Text detection method and device, electronic equipment and storage medium
CN112465828A (en) Image semantic segmentation method and device, electronic equipment and storage medium
CN112329702B (en) Method and device for rapid face density prediction and face detection, electronic equipment and storage medium
CN110852311A (en) Three-dimensional human hand key point positioning method and device
CN111932577B (en) Text detection method, electronic device and computer readable medium
CN114581710A (en) Image recognition method, device, equipment, readable storage medium and program product
Aljelawy et al. Detecting license plate number using ocr technique and raspberry pi 4 with camera
CN114565035A (en) Tongue picture analysis method, terminal equipment and storage medium
Wang et al. An unsupervised heterogeneous change detection method based on image translation network and post-processing algorithm
CN113269752A (en) Image detection method, device terminal equipment and storage medium
CN108154107B (en) Method for determining scene category to which remote sensing image belongs
CN113705578A (en) Bile duct form identification method and device
CN112419249B (en) Special clothing picture conversion method, terminal device and storage medium
CN115346209A (en) Motor vehicle three-dimensional target detection method and device and computer readable storage medium
CN112801960B (en) Image processing method and device, storage medium and electronic equipment
Quach Convolutional networks for vehicle track segmentation
CN113450355A (en) Method for extracting image features based on multi-membrane CT image and 3DCNN network
CN116664604B (en) Image processing method and device, storage medium and electronic equipment
Aqaileh et al. Automatic jordanian license plate detection and recognition system using deep learning techniques
CN113469172B (en) Target positioning method, model training method, interface interaction method and equipment
CN116884003B (en) Picture automatic labeling method and device, electronic equipment and storage medium
CN117912001A (en) License plate detection method, device, equipment and medium
CN117912071A (en) Image detection method, terminal device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination