CN116416485A - Bronchus image generation method, electronic device, and computer-readable storage medium - Google Patents

Bronchus image generation method, electronic device, and computer-readable storage medium

Info

Publication number
CN116416485A
Authority
CN
China
Prior art keywords
picture
adversarial network
target
lung
generative adversarial
Prior art date
Legal status
Pending
Application number
CN202111683588.1A
Other languages
Chinese (zh)
Inventor
李楠宇
陈日清
徐宏
余坤璋
刘润南
Current Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Original Assignee
Hangzhou Kunbo Biotechnology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Kunbo Biotechnology Co Ltd filed Critical Hangzhou Kunbo Biotechnology Co Ltd
Priority to CN202111683588.1A
Publication of CN116416485A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application relates to the field of medical devices, and provides a bronchus image generation method, an electronic device, and a computer-readable storage medium that can acquire bronchus images efficiently and conveniently. The method comprises the following steps: obtaining a base database, wherein the base database comprises a plurality of real intra-lung pictures and lung phantom pictures; taking a picture selected from the base database as an original picture; and inputting the original picture into a pre-trained target generative adversarial network, and obtaining the target picture, output by the pre-trained target generative adversarial network, that corresponds to the original picture. Compared with the prior art, the technical solution provided by the application can acquire bronchus images efficiently and conveniently.

Description

Bronchus image generation method, electronic device, and computer-readable storage medium
Technical Field
The present invention relates to the field of medical devices, and in particular to a bronchus image generation method, an electronic device, and a computer-readable storage medium.
Background
With the development of artificial intelligence (Artificial Intelligence, AI) technology, great breakthroughs have been made in the medical field, for example in medical image registration and endoscopic navigation. The performance of AI technology relies on massive amounts of high-quality medical data as a training set for model training. Taking bronchoscopic navigation for lung intervention as an example, the corresponding algorithm usually requires bronchoscopy to be performed on a lung phantom so that lung phantom pictures can be acquired for simulation tests, at a cost of up to tens of thousands of dollars; on the other hand, when a test needs intra-lung pictures as a training set, data also have to be acquired from a living lung.
From the above description it can be seen that obtaining either lung phantom pictures or intra-lung pictures is extremely costly, which hinders the popularization and application of AI technology in the medical field.
Disclosure of Invention
Embodiments of the present application provide a bronchus image generation method, an electronic device, and a computer-readable storage medium that can acquire bronchus images efficiently and conveniently.
In one aspect, an embodiment of the present application provides a method for generating a bronchial image, including:
obtaining a base database, wherein the base database comprises a plurality of real intra-lung pictures and lung phantom pictures;
taking a picture selected from the base database as an original picture;
inputting the original picture into a pre-trained target generative adversarial network, and obtaining the target picture, output by the pre-trained target generative adversarial network, that corresponds to the original picture.
An aspect of an embodiment of the present application further provides a bronchial image generating apparatus, including:
the acquisition module is used for acquiring a base database, wherein the base database comprises a plurality of real intra-lung pictures and lung phantom pictures;
the selection module is used for taking a picture selected from the base database as an original picture;
and the generation module is used for inputting the original picture into a pre-trained target generative adversarial network and obtaining the target picture, output by the pre-trained target generative adversarial network, that corresponds to the original picture.
An aspect of an embodiment of the present application further provides an electronic device, including: a memory and a processor;
the memory stores executable program code;
the processor, coupled to the memory, invokes the executable program code stored in the memory to perform the bronchial image generation method as provided by the above embodiments.
An aspect of the embodiments of the present application also provides a non-transitory computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements a bronchial image generation method as provided by the above embodiments.
According to the technical solution provided by the application, on the one hand, the real intra-lung pictures and lung phantom pictures contained in the base database come from a wide range of sources and are simple to acquire, so there is no acquisition-cost problem; the original picture is input into a pre-trained target generative adversarial network to obtain the target picture, output by the network, that corresponds to the original picture, so that bronchus images can be acquired efficiently and conveniently. This reduces the cost incurred when algorithms such as medical image registration and/or endoscope navigation need lung phantoms for simulation tests or need to acquire data in vivo, and facilitates the application and popularization of AI technology in the medical field. On the other hand, compared with intra-lung pictures acquired through a bronchoscope, the simulated intra-lung pictures generated by the pre-trained target generative adversarial network have a smaller modal gap with respect to the virtual bronchial tree slice pictures obtained by CT scanning; reducing this modal gap facilitates matching during endoscope navigation and improves navigation performance.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of a bronchial image generation method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a target generative adversarial network according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a bronchial image generating apparatus according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram in which the target adversarial network is a first generative adversarial network according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram in which the target adversarial network is a second generative adversarial network according to an embodiment of the present application;
FIG. 6a is a schematic structural diagram in which the target adversarial network is a third generative adversarial network according to an embodiment of the present application;
FIG. 6b is a schematic structural diagram in which the target adversarial network is a third generative adversarial network according to another embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
Referring to fig. 1, a flowchart of an implementation of a bronchial image generation method according to an embodiment of the present application is provided. As shown in fig. 1, the method mainly comprises the following steps:
step S101, a basic database is obtained, wherein the basic database comprises a plurality of real intra-lung pictures and lung body membrane pictures.
In the embodiments of the present application, the real intra-lung pictures and the lung phantom pictures in the base database are existing pictures, which means that their sources are wide and their acquisition is simple. For example, a real intra-lung picture may come from data acquired from the lung of a patient who has to undergo an intervention such as endoscope insertion, from historical data acquired from the lungs of a large number of patients who underwent such interventions before AI technology was applied in the medical field, or from intra-lung pictures generated by other neural networks or by the pre-trained target generative adversarial network mentioned in the embodiments of the present application. In other words, the real intra-lung pictures are not acquired specifically for implementing the technical solution of the embodiments of the present application; acquiring them can be regarded as simply making use of material that is already at hand, or the step of in-vivo acquisition can be avoided entirely (for example, by generating the pictures with a trained neural network). Likewise, the present embodiment simply makes use of existing lung phantom pictures. On the one hand, because existing real intra-lung pictures and existing lung phantom pictures have wide sources and are simple to acquire, the cost of obtaining them when implementing the technical solution of the application is almost negligible; on the other hand, the real intra-lung pictures and the lung phantom pictures do not need a one-to-one correspondence, which simplifies their acquisition and objectively reduces its cost.
Step S102, taking a picture selected from the base database as an original picture.
Here, the original picture is a picture that can be used as input data to a neural network (for example, a generative adversarial network) or used to train such a network.
Step S103, inputting the original picture into a pre-trained target generative adversarial network, and obtaining the target picture, output by the pre-trained target generative adversarial network, that corresponds to the original picture.
In the embodiments of the present application, the pre-trained target generative adversarial network is obtained by training a target generative adversarial network (Generative Adversarial Network, GAN), which means that the technical solution of the application also includes a process of training the GAN. Specifically, the technical solution further includes: acquiring a training set for the target generative adversarial network, wherein the training set comprises simulated lung phantom pictures, simulated intra-lung pictures, real intra-lung pictures, lung phantom pictures, and/or virtual bronchial tree slice pictures; and training the target generative adversarial network with the training set to obtain the trained target generative adversarial network.
As shown in fig. 2, in the above embodiment, the target generative adversarial network mainly includes a forward generator 201, a reverse generator 202, a forward discriminator 203 corresponding to the forward generator 201, and a reverse discriminator 204 corresponding to the reverse generator 202. The forward generator 201 and the reverse generator 202 may each be any model that outputs a picture when given, for example, an n-dimensional vector as input, such as a simple fully connected neural network or a deconvolution network; the input of the reverse generator 202 is of the same type as the output of the forward generator 201, and its output is of the same type as the input of the forward generator 201. The forward discriminator 203 and the reverse discriminator 204 may each be any discriminator model; in the embodiments of the present application they may be binary classifiers whose input is a picture and whose output is an authenticity label for that picture. As an embodiment of the present application, training the target generative adversarial network with the training set may proceed as follows: constructing a target loss function; inputting the training samples contained in the training set into the generative adversarial network, and training the network based on the target loss function until the value of the target loss function is minimal, thereby obtaining the trained target generative adversarial network. The target loss function includes an adversarial loss function (denoted L_adv), a consistency loss function (denoted L_consist), and a continuity loss function (denoted L_consec). In the embodiments of the present application, the adversarial loss function is L_adv = arg min_G arg max_D [log D(Y) + (1 - log D(G(X)))], where Y denotes image data drawn from images generated by the target generative adversarial network, X denotes image data drawn from the simulated lung phantom pictures, simulated intra-lung pictures, real intra-lung pictures, lung phantom pictures and/or virtual bronchial tree slice pictures, G(X) denotes the Generator applied to input X, and D(Y) denotes the Discriminator applied to input Y; the goal of the adversarial loss is to balance the generator G and the discriminator D so that the data generated by G are judged to be real. The consistency loss function is L_consist = arg min_G ||X - G_T(G(X))||_2, whose goal is that, after the generated data are passed back through the reverse generator G_T, the error between the reconstruction and the input data X is minimal. The continuity loss function is L_consec = ||F(G(X_t), G(X_{t-1})) - F(X_t, X_{t-1})||_2, whose goal is that the generated data and the input data exhibit the same optical-flow change, ensuring that the generated data also follow the same temporal distribution; here F(X_t, X_{t-1}) denotes the optical flow between times t and t-1 computed from the inputs, F(G(X_t), G(X_{t-1})) denotes the optical flow between times t and t-1 computed from the generated data, and X_t denotes the data X at time t.
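For illustration only, the three loss terms above could be written as in the following PyTorch sketch. The generator and discriminator modules, the optical-flow estimator flow, and all tensor names are hypothetical stand-ins introduced here, not the implementation of the present application; the discriminator is assumed to end in a sigmoid so that its output can be read as a probability.

```python
import torch
import torch.nn.functional as F


def adversarial_loss(D, real_y, fake_y):
    # L_adv: the discriminator should label real pictures as 1 and generated pictures as 0;
    # the generator is later updated to push D(fake_y) toward 1.
    d_real = D(real_y)
    d_fake = D(fake_y)
    return (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
            + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))


def consistency_loss(G, G_T, x):
    # L_consist: passing G(x) back through the reverse generator G_T should reconstruct x.
    return torch.norm(x - G_T(G(x)), p=2)


def continuity_loss(G, x_t, x_prev, flow):
    # L_consec: the optical flow between consecutive generated frames should match the
    # optical flow between the corresponding input frames. `flow` is a hypothetical
    # optical-flow estimator, e.g. a frozen pre-trained network returning a flow tensor.
    return torch.norm(flow(G(x_t), G(x_prev)) - flow(x_t, x_prev), p=2)
```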
In the above embodiment, inputting the training samples contained in the training set into the generative adversarial network and training it based on the target loss function until the value of the target loss function is minimal may specifically be carried out as follows: the training set is used as the input of the forward generator, and the parameters of the forward discriminator are adjusted according to the output of the forward generator, or the parameters of the forward generator are adjusted according to the output of the forward discriminator, until the value of the target loss function is minimal; or the training set is used as the input of the reverse generator, and the parameters of the reverse discriminator are adjusted according to the output of the reverse generator, or the parameters of the reverse generator are adjusted according to the output of the reverse discriminator, until the value of the target loss function is minimal. For example, during training the forward discriminator receives both real data and the fake data produced by the forward generator, and the parameters of the forward generator and the forward discriminator can be tuned according to the latest output: if the forward discriminator judges correctly, the parameters of the forward generator are adjusted so that the generated fake data become more realistic; if the forward discriminator judges incorrectly, its own parameters are adjusted so that the next judgment is not wrong. Training continues until the two reach an equilibrium, that is, until the adversarial loss function, the consistency loss function, and the continuity loss function are all minimal; the parameters of the reverse generator and the reverse discriminator are tuned in the same way and are not described again.
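A minimal sketch of one such alternating update in the forward direction is given below; it reuses the hypothetical loss helpers from the previous sketch and assumes the modules, optimizers, and optical-flow estimator are defined elsewhere, so it illustrates the idea rather than reproducing the training code of the present application.

```python
import torch
import torch.nn.functional as F


def train_forward_step(forward_G, reverse_G, forward_D, flow, g_opt, d_opt,
                       x_prev, x_t, y_real):
    # One alternating update in the forward direction; the reverse direction is symmetric.
    # 1) Discriminator step: judge real target-domain pictures against generated ones.
    d_opt.zero_grad()
    d_loss = adversarial_loss(forward_D, y_real, forward_G(x_t).detach())
    d_loss.backward()
    d_opt.step()

    # 2) Generator step: fool the discriminator while keeping cycle and temporal consistency.
    g_opt.zero_grad()
    d_fake = forward_D(forward_G(x_t))
    g_loss = (F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
              + consistency_loss(forward_G, reverse_G, x_t)
              + continuity_loss(forward_G, x_t, x_prev, flow))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```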
After the target generative adversarial network illustrated in fig. 2 has been trained and the value of the target loss function has reached its minimum, the pre-trained target generative adversarial network is obtained; a target picture corresponding to the original picture can then be generated by the pre-trained network according to actual requirements.
Since the target picture may be a lung phantom picture, a simulated intra-lung picture, or a virtual bronchial tree slice picture, i.e. any of these three kinds of pictures may be needed, the following cases are described separately:
when there is a need to obtain a lung phantom image, the original image of the above embodiment is a real intra-lung image in the base database, and the target countermeasure network is a first generation countermeasure network, and at this time, as an embodiment of the present application, inputting the original image into the pre-trained target generation countermeasure network, obtaining the target image corresponding to the original image output by the pre-trained target generation countermeasure network may be: and inputting the selected real intra-lung picture into a pre-trained first generation reactance network to obtain a simulated lung body membrane picture corresponding to the selected real intra-lung picture, which is output by the first generation reactance network. As shown in fig. 4, the target antagonizing network is a schematic structural diagram of the first generating antagonizing network.
When there is a need to obtain a simulated intra-lung picture, the original picture is a lung phantom picture in the base database, and the target adversarial network is a pre-trained second generative adversarial network. In this case, as another embodiment of the present application, inputting the original picture into the pre-trained target generative adversarial network and obtaining the corresponding target picture may be: inputting the selected lung phantom picture into the pre-trained second generative adversarial network to obtain the simulated intra-lung picture, output by the second generative adversarial network, that corresponds to the selected lung phantom picture. Fig. 5 is a schematic structural diagram in which the target adversarial network is the second generative adversarial network.
When there is a need to obtain a virtual bronchial tree slice picture, the original picture is a real intra-lung picture or a lung phantom picture in the base database, and the target adversarial network is a pre-trained third generative adversarial network. In this case, as another embodiment of the present application, inputting the original picture into the pre-trained target generative adversarial network and obtaining the corresponding target picture may be: inputting the selected real intra-lung picture or lung phantom picture into the pre-trained third generative adversarial network to obtain the virtual bronchial tree slice picture, output by the third generative adversarial network, that corresponds to the selected real intra-lung picture or lung phantom picture. Fig. 6a is a schematic structural diagram of the third generative adversarial network according to an embodiment of the present application, in which the third generative adversarial network uses a real intra-lung picture as the original picture; fig. 6b is a schematic structural diagram of the third generative adversarial network according to another embodiment of the present application, in which the third generative adversarial network uses a lung phantom picture as the original picture. Compared with a real intra-lung picture acquired through a bronchoscope and a virtual bronchial tree slice picture acquired through CT scanning, the modal gap between the simulated intra-lung picture produced by the pre-trained second generative adversarial network and the virtual bronchial tree slice picture produced by the pre-trained third generative adversarial network is smaller; reducing this modal gap facilitates matching during navigation and improves navigation performance. A sketch of how these three cases might be dispatched at inference time is given below.
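Purely as an illustration of the dispatch just described, the following sketch assumes three already-trained generator modules (first_gan, second_gan, third_gan) and simple file-based picture loading; these names and the preprocessing are hypothetical, not an API defined by the present application.

```python
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical preprocessing: resize a picture file and convert it to a batched tensor.
to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])


@torch.no_grad()
def generate_target_picture(original_path, need, first_gan, second_gan, third_gan):
    # Pick the pre-trained generator that matches the required target picture type.
    x = to_tensor(Image.open(original_path).convert("RGB")).unsqueeze(0)
    if need == "lung_phantom":            # original: real intra-lung picture
        return first_gan(x)
    if need == "simulated_intra_lung":    # original: lung phantom picture
        return second_gan(x)
    if need == "bronchial_tree_slice":    # original: real intra-lung or lung phantom picture
        return third_gan(x)
    raise ValueError(f"unknown target picture type: {need}")
```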
After the target pictures corresponding to the original pictures output by the pre-trained target generative adversarial network have been obtained via step S103, these generated target pictures may be added to the base database so that they can themselves be selected and input, as original pictures, into the target generative adversarial network for further processing, as sketched below. For example, a target picture that is a lung phantom picture may be input into the target generative adversarial network, which then generates a corresponding simulated intra-lung picture or virtual bronchial tree slice picture; for another example, a target picture that is a simulated intra-lung picture may be input into the network, which then generates a corresponding lung phantom picture or virtual bronchial tree slice picture, and so on.
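A possible shape of this feedback step, assuming the base database is simply a list of (tensor, kind) pairs and `generator` is one of the pre-trained generators, is sketched below; the data structure and names are assumptions made for illustration.

```python
import torch


def augment_database(base_database, generator, accepts, target_kind):
    # Append generated target pictures to the base database so that they can later be
    # selected as original pictures themselves. `base_database` is assumed to be a list
    # of (tensor, kind) pairs; `accepts` lists the kinds this generator takes as input.
    new_entries = []
    with torch.no_grad():
        for picture, kind in base_database:
            if kind not in accepts:       # e.g. the second GAN only accepts lung phantom pictures
                continue
            generated = generator(picture.unsqueeze(0)).squeeze(0)
            new_entries.append((generated, target_kind))
    base_database.extend(new_entries)
    return base_database
```

For example, augment_database(db, second_gan, {"lung_phantom"}, "simulated_intra_lung") would add a simulated intra-lung picture for every lung phantom picture already in the hypothetical database db.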
According to the technical solution provided by the application, on the one hand, the real intra-lung pictures and lung phantom pictures contained in the base database come from a wide range of sources and are simple to acquire, so there is no acquisition-cost problem; the original picture is input into a pre-trained target generative adversarial network to obtain the target picture, output by the network, that corresponds to the original picture, so that bronchus images can be acquired efficiently and conveniently. This reduces the cost incurred when algorithms such as medical image registration and/or endoscope navigation need lung phantoms for simulation tests or need to acquire data in vivo, and facilitates the application and popularization of AI technology in the medical field. On the other hand, compared with intra-lung pictures acquired through a bronchoscope, the simulated intra-lung pictures generated by the pre-trained target generative adversarial network have a smaller modal gap with respect to the virtual bronchial tree slice pictures obtained by CT scanning; reducing this modal gap facilitates matching during endoscope navigation and improves navigation performance.
Referring to fig. 3, a schematic structural diagram of a bronchial image generating apparatus according to an embodiment of the present application is shown. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown. The apparatus may be a computer terminal or a software module configured in a computer terminal. As shown in fig. 3, the apparatus includes an acquisition module 301, a selection module 302, and a generation module 303, which are described in detail below:
the acquisition module 301 is configured to acquire a base database, wherein the base database comprises a plurality of real intra-lung pictures and lung phantom pictures;
a selection module 302, configured to take a picture selected from the base database as an original picture;
the generation module 303 is configured to input the original picture into a pre-trained target generative adversarial network, and obtain the target picture, output by the pre-trained target generative adversarial network, that corresponds to the original picture.
Further, when there is a need to obtain a lung phantom picture, the original picture in the above embodiment is a real intra-lung picture in the base database, the target adversarial network is a first generative adversarial network, and the generation module 303 is specifically configured to input the selected real intra-lung picture into the pre-trained first generative adversarial network to obtain the simulated lung phantom picture, output by the first generative adversarial network, that corresponds to the selected real intra-lung picture.
Further, when there is a need to obtain a simulated intra-lung picture, the original picture in the above embodiment is a lung phantom picture in the base database, the target adversarial network is a pre-trained second generative adversarial network, and the generation module 303 is specifically configured to input the selected lung phantom picture into the pre-trained second generative adversarial network to obtain the simulated intra-lung picture, output by the second generative adversarial network, that corresponds to the selected lung phantom picture.
Further, when there is a need to obtain a virtual bronchial tree slice picture, the original picture is a real intra-lung picture or a lung phantom picture in the base database, the target adversarial network is a pre-trained third generative adversarial network, and the generation module 303 is specifically configured to input the selected real intra-lung picture or lung phantom picture into the pre-trained third generative adversarial network to obtain the virtual bronchial tree slice picture, output by the third generative adversarial network, that corresponds to the selected real intra-lung picture or lung phantom picture.
Further, the apparatus illustrated in fig. 3 may further include an adding module for adding the generated target picture to the base database so that the target picture can be input as an original picture into the target generative adversarial network for processing.
Further, the apparatus illustrated in fig. 3 may further include a training set acquisition module and a training module, where:
the training set acquisition module is used for acquiring a training set for the target generative adversarial network, wherein the training set comprises simulated lung phantom pictures, simulated intra-lung pictures, real intra-lung pictures, lung phantom pictures, and virtual bronchial tree slice pictures;
and the training module is used for training the target generative adversarial network with the training set to obtain the trained target generative adversarial network.
Further, the target generative adversarial network of the above embodiment includes a forward generator, a reverse generator, a forward discriminator corresponding to the forward generator, and a reverse discriminator corresponding to the reverse generator, and the training module may further include a construction unit and a tuning unit, where:
the construction unit is configured to construct a target loss function including an adversarial loss function, a consistency loss function, and a continuity loss function;
the tuning unit is used for inputting the training samples contained in the training set into the target generative adversarial network, and training the target generative adversarial network based on the target loss function until the value of the target loss function is minimal, so as to obtain the trained target generative adversarial network.
Further, the adversarial loss function is L_adv = arg min_G arg max_D [log D(Y) + (1 - log D(G(X)))], where Y represents image data extracted from images generated by the target generative adversarial network, X represents image data extracted from the simulated lung phantom pictures, simulated intra-lung pictures, real intra-lung pictures, lung phantom pictures and/or virtual bronchial tree slice pictures, G(X) represents the generator applied to input X, and D(Y) represents the discriminator applied to input Y; the consistency loss function is L_consist = arg min_G ||X - G_T(G(X))||_2, where G_T is the reverse generator; the continuity loss function is L_consec = ||F(G(X_t), G(X_{t-1})) - F(X_t, X_{t-1})||_2, where F(X_t, X_{t-1}) represents the optical flow between times t and t-1 computed from the inputs, F(G(X_t), G(X_{t-1})) represents the optical flow between times t and t-1 computed from the generated data, and X_t represents the data X at time t.
Further, the tuning unit may include a first parameter adjustment unit or a second parameter adjustment unit, where:
the first parameter adjusting unit is used for taking the training set as the input of the forward generator, adjusting the parameters of the forward discriminator according to the output result of the forward generator or adjusting the parameters of the forward generator according to the output result of the forward discriminator until the value of the target loss function is minimum;
and the second parameter adjusting unit is used for taking the training set as the input of the reverse generator, adjusting the parameters of the reverse discriminator according to the output result of the reverse generator or adjusting the parameters of the reverse generator according to the output result of the reverse discriminator until the value of the target loss function is minimum.
The specific process of implementing the respective functions of the above modules may refer to the relevant content in the embodiment shown in fig. 1, and will not be repeated here.
According to the technical solution provided by the application, on the one hand, the real intra-lung pictures and lung phantom pictures contained in the base database come from a wide range of sources and are simple to acquire, so there is no acquisition-cost problem; the original picture is input into a pre-trained target generative adversarial network to obtain the target picture, output by the network, that corresponds to the original picture, so that bronchus images can be acquired efficiently and conveniently. This reduces the cost incurred when algorithms such as medical image registration and/or endoscope navigation need lung phantoms for simulation tests or need to acquire data in vivo, and facilitates the application and popularization of AI technology in the medical field. On the other hand, compared with intra-lung pictures acquired through a bronchoscope, the simulated intra-lung pictures generated by the pre-trained target generative adversarial network have a smaller modal gap with respect to the virtual bronchial tree slice pictures obtained by CT scanning; reducing this modal gap facilitates matching during endoscope navigation and improves navigation performance.
Referring to fig. 7, a hardware structure of an electronic device according to an embodiment of the present application is shown.
By way of example, the electronic apparatus may be any of various types of computer system devices, either non-portable or portable, that perform wireless or wired communication. In particular, the electronic apparatus may be a desktop computer, a server, a mobile phone or smartphone (e.g., an iPhone (TM)-based or Android (TM)-based phone), a portable game device (e.g., Nintendo DS (TM), PlayStation Portable (TM), Gameboy Advance (TM), iPhone (TM)), a laptop computer, a PDA, a portable internet device, a portable medical device, a smart camera, a music player, a data storage device, another handheld device, or a device worn on the body such as a watch, an earphone, a pendant, or a headset; the electronic apparatus may also be another wearable device (e.g., electronic glasses, electronic clothing, an electronic bracelet, an electronic necklace, or another head-mounted device (HMD)).
As shown in fig. 7, the electronic device 100 may include a control circuit, which may include a storage and processing circuit 300. The storage and processing circuit 300 may include memory, such as hard disk drive memory, non-volatile memory (e.g., flash memory or other electronically programmable memory used to form solid-state drives, etc.), volatile memory (e.g., static or dynamic random access memory, etc.), and the like; the embodiments of the present application are not limited in this regard. Processing circuitry in the storage and processing circuit 300 may be used to control the operation of the electronic device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuit 300 may be used to run software in the electronic device 100, such as internet browsing applications, voice over internet protocol (Voice over Internet Protocol, VOIP) telephone call applications, email applications, media playing applications, operating system functions, and the like. Such software may be used to perform some control operations, such as image acquisition based on a camera, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functions implemented based on status indicators such as status indicators of light emitting diodes, touch event detection based on a touch sensor, functions associated with displaying information on multiple (e.g., layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in electronic device 100, to name a few.
Further, the memory stores executable program code, and a processor coupled to the memory invokes the executable program code stored in the memory to perform the bronchial image generation method described in the foregoing embodiments.
The executable program code includes the respective modules of the bronchial image generating apparatus described in the embodiment shown in fig. 3 above, such as the acquisition module 301, the selection module 302, the generation module 303, and so on. The specific process by which these modules implement their respective functions may refer to the related description of fig. 3 and will not be repeated here.
The electronic device 100 may also include input/output circuitry 420. The input/output circuit 420 is operable to enable the electronic apparatus 100 to input and output data, i.e., to allow the electronic apparatus 100 to receive data from an external device and also to allow the electronic apparatus 100 to output data from the electronic apparatus 100 to the external device. The input/output circuit 420 may further include a sensor 320. The sensors 320 may include ambient light sensors, light and capacitance based proximity sensors, touch sensors (e.g., light based touch sensors and/or capacitive touch sensors, where the touch sensors may be part of a touch display screen or may be used independently as a touch sensor structure), acceleration sensors, and other sensors, among others.
The input/output circuitry 420 may also include one or more displays, such as display 140. Display 140 may include one or a combination of several of a liquid crystal display, an organic light emitting diode display, an electronic ink display, a plasma display, and a display using other display technologies. Display 140 may include an array of touch sensors (i.e., display 140 may be a touch screen display). The touch sensor may be a capacitive touch sensor formed of an array of transparent touch sensor electrodes, such as Indium Tin Oxide (ITO) electrodes, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, etc., as embodiments of the present application are not limited.
The electronic device 100 may also include an audio component 360. Audio component 360 may be used to provide audio input and output functionality for electronic device 100. The audio components 360 in the electronic device 100 may include speakers, microphones, buzzers, tone generators, and other components for generating and detecting sound.
Communication circuitry 380 may be used to provide electronic device 100 with the ability to communicate with external devices. Communication circuitry 380 may include analog and digital input/output interface circuitry, and wireless communication circuitry based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 380 may include radio frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, wireless communication circuitry in communication circuitry 380 may include circuitry for supporting near field communication (Near Field Communication, NFC) by transmitting and receiving near field coupled electromagnetic signals. For example, the communication circuit 380 may include a near field communication antenna and a near field communication transceiver. Communication circuitry 380 may also include cellular telephone transceiver and antenna, wireless local area network transceiver circuitry and antenna, and the like.
The electronic device 100 may further include a battery, power management circuitry, and other input/output units 400. The input/output unit 400 may include buttons, levers, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes, and other status indicators, etc.
A user may control the operation of the electronic device 100 by inputting commands through the input/output circuit 420, and may use output data of the input/output circuit 420 to enable receiving status information and other outputs from the electronic device 100.
Further, the embodiments of the present application also provide a non-transitory computer-readable storage medium, which may be configured in the server in the above embodiments, and on which a computer program is stored, which when executed by a processor, implements the bronchial image generation method described in the above embodiments.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of skill in the art will appreciate that the various illustrative modules/units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other manners. For example, the apparatus/terminal embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow in the method of the above embodiment, and may also be implemented by a computer program to instruct related hardware. The computer program may be stored in a computer readable storage medium, which computer program, when being executed by a processor, may carry out the steps of the various method embodiments described above. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, executable files or in some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium can be appropriately increased or decreased according to the requirements of the jurisdiction's jurisdiction and the patent practice, for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals according to the jurisdiction and the patent practice.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (12)

1. A method of generating a bronchial image, the method comprising:
obtaining a base database, wherein the base database comprises a plurality of real intra-lung pictures and lung phantom pictures;
taking a picture selected from the base database as an original picture;
inputting the original picture into a pre-trained target generative adversarial network, and obtaining a target picture corresponding to the original picture, which is output by the pre-trained target generative adversarial network.
2. The bronchial image generation method of claim 1, wherein when there is a need to acquire a lung phantom picture, the original picture is a real intra-lung picture in the base database, and the target adversarial network is a first generative adversarial network;
the inputting the original picture into a pre-trained target generative adversarial network and obtaining a target picture corresponding to the original picture output by the pre-trained target generative adversarial network includes: obtaining a simulated lung phantom picture corresponding to the selected real intra-lung picture output by the first generative adversarial network by inputting the selected real intra-lung picture into the pre-trained first generative adversarial network.
3. The bronchial image generation method of claim 1, wherein when there is a need to acquire a simulated intra-lung picture, the original picture is a lung phantom picture in the base database, and the target adversarial network is a pre-trained second generative adversarial network;
the inputting the original picture into a pre-trained target generative adversarial network and obtaining a target picture corresponding to the original picture output by the pre-trained target generative adversarial network includes: obtaining a simulated intra-lung picture corresponding to the selected lung phantom picture output by the second generative adversarial network by inputting the selected lung phantom picture into the pre-trained second generative adversarial network.
4. The bronchial image generation method of claim 1, wherein when there is a need to obtain a virtual bronchial tree slice picture, the original picture is a real intra-lung picture or a lung phantom picture in the base database, and the target adversarial network is a pre-trained third generative adversarial network;
the inputting the original picture into a pre-trained target generative adversarial network and obtaining a target picture corresponding to the original picture output by the pre-trained target generative adversarial network includes: inputting the selected real intra-lung picture or lung phantom picture into the pre-trained third generative adversarial network, and obtaining a virtual bronchial tree slice picture corresponding to the selected real intra-lung picture or lung phantom picture output by the third generative adversarial network.
5. The bronchial image generation method of claim 1, wherein the method further comprises:
and adding the generated target picture to the base database so that the target picture is input as an original picture into the target generative adversarial network for processing.
6. The bronchial image generation method of claim 1, wherein the method further comprises:
acquiring a training set for the target generative adversarial network, wherein the training set comprises a simulated lung phantom picture, a simulated intra-lung picture, the real intra-lung picture, the lung phantom picture, and a virtual bronchial tree slice picture;
and training the target generative adversarial network with the training set to obtain the pre-trained target generative adversarial network.
7. The bronchial image generation method of claim 6, wherein the target generative adversarial network includes a forward generator, a reverse generator, a forward discriminator corresponding to the forward generator, and a reverse discriminator corresponding to the reverse generator, and wherein the training the target generative adversarial network with the training set to obtain the pre-trained target generative adversarial network comprises:
constructing a target loss function, wherein the target loss function comprises an adversarial loss function, a consistency loss function, and a continuity loss function;
inputting training samples contained in the training set into the target generative adversarial network, and training the target generative adversarial network based on the target loss function until the value of the target loss function is minimal, so as to obtain the pre-trained target generative adversarial network.
8. The bronchial image generation method according to claim 6, wherein the adversarial loss function is L_adv = arg min_G arg max_D [log D(Y) + (1 - log D(G(X)))], where Y represents image data extracted from an image generated by the target generative adversarial network, X represents image data extracted from the simulated lung phantom picture, simulated intra-lung picture, real intra-lung picture, lung phantom picture, and/or virtual bronchial tree slice picture, G(X) represents the generator with input X, and D(Y) represents the discriminator with input Y;
the consistency loss function is L_consist = arg min_G ||X - G_T(G(X))||_2, where G_T is the reverse generator;
the continuity loss function is L_consec = ||F(G(X_t), G(X_{t-1})) - F(X_t, X_{t-1})||_2, where F(X_t, X_{t-1}) represents the optical flow between times t and t-1 computed from the inputs, F(G(X_t), G(X_{t-1})) represents the optical flow between times t and t-1 computed from the generated data, and X_t represents the data X at time t.
9. The bronchial image generation method of claim 7, wherein the inputting the training samples contained in the training set into the target generative adversarial network and training the target generative adversarial network based on the target loss function until the value of the target loss function is minimal comprises:
taking the training set as the input of the forward generator, adjusting the parameters of the forward discriminator according to the output result of the forward generator or adjusting the parameters of the forward generator according to the output result of the forward discriminator until the value of the target loss function is minimum; or alternatively
And taking the training set as the input of the reverse generator, adjusting the parameters of the reverse discriminator according to the output result of the reverse generator or adjusting the parameters of the reverse generator according to the output result of the reverse discriminator until the value of the target loss function is minimum.
10. A bronchial image generation apparatus, the apparatus comprising:
the acquisition module is used for acquiring a base database, wherein the base database comprises a plurality of real intra-lung pictures and lung phantom pictures;
the selection module is used for taking a picture selected from the base database as an original picture;
and the generation module is used for inputting the original picture into a pre-trained target generative adversarial network and obtaining a target picture corresponding to the original picture, which is output by the pre-trained target generative adversarial network.
11. An electronic device, the electronic device comprising: a memory and a processor;
the memory stores executable program code;
the processor, coupled to the memory, invokes the executable program code stored in the memory to perform the bronchial image generation method according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the bronchial image generation method according to any of claims 1 to 9.
CN202111683588.1A 2021-12-31 2021-12-31 Bronchus image generation method, electronic device, and computer-readable storage medium Pending CN116416485A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111683588.1A CN116416485A (en) 2021-12-31 2021-12-31 Bronchus image generation method, electronic device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111683588.1A CN116416485A (en) 2021-12-31 2021-12-31 Bronchus image generation method, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN116416485A 2023-07-11

Family

ID=87058613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111683588.1A Pending CN116416485A (en) 2021-12-31 2021-12-31 Bronchus image generation method, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN116416485A (en)

Similar Documents

Publication Publication Date Title
US10244988B2 (en) Method, apparatus and computer program of using a bio-signal profile
US20220139516A1 (en) Ambient Clinical Intelligence System and Method
CN113435226B (en) Information processing method and device
CN111104980B (en) Method, device, equipment and storage medium for determining classification result
CN111445901A (en) Audio data acquisition method and device, electronic equipment and storage medium
CN108259988A (en) A kind of video playing control method, terminal and computer readable storage medium
CN109389564A (en) Display methods and electronic device
CN110162956A (en) The method and apparatus for determining interlock account
CN110490389B (en) Click rate prediction method, device, equipment and medium
CN108230104A (en) Using category feature generation method, mobile terminal and readable storage medium storing program for executing
KR20150093971A (en) Method for rendering music on the basis of chords and electronic device implementing the same
CN112699268A (en) Method, device and storage medium for training scoring model
CN116416485A (en) Bronchus image generation method, electronic device, and computer-readable storage medium
CN112133319A (en) Audio generation method, device, equipment and storage medium
CN111554314A (en) Noise detection method, device, terminal and storage medium
CN110837557A (en) Abstract generation method, device, equipment and medium
CN110795660A (en) Data analysis method, data analysis device, electronic device, and medium
WO2023229672A1 (en) Inferred body movement using wearable rf antennas
CN113990363A (en) Audio playing parameter adjusting method and device, electronic equipment and storage medium
CN112311652B (en) Message sending method, device, terminal and storage medium
CN109558853B (en) Audio synthesis method and terminal equipment
CN111145723A (en) Method, device, equipment and storage medium for converting audio
US20170086127A1 (en) Apparatus and method for controlling outbound communication
CN110795465B (en) User scale prediction method, device, server and storage medium
CN110163186B (en) Unlocking method based on vein recognition and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination