CN111899217A - Brain image processing method and readable storage medium - Google Patents
- Publication number: CN111899217A
- Application number: CN202010567070.0A
- Authority: CN (China)
- Prior art keywords: brain; image; brain image; spherical; morphological feature
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012 — Biomedical image inspection (G06T Image data processing or generation, in general; G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
- G06F18/22 — Matching criteria, e.g. proximity measures (G06F Electric digital data processing; G06F18/00 Pattern recognition; G06F18/20 Analysing)
- G06V10/40 — Extraction of image or video features (G06V Image or video recognition or understanding; G06V10/00 Arrangements for image or video recognition or understanding)
- G06T2207/30016 — Brain (G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)
Abstract
The application relates to a brain image processing method, a brain image processing system, a readable storage medium, and a device, belonging to the technical field of medical imaging. When a brain image is obtained, it can be compared with a known brain image: the known brain image serves as a first brain image, and the obtained brain image serves as a second brain image to be compared. The two images are processed separately to extract their respective brain morphological features, which are then analyzed and matched; if the features match, the second brain image and the known brain image are determined to belong to the same person. Because brain morphological features are a stable human biometric identifier, multiple brain images of the same patient can be accurately found by analyzing and matching these features. Compared with distinguishing patients by identification numbers, certificate numbers, or medical record numbers, the scheme of the application distinguishes brain images of different patients more accurately and effectively.
Description
Technical Field
The present application relates to the field of medical imaging technologies, and in particular, to a method, a system, a readable storage medium, and a device for processing a brain image.
Background
With the popularization of medical imaging equipment and the growing use of brain images in the auxiliary diagnosis of brain diseases, brain image processing has become increasingly important. When brain images of a large number of patients are processed, an identification number is generally allocated to each patient's brain images to distinguish different patients.
In the related art, brain images of different patients are generally distinguished by: 1. image identification numbers; 2. comparison of patient gender and age; 3. comparison of patient certificate numbers; 4. comparison of medical record numbers.
The above ways of distinguishing brain images of different patients have the following problems: 1. when a patient visits a hospital, previously acquired brain images must be found by searching earlier databases; if the identification scheme is imperfect, the patient's brain images may not be fully retrieved, or another patient's brain images may be retrieved by mistake; 2. it is common for a patient to visit different hospitals, but different hospitals use different identification systems, so there is no effective way to determine whether images from different sources belong to the same patient. No effective solution to these problems has yet been proposed.
Disclosure of Invention
Based on this, it is necessary to provide a brain image processing method, a system, a readable storage medium, and a device to address the low accuracy of traditional ways of distinguishing brain images of different patients.
In a first aspect, the present application provides a method for processing brain images, comprising the steps of:
acquiring a known first brain image and a second brain image to be compared;
acquiring a first brain morphological feature of a first brain image and a second brain morphological feature of a second brain image;
and if the first brain morphological characteristics are matched with the second brain morphological characteristics, determining that the first brain image and the second brain image belong to the same person.
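The three claimed steps can be sketched in code. This is an illustrative assumption, not the patent's implementation: the feature extractor below is a stand-in that summarizes a volume with simple per-slice statistics, whereas the application contemplates brain morphological features (e.g., cortical or gray-matter measures) derived from a segmented MRI volume, and the Euclidean matching criterion and tolerance are likewise assumed.

```python
import math

def extract_morphological_features(brain_volume):
    """Stand-in feature extractor (assumption, not the patent's method):
    summarizes a 3D volume (nested lists) as per-slice mean intensity
    and nonzero fraction."""
    feats = []
    for axial_slice in brain_volume:
        values = [v for row in axial_slice for v in row]
        feats.append(sum(values) / len(values))                       # mean intensity
        feats.append(sum(1 for v in values if v > 0) / len(values))   # nonzero fraction
    return feats

def same_person(first_image, second_image, tol=0.05):
    """The claimed three steps: acquire both images, extract brain
    morphological features from each, and determine that the images
    belong to the same person if the features match."""
    f1 = extract_morphological_features(first_image)
    f2 = extract_morphological_features(second_image)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    norm = math.sqrt(sum(a * a for a in f1))
    return dist <= tol * (norm + 1e-9)
```

In practice the matching step would use features robust to scanner and acquisition differences; the relative-distance test here merely illustrates the accept/reject decision in the third step.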
In a second aspect, the present application provides a brain image processing system comprising:
an image acquisition unit configured to acquire a known first brain image and a second brain image to be compared;
a feature acquisition unit configured to acquire a first brain morphological feature of the first brain image and a second brain morphological feature of the second brain image;
and a matching judgment unit configured to determine that the first brain image and the second brain image belong to the same person when the first brain morphological feature matches the second brain morphological feature.
In a third aspect, the present application provides a readable storage medium, on which an executable program is stored, wherein the executable program, when executed by a processor, implements the steps of any of the brain image processing methods described above.
In a fourth aspect, the present application provides a brain image processing apparatus, including a memory and a processor, where the memory stores an executable program, and the processor implements the steps of any of the above brain image processing methods when executing the executable program.
Compared with the related art, the brain image processing method, system, readable storage medium, and device provided by the application can compare a newly obtained brain image with a known brain image: the known brain image serves as the first brain image, and the obtained brain image serves as the second brain image to be compared. The two images are processed separately to extract their brain morphological features, which are analyzed and matched; if they match, the second brain image and the known brain image are determined to belong to the same person. Because a person's brain structure is complete at birth, brain morphological features are a stable human biometric identifier, and multiple brain images of the same patient can be accurately found by analyzing and matching them. Compared with distinguishing patients by identification numbers, certificate numbers, or medical record numbers, the scheme of the application distinguishes brain images of different patients more accurately and effectively.
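As an illustration of how stable morphological features could retrieve a patient's prior images across databases from different hospitals, the following hypothetical sketch matches a query feature vector against stored vectors by cosine similarity. The flat-vector representation, the threshold value, and the record identifiers are assumptions for demonstration, not part of the application.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_same_patient(query_feature, database, threshold=0.98):
    """Return identifiers of stored brain images whose morphological
    feature vectors match the query above a similarity threshold
    (threshold is an illustrative assumption)."""
    return [image_id for image_id, feature in database.items()
            if cosine_similarity(query_feature, feature) >= threshold]
```

Because the lookup keys on the biometric feature rather than on hospital-assigned numbers, images of the same patient from different sources can be grouped even when their identification systems disagree.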
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of an exemplary medical device 100 in one embodiment;
FIG. 2 is a schematic diagram of exemplary hardware and/or software components of an exemplary computing device 200 on which processing engine 140 is implemented, in one embodiment;
FIG. 3 is a diagram of exemplary hardware and/or software components of an exemplary mobile device 300 on which terminal 130 may be implemented, in one embodiment;
FIG. 4 is a flow diagram of a brain image processing method in one embodiment;
FIG. 5 is a flow diagram of MRI brain image processing in one embodiment;
FIG. 6 is a schematic structural diagram of a brain image processing system in one embodiment;
FIG. 7 is a schematic structural diagram of a brain image processing system in another embodiment.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the invention.
As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; these steps and elements do not form an exclusive list, and a method or apparatus may include other steps or elements.
Although various references are made herein to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on an imaging system and/or processor. The modules are merely illustrative and different aspects of the systems and methods may use different modules.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, various steps may be processed in reverse order or concurrently, and other operations may be added to or removed from these processes.
Fig. 1 is a schematic diagram of an exemplary medical device 100 for brain image processing, according to an embodiment. Referring to fig. 1, a medical device 100 may include a scanner 110, a network 120, one or more terminals 130, a processing engine 140, and a memory 150. All components in the medical device 100 may be interconnected by a network 120.
The scanner 110 may scan a subject and generate brain scan data related to the scanned subject. In some embodiments, the scanner 110 may be a medical imaging device, such as a CT device, a PET device, a SPECT device, an MRI device, and the like, or any combination thereof (e.g., a PET-CT device or a CT-MRI device). In the present application, the medical imaging device may particularly be an MRI device.
Reference to "image" in this application may refer to a 2D image, a 3D image, a 4D image, and/or any related data, which is not intended to limit the scope of this application. Various modifications and alterations will occur to those skilled in the art in light of the present disclosure.
The scanner 110 may include a support assembly 111, a detector assembly 112, a scanning bed 114, an electronics module 115, and a cooling assembly 116.
The support assembly 111 may support one or more components of the scanner 110, such as the detector assembly 112, the electronics module 115, and the cooling assembly 116. In some embodiments, the support assembly 111 may include a main frame, a frame base, a front cover plate, and a rear cover plate (not shown). The front cover plate may be coupled to the frame base and may be perpendicular to it. The main frame may be mounted to one side of the front cover plate and may include one or more support brackets to house the detector assembly 112 and/or the electronics module 115. The main frame may include a circular opening (e.g., detection region 113) to accommodate the scan target; in some embodiments, the opening may have another shape, such as an oval. The rear cover plate may be mounted to the side of the main frame opposite the front cover plate. The frame base may support the front cover plate, the main frame, and/or the rear cover plate. In some embodiments, the scanner 110 may include a housing to cover and protect the main frame.
The detector assembly 112 can detect radiation events (e.g., radio frequency signals) emitted from the detection region 113. In some embodiments, the detector assembly 112 may receive radiation (e.g., radio frequency signals) and generate electrical signals. The detector assembly 112 may include one or more detector cells. One or more detector units may be packaged to form a detector block. One or more detector blocks may be packaged to form a detector box. One or more detector cassettes may be mounted to form a detection ring. One or more detection rings may be mounted to form a detector module.
The scanning bed 114 may support an object to be examined and position it at a desired location in the detection region 113. In some embodiments, the subject may lie on the scanning bed 114, which may be moved to the desired position in the detection region 113. In some embodiments, the scanner 110 may have a relatively long axial field of view, such as a 2-meter axial field of view. Accordingly, the scanning bed 114 may be movable along the axial direction over a wide range (e.g., greater than 2 meters).
The electronic module 115 may collect and/or process the electrical signals generated by the detector assembly 112. The electronic module 115 may include one or a combination of an adder, a multiplier, a subtractor, an amplifier, a driver circuit, a differential circuit, an integrating circuit, a counter, a filter, an analog-to-digital converter, a lower limit detection circuit, a constant coefficient discriminator circuit, a time-to-digital converter, a coincidence circuit, and the like. The electronics module 115 may convert analog signals related to the energy of the radiation received by the detector assembly 112 into digital signals. The electronics module 115 may compare the plurality of digital signals, analyze the plurality of digital signals, and determine image data from the energy of the radiation received in the detector assembly 112. In some embodiments, if the detector assembly 112 has a large axial field of view (e.g., 0.75 meters to 2 meters), the electronics module 115 may have a high data input rate from multiple detector channels. For example, the electronic module 115 may process billions of events per second. In some embodiments, the data input rate may be related to the number of detector cells in the detector assembly 112.
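The coincidence circuit mentioned above can be illustrated with a small sketch: events from two detector channels are paired when their timestamps fall within a coincidence window. The two-pointer sweep and the 5 ns window below are assumptions for demonstration only, not the patent's circuit design.

```python
def coincidence_pairs(timestamps_a, timestamps_b, window_ns=5.0):
    """Pair events from two detector channels whose timestamps fall
    within a coincidence window, using a two-pointer sweep over the
    sorted event lists (illustrative assumption)."""
    a, b = sorted(timestamps_a), sorted(timestamps_b)
    pairs, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        dt = a[i] - b[j]
        if abs(dt) <= window_ns:
            pairs.append((a[i], b[j]))   # events coincide: record the pair
            i += 1
            j += 1
        elif dt > window_ns:
            j += 1                        # channel-B event too early: advance B
        else:
            i += 1                        # channel-A event too early: advance A
    return pairs
```

A hardware coincidence circuit performs the equivalent comparison in real time across many channels; the sorted-list sweep here only shows the windowing logic.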
The cooling assembly 116 may generate, transfer, transport, conduct, or circulate a cooling medium through the scanner 110 to absorb heat generated by the scanner 110 during imaging. In some embodiments, the cooling assembly 116 may be fully integrated into the scanner 110 and become part of the scanner 110. In some embodiments, the cooling assembly 116 may be partially integrated into the scanner 110 and associated with the scanner 110. The cooling assembly 116 may allow the scanner 110 to maintain a suitable and stable operating temperature (e.g., 25 ℃, 30 ℃, 35 ℃, etc.). In some embodiments, the cooling assembly 116 may control the temperature of one or more target components of the scanner 110. The target components may include the detector assembly 112, the electronics module 115, and/or any other components that generate heat during operation. The cooling medium may be in a gaseous state, a liquid state (e.g., water), or a combination of one or more thereof. In some embodiments, the gaseous cooling medium may be air.
The network 120 may include any suitable network that can assist the medical devices 100 in exchanging information and/or data. In some embodiments, one or more components of the medical device 100 (e.g., the scanner 110, the terminal 130, the processing engine 140, the memory 150, etc.) may communicate information and/or data with one or more other components of the medical device 100 via the network 120. For example, the processing engine 140 may obtain image data from the scanner 110 via the network 120. As another example, processing engine 140 may obtain user instructions from terminal 130 via network 120. Network 120 may include a public network (e.g., the internet), a private network (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), etc.), a wired network (e.g., ethernet), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network ("VPN"), a satellite network, a telephone network, a router, a hub, a switch, a server computer, and/or any combination thereof. By way of example only, network 120 may include a cable network, a wireline network, a fiber optic network, a telecommunications network, an intranet, a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, the like, or any combination thereof. In some embodiments, network 120 may include one or more network access points. For example, the network 120 may include wired and/or wireless network access points, such as base stations and/or internet exchange points, through which one or more components of the medical device 100 may connect to the network 120 to exchange data and/or information.
The one or more terminals 130 include a mobile device 131, a tablet computer 132, a laptop computer 133, the like, or any combination thereof. In some embodiments, mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, and the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a control device of a smart appliance, a smart monitoring device, a smart television, a smart camera, an internet phone, and the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, footwear, glasses, helmet, watch, clothing, backpack, smart jewelry, or the like, or any combination thereof. In some embodiments, mobile device 131 may include a mobile phone, a Personal Digital Assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, a laptop, a tablet, a desktop, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, virtual reality eyeshields, augmented reality helmets, augmented reality glasses, augmented reality eyeshields, and the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include Google Glass, Oculus Rift, Hololens, Gear VR, and the like. In some embodiments, the terminal 130 may be part of the processing engine 140.
The processing engine 140 may process data and/or information obtained from the scanner 110, the terminal 130, and/or the memory 150. In some embodiments, processing engine 140 may be a single server or a group of servers. The server groups may be centralized or distributed. In some embodiments, the processing engine 140 may be local or remote. For example, the processing engine 140 may access information and/or data stored in the scanner 110, the terminal 130, and/or the memory 150 through the network 120. As another example, the processing engine 140 may be directly connected to the scanner 110, the terminal 130, and/or the memory 150 to access stored information and/or data. In some embodiments, processing engine 140 may be implemented on a cloud platform. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an interconnected cloud, a multi-cloud, and the like, or any combination thereof. In some embodiments, processing engine 140 may be implemented by computing device 200 having one or more components shown in FIG. 2.
In some embodiments, the memory 150 may be connected to the network 120 for communication with one or more other components in the medical device 100 (e.g., the processing engine 140, the terminal 130, etc.). One or more components in the medical device 100 may access data or instructions stored in the memory 150 through the network 120. In some embodiments, the memory 150 may be directly connected to or in communication with one or more other components in the medical device 100 (e.g., the processing engine 140, the terminal 130, etc.). In some embodiments, memory 150 may be part of processing engine 140.
FIG. 2 is a schematic diagram of exemplary hardware and/or software components of an exemplary computing device 200 on which the processing engine 140 may be implemented, for one embodiment. As shown in FIG. 2, the computing device 200 may include an internal communication bus 210, a processor 220, a read-only memory (ROM) 230, a random-access memory (RAM) 240, a communication port 250, input/output components 260, a hard disk 270, and a user interface device 280.
For illustration only, only one processor 220 is depicted in computing device 200. However, it should be noted that the computing device 200 in the present application may also include multiple processors, and thus, operations and/or method steps described herein as being performed by one processor may also be performed by multiple processors, either jointly or separately.
The read-only memory (ROM) 230 and random-access memory (RAM) 240 may store data/information obtained from the scanner 110, the terminal 130, the memory 150, and/or any other component of the medical device 100. The ROM 230 may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), and digital versatile disc ROM. The RAM 240 may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), zero-capacitor RAM (Z-RAM), and the like. In some embodiments, the ROM 230 and RAM 240 may store one or more programs and/or instructions for performing the example methods described herein.
The communication port 250 may be connected to a network (e.g., network 120) to facilitate data communication. The communication port 250 may establish a connection between the processing engine 140 and the scanner 110, the terminal 130, and/or the memory 150. The connection may be a wired connection, a wireless connection, any other communication connection capable of enabling data transmission and/or reception, and/or any combination of these connections. The wired connection may include, for example, an electrical cable, an optical cable, a telephone line, etc., or any combination thereof. The wireless connection may include, for example, a bluetooth link, a Wi-Fi link, a WiMax link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G, etc.), and the like, or combinations thereof. In some embodiments, communication port 250 may be a standard communication port, such as RS232, RS485, and the like. In some embodiments, the communication port 250 may be a specially designed communication port. For example, the communication port 250 may be designed in accordance with digital imaging and communications in medicine (DICOM) protocol.
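As a small illustration of DICOM-aware communication, a receiver can validate that incoming bytes are a DICOM Part 10 file before parsing: such files begin with a 128-byte preamble followed by the 4-byte magic string "DICM". The helper below checks only this framing; it is a sketch, not the patent's port design.

```python
def is_dicom_file(data: bytes) -> bool:
    """Check the DICOM Part 10 framing: a 128-byte preamble followed
    by the magic bytes b'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"
```

A communication port designed per the DICOM protocol would additionally negotiate transfer syntaxes and parse the data set that follows this header; the framing check is just the first sanity test on received data.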
Input/output component 260 supports the flow of input/output data between computing device 200 and other components. In some embodiments, input/output components 260 may include input devices and output devices. Examples of input devices may include a keyboard, mouse, touch screen, microphone, etc., or a combination thereof. Examples of output devices may include a display device, speakers, printer, projector, etc., or a combination thereof. Examples of display devices may include Liquid Crystal Displays (LCDs), Light Emitting Diode (LED) based displays, flat panel displays, curved screens, television devices, Cathode Ray Tubes (CRTs), touch screens, and the like, or combinations thereof.
The computing device 200 may also include various forms of program storage units and data storage units, such as a hard disk 270, capable of storing various data files used in computer processing and/or communications, as well as possible program instructions executed by the processor 220.
The user interface device 280 may enable interaction and information exchange between the computing device 200 and a user.
Fig. 3 is a diagram of exemplary hardware and/or software components of an exemplary mobile device 300 on which the terminal 130 may be implemented, according to one embodiment. As shown in fig. 3, mobile device 300 may include antenna 310, display 320, Graphics Processing Unit (GPU)330, Central Processing Unit (CPU)340, input/output unit (I/O)350, memory 360, and storage 390. In some embodiments, any other suitable component may also be included in mobile device 300, including but not limited to a system bus or a controller (not shown). In some embodiments, a mobile operating system 370 (e.g., iOS, Android, Windows Phone, etc.) and one or more application programs 380 may be loaded from storage 390 into memory 360 for execution by CPU 340. The application 380 may include a browser or any other suitable mobile application for receiving and rendering information related to image processing or other information from the processing engine 140. User interaction with the information flow may be enabled through the I/O 350 and provided to the processing engine 140 and/or other components of the medical device 100 via the network 120.
To implement the various modules, units and their functionality described in this application, a computer hardware platform may be used as the hardware platform(s) for one or more of the elements described herein. A computer with user interface elements may be used as a Personal Computer (PC) or any other type of workstation or terminal device. The computer may also act as a server if suitably programmed. The brain image processing method, system, etc. may be implemented in the medical apparatus 100.
Fig. 4 is a schematic flow chart of a brain image processing method according to an embodiment of the present application. The brain image processing method in this embodiment includes the steps of:
step S410: acquiring a known first brain image and a second brain image to be compared;
in this step, a known first brain image may be obtained from the memory 150, which may be provided with a database for storing first brain images. The second brain image to be compared may likewise be obtained from the memory 150, or may be a new brain image currently being scanned. Specifically, the object to be scanned is placed on the table 114 of the scanner 110 of the medical device 100 and moved into the detection area 113 of the scanner 110, where the brain is scanned to obtain the second brain image to be compared;
step S420: acquiring a first brain morphological feature of a first brain image and a second brain morphological feature of a second brain image;
in this step, the first brain image and the second brain image each show the morphological contour of the brain, from which the corresponding brain morphological features can be acquired; this acquisition may be performed by the processing engine 140.
Step S430: and if the first brain morphological characteristics are matched with the second brain morphological characteristics, determining that the first brain image and the second brain image belong to the same person.
In this step, because the brain already has a complete structure at birth, the morphological features of brain images are a stable human biological identifier: they change little over time, and each person's brain morphological features are unique. Therefore, if the brain morphological features of two brain images match, it can be determined with reasonable accuracy that both belong to the same person.
In this embodiment, when a brain image is obtained, it may be compared with a known brain image: the known brain image serves as the first brain image, and the obtained brain image serves as the second brain image to be compared. The first and second brain images are processed separately to obtain their respective brain morphological features, which are then analyzed and matched; if the two sets of features match, it can be determined that the second brain image to be compared and the known brain image belong to the same person. Because the brain already has a complete structure at birth, brain image morphological features are a stable human biological identifier, so multiple brain images of the same patient can be found accurately by analyzing and matching these features. Compared with distinguishing patients by identification number, certificate number, or medical record number, the scheme of this application can distinguish the brain images of different patients more accurately and effectively.
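The comparison flow described in this embodiment can be summarized as a short routine. This is a minimal sketch only; `extract_features` and `count_matches` are hypothetical placeholders standing in for the feature-extraction and matching steps, which the application elaborates later:

```python
# Minimal sketch of the matching flow: extract features from both
# images, match them, and compare the match score to a threshold.
# extract_features and count_matches are hypothetical placeholders.

def belongs_to_same_person(first_image, second_image,
                           extract_features, count_matches,
                           threshold):
    """Compare a known brain image against a newly acquired one."""
    first_features = extract_features(first_image)
    second_features = extract_features(second_image)
    # If enough features match, the two images are judged to
    # come from the same person.
    return count_matches(first_features, second_features) >= threshold
```

The concrete forms of the feature extractor and matcher (sphere mapping with feature points, or functional-region features with a similarity score) are the subject of the embodiments below.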
It should be noted that the brain image processing method described above may be executed on a console of the medical device, on a post-processing workstation of the medical device, or on an exemplary computing device 200 implementing a processing engine on a terminal 130 capable of communicating with the medical device; it is not limited thereto and may be adjusted according to the needs of practical applications.
In one embodiment, the step of obtaining a first brain morphology feature of a first brain image and a second brain morphology feature of a second brain image comprises the steps of:
respectively inputting the first brain image and the second brain image into a preset brain spherical mapping network, and acquiring a first expanded spherical image corresponding to the first brain image and a second expanded spherical image corresponding to the second brain image;
reducing the first unfolded spherical image into a first standard spherical image, and extracting first brain morphological characteristics from the first standard spherical image;
and reducing the second unfolded spherical image into a second standard spherical image, and extracting second brain morphological characteristics from the second standard spherical image.
In this embodiment, because the human brain has an irregular shape and the brain image obtained by scanning is approximately spherical, the features of the brain image can be mapped onto an unfolded sphere by inputting the image into a preset brain spherical mapping network, yielding an expanded spherical map. The expanded spherical map is then restored into a standard spherical map, on which positions correspond and can be compared. The brain morphological features of the brain image are extracted from the standard spherical map, so that feature comparison can be performed using the positions on the standard sphere.
Further, the brain sphere mapping network may adopt a VNet-style design or another type of network design; it is trained in advance and stored in the processing engine 140. The network has a preset mapping rule, through which an input brain image is mapped into an expanded spherical map; through the inverse of this rule, the expanded spherical map can be restored into a standard spherical map, at which point the features of the brain image are mapped onto the standard sphere. When extracting the brain morphological features of the brain image from the standard spherical map, a spherical Harris feature point extraction method, which offers high accuracy and repeatability, can be used to extract the feature points.
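The mapping rule f and its inverse are not fixed by this application; the learned rule is only required to be one-to-one and invertible. As a toy stand-in, an equirectangular projection illustrates the unfold/restore round trip described above (the concrete rule here is an assumption, not the network's learned mapping):

```python
import numpy as np

# Toy illustration of an invertible unfolding rule f and its inverse.
# Here f is a simple equirectangular projection: a point on the unit
# sphere is unfolded to (longitude, latitude) on a flat map, and the
# inverse restores it to the standard sphere.

def unfold(xyz):
    """f: unit-sphere point -> (lon, lat) on the expanded map."""
    x, y, z = xyz
    lon = np.arctan2(y, x)
    lat = np.arcsin(np.clip(z, -1.0, 1.0))
    return lon, lat

def restore(lon, lat):
    """Inverse of f: (lon, lat) -> point on the standard sphere."""
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

p = np.array([0.6, 0.48, 0.64])          # a point on the unit sphere
assert np.allclose(restore(*unfold(p)), p)  # round trip recovers the point
```

In the actual system, the network output plays the role of the expanded map, and the inverse of the preset rule carries the sulcus/gyrus features back onto the standard sphere for comparison.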
In one embodiment, before the step of inputting the first brain image and the second brain image into the preset brain sphere mapping network, the method further comprises the following steps:
and respectively preprocessing the first brain image and the second brain image, wherein the preprocessing comprises at least one of rotation, resampling, size adjustment, skull removal, image non-uniform correction, histogram matching and gray level normalization.
In this embodiment, before inputting the brain image into the brain spherical mapping network, one or more kinds of preprocessing may be performed on the brain image, such as rotation, resampling, resizing, skull removal, image non-uniform correction, histogram matching, grayscale normalization, and the like.
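Two of the listed preprocessing steps can be sketched directly on a numpy volume. This is illustrative only, assuming the image is already loaded as an array; a real pipeline would use a neuroimaging toolkit for steps such as skull removal and non-uniformity correction:

```python
import numpy as np

# Sketch of two preprocessing steps: resampling to a target grid and
# gray-level normalization. Both operate on a numpy volume.

def resample_nearest(vol, new_shape):
    """Nearest-neighbour resampling of a volume to a target grid size."""
    idx = [np.round(np.linspace(0, s - 1, n)).astype(int)
           for s, n in zip(vol.shape, new_shape)]
    return vol[np.ix_(*idx)]

def normalize_gray(vol, eps=1e-8):
    """Linearly rescale intensities into the (-1, 1) interval."""
    lo, hi = vol.min(), vol.max()
    return 2.0 * (vol - lo) / (hi - lo + eps) - 1.0
```

The remaining steps (rotation, skull removal, non-uniformity correction, histogram matching) follow the same pattern of transforming the volume in place before it reaches the mapping network.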
In one embodiment, after the step of preprocessing the first brain image and the second brain image respectively, the method further comprises the following steps:
and respectively carrying out gray matter segmentation and cortical reconstruction on the preprocessed first brain image and the preprocessed second brain image, and respectively inputting the first brain image and the second brain image to a spherical brain mapping network after the cortical reconstruction.
In this embodiment, since the human brain already has complete sulcus and gyrus characteristics at birth, subsequent growth consists only of an increase in brain volume, a pruning process of the gray matter thickness of the gyri, and the myelination of white matter; the gyri are the many folds formed in the cerebral cortex. By performing gray matter segmentation on the brain image, the gray matter portion that may change can be removed; cortical reconstruction then preserves the sulcus and gyrus characteristics, such as their position, width, and depth, making the features of the brain image more distinct.
In one embodiment, the brain image processing method further comprises the steps of:
acquiring an initialized spherical mapping network, and acquiring a plurality of brain sample images and corresponding spherical images;
taking a plurality of brain sample images as training input samples, taking an expanded spherical image output by a spherical mapping network from a corresponding spherical image as a training supervision sample, and training the spherical mapping network;
and obtaining the brain spherical mapping network after training of the training input sample and the training supervision sample.
In this embodiment, the initialized spherical mapping network may be trained with a plurality of brain sample images to obtain the brain spherical mapping network. The spherical mapping network can be preset with a mapping standard used to supervise the mapping training on the brain sample images. The mapping standard may be the mapping of the freesurfer spherical map corresponding to each brain sample image: during training, the freesurfer spherical map is unfolded according to the same mapping rule, and the resulting expanded spherical map is used as the training supervision sample. Training the spherical mapping network against these samples fits it to the relationship between brain images and expanded spherical maps, yielding the brain spherical mapping network.
Specifically, during training, the freesurfer spherical map corresponding to each brain sample image can be obtained, and a dice loss function is used for the spherical mapping network. The dice loss is optimized until convergence, so that the mapping result for the brain image conforms as closely as possible to the mapping result of the freesurfer spherical map; the parameters at convergence then configure the spherical mapping network, turning it into the brain spherical mapping network and meeting the brain image mapping requirement.
The brain sample images may be historical brain images that have already been taken, simulated sample images, or the like, each with a corresponding freesurfer spherical map. Freesurfer is a tool for processing 3D brain structural image data and performing automatic segmentation of the cortex and subcortical nuclei; it is used here to process brain scan data and obtain the corresponding freesurfer spherical map. The freesurfer spherical map is only one kind of spherical map: other types of spherical maps can also be adopted in this application, with a similar application process.
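The dice loss used to supervise the mapping network is not spelled out term by term in this application; the common soft-Dice formulation is a reasonable sketch of it:

```python
import numpy as np

# Soft Dice loss: 1 minus the Dice overlap coefficient between the
# network output and the freesurfer-derived supervision map. The exact
# variant used by the application is not specified; this is the common
# soft-Dice form.

def dice_loss(pred, target, eps=1e-6):
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

The loss is 0 when prediction and supervision overlap perfectly and approaches 1 when they are disjoint, so optimizing it to convergence drives the network's mapping result toward the freesurfer mapping result.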
In one embodiment, if the first brain morphological feature matches the second brain morphological feature, the step of determining that the first brain image and the second brain image belong to the same person comprises the steps of:
and if the number of the feature points matched with the first brain morphological feature and the second brain morphological feature is larger than or equal to a first preset value, determining that the first brain image and the second brain image belong to the same person.
In this embodiment, when matching the brain morphological features of the brain images, the features may take the form of feature points: brain morphological feature points are extracted from the first brain image and the second brain image respectively, and the matching process then yields the number of matched feature points.
It should be noted that the first preset value can be adjusted according to actual needs; in the matching process, a feature matching algorithm, such as FLANN algorithm, K-nearest neighbor algorithm (KNN), brute force matching algorithm, or the like, may be employed.
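Any of the named matchers can supply the matched-point count; a brute-force nearest-neighbour matcher with Lowe's ratio test is one of the options and is easy to sketch (the descriptors-as-numpy-rows representation and the 0.75 ratio are assumptions for illustration):

```python
import numpy as np

# Brute-force descriptor matching with a ratio test: a feature point
# counts as matched only when its best match clearly beats the
# second-best. Each row of desc_a / desc_b is one feature descriptor.

def count_good_matches(desc_a, desc_b, ratio=0.75):
    good = 0
    for d in desc_a:
        dists = np.linalg.norm(desc_b - d, axis=1)
        if len(dists) < 2:
            continue
        best, second = np.partition(dists, 1)[:2]
        if best < ratio * second:
            good += 1
    return good
```

Comparing this count against the first preset value gives the same-person decision of this embodiment; FLANN or KNN matchers are drop-in replacements.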
In one embodiment, the step of obtaining a first brain morphology feature of a first brain image and a second brain morphology feature of a second brain image comprises the steps of:
respectively inputting the first brain image and the second brain image into a preset brain segmentation model, and acquiring a plurality of functional brain area images corresponding to the first brain image and a plurality of functional brain area images corresponding to the second brain image;
extracting first brain morphological features from a plurality of functional brain region maps of a first brain image;
second brain morphological features are extracted from a plurality of functional brain region maps of a second brain image.
In this embodiment, the brain morphological feature of the brain image may be obtained in another manner, the brain image is input into a preset brain segmentation model, the brain segmentation model segments the brain region in the brain image to obtain images of a plurality of functional brain regions, and then the brain morphological feature is extracted from the images of the plurality of functional brain regions, respectively.
It should be noted that the first brain morphological feature and the second brain morphological feature obtained by segmenting and extracting the brain region are different from the first brain morphological feature and the second brain morphological feature obtained by sphere mapping and extracting.
Further, the brain region may be divided into five regions, such as frontal lobe, temporal lobe, parietal lobe, occipital lobe and subcortical brain, or may be further subdivided, such as dividing the brain surface into 68 parts according to sulcus.
Further, the brain image processing method further includes the steps of:
acquiring an initialized segmentation model and acquiring a plurality of brain sample images;
taking a plurality of brain sample images as training input samples, carrying out brain partition labeling on the brain sample images by a doctor, taking partition labeling results as training target samples, and training an initialized segmentation model;
and obtaining the brain segmentation model after training of the training input sample and the training target sample.
In one embodiment, the step of extracting morphological features of the first brain from the plurality of functional brain region maps of the first brain image comprises the steps of:
extracting at least one of a relative volume, a surface area, a ratio of surface area to relative volume, and a length of a long axis of a brain region structure in each functional brain region map from a plurality of functional brain region maps of the first brain image;
or respectively taking a plurality of functional brain area images of the first brain image as mask images, mapping the mask images to corresponding regions of the first brain image, and extracting at least one of the gray average value, texture, gray histogram and variance peak value of the mapped first brain image;
or, at least one of the relative volume, the surface area, the ratio of the surface area to the relative volume, and the length of the long axis of the brain region structure in each functional brain region map of the first brain image is combined with at least one of the gray average, texture, gray histogram, and variance peak of the first brain image after mapping;
the step of extracting morphological features of the second brain from the plurality of functional brain region maps of the second brain image comprises the steps of:
extracting at least one of a relative volume, a surface area, a ratio of surface area to relative volume, and a length of a long axis of a brain region structure in each functional brain region map from a plurality of functional brain region maps of the second brain image;
or respectively taking a plurality of functional brain area images of the second brain image as mask images, mapping the mask images to corresponding regions of the second brain image, and extracting at least one of the gray average value, texture, gray histogram and variance peak value of the mapped second brain image;
or, at least one of the relative volume, the surface area, the ratio of the surface area to the relative volume, and the length of the long axis of the brain region structure in each functional brain region map of the second brain image is combined with at least one of the gray level average, texture, gray level histogram, and variance peak of the second brain image after mapping.
In this embodiment, brain morphological features can be extracted from the functional brain region maps of a brain image in several ways. One way is to extract at least one of the relative volume, the surface area, the ratio of surface area to relative volume, and the length of the long axis of the brain region structure in each functional brain region map, and to match by comparing these parameters across the functional brain region maps. Another way is to use each functional brain region map as a mask image, map it to the corresponding region of the brain image, extract at least one of the gray average, texture, gray histogram, and variance peak of the brain region after mask mapping, and match by comparing these parameters. A third way combines the two and matches using the combined parameters. These options use different parameters to extract brain morphological features, enabling diversified matching of brain images and adapting to a variety of matching requirements.
When the two ways are combined, one or more of the parameters of the first way (relative volume, surface area, ratio of surface area to relative volume, long-axis length) may be used together with one or more of the parameters of the second way (gray average, texture, gray histogram, and variance peak of the brain region after mask mapping), allowing many combinations.
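Both feature families can be read off a label volume plus the underlying gray-level image. A minimal sketch, assuming one integer label per functional region (surface area and long-axis length are omitted here, since they would require a region mesh):

```python
import numpy as np

# Per-region features from a label volume and the gray-level image:
# relative volume (first family), plus mask-mapped gray statistics
# such as the mean and a 16-bin histogram (second family). The two
# parts are concatenated into one feature vector, as described above.

def region_features(labels, image, region_id):
    mask = labels == region_id
    relative_volume = mask.sum() / labels.size     # first feature family
    voxels = image[mask]
    gray_mean = voxels.mean()                      # second feature family
    gray_hist, _ = np.histogram(voxels, bins=16)
    return np.concatenate([[relative_volume, gray_mean], gray_hist])
```

One such vector per functional brain region gives the per-region features that the similarity comparison below operates on.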
In one embodiment, if the first brain morphological feature matches the second brain morphological feature, the step of determining that the first brain image and the second brain image belong to the same person comprises the steps of:
and obtaining the similarity of the first brain morphological feature and the second brain morphological feature, and if the similarity is greater than or equal to a second preset value, determining that the first brain image and the second brain image belong to the same person.
In this embodiment, when matching the brain morphological features of the brain images, their similarity can be compared: after the brain morphological features are extracted from the first brain image and the second brain image respectively, the similarity between the two sets of features is computed.
When comparing similarity, the features of the first brain image and the second brain image should be acquired in the same way, such as using at least one of the relative volume, surface area, ratio of surface area to relative volume, and long-axis length of the brain region structure described above; or at least one of the gray average, texture, gray histogram, and variance peak of the functional brain regions after mask mapping; or any combination of the two. In addition, the second preset value can be adjusted according to actual needs.
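The application does not fix the similarity metric that is compared against the second preset value; cosine similarity between the two feature vectors is one common choice and serves as an illustrative assumption here:

```python
import numpy as np

# Similarity-based decision: cosine similarity between the two
# feature vectors, compared against the second preset value. The
# choice of cosine similarity is an assumption, not mandated by
# the application.

def same_person(feat_a, feat_b, second_preset_value):
    cos = np.dot(feat_a, feat_b) / (
        np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-12)
    return cos >= second_preset_value
```

Any other vector similarity (correlation, negative Euclidean distance) could be substituted, as long as both images' features are acquired in the same way.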
In one embodiment, the brain image processing method further comprises the steps of:
if the similarity is smaller than a second preset value, the first brain image and the second brain image are respectively input into a preset brain spherical mapping network, and a first expansion spherical image corresponding to the first brain image and a second expansion spherical image corresponding to the second brain image are obtained;
restoring the first unfolded spherical image into a first standard spherical image, and extracting a third brain morphological feature of the first brain image from the first standard spherical image;
reducing the second unfolded spherical image into a second standard spherical image, and extracting a fourth brain morphological feature of the second brain image from the second standard spherical image;
and if the morphological characteristics of the third brain are matched with the morphological characteristics of the fourth brain, determining that the first brain image and the second brain image belong to the same person.
In this embodiment, after the brain morphological features are extracted by segmenting the brain image into functional brain regions, if the resulting similarity is smaller than the second preset value, the features can also be extracted by brain sphere mapping and the brain images matched again. This avoids accidental errors arising from the functional-region segmentation route and further improves the accuracy of brain image matching.
It should be noted that, when determining whether the morphological feature of the third brain is matched with the morphological feature of the fourth brain, a similar processing procedure to the processing procedure for the morphological feature of the first brain and the morphological feature of the second brain in the above-mentioned spherical mapping may be adopted.
In one embodiment, the brain image processing method further comprises the steps of:
if the number of the matched characteristic points is smaller than a first preset value, respectively inputting the first brain image and the second brain image into a preset brain segmentation model, and acquiring a plurality of functional brain area images corresponding to the first brain image and a plurality of functional brain area images corresponding to the second brain image;
extracting a third brain morphological feature of the first brain image from a plurality of functional brain region maps of the first brain image;
extracting a fourth brain morphological feature of the second brain image from a plurality of functional brain region maps of the second brain image;
and if the morphological characteristics of the third brain are matched with the morphological characteristics of the fourth brain, determining that the first brain image and the second brain image belong to the same person.
In this embodiment, after the brain morphological features are extracted by brain sphere mapping, if the number of matched feature points is smaller than the first preset value, the features can also be extracted by segmenting the brain image into functional brain regions and the brain images matched again. This avoids accidental errors arising from the sphere-mapping route and further improves the accuracy of brain image matching.
The third and fourth brain morphological features obtained by brain region segmentation and extraction differ from the third and fourth brain morphological features obtained by sphere mapping and extraction. When determining whether the third brain morphological feature matches the fourth, a processing procedure similar to that used for the first and second brain morphological features in the brain region segmentation above may be adopted. The brain sphere mapping approach and the functional-brain-region segmentation approach can be used independently or together; when used together, their order is not limited.
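The fallback logic in the two embodiments above amounts to trying one extraction route and, when its decision criterion is not met, double-checking with the other. A minimal sketch, with both match functions as hypothetical placeholders:

```python
# Fallback control flow: accept when the sphere-mapping route finds
# enough matched feature points; otherwise fall back to the
# segmentation route and its similarity score. The order of the two
# routes is not limited and could be reversed.

def match_with_fallback(img_a, img_b,
                        match_by_sphere_mapping,   # returns matched-point count
                        match_by_segmentation,     # returns similarity score
                        first_preset, second_preset):
    if match_by_sphere_mapping(img_a, img_b) >= first_preset:
        return True
    # An accidental error in one route is double-checked by the other.
    return match_by_segmentation(img_a, img_b) >= second_preset
```

Swapping the two branches gives the complementary embodiment, where segmentation is tried first and sphere mapping serves as the check.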
Specifically, the brain image processing method can be applied in the brain scanning and imaging workflow of medical equipment: image features are extracted by a morphological method and used to compare images pairwise to determine whether two images belong to the same patient. The method can serve as an adjunct to traditional identity comparison, automatically searching the database to supplement images missed by the traditional method and to screen out images it matched incorrectly.
Studies have shown that the human brain has intact sulcus and gyrus characteristics at birth, and that subsequent growth is simply an increase in brain volume, a pruning process of the gray matter thickness in the gyri, and the myelination of white matter. That is, if the brain morphological features of a 5-year-old child are extracted, then when the child grows to 25 years old, the child's image can still be found by comparing it with the 5-year-old image in the brain image library. Compared with personal information such as name and age, which may be subjectively misreported or mistyped at input, and with fingerprints, which can be reproduced or changed by external force, brain image morphological features are a stable human biological identifier.
Specifically, brain morphology may refer to the analysis of neuroimages obtained by nuclear magnetic resonance, PET, or CT scanning of the human brain to extract representative features. The sulci and gyri are the many folds formed in the cerebral cortex, which is the basis of higher neural activity and is closely related to human intelligence. Based on features in the neuroimage, such as the location, width, and depth of the sulci, each person's features are as unique as the loops of a fingerprint. The scanned neural image is quantitatively analyzed and evaluated using a deep learning algorithm and serves as a high-level auxiliary mode of biological identity identification.
Meanwhile, the sulci divide the brain into different lobes, such as the frontal, temporal, parietal, and occipital lobes. If further subdivided, the brain surface can be divided into 68 parts according to the gyri. For cases where the brain has a space-occupying lesion or a lesion has been surgically removed, each lobe or sulcus can be aligned individually; if most areas match significantly, the images can be considered to belong to the same patient.
In this application, the cortex is regarded as a highly folded curved surface, which is projected onto a fixed sphere by conformal mapping; a series of feature points is then extracted on the sphere for subsequent feature matching. In addition, a representative standard feature vector is constructed from the neural image of the brain. We first segment the whole brain into five regions: frontal, temporal, parietal, occipital, and subcortical. After the brain regions are segmented, neuroimaging features can be extracted for each region as another effective feature for matching. By verifying the features extracted after the cerebral cortex is mapped onto the sphere, it can be judged whether the examined subjects are the same person; alternatively, identity can be identified through the degree to which the features of each functional area of the brain match. It should be noted that, when comparing functional-brain-region features, a mismatch in one functional region does not readily imply that the identities differ if the features of the other functional regions match, since that region may have been changed by a lesion.
In a specific application, pairs of brain structure images acquired by an MRI scanning device, such as T1- and T2-weighted images, may be used.
After acquiring the images, a unified preprocessing flow can be applied to all MRI image data pairs: rotation, resampling, resizing, skull removal, image non-uniformity correction, histogram matching, and gray-level normalization, converting all images into standard images of size 256 × 256 × 256 mm³, oriented in the standard Cartesian LPI coordinate system, with gray levels in the (-1, 1) interval.
After obtaining the standard image, gray matter segmentation and cortical reconstruction may also be performed.
In matching authentication identities, there can be two different ways:
1. identity authentication system based on spherical mapping and feature matching
For an input image pair, the two images are each mapped onto the sphere by spherical mapping (i.e., the cortical surface mapping in fig. 5), and feature extraction is then performed on the sphere. An identity authentication system based on spherical mapping and feature matching is therefore designed; its framework is described as follows:
the system mainly comprises three modules of a spherical mapping network, feature point extraction and feature point matching, as shown in figure 5.
The structure of the spherical mapping network follows the network design of VNet. First, the preprocessed original gray-level image is input into the network, whose output is a sphere unfolded according to a specific one-to-one mapping rule f. During training, the freesurfer spherical map corresponding to the training data is unfolded according to the same rule f and used as the supervision information. The output of the spherical mapping network is restored to the standard sphere according to the inverse of f, mapping the sulci and gyri onto the standard sphere.
Next, feature points on the standard sphere are extracted using the spherical Harris feature point extraction method, which is accurate and highly repeatable. After the feature points are extracted, a matching method is used to check whether enough feature points can be matched; if so, the identity of the subject is successfully verified.
2. Identity authentication system based on brain segmentation
The preprocessed brain MRI images can be divided into training data and test data. The data are labeled by professional physicians, who delineate 5 functional brain regions of the brain structure; these labels serve as the gold standard for the segmentation model.
During training, the training data and labels are resized to 256 × 256 and input into the segmentation model. When the loss function converges, a suitable model is selected as the final segmentation model. The present application uses a VNet network as the segmentation model.
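The resize step can be sketched with nearest-neighbour sampling, chosen here because label maps must not be interpolated into new, invalid label values. The function is an illustrative assumption, not the patent's implementation.

```python
import numpy as np

def resize_nn(img: np.ndarray, out_hw=(256, 256)) -> np.ndarray:
    """Nearest-neighbour resize of a 2-D slice.

    Works for both the gray image and its integer label map, since
    no new label values are invented by interpolation.
    """
    h, w = img.shape
    rows = np.arange(out_hw[0]) * h // out_hw[0]
    cols = np.arange(out_hw[1]) * w // out_hw[1]
    return img[np.ix_(rows, cols)]
```

For the gray image itself, a smoother interpolation (e.g., linear) would usually be preferred; nearest-neighbour is mandatory only for the labels.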
Feature extraction is then performed based on the segmentation result, and it can be divided into two parts. One part operates on the model-segmented images and outputs, for each substructure, features such as the relative volume, the ratio of surface area to volume, and the long-axis length. The other part uses each segmented substructure as a mask, maps it to the corresponding region of the MRI image, and extracts features of the masked image such as the gray-level mean, texture, gray-level histogram, and variance peak. The two groups of features can be used independently or concatenated into a new feature.
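The two feature groups can be sketched together for one substructure. The exact feature set and histogram binning below are illustrative assumptions; the patent lists the feature families but not a fixed vector layout.

```python
import numpy as np

def region_features(image: np.ndarray, labels: np.ndarray,
                    region_id: int) -> np.ndarray:
    """Feature vector for one segmented substructure.

    First part: a shape feature (relative volume of the region).
    Second part: the region used as a mask over the MRI image,
    from which first-order intensity statistics are extracted.
    """
    mask = labels == region_id
    rel_volume = mask.sum() / labels.size          # shape feature
    vals = image[mask]                             # masked intensities
    hist, _ = np.histogram(vals, bins=8, range=(-1, 1), density=True)
    # concatenate the two parts into a single feature vector
    return np.concatenate(([rel_volume, vals.mean(), vals.var()], hist))
```

Surface area, long-axis length, and texture measures would extend the same vector; they need mesh or co-occurrence computations omitted here.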
Finally, similarity is computed between the feature vectors of each brain region of the two compared samples, so as to judge whether the sample pair is consistent, while also observing whether each functional region is diseased.
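The per-region similarity comparison can be sketched as follows. Cosine similarity and the 0.9 threshold are illustrative assumptions; the patent only requires some similarity measure compared against a preset value.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sample_pair_consistent(regions_a, regions_b, threshold=0.9):
    """Compare feature vectors region by region and judge consistency
    from the mean similarity; the per-region scores are also returned
    so an anomalous (possibly diseased) region can be inspected."""
    sims = [cosine(a, b) for a, b in zip(regions_a, regions_b)]
    return sum(sims) / len(sims) >= threshold, sims
```

Returning the per-region scores mirrors the dual use described above: one aggregate decision for identity, plus region-level values for clinical inspection.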
The feature matching algorithm used in this method is the FLANN algorithm, but it can be replaced by other feature matching algorithms, such as the K-nearest neighbor (KNN) algorithm or a brute-force matching algorithm; various algorithms can be chosen. Second, the image is divided here into 5 functional brain regions, but the brain can be further divided into finer regions, and networks other than VNet, such as FCN-style networks like UNet, can also be chosen as the segmentation network. Finally, the method mainly uses cerebral cortex information for the judgment first and then performs identification based on the imaging features of the segmented brain regions; the two stages can also be swapped or combined to obtain the final identification result.
The method can map the preprocessed sample-pair input images to a standard sphere, extract feature points, and perform feature point matching, thereby identifying whether the sample pair comes from the same person; it can also use the brain segmentation model to obtain features such as the volume, texture, and gray level of different brain regions, and perform quantitative analysis of the corresponding functional brain regions of the samples.
According to the above brain image processing method, an embodiment of the present application further provides a brain image processing system, and the following describes an embodiment of the brain image processing system in detail.
Fig. 6 is a schematic structural diagram of a brain image processing system according to an embodiment. The brain image processing system in this embodiment includes:
an image obtaining unit 510, configured to obtain a known first brain image and a second brain image to be compared;
a feature obtaining unit 520, configured to obtain a first brain morphological feature of the first brain image and a second brain morphological feature of the second brain image;
a matching judgment unit 530, configured to determine that the first brain image and the second brain image belong to the same person when the first brain morphological feature matches the second brain morphological feature.
In the present embodiment, the brain image processing system includes an image acquisition unit 510, a feature acquisition unit 520, and a matching determination unit 530. When a brain image is obtained, the image acquisition unit 510 may compare it with a known brain image: the known image serves as the first brain image and the obtained brain image serves as the second brain image to be compared. The feature acquisition unit 520 processes the first and second brain images to obtain their respective brain morphological features, and the matching determination unit 530 analyzes and matches these features; if the two match, it can be determined that the second brain image to be compared and the known brain image belong to the same person. Because the brain structure is already complete at birth, brain image morphological features are a stable human biometric identifier; through the analysis and matching of brain morphological features, multiple brain images of the same patient can be found accurately, and compared with distinguishing patients by identification numbers, certificate numbers, or medical record numbers, the scheme of the present application can distinguish the brain images of different patients more accurately and effectively.
In one embodiment, the feature obtaining unit 520 is configured to input the first brain image and the second brain image into a preset brain spherical mapping network and obtain a first unfolded spherical map corresponding to the first brain image and a second unfolded spherical map corresponding to the second brain image; to restore the first unfolded spherical map into a first standard spherical map and extract the first brain morphological feature from it; and to restore the second unfolded spherical map into a second standard spherical map and extract the second brain morphological feature from it.
In one embodiment, the feature obtaining unit 520 is further configured to perform pre-processing on the first brain image and the second brain image respectively, where the pre-processing includes at least one of rotation, resampling, resizing, skull removal, image non-uniformity correction, histogram matching, and gray-scale normalization.
In one embodiment, the feature obtaining unit 520 is further configured to perform gray matter segmentation and cortical reconstruction on the first brain image and the second brain image after preprocessing, and input the first brain image and the second brain image into the brain sphere mapping network after the cortical reconstruction.
In one embodiment, as shown in fig. 7, the brain image processing system further includes a network training unit 540, configured to acquire an initialized spherical mapping network and acquire a plurality of brain sample images and their corresponding spherical maps; to take the plurality of brain sample images as training input samples and the unfolded spherical maps derived from the corresponding spherical maps as training supervision samples, and train the spherical mapping network; and to obtain the brain spherical mapping network after training with the training input samples and training supervision samples.
In one embodiment, the matching determining unit 530 is configured to determine that the first brain image and the second brain image belong to the same person when the number of feature points of the first brain morphological feature and the second brain morphological feature is greater than or equal to a first preset value.
In an embodiment, the feature obtaining unit 520 is configured to input the first brain image and the second brain image into a preset brain segmentation model, and obtain a plurality of functional brain region maps corresponding to the first brain image and a plurality of functional brain region maps corresponding to the second brain image; extracting first brain morphological features from a plurality of functional brain region maps of a first brain image; second brain morphological features are extracted from a plurality of functional brain region maps of a second brain image.
In one embodiment, the feature obtaining unit 520 is configured to extract at least one of a relative volume, a surface area, a ratio of the surface area to the relative volume, and a length of a long axis of a brain region structure in each functional brain region map from a plurality of functional brain region maps of the first brain image;
or, the feature obtaining unit 520 is configured to respectively use the multiple functional brain region maps of the first brain image as mask images, map the mask images to corresponding regions of the first brain image, and extract at least one of a gray average, a texture, a gray histogram, and a variance peak of the mapped first brain image;
or, the feature obtaining unit 520 is configured to combine at least one of a relative volume, a surface area, a ratio of the surface area to the relative volume, and a length of a long axis of a brain region structure in each functional brain region map of the first brain image with at least one of a gray average, a texture, a gray histogram, and a variance peak of the mapped first brain image;
in addition, the feature obtaining unit 520 is further configured to extract at least one of a relative volume, a surface area, a ratio of the surface area to the relative volume, and a length of a long axis of a brain region structure in each functional brain region map from a plurality of functional brain region maps of the second brain image;
or, the feature obtaining unit 520 is further configured to take the plurality of functional brain region maps of the second brain image as mask images, map the mask images to corresponding regions of the second brain image, and extract at least one of a gray average, a texture, a gray histogram, and a variance peak of the mapped second brain image;
alternatively, the feature obtaining unit 520 is further configured to combine at least one of a relative volume, a surface area, a ratio of the surface area to the relative volume, and a length of the long axis of the brain region structure in each functional brain region map of the second brain image with at least one of a gray average, a texture, a gray histogram, and a variance peak of the mapped second brain image.
In an embodiment, the matching determining unit 530 is configured to obtain a similarity between the first brain morphological feature and the second brain morphological feature, and determine that the first brain image and the second brain image belong to the same person if the similarity is greater than or equal to a second preset value.
In an embodiment, when the matching determination unit 530 determines that the similarity is smaller than the second preset value, the feature obtaining unit 520 is further configured to input the first brain image and the second brain image into a preset brain spherical mapping network and obtain a first unfolded spherical map corresponding to the first brain image and a second unfolded spherical map corresponding to the second brain image; to restore the first unfolded spherical map into a first standard spherical map and extract a third brain morphological feature of the first brain image from it; and to restore the second unfolded spherical map into a second standard spherical map and extract a fourth brain morphological feature of the second brain image from it;
the matching judgment unit 530 is further configured to determine that the first brain image and the second brain image belong to the same person when the third brain morphological feature matches the fourth brain morphological feature.
In an embodiment, when the matching determining unit 530 determines that the number of matched feature points is smaller than a first preset value, the feature obtaining unit 520 is further configured to input the first brain image and the second brain image to a preset brain segmentation model, respectively, and obtain a plurality of functional brain region maps corresponding to the first brain image and a plurality of functional brain region maps corresponding to the second brain image; extracting a third brain morphological feature of the first brain image from a plurality of functional brain region maps of the first brain image; extracting a fourth brain morphological feature of the second brain image from a plurality of functional brain region maps of the second brain image;
the matching judgment unit 530 is further configured to determine that the first brain image and the second brain image belong to the same person when the third brain morphological feature matches the fourth brain morphological feature.
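The fallback logic of these embodiments, one matcher first and the other only when the first does not confirm, can be sketched generically. The function and parameter names below are placeholders, not interfaces from the patent.

```python
def authenticate(img_a, img_b, primary_match, fallback_match):
    """Cascade: accept as soon as the primary matcher confirms the
    identity; otherwise consult the fallback matcher. Passing the
    matchers in the other order swaps the two stages, as the
    description above allows."""
    if primary_match(img_a, img_b):
        return True
    return fallback_match(img_a, img_b)
```

Either the spherical feature-point matcher or the segmentation-feature comparator can play the primary role; the embodiments describe both orders.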
The brain image processing system of the embodiments of the present application corresponds one-to-one with the brain image processing method, and the technical features and beneficial effects described in the embodiments of the brain image processing method are all applicable to the embodiments of the brain image processing system.
A readable storage medium having stored thereon an executable program which, when executed by a processor, carries out the steps of the above-mentioned brain image processing method.
The readable storage medium makes it possible, when a brain image is obtained, to compare it with a known brain image: the known image serves as the first brain image and the obtained brain image serves as the second brain image to be compared. The first and second brain images are processed separately to obtain their respective brain morphological features, which are then analyzed and matched; if they match, it can be determined that the second brain image to be compared and the known brain image belong to the same person. Because the brain structure is already complete at birth, brain image morphological features are a stable human biometric identifier; through the analysis and matching of brain morphological features, multiple brain images of the same patient can be found accurately, and compared with distinguishing patients by identification numbers, certificate numbers, or medical record numbers, the scheme of the present application can distinguish the brain images of different patients more accurately and effectively.
The brain image processing device comprises a memory and a processor, wherein the memory stores an executable program, and the processor realizes the steps of the brain image processing method when executing the executable program.
In the above brain image processing device, the executable program runs on the processor. When a brain image is obtained, it can be compared with a known brain image: the known brain image serves as the first brain image and the obtained brain image serves as the second brain image to be compared. The first and second brain images are processed separately to obtain their respective brain morphological features, which are then analyzed and matched; if they match, it can be determined that the second brain image to be compared and the known brain image belong to the same person. Because the brain structure is already complete at birth, brain image morphological features are a stable human biometric identifier; through the analysis and matching of brain morphological features, multiple brain images of the same patient can be found accurately, and compared with distinguishing patients by identification numbers, certificate numbers, or medical record numbers, the scheme of the present application can distinguish the brain images of different patients more accurately and effectively.
The brain image processing device may be provided in the medical device 100, or may be provided in the terminal 130 or the processing engine 140.
It will be understood by those skilled in the art that all or part of the processes of the above brain image processing method embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium. In an embodiment, the program can be stored in the storage medium of a computer system and executed by at least one processor in the computer system to implement the processes of the embodiments of the brain image processing method described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
Those skilled in the art will appreciate that all or part of the steps in the method for implementing the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a readable storage medium. Which when executed comprises the steps of the method described above. The storage medium includes: ROM/RAM, magnetic disk, optical disk, etc.
The above-described embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A method of brain image processing, the method comprising the steps of:
acquiring a known first brain image and a second brain image to be compared;
acquiring a first brain morphological feature of the first brain image and a second brain morphological feature of the second brain image;
and if the first brain morphological feature is matched with the second brain morphological feature, determining that the first brain image and the second brain image belong to the same person.
2. The brain image processing method according to claim 1, wherein the step of acquiring a first brain morphological feature of the first brain image and a second brain morphological feature of the second brain image comprises the steps of:
inputting the first brain image and the second brain image into a preset brain spherical mapping network respectively, and acquiring a first unfolded spherical map corresponding to the first brain image and a second unfolded spherical map corresponding to the second brain image;
reducing the first unfolded spherical map into a first standard spherical map, and extracting the first brain morphological characteristics from the first standard spherical map;
and reducing the second unfolded spherical map into a second standard spherical map, and extracting the second brain morphological characteristics from the second standard spherical map.
3. The brain image processing method according to claim 2, wherein before the step of inputting the first brain image and the second brain image into the preset brain sphere mapping network, the method further comprises the following steps:
respectively preprocessing the first brain image and the second brain image, wherein the preprocessing comprises at least one of rotation, resampling, size adjustment, skull removal, image non-uniform correction, histogram matching and gray level normalization;
and respectively carrying out gray matter segmentation and cortical reconstruction on the preprocessed first brain image and the second brain image, and respectively inputting the first brain image and the second brain image into the spherical brain mapping network after the cortical reconstruction.
4. The brain image processing method according to claim 2, further comprising the steps of:
acquiring an initialized spherical mapping network, and acquiring a plurality of brain sample images and corresponding spherical images;
taking the multiple brain sample images as training input samples, taking unfolded spherical maps derived from the corresponding spherical maps as training supervision samples, and training the spherical mapping network;
and obtaining the brain spherical mapping network after the training of the training input sample and the training supervision sample.
5. The method of claim 2, wherein the step of determining that the first brain image and the second brain image belong to the same person if the first brain morphological feature matches the second brain morphological feature comprises the steps of:
and if the number of the feature points of the first brain morphological feature matched with the second brain morphological feature is larger than or equal to a first preset value, determining that the first brain image and the second brain image belong to the same person.
6. The brain image processing method according to claim 1, wherein the step of acquiring a first brain morphological feature of the first brain image and a second brain morphological feature of the second brain image comprises the steps of:
inputting the first brain image and the second brain image into a preset brain segmentation model respectively, and acquiring a plurality of functional brain area maps corresponding to the first brain image and a plurality of functional brain area maps corresponding to the second brain image;
extracting the first brain morphological feature from a plurality of functional brain region maps of the first brain image;
extracting the second brain morphological feature from a plurality of functional brain region maps of the second brain image.
7. The method of claim 6, wherein the step of determining that the first brain image and the second brain image belong to the same person if the first brain morphological feature matches the second brain morphological feature comprises the steps of:
and obtaining the similarity of the first brain morphological feature and the second brain morphological feature, and if the similarity is greater than or equal to a second preset value, determining that the first brain image and the second brain image belong to the same person.
8. The brain image processing method according to claim 7, further comprising the steps of:
if the similarity is smaller than the second preset value, inputting the first brain image and the second brain image respectively into a preset brain spherical mapping network, and acquiring a first unfolded spherical map corresponding to the first brain image and a second unfolded spherical map corresponding to the second brain image;
reducing the first unfolded spherical map into a first standard spherical map, and extracting a third brain morphological feature of the first brain image from the first standard spherical map;
reducing the second unfolded spherical map into a second standard spherical map, and extracting a fourth brain morphological feature of the second brain image from the second standard spherical map;
and if the third brain morphological feature is matched with the fourth brain morphological feature, determining that the first brain image and the second brain image belong to the same person.
9. The brain image processing method according to claim 5, further comprising the steps of:
if the number of the matched characteristic points is smaller than the first preset value, respectively inputting the first brain image and the second brain image into a preset brain segmentation model, and acquiring a plurality of functional brain area images corresponding to the first brain image and a plurality of functional brain area images corresponding to the second brain image;
extracting a third brain morphological feature of the first brain image from a plurality of functional brain region maps of the first brain image;
extracting a fourth brain morphological feature of the second brain image from a plurality of functional brain region maps of the second brain image;
and if the third brain morphological feature is matched with the fourth brain morphological feature, determining that the first brain image and the second brain image belong to the same person.
10. A readable storage medium having stored thereon an executable program, which when executed by a processor performs the steps of the brain image processing method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010567070.0A CN111899217A (en) | 2020-06-19 | 2020-06-19 | Brain image processing method and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111899217A true CN111899217A (en) | 2020-11-06 |
Family
ID=73206843
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010567070.0A Pending CN111899217A (en) | 2020-06-19 | 2020-06-19 | Brain image processing method and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111899217A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170270379A1 (en) * | 2016-03-21 | 2017-09-21 | Konica Minolta Laboratory U.S.A., Inc. | Patient identification using dynamic medical images |
CN108776787A (en) * | 2018-06-04 | 2018-11-09 | 北京京东金融科技控股有限公司 | Image processing method and device, electronic equipment, storage medium |
CN109074500A (en) * | 2016-01-21 | 2018-12-21 | 医科达有限公司 | System and method for dividing the medical image of same patient |
CN109285200A (en) * | 2018-08-23 | 2019-01-29 | 上海连叶智能科技有限公司 | A kind of conversion method of the Multimodal medical image based on artificial intelligence |
CN110874614A (en) * | 2019-11-13 | 2020-03-10 | 上海联影智能医疗科技有限公司 | Brain image classification method, computer device and readable storage medium |
CN110910335A (en) * | 2018-09-15 | 2020-03-24 | 北京市商汤科技开发有限公司 | Image processing method, image processing device and computer readable storage medium |
Non-Patent Citations (1)
Title |
---|
ZHAO Bo; ZHENG Xinghua; BAI Lu; YANG Yong: "Quantitative analysis of lesion regions in MR images of leukoaraiosis", Journal of Hangzhou Dianzi University, no. 06, 15 December 2012 (2012-12-15) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11610313B2 (en) | Systems and methods for generating normative imaging data for medical image processing using deep learning | |
CN112508965B (en) | Automatic outline sketching system for normal organs in medical image | |
EP3662444B1 (en) | Systems and methods for automatic vertebrae segmentation and identification in medical images | |
CN112101451B (en) | Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block | |
CN107563434B (en) | Brain MRI image classification method and device based on three-dimensional convolutional neural network | |
CN111476777B (en) | Chest radiography image processing method, system, readable storage medium and apparatus | |
CN113239755B (en) | Medical hyperspectral image classification method based on space-spectrum fusion deep learning | |
WO2021109727A1 (en) | Method and system for drawing brain functional atlas | |
CN111951274A (en) | Image segmentation method, system, readable storage medium and device | |
CN116071401B (en) | Virtual CT image generation method and device based on deep learning | |
CN112001979B (en) | Motion artifact processing method, system, readable storage medium and apparatus | |
CN110674773A (en) | Dementia recognition system, device and storage medium | |
CN112614573A (en) | Deep learning model training method and device based on pathological image labeling tool | |
CN114332132A (en) | Image segmentation method and device and computer equipment | |
Li et al. | BrainK for structural image processing: creating electrical models of the human head | |
CN114340496A (en) | Analysis method and related device of heart coronary artery based on VRDS AI medical image | |
CN107330948B (en) | fMRI data two-dimensional visualization method based on popular learning algorithm | |
CN115861716B (en) | Glioma classification method and device based on twin neural network and image histology | |
CN111899217A (en) | Brain image processing method and readable storage medium | |
CN114399501B (en) | Deep learning convolutional neural network-based method for automatically segmenting prostate whole gland | |
CN113077474B (en) | CT image-based bed board removing method, system, electronic equipment and storage medium | |
CN112669405B (en) | Image reconstruction method, system, readable storage medium and device | |
CN116051467A (en) | Bladder cancer myolayer invasion prediction method based on multitask learning and related device | |
CN113191413B (en) | Prostate multimode MR image classification method and system based on foveal residual error network | |
Odagawa et al. | Classification with CNN features and SVM on embedded DSP core for colorectal magnified NBI endoscopic video image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||