CN116912351B - Correction method and system for intracranial structure imaging based on artificial intelligence - Google Patents


Info

Publication number
CN116912351B
CN116912351B (application CN202311170696.8A)
Authority
CN
China
Prior art keywords
image
feature set
intracranial
key
description
Prior art date
Legal status
Active
Application number
CN202311170696.8A
Other languages
Chinese (zh)
Other versions
CN116912351A (en)
Inventor
潘帆 (Pan Fan)
冯军峰 (Feng Junfeng)
郑定昌 (Zheng Dingchang)
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority: CN202311170696.8A
Publication of CN116912351A
Application granted
Publication of CN116912351B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53: Querying
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00: Image coding
    • G06T9/002: Image coding using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/10: Image acquisition
    • G06V10/16: Image acquisition using multiple overlapping images; Image stitching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

According to the artificial-intelligence-based correction method and system for intracranial structure imaging, a deriving unit is called to switch the transition vector description into the key feature set of the second intracranial structure imaging type; whether missing data exist in the key feature set of the second intracranial structure imaging type is judged by an artificial intelligence analysis thread, and if missing data exist, correction processing is carried out on the key feature set of the first intracranial structure imaging type according to the missing data. In practical operation, intracranial structure imaging may be incomplete or insufficiently clear, so that a doctor cannot diagnose a patient accurately and reliably during examination. The application therefore analyzes and processes the blurred or blocked data, improving the accuracy and reliability of intracranial structure imaging as well as working efficiency.

Description

Correction method and system for intracranial structure imaging based on artificial intelligence
Technical Field
The application relates to the technical field of image correction, in particular to an intracranial structure imaging correction method and system based on artificial intelligence.
Background
Intracranial structure detectors are now very important devices in hospitals, but after long-term operation a device may suffer from ageing or outdated technology, so that its imaging is no longer adequate.
At present, replacing a device or updating its technology requires substantial funds. Long-term research by the inventors has found that processing the intracranial structure imaging itself can mitigate the problems of equipment ageing or technological lag, but how to process the intracranial structure imaging during actual operation remains a difficult technical problem.
Disclosure of Invention
To address the above technical problems in the related art, the application provides a correction method and system for intracranial structure imaging based on artificial intelligence.
In a first aspect, there is provided a method of correcting intracranial structure imaging based on artificial intelligence, the method comprising: obtaining a key feature set of a first intracranial structure imaging type; classifying the key feature set into at least two category division matrices by adopting differentiated division units; generating an image key attribute feature set of the key feature set according to the at least two category division matrices, wherein an edge attribute feature set in the image key attribute feature set corresponds to a secondary image attribute feature, and pixel points in the image key attribute feature set correspond to constraint conditions between adjacent secondary image attribute features; invoking a compression unit to switch the image key attribute feature set into a transition vector description of the key feature set based on image description content corresponding to the edge attribute feature set; and calling a deriving unit to switch the transition vector description into a key feature set of a second intracranial structure imaging type, judging whether missing data exist in the key feature set of the second intracranial structure imaging type according to an artificial intelligence analysis thread, and if so, correcting the key feature set of the first intracranial structure imaging type according to the missing data.
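The claim above is abstract and discloses no concrete algorithms. As a purely illustrative sketch of how the five steps could be wired together (every function body, the toy arithmetic, and the data are assumptions invented here, not the patent's implementation):

```python
# Hypothetical end-to-end skeleton of the five claimed steps.

def obtain_key_features(image):
    # Step 1: key feature set of the first imaging type (here: raw values).
    return list(image)

def divide_into_matrices(features, n_groups=2):
    # Step 2: differentiated division units yield >= 2 category division matrices.
    return [features[i::n_groups] for i in range(n_groups)]

def build_attribute_feature_set(matrices):
    # Step 3: splice per-matrix results into an image key attribute feature set.
    return [v for m in matrices for v in m]

def compress_to_transition_vector(attr_set):
    # Step 4: compression unit -> transition vector description (here: a mean).
    return sum(attr_set) / len(attr_set)

def derive_and_correct(transition, first_features, second_features):
    # Step 5: detect missing data in the second-type set, correct the first-type set.
    missing = [i for i, v in enumerate(second_features) if v is None]
    corrected = list(first_features)
    for i in missing:
        corrected[i] = transition  # back-fill from the transition vector description
    return corrected, missing

image = [0.2, 0.4, 0.6, 0.8]
feats = obtain_key_features(image)
matrices = divide_into_matrices(feats)
attr = build_attribute_feature_set(matrices)
tv = compress_to_transition_vector(attr)
corrected, missing = derive_and_correct(tv, feats, [0.1, None, 0.3, None])
```

The sketch only shows the data flow between the claimed units; each placeholder would be replaced by the trained threads the description refers to.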
In an independently implemented embodiment, classifying the key feature set into at least two category division matrices by adopting differentiated division units includes: dividing the key feature set by adopting at least two different division units to obtain the at least two category division matrices.
In an independent embodiment, the generating the image key attribute feature set of the key feature set according to the at least two category-dividing matrices includes: respectively analyzing and processing the at least two kinds of division matrixes to obtain at least two analysis and processing results; and splicing the at least two analysis processing results to obtain the image key attribute feature set of the key feature set.
In an independently implemented embodiment, invoking the compression unit to switch the image key attribute feature set into the transition vector description of the key feature set based on the image description content corresponding to the edge attribute feature set comprises: invoking a compression unit thread based on intracranial image description content to switch the image key attribute feature set into the transition vector description of the key feature set based on the image description content corresponding to the edge attribute feature set; the intracranial image description content comprises a description element set and all description contents of all pixel points in the image key attribute feature set.
In an independently implemented embodiment, the intracranial image description based compression unit thread is an intracranial image description based neuro-convolution thread; invoking a compression unit thread based on intracranial image descriptive content to switch the image key attribute feature set to a transition vector description of the key feature set based on the image descriptive content corresponding to the edge attribute feature set, comprising: invoking the nerve convolution thread based on the intracranial image descriptive contents, and carrying out N continuous optimization processing on the intracranial image descriptive contents corresponding to the image key attribute feature set; and determining the transition vector description of the key feature set according to the intracranial image description content of the N continuous optimization processing.
In an independent embodiment, invoking the compression unit thread based on the intracranial image description content to perform N-continuous optimization processing on the intracranial image description content corresponding to the image key attribute feature set, including: when the compression unit based on the intracranial image descriptive content is called to carry out N continuous optimization processing, according to the shielding descriptive content of an x-th pixel yx in the image key attribute feature set after the last continuous optimization processing, image descriptive content data related to a pixel near the x-th pixel yx and all descriptive contents after the last continuous optimization processing, optimizing to obtain the shielding descriptive content of the x-th pixel yx after the continuous optimization processing; according to the shielding description content of all the pixel points after the continuous optimization treatment, optimizing to obtain all the description contents after the continuous optimization treatment; and when N is not equal to M, adding one to N, and repeating the two steps.
In an independent embodiment, the image description content data related to the near pixel point includes: the last integration information, the next integration information, the integration information of the last pixel point and the integration information of the next pixel point; integrating the undetermined feature vector corresponding to the last pixel point in the continuous optimization process and the positioning description factor of the x pixel point according to a first significance data query model to obtain the last integrated information; integrating the next corresponding undetermined feature vector of the x-th pixel point in the continuous optimization process and the positioning description factor of the x-th pixel point according to a second significance data query model to obtain the next integration information; integrating shielding description content corresponding to the last pixel point in the last continuous optimization process of the x-th pixel point and positioning description factors of the x-th pixel point according to the first significance data query model to obtain integration information of the last pixel point; integrating shielding description content corresponding to the next pixel point in the last continuous optimization process of the x-th pixel point and a positioning description factor of the x-th pixel point according to the second significance data query model to obtain integration information of the next pixel point; wherein the relevance metric values in the first salient data query model and the second salient data query model are the same or different.
In an independent embodiment, N is a fixed value set in advance.
In an independently implemented embodiment, the determining the transition vector description of the key feature set from the N-continuous optimization process of intracranial image description content comprises: and integrating the N intracranial image description contents subjected to N continuous optimization according to a third significance data query model of a set period to obtain integrated intracranial image description contents, and determining the integrated intracranial image description contents as transition vector description of the key feature set.
In a second aspect, an artificial intelligence based correction system for imaging of intracranial structures is provided, comprising a processor and a memory in communication with each other, the processor being adapted to read a computer program from the memory and execute the computer program to implement the method described above.
The correction method and system for intracranial structure imaging based on artificial intelligence provided by the embodiments of the application acquire a key feature set of a first intracranial structure imaging type; classify the key feature set into at least two category division matrices using differentiated division units; generate an image key attribute feature set of the key feature set according to the at least two category division matrices, wherein an edge attribute feature set in the image key attribute feature set corresponds to a secondary image attribute feature, and pixel points in the image key attribute feature set correspond to constraint conditions between adjacent secondary image attribute features; invoke a compression unit to switch the image key attribute feature set into a transition vector description of the key feature set based on the image description content corresponding to the edge attribute feature set; and call a deriving unit to switch the transition vector description into a key feature set of a second intracranial structure imaging type, judge whether missing data exist in the key feature set of the second intracranial structure imaging type according to the artificial intelligence analysis thread, and if so, correct the key feature set of the first intracranial structure imaging type according to the missing data. In practical operation, intracranial structure imaging may be incomplete or insufficiently clear, so that a doctor cannot diagnose a patient accurately and reliably during examination. The application therefore analyzes and processes the blurred or blocked data, improving the accuracy and reliability of intracranial structure imaging as well as working efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for correcting an imaging of an intracranial structure based on artificial intelligence according to an embodiment of the application.
FIG. 2 is a block diagram of an artificial intelligence based intracranial structure imaging correction device according to an embodiment of the present application.
FIG. 3 is a block diagram of an artificial intelligence based intracranial structure imaging correction system according to an embodiment of the present application.
Fig. 4 is a schematic diagram of connection relation of an abnormal line judging structure based on artificial intelligence according to an embodiment of the present application.
Detailed Description
In order to better understand the above technical solutions, the following detailed description of the technical solutions of the present application is made by using the accompanying drawings and specific embodiments, and it should be understood that the specific features of the embodiments and the embodiments of the present application are detailed descriptions of the technical solutions of the present application, and not limiting the technical solutions of the present application, and the technical features of the embodiments and the embodiments of the present application may be combined with each other without conflict.
Referring to fig. 1, a method for correcting an intracranial structure imaging based on artificial intelligence is shown, which may include the following steps 301-305.
Step 301: obtain a key feature set of a first intracranial structure imaging type.
Optionally, a key feature set of the first intracranial structure imaging type is obtained, and the features of the first intracranial structure imaging type are derived into a key feature set of the second intracranial structure imaging type. The first intracranial structure imaging type includes, for example: compensatory blood supply by the external carotid artery and the vertebrobasilar system after stenosis or occlusion of the bilateral internal carotid arteries, anterior cerebral arteries, or middle cerebral arteries.
The intracranial structure imaging is obtained by preprocessing vibration signals on a server. The vibration signals originate from a vibrator; the signals reflected where the vibrations contact different structures differ, and a plurality of sensors are arranged to receive the reflected vibration signals, from which the intracranial structure imaging is constructed.
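The description gives no concrete reconstruction algorithm for the reflected vibration signals. As a loose illustration only (the averaging scheme and all sensor data below are invented for this sketch), an intensity profile could be formed by pooling the reflection strength each sensor reports per position, so that strongly reflecting structures appear brighter:

```python
# Illustrative sketch (pure assumption): build a 1-D intensity profile from
# the reflected vibration signals received by multiple sensors.

def reconstruct_profile(reflections_per_sensor):
    """Average the reflection strength each sensor reports at each position."""
    n_pos = len(reflections_per_sensor[0])
    profile = []
    for pos in range(n_pos):
        readings = [sensor[pos] for sensor in reflections_per_sensor]
        profile.append(sum(readings) / len(readings))
    return profile

# Two sensors, four positions; position 2 reflects most strongly.
sensors = [
    [0.1, 0.2, 0.9, 0.2],
    [0.1, 0.2, 0.7, 0.2],
]
profile = reconstruct_profile(sensors)
```

A real system would work in 2-D or 3-D and account for propagation delay, which this toy deliberately omits.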
In step 302, the key feature set is classified into not less than two category division matrices using the differentiated division units.
The division unit is a module that divides the input description content (the key feature set) into category division matrices. The specific division scheme is empirical, set according to doctors' clinical experience.
Illustratively, the subject is processed using the differentiated division units to obtain at least two category division matrices, which may differ from one another. The same or similar data are divided into the same folder according to the set classification mode.
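The "empirical division scheme" is not specified further. A minimal sketch of differentiated division units, where each unit applies its own rule and the threshold rule below is an assumption standing in for doctors' experience:

```python
# Sketch: each division unit sorts features into category matrices ("folders")
# by its own empirical rule; differing rules give differing divisions.

def threshold_unit(features, threshold):
    """One division unit: split features into two category matrices."""
    low = [f for f in features if f < threshold]
    high = [f for f in features if f >= threshold]
    return [low, high]

features = [0.1, 0.9, 0.4, 0.8, 0.2]
# Two units with differing thresholds yield differing category division matrices.
matrices_a = threshold_unit(features, 0.5)
matrices_b = threshold_unit(features, 0.3)
```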
Step 303, generating an image key attribute feature set of the key feature set according to at least two kinds of partition matrixes, wherein an edge attribute feature set in the image key attribute feature set corresponds to a secondary image attribute feature, and a pixel point in the image key attribute feature set corresponds to a constraint condition between adjacent secondary image attribute features.
By way of example, image key attribute features may be understood as important information in intracranial structure imaging that can reveal abnormalities.
Step 304, a compression unit is called to switch the image key attribute feature set into a transition vector description of the key feature set based on the image description content corresponding to the edge attribute feature set.
By way of example, the edge attribute feature set may be understood as information that is blurred or blocked in the intracranial structure imaging.
Optionally, invoking a compression unit thread based on the intracranial image description content to switch the image key attribute feature set into a transition vector description of the key feature set; the intracranial image descriptive content comprises a descriptive element set and all descriptive contents of all pixel points in the image key attribute characteristic set.
Step 305, calling a deriving unit to switch the transition vector description into a key feature set of a second intracranial structure imaging type, judging whether missing data exists in the key feature set of the second intracranial structure imaging type according to an artificial intelligence analysis thread, and if so, correcting the key feature set of the first intracranial structure imaging type according to the missing data.
Illustratively, the transition vector description may be understood as an intermediate vector description.
The key feature set of the second intracranial structure imaging type is derived from the key feature set of the first intracranial structure imaging type.
Optionally, the second intracranial structure imaging type includes: compensatory blood supply by the external carotid artery and the vertebrobasilar system after stenosis or occlusion of the bilateral internal carotid arteries, anterior cerebral arteries, or middle cerebral arteries. The second intracranial structure imaging type differs from the first intracranial structure imaging type.
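Step 305's deriving unit, missing-data check, and correction are not given concrete form in the disclosure. A speculative sketch (the linear mapping, the None-based missing-data convention, and the back-fill rule are all assumptions made for illustration):

```python
# Sketch of step 305: derive second-type features from the transition vector
# description, flag missing entries, and use them to correct the first-type set.

def derive_second_type(transition_vector, scale=2.0):
    """Assumed deriving unit: a fixed linear map."""
    return [scale * v for v in transition_vector]

def find_missing(feature_set):
    """Assumed missing-data check: entries recorded as None are missing."""
    return [i for i, v in enumerate(feature_set) if v is None]

def correct_first_type(first_features, derived, missing_idx):
    fixed = list(first_features)
    for i in missing_idx:
        fixed[i] = derived[i] / 2.0  # invert the assumed map to back-fill
    return fixed

transition = [0.5, 0.25, 0.75]
derived = derive_second_type(transition)
observed_second = [1.0, None, 1.5]       # one entry missing
missing = find_missing(observed_second)
first = [0.5, None, 0.75]
corrected = correct_first_type(first, derived, missing)
```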
In summary, the method provided in this embodiment obtains the key feature set of the first intracranial structure imaging type; classifies the key feature set into at least two category division matrices using differentiated division units; generates an image key attribute feature set of the key feature set according to the at least two category division matrices, wherein an edge attribute feature set in the image key attribute feature set corresponds to a secondary image attribute feature, and pixel points in the image key attribute feature set correspond to constraint conditions between adjacent secondary image attribute features; invokes a compression unit to switch the image key attribute feature set into a transition vector description of the key feature set based on the image description content corresponding to the edge attribute feature set; and calls a deriving unit to switch the transition vector description into a key feature set of the second intracranial structure imaging type, judges whether missing data exist in the key feature set of the second intracranial structure imaging type according to the artificial intelligence analysis thread, and if so, corrects the key feature set of the first intracranial structure imaging type according to the missing data. In practical operation, intracranial structure imaging may be incomplete or insufficiently clear, so that a doctor cannot diagnose a patient accurately and reliably during examination. The application therefore analyzes and processes the blurred or blocked data, improving the accuracy and reliability of intracranial structure imaging as well as working efficiency.
In this embodiment, step 303 in the above embodiment may be alternatively implemented as step 3031 and step 3032, and the method may specifically include the following steps.
And step 3031, respectively analyzing and processing at least two kinds of the division matrixes to obtain at least two analysis and processing results.
Optionally, the key feature set is divided by at least two different dividing units to obtain at least two kinds of division matrices. Wherein, a category division matrix corresponds to an analysis processing result.
Step 3032: splice the at least two analysis processing results to obtain the image key attribute feature set of the key feature set, wherein an edge attribute feature set in the image key attribute feature set corresponds to a secondary image attribute feature, and pixel points in the image key attribute feature set correspond to constraint conditions between adjacent secondary image attribute features.
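Steps 3031 and 3032 can be pictured as analyse-then-concatenate. In this sketch the per-matrix analysis is a toy normalisation chosen for illustration; the patent does not specify the analysis operation:

```python
# Sketch of steps 3031-3032 (assumed operations): analyse each category
# division matrix independently, then splice the results in order.

def analyse(matrix):
    """Toy per-matrix analysis: normalise values by the matrix maximum."""
    peak = max(matrix)
    return [v / peak for v in matrix]

def splice(results):
    """Concatenate per-matrix analysis results into one attribute feature set."""
    return [v for r in results for v in r]

matrices = [[2.0, 1.0], [4.0, 2.0, 8.0]]
results = [analyse(m) for m in matrices]          # one result per matrix
attribute_feature_set = splice(results)
```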
the compression unit thread based on the intracranial image description in the present embodiment is a nerve convolution thread based on the intracranial image description. In this embodiment, step 304 in the above embodiment may be alternatively implemented as step 701 and step 702, and the method includes the following.
And step 701, calling a compression unit thread based on the image key attribute feature set, and carrying out N continuous optimization processing on intracranial image description contents corresponding to the image key attribute feature set.
In one example, N is a fixed value set in advance, and N is an integer.
Step 702: determine the transition vector description of the key feature set according to the intracranial image description content after the N continuous optimization processings.
the description content of the N continuous optimization processed intracranial image comprises the following components: and after N continuous optimization processing, the description element set AN and all description contents BN of all pixel points in the image key attribute feature set.
In this embodiment, the step 701 in the above embodiment may be alternatively implemented as the step 7011, the step 7012, and the step 7013, and the method includes the following steps.
Step 7011: when invoking the compression unit based on the intracranial image description content to perform the nth continuous optimization processing, optimize to obtain the shielding description content of the x-th pixel point yx after the present continuous optimization processing according to the shielding description content of the x-th pixel point yx after the previous continuous optimization processing, the image description content data related to the near pixel points of the x-th pixel point yx, and all the description contents after the previous continuous optimization processing in the image key attribute feature set.
The near pixel point refers to a pixel point connected with one pixel point.
For example, when the nth continuous optimization processing is performed, all the description contents after the previous continuous optimization processing are recorded as BN.1, and the shielding description content of the x-th pixel point yx after the present continuous optimization processing is obtained from its previous shielding description content, the data of its near pixel points, and BN.1.
For example, the shielding description content of the pixel point p3 in the nth continuous optimization processing is obtained according to the shielding description content of the 3rd pixel point p3 after the previous continuous optimization processing, the image description content data related to its near pixel points p0, p1, p4 and p5, and all the description contents BN.1 after the previous continuous optimization processing.
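The update rule itself is not disclosed; the equal-weight mix below is an assumption, used only to show a per-pixel update that combines the pixel's previous value, its near pixel points, and the previous global description:

```python
# Sketch of step 7011 under assumed arithmetic: the new shielding description
# of pixel p3 mixes its previous value, its neighbours' mean, and the
# previous global description BN.1.

def update_pixel(prev_self, neighbour_values, global_desc):
    neighbour_mean = sum(neighbour_values) / len(neighbour_values)
    # Equal-weight mix of self, neighbourhood, and global context (assumed).
    return (prev_self + neighbour_mean + global_desc) / 3.0

prev = [0.2, 0.4, 0.1, 0.6, 0.8, 0.4]   # shielding content of p0..p5
bn_prev = 0.5                            # all description contents BN.1
neighbours_of_p3 = [prev[0], prev[1], prev[4], prev[5]]
new_p3 = update_pixel(prev[3], neighbours_of_p3, bn_prev)
```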
Step 7012, optimizing to obtain all the description contents after the continuous optimization according to the shielding description contents of all the pixel points after the continuous optimization.
The compression unit is constructed according to the edge attribute feature set, and N continuous optimization processing is carried out on intracranial image description contents of the image key attribute feature set.
Schematically, all description contents BN after the Nth continuous optimization processing are obtained according to the shielding description contents of all pixel points in the Nth iteration.
Step 7013: when N is not equal to M, add one to N and repeat the above two steps.
Illustratively, the intracranial image description content of the image key attribute feature set is subjected to N continuous optimization processings. After the shielding description content of all pixel points yx and all description contents BN after the nth continuous optimization processing are obtained, if N is not equal to M, the (N+1)-th continuous optimization processing is carried out on the intracranial image description content of the image key attribute feature set, until the N continuous optimization processings are completed.
Step 702, determining a transition vector description of the key feature set according to the description content of the intracranial image processed by N continuous optimization.
In one example, determining a transition vector description of a key feature set from an intracranial image description of an N-continuous optimization process includes: and integrating N intracranial image description contents subjected to N continuous optimization according to a third significance data query model of a set period to obtain integrated intracranial image description contents, and determining the integrated intracranial image description contents as transition vector description of the key feature set.
Optionally, after the compression unit completes the loop iteration, the third saliency data query model is used for weighting the sample shielding description content of the pixel points to obtain the final state Ax of each pixel point.
Schematically, according to the sample shielding description content of the pixel point p7 and all description contents BN after N continuous optimization processing, a final state A7 of the pixel point p7 is obtained.
And obtaining a transition vector of the key feature set according to the final state Ax of all the optimized pixel points.
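The third saliency data query model is described only as a weighting of sampled shielding contents. A sketch under assumed weights (later optimization rounds weighted more heavily, which is an invention of this example):

```python
# Sketch of the third saliency data query model: each pixel's sampled
# shielding contents across the N rounds are weight-averaged into a final
# state Ax; the final states form the transition vector description.

def final_state(samples, weights):
    """Weighted aggregation of one pixel's sampled shielding contents."""
    total = sum(weights)
    return sum(s * w for s, w in zip(samples, weights)) / total

# Three optimization rounds sampled for two pixels; later rounds weigh more.
weights = [1.0, 2.0, 3.0]
samples_per_pixel = {"p7": [0.3, 0.6, 0.9], "p8": [0.2, 0.2, 0.2]}
transition_vector = [final_state(s, weights)
                     for s in samples_per_pixel.values()]
```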
In summary, according to the method provided by this embodiment, the compression unit of the neural convolution thread based on intracranial image description content is constructed along the dimension perpendicular to the edge attribute feature set, and N continuous optimization processings are performed on the intracranial image description content of the image key attribute feature set, which solves the problem that a traditional thread can only model word sequences of text and cannot process the analysis processing results.
Image descriptive content data associated with a pixel in proximity thereto, comprising: the last integration information, the next integration information, the integration information of the last pixel point and the integration information of the next pixel point.
The last one means that a point in the graph is determined as the end point of an edge in the graph.
The next means that a point in the graph is determined as the starting point of an edge in the graph.
And integrating the undetermined feature vector corresponding to the x pixel point in the continuous optimization process and the positioning description factor of the x pixel point according to the first significance data query model to obtain the last integrated information.
And integrating the next corresponding undetermined feature vector of the x pixel point in the continuous optimization process and the positioning description factor of the x pixel point according to the second significance data query model to obtain next integrated information.
And integrating the shielding description content corresponding to the last pixel point in the last continuous optimization process and the positioning description factor of the x-th pixel point according to the first saliency data query model to obtain the integration information of the last pixel point.
And integrating shielding description content corresponding to the next pixel point in the last continuous optimization process and positioning description factors of the x pixel point according to the second saliency data query model to obtain integration information of the next pixel point.
Optionally, the relevance metric values in the first salient data query model and the second salient data query model are the same or different.
In summary, in the method provided in this embodiment, by constructing in a dimension perpendicular to the edge attribute feature set, the information source of each pixel point may include the previous information and the next information, so that the previous information and the next information may interact semantically; and simultaneously introducing a first significance data query model and a second significance data query model so that the thread can distinguish the previous information from the next information.
With reference to fig. 2, an artificial intelligence-based intracranial structure imaging correction device 200 is provided, the device comprising:
a feature acquisition module 210 for acquiring a set of key features of a first intracranial structure imaging modality;
a matrix dividing module 220, configured to classify the key feature set into at least two kinds of division matrices using a division unit having a difference;
a feature generation module 230, configured to generate an image key attribute feature set of the key feature set according to the at least two category division matrices, where an edge attribute feature set in the image key attribute feature set corresponds to a secondary image attribute feature, and a pixel point in the image key attribute feature set corresponds to a constraint condition between adjacent secondary image attribute features;
a vector description module 240, configured to invoke a compression unit to switch the image key attribute feature set into a transition vector description of the key feature set based on image description content corresponding to the edge attribute feature set;
the imaging correction module 250 is configured to invoke the deriving unit to switch the transition vector description to a key feature set of a second intracranial structure imaging category, determine, according to an artificial intelligence analysis thread, whether missing data exists in the key feature set of the second intracranial structure imaging category, and if so, correct the key feature set of the first intracranial structure imaging category according to the missing data.
On the above basis, referring to FIG. 3, there is shown a correction system 300 for artificial intelligence based imaging of intracranial structures, comprising a processor 310 and a memory 320 in communication with each other, the processor being adapted to read a computer program from the memory and execute the computer program to implement the method described above.
On the basis of the above, the specific structural relationship of the present application is shown, including a Z-axis motor, a vibration sensor, a piezoelectric film and a vibration conduction block, and the specific distribution is shown in fig. 4.
On the basis of the above, there is also provided a computer readable storage medium on which a computer program stored which, when run, implements the above method.
In summary, based on the above scheme, a key feature set of a first intracranial structure imaging category is obtained; classifying the key feature set into at least two kinds of classification matrixes by using a classification unit with a difference; generating an image key attribute feature set of a key feature set according to at least two category division matrixes, wherein an edge attribute feature set in the image key attribute feature set corresponds to a secondary image attribute feature, and a pixel point in the image key attribute feature set corresponds to a constraint condition between adjacent secondary image attribute features; invoking a compression unit to switch the image key attribute feature set into transition vector description of the key feature set based on the image description content corresponding to the edge attribute feature set; and calling a deriving unit to switch the transition vector description into a key feature set of the second intracranial structure imaging type, judging whether missing data exists in the key feature set of the second intracranial structure imaging type according to the artificial intelligent analysis thread, and if so, correcting the key feature set of the first intracranial structure imaging type according to the missing data. During practical operation, intracranial structure imaging may have problems of incomplete imaging or insufficient definition, so that a doctor cannot accurately and reliably diagnose a patient when checking the patient. Therefore, the application analyzes and processes the blurred or blocked data, thus improving the accuracy and reliability of intracranial structure imaging and improving the working efficiency.
It should be appreciated that the systems and modules thereof shown above may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may then be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system of the present application and its modules may be implemented not only with hardware circuitry such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also with software executed by various types of processors, for example, and with a combination of the above hardware circuitry and software (e.g., firmware).
It should be noted that, the advantages that may be generated by different embodiments may be different, and in different embodiments, the advantages that may be generated may be any one or a combination of several of the above, or any other possible advantages that may be obtained.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements and adaptations of the application may occur to one skilled in the art. Such modifications, improvements, and modifications are intended to be suggested within the present disclosure, and therefore, such modifications, improvements, and adaptations are intended to be within the spirit and scope of the exemplary embodiments of the present disclosure.
Meanwhile, the present application uses specific words to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the application are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer storage medium may be any computer readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated through any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
The computer program code necessary for operation of portions of the present application may be written in any one or more programming languages, including an object oriented programming language such as Java, scala, smalltalk, eiffel, JADE, emerald, C ++, c#, vb net, python, etc., a conventional programming language such as C language, visual Basic, fortran 2003, perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, ruby and Groovy, or other programming languages, etc. The program code may execute entirely on the user's computer or as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet), or the use of services such as software as a service (SaaS) in a cloud computing environment.
Furthermore, the order in which the elements and sequences are presented, the use of numerical letters, or other designations are used in the application is not intended to limit the sequence of the processes and methods unless specifically recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure, by way of example, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements included within the spirit and scope of the embodiments of the application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in order to simplify the description of the present disclosure and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not intended to imply that more features than are required by the subject application. Indeed, less than all of the features of a single embodiment disclosed above.
In some embodiments, numbers describing the components, number of attributes are used, it being understood that such numbers being used in the description of embodiments are modified in some examples by the modifier "about," approximately, "or" substantially. Unless otherwise indicated, "about," "approximately," or "substantially" indicate that the numbers allow for adaptive variation. Accordingly, in some embodiments, numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought to be obtained by the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a method for preserving the general number of digits. Although the numerical ranges and parameters set forth herein are approximations in some embodiments for use in determining the breadth of the range, in particular embodiments, the numerical values set forth herein are as precisely as possible.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited herein is hereby incorporated by reference in its entirety. Except for the application history file that is inconsistent or conflicting with this disclosure, the file (currently or later attached to this disclosure) that limits the broadest scope of the claims of this disclosure is also excluded. It is noted that the description, definition, and/or use of the term in the appended claims controls the description, definition, and/or use of the term in this application if there is a discrepancy or conflict between the description, definition, and/or use of the term in the appended claims.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the application. Thus, by way of example, and not limitation, alternative configurations of embodiments of the application may be considered in keeping with the teachings of the application. Accordingly, the embodiments of the present application are not limited to the embodiments explicitly described and depicted herein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (5)

1. A method for modifying an imaging of an intracranial structure based on artificial intelligence, the method comprising:
obtaining a set of key features of a first intracranial structure imaging species;
classifying the key feature set into at least two kinds of classification matrixes by adopting a classification unit with a difference;
generating an image key attribute feature set of the key feature set according to the at least two category division matrixes, wherein an edge attribute feature set in the image key attribute feature set corresponds to a secondary image attribute feature, and pixel points in the image key attribute feature set correspond to constraint conditions between adjacent secondary image attribute features;
invoking a compression unit to switch the image key attribute feature set into transition vector description of the key feature set based on image description content corresponding to the edge attribute feature set;
calling a deriving unit to switch the transition vector description into a key feature set of a second intracranial structure imaging type, judging whether missing data exists in the key feature set of the second intracranial structure imaging type according to an artificial intelligent analysis thread, and if so, correcting the key feature set of the first intracranial structure imaging type according to the missing data;
wherein the generating the image key attribute feature set of the key feature set according to the at least two category division matrices includes:
respectively analyzing and processing the at least two kinds of division matrixes to obtain at least two analysis and processing results;
splicing the at least two analysis processing results to obtain an image key attribute feature set of the key feature set;
the calling compression unit switches the image key attribute feature set to a transition vector description of the key feature set based on image description content corresponding to the edge attribute feature set, and the method comprises the following steps:
invoking a compression unit thread based on intracranial image descriptive contents to switch the image key attribute characteristic set into transition vector description of the key attribute characteristic set based on the image descriptive contents corresponding to the edge attribute characteristic set; the intracranial image descriptive content comprises a descriptive element set and all descriptive contents of all pixel points in the image key attribute characteristic set;
the compression unit thread based on the intracranial image descriptive content is a nerve convolution thread based on the intracranial image descriptive content; invoking a compression unit thread based on intracranial image descriptive content to switch the image key attribute feature set to a transition vector description of the key feature set based on the image descriptive content corresponding to the edge attribute feature set, comprising:
invoking the nerve convolution thread based on the intracranial image descriptive contents, and carrying out N continuous optimization processing on the intracranial image descriptive contents corresponding to the image key attribute feature set; determining the transition vector description of the key feature set according to the intracranial image description content of the N continuous optimization processing;
the method for optimizing the intracranial image description content comprises the steps of:
when the compression unit based on the intracranial image descriptive content is called to carry out N continuous optimization processing, according to the shielding descriptive content of an x-th pixel yx in the image key attribute feature set after the last continuous optimization processing, image descriptive content data related to a pixel near the x-th pixel yx and all descriptive contents after the last continuous optimization processing, optimizing to obtain the shielding descriptive content of the x-th pixel yx after the continuous optimization processing;
according to the shielding description content of all the pixel points after the continuous optimization treatment, optimizing to obtain all the description contents after the continuous optimization treatment;
when N is not equal to M, adding one to N, and repeating the two steps;
wherein M is a fixed value set in advance.
2. The method of claim 1, wherein classifying the set of key features into not less than two category classification matrices using a differencing partition unit comprises: and classifying the key feature sets by adopting at least two different dividing units to obtain at least two kinds of dividing matrixes.
3. The method of claim 1, wherein the image descriptive content data associated with the near pixel point comprises:
the last integration information, the next integration information, the integration information of the last pixel point and the integration information of the next pixel point;
integrating the undetermined feature vector corresponding to the last pixel point in the continuous optimization process and the positioning description factor of the x pixel point according to a first significance data query model to obtain the last integrated information;
integrating the next corresponding undetermined feature vector of the x-th pixel point in the continuous optimization process and the positioning description factor of the x-th pixel point according to a second significance data query model to obtain the next integration information;
integrating shielding description content corresponding to the last pixel point in the last continuous optimization process of the x-th pixel point and positioning description factors of the x-th pixel point according to the first significance data query model to obtain integration information of the last pixel point;
integrating shielding description content corresponding to the next pixel point in the last continuous optimization process of the x-th pixel point and a positioning description factor of the x-th pixel point according to the second significance data query model to obtain integration information of the next pixel point;
wherein the relevance metric values in the first salient data query model and the second salient data query model are the same or different.
4. The method of claim 1, wherein the determining a transition vector description of the set of key features from the N-continuous optimization processed intracranial image description content comprises: and integrating the N intracranial image description contents subjected to N continuous optimization according to a third significance data query model of a set period to obtain integrated intracranial image description contents, and determining the integrated intracranial image description contents as transition vector description of the key feature set.
5. An artificial intelligence based correction system for imaging of intracranial structures, comprising a processor and a memory in communication with each other, the processor being adapted to read a computer program from the memory and execute the computer program to implement the method of any of claims 1-4.
CN202311170696.8A 2023-09-12 2023-09-12 Correction method and system for intracranial structure imaging based on artificial intelligence Active CN116912351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311170696.8A CN116912351B (en) 2023-09-12 2023-09-12 Correction method and system for intracranial structure imaging based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311170696.8A CN116912351B (en) 2023-09-12 2023-09-12 Correction method and system for intracranial structure imaging based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN116912351A CN116912351A (en) 2023-10-20
CN116912351B true CN116912351B (en) 2023-11-17

Family

ID=88360634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311170696.8A Active CN116912351B (en) 2023-09-12 2023-09-12 Correction method and system for intracranial structure imaging based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN116912351B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2254092A2 (en) * 2009-05-19 2010-11-24 Mitsubishi Electric Corporation Method for reconstructing a distance field of a swept volume at a sample point
EP3599575A1 (en) * 2017-04-27 2020-01-29 Dassault Systèmes Learning an autoencoder
CN111324765A (en) * 2020-02-07 2020-06-23 复旦大学 Fine-grained sketch image retrieval method based on depth cascade cross-modal correlation
CN115131557A (en) * 2022-05-30 2022-09-30 沈阳化工大学 Lightweight segmentation model construction method and system based on activated sludge image
CN115359203A (en) * 2022-09-21 2022-11-18 李敏 Three-dimensional high-precision map generation method and system and cloud platform
CN115457340A (en) * 2022-09-21 2022-12-09 曲阜同联网络科技有限公司 Image recognition processing method and system based on artificial intelligence

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886326B (en) * 2019-01-31 2022-01-04 深圳市商汤科技有限公司 Cross-modal information retrieval method and device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2254092A2 (en) * 2009-05-19 2010-11-24 Mitsubishi Electric Corporation Method for reconstructing a distance field of a swept volume at a sample point
EP3599575A1 (en) * 2017-04-27 2020-01-29 Dassault Systèmes Learning an autoencoder
CN111324765A (en) * 2020-02-07 2020-06-23 复旦大学 Fine-grained sketch image retrieval method based on depth cascade cross-modal correlation
CN115131557A (en) * 2022-05-30 2022-09-30 沈阳化工大学 Lightweight segmentation model construction method and system based on activated sludge image
CN115359203A (en) * 2022-09-21 2022-11-18 李敏 Three-dimensional high-precision map generation method and system and cloud platform
CN115457340A (en) * 2022-09-21 2022-12-09 曲阜同联网络科技有限公司 Image recognition processing method and system based on artificial intelligence

Also Published As

Publication number Publication date
CN116912351A (en) 2023-10-20

Similar Documents

Publication Publication Date Title
US20190361686A1 (en) Methods, systems, apparatuses and devices for facilitating change impact analysis (cia) using modular program dependency graphs
CN113903473A (en) Medical information intelligent interaction method and system based on artificial intelligence
CN116737975A (en) Public health data query method and system applied to image analysis
CN116912351B (en) Correction method and system for intracranial structure imaging based on artificial intelligence
CN115481197B (en) Distributed data processing method, system and cloud platform
CN116739184B (en) Landslide prediction method and system
CN115373688B (en) Optimization method and system of software development thread and cloud platform
CN115514570B (en) Network diagnosis processing method, system and cloud platform
CN115473822B (en) 5G intelligent gateway data transmission method, system and cloud platform
CN113626538B (en) Medical information intelligent classification method and system based on big data
CN117037982A (en) Medical big data information intelligent acquisition method and system
CN113380363B (en) Medical data quality evaluation method and system based on artificial intelligence
CN116687371B (en) Intracranial pressure detection method and system
CN113947709A (en) Image processing method and system based on artificial intelligence
CN113643818B (en) Method and system for integrating medical data based on regional data
CN115631829B (en) Network connection multi-mode detection method and system based on acupoint massage equipment
CN115563153B (en) Task batch processing method, system and server based on artificial intelligence
CN115079882B (en) Human-computer interaction processing method and system based on virtual reality
CN115509811B (en) Distributed storage data recovery method, system and cloud platform
CN115409510B (en) Online transaction security system and method
CN113611425B (en) Method and system for intelligent regional medical integrated database based on software definition
CN113610133B (en) Laser data and visual data fusion method and system
CN113918963B (en) Authority authorization processing method and system based on business requirements
CN115292301B (en) Task data abnormity monitoring and processing method and system based on artificial intelligence
CN113608689B (en) Data caching method and system based on edge calculation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant