CN115701878A - Eye perfusion state prediction method and device and electronic equipment - Google Patents
- Publication number
- CN115701878A (application CN202211262660.8A)
- Authority
- CN
- China
- Prior art keywords
- sample
- feature
- eye
- target
- neck
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
Abstract
The embodiments of the present application provide an eye perfusion state prediction method, an eye perfusion state prediction apparatus, and an electronic device. The method includes: acquiring neck data of a user under examination, detected by a medical detection device; constructing a target feature set composed of at least one target feature based on the neck data; and predicting the eye perfusion state of the user based on the target feature set using an eye perfusion state prediction model, the model being trained on sample neck data and corresponding sample eye perfusion states. With this technical solution, the neck data are obtained by simple medical detection devices and the eye perfusion state is predicted from the target feature set, so that the eye perfusion state can be obtained from neck data in complex application scenarios (aerospace scenarios, outdoor emergency scenarios, and the like).
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to an eye perfusion state prediction method and device and electronic equipment.
Background
Changes in ocular blood flow (OBF) are associated with many pathological conditions of the eye; a patient's condition can therefore be assessed by examining ocular hemodynamics and function. In the related art, detecting the eye perfusion state with arterial spin labeling (ASL) perfusion magnetic resonance imaging (MRI) is widely used in clinical practice.
However, MRI examination equipment is bulky and complicated to operate; it is generally installed in a fixed location such as a hospital and must be operated by trained professionals, making it ill-suited to special scenarios such as aerospace or outdoor emergency settings. A new solution is therefore needed.
Disclosure of Invention
The embodiments of the present application provide an eye perfusion state prediction method, an eye perfusion state prediction apparatus, and an electronic device, to solve the problem in the prior art that the eye perfusion state cannot be obtained in certain scenarios.
In a first aspect, an embodiment of the present application provides an eye perfusion status prediction method, including:
acquiring neck data of a user under examination, detected by a medical detection device;
constructing a target feature set composed of at least one target feature based on the neck data;
predicting the eye perfusion state of the user based on the target feature set using an eye perfusion state prediction model, the model being trained on sample neck data and corresponding sample eye perfusion states.
In a second aspect, an embodiment of the present application provides an eye perfusion status prediction apparatus, including:
an acquisition module, configured to acquire neck data of a user under examination detected by a medical detection device;
a construction module, configured to construct a target feature set composed of at least one target feature based on the neck data;
a prediction module, configured to predict the eye perfusion state of the user based on the target feature set using an eye perfusion state prediction model, the model being trained on sample neck data and corresponding sample eye perfusion states.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the eye perfusion status prediction method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the eye perfusion state prediction method according to the first aspect.
In the embodiments of the present application, neck data of a user under examination are acquired by a medical detection device; a target feature set composed of at least one target feature is constructed based on the neck data; and the eye perfusion state of the user is predicted from the target feature set using an eye perfusion state prediction model trained on sample neck data and corresponding sample eye perfusion states. Because the neck data are obtained with simple medical detection devices and the eye perfusion state is predicted from the target feature set by the model, the eye perfusion state can be obtained from neck data in complex application scenarios (aerospace scenarios, outdoor emergency scenarios, and the like).
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart diagram illustrating an embodiment of a method for predicting an eye perfusion status according to the present application;
fig. 2 is a block diagram of an apparatus for predicting an eye perfusion status according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating an embodiment of an electronic device provided in the present application;
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, claims, and drawings of this application contain operations that occur in a particular order; however, these operations may be performed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 merely distinguish the operations and do not by themselves imply any execution order. The flows may also include more or fewer operations, which may be executed sequentially or in parallel. It should also be noted that the terms "first", "second", and so on herein distinguish different messages, devices, modules, etc.; they imply no sequential order, nor do they require that "first" and "second" be of different types.
In the related art, changes in the perfusion status of the eye are associated with a number of pathological conditions of the eye, such as age-related macular degeneration, diabetic retinopathy, retinal vein occlusion, retinal artery occlusion.
In recent years, medical detection means for the eye perfusion state have become abundant. Mature techniques for imaging ocular blood flow include fundus fluorescein angiography (FFA), indocyanine green angiography (ICGA), optical coherence tomography angiography (OCTA), and the like, which provide ways to further understand the physiology of the ocular blood vessels. However, FFA and ICGA are invasive, require exogenous contrast agents, and are unsuitable for contrast-sensitive patients; optical techniques are affected by the clarity of the ocular media and are sensitive to motion artifacts, and OCTA is qualitative and does not quantify ocular blood flow.
Currently, detecting the eye perfusion state with arterial spin labeling (ASL) perfusion magnetic resonance imaging (MRI) is widely used in clinical practice. Specifically, the ASL data may be processed as follows: an ocular blood flow (OBF) image is derived automatically on a dedicated workstation (Philips central monitoring workstation) and reviewed by a professional, for example a radiologist with five years of experience, who draws a region of interest (ROI) over the fundus optic nerve in the image. To reduce the error introduced by manual delineation, the ROI is drawn three times in each state, covering the retinal/choroidal plexus as much as possible, after which the mean value of each ROI is computed automatically. The mean ROI value represents the OBF value of the subject. The OBF value is then output, and the professional judges from it whether the eye is currently in a high-, low-, or normal-perfusion state; alternatively, the eye perfusion state is divided into a normal eye perfusion state or an early eye perfusion state.
However, MRI examination equipment is complicated to operate, must be controlled by specialized technicians, and is bulky; it is typically installed in fixed locations such as hospitals and is difficult to adapt to special scenarios. For example, in an aerospace scenario, gravity changes in the space environment and space inside the capsule is limited, so an astronaut's eye perfusion state cannot be examined with the large equipment of the related art and therefore cannot be evaluated in orbit. As another example, in an outdoor emergency scenario, the accident site is usually hard to reach (e.g., remote areas), and the injured often cannot be transported in time to a hospital with MRI equipment; first-aid personnel thus cannot learn the injured person's eye perfusion state promptly, which affects treatment.
After a series of studies, researchers found that the blood supply to the eye comes mainly from the ophthalmic artery (OA), a branch of the internal carotid artery (ICA). Since physiological data of the carotid artery are easy to obtain, deriving the eye perfusion state from carotid physiological data is a problem worth studying.
To solve this technical problem, in the embodiments of the present application, neck data of a user under examination are acquired by a medical detection device; a target feature set composed of at least one target feature is constructed based on the neck data; and the eye perfusion state of the user is predicted from the target feature set using an eye perfusion state prediction model trained on sample neck data and corresponding sample eye perfusion states. Because the neck data are obtained with simple medical detection devices, the eye perfusion state can be obtained from neck data in complex application scenarios (aerospace scenarios, outdoor emergency scenarios, and the like).
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application; all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present application.
Fig. 1 is a schematic flowchart illustrating an eye perfusion status predicting method according to an embodiment of the present disclosure. The method may include the steps of:
101. Acquire neck data of a user under examination detected by a medical detection device.
The user under examination may be in a complex application scenario such as an aerospace or outdoor emergency scenario, and the medical detection device may be a portable color Doppler ultrasound device or similar equipment that is easy to carry and can acquire neck data. The neck data may include neck blood flow data and neck blood vessel data.
102. Construct a target feature set composed of at least one target feature based on the neck data.
In an alternative embodiment, the corresponding neck features may be obtained from the neck data. Specifically, the neck data include neck blood flow data and neck blood vessel data. The blood flow data are presented as images on the color Doppler ultrasound device, from which the peak systolic blood flow velocity, end-diastolic blood flow velocity, mean blood flow velocity, resistance index, pulsatility index, and systolic/diastolic ratio of the neck can be determined. In addition, neck features such as carotid intima-media thickness, carotid wall elasticity, degree of vascular stenosis, plaque position, and plaque size can be obtained from the blood vessel data.
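The velocity-derived indices named above follow from standard spectral Doppler definitions; a minimal sketch (the numeric example values are illustrative, not taken from the application):

```python
def derive_doppler_indices(psv, edv, v_mean):
    """Derive the neck features named above from spectral Doppler velocities.

    psv    -- peak systolic blood flow velocity (cm/s)
    edv    -- end-diastolic blood flow velocity (cm/s)
    v_mean -- mean blood flow velocity (cm/s)
    """
    return {
        "resistance_index": (psv - edv) / psv,        # Pourcelot RI
        "pulsatility_index": (psv - edv) / v_mean,    # Gosling PI
        "systolic_diastolic_ratio": psv / edv,        # S/D ratio
    }

# Illustrative (hypothetical) common-carotid values:
features = derive_doppler_indices(psv=80.0, edv=25.0, v_mean=40.0)
```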
Further, at least one target feature can be selected from the neck features according to a preset rule to form the target feature set.
It should be noted that the target feature set may include human physiological features or eye physiological features in addition to the neck features.
103. Predict the eye perfusion state of the user based on the target feature set, using the eye perfusion state prediction model.
The eye perfusion state prediction model is obtained based on sample neck data and corresponding sample eye perfusion state training.
In the embodiment of the application, neck data of the user under examination are acquired by a medical detection device; a target feature set composed of at least one target feature is constructed based on the neck data; and the eye perfusion state of the user is predicted from the target feature set using an eye perfusion state prediction model trained on sample neck data and corresponding sample eye perfusion states. Because the neck data are obtained with simple medical detection devices and the eye perfusion state is predicted from the target feature set by the model, the eye perfusion state can be obtained from neck data in complex application scenarios (aerospace scenarios, outdoor emergency scenarios, and the like).
In an alternative embodiment, the eye perfusion state prediction model is trained as follows: acquire sample neck data and the corresponding sample eye perfusion states; construct a sample feature set composed of at least one target sample feature based on the sample neck data; and train the eye perfusion state prediction model using the sample feature sets and their corresponding sample eye perfusion states.
The sample neck data and sample eye perfusion states can be obtained by examining a plurality of subjects; that is, each pair of sample neck data and sample eye perfusion state corresponds to one subject.
Further, a sample feature set composed of at least one target sample feature is constructed based on the sample neck data, wherein the target sample feature may be a neck sample feature corresponding to the neck data, and may also be an eye sample feature or a human body physiological sample feature.
The eye perfusion state prediction model includes, but is not limited to, machine learning classification models such as a support vector machine (SVM), naive Bayes (NB), decision tree (DT), or random forest (RF).
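Any of the listed classifier families could serve as the prediction model; as a from-scratch illustration, a minimal Gaussian naive Bayes classifier is sketched below (the feature values and class labels are synthetic placeholders, not data from the application):

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian naive Bayes -- one of the classifier families listed above."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[label].append(row)
        self.stats, self.priors = {}, {}
        for label, rows in groups.items():
            params = []
            for col in zip(*rows):                      # per-feature mean/variance
                mu = sum(col) / len(col)
                var = max(sum((v - mu) ** 2 for v in col) / len(col), 1e-9)
                params.append((mu, var))
            self.stats[label] = params
            self.priors[label] = len(rows) / len(X)
        return self

    def predict(self, x):
        def log_posterior(label):
            lp = math.log(self.priors[label])
            for v, (mu, var) in zip(x, self.stats[label]):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
            return lp
        return max(self.stats, key=log_posterior)

# Toy sample feature sets (e.g. scaled RI, PI) with perfusion-state labels:
X = [[0.5, 0.5], [0.6, 0.4], [5.0, 5.0], [5.2, 4.8]]
y = ["low", "low", "high", "high"]
model = GaussianNB().fit(X, y)
state = model.predict([0.55, 0.45])
```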
Alternatively, the model can be implemented as a network model, for example one based on gated recurrent units (GRU), a deep learning model built on the Transformer architecture, and the like.
After the sample feature sets and their corresponding sample eye perfusion states are determined, each sample feature set is fed to the eye perfusion state prediction model as input with the sample eye perfusion state as its label, yielding a predicted eye perfusion state; the model's loss function is then optimized iteratively based on the sample state and the predicted state until the two converge, producing the trained eye perfusion state prediction model.
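The predict-compute-loss-update loop described above can be sketched with a toy logistic model trained by gradient descent on the cross-entropy loss (the data, learning rate, and feature meanings are illustrative assumptions):

```python
import math

def train(X, y, lr=0.5, epochs=500):
    """Fit a logistic model: predict, compute the cross-entropy gradient,
    update, and repeat until the predictions fit the sample labels."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))             # predicted state probability
            g = p - t                                   # d(cross-entropy)/dz
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy sample feature sets (e.g. scaled RI, PI) with binary perfusion labels:
X = [[0.9, 1.4], [0.8, 1.3], [0.2, 0.4], [0.1, 0.3]]
y = [1, 1, 0, 0]
w, b = train(X, y)
```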
As an alternative embodiment, obtaining sample neck data and the corresponding sample eye perfusion state may be implemented as: acquiring sample neck data of a sample user detected by the medical detection device, and acquiring the sample eye perfusion state of the sample user from perfusion magnetic resonance imaging obtained by examining the sample user with an eye detection device.
Here, the sample neck data can be obtained with a medical detection device such as a color Doppler ultrasound device, and the eye detection device may be an MRI device.
As noted above, a target sample feature may be a neck sample feature corresponding to the neck data, an eye sample feature, or a human physiological sample feature. Optionally, the method further includes: acquiring at least one eye sample feature and at least one human physiological sample feature. Constructing the sample feature set based on the sample neck data then comprises: determining at least one neck sample feature corresponding to the sample neck data, and selecting at least one target sample feature from the at least one neck sample feature, the at least one eye sample feature, and the at least one human physiological sample feature to obtain the sample feature set.
The eye sample features can be measured with a tonometer and an ocular biometer, and the human physiological sample features can be obtained from a patient monitor.
By the same principle, the corresponding neck sample features can be obtained from the sample neck data. Specifically, the sample neck data include neck blood flow data and neck blood vessel data; the blood flow data are presented as images on the color Doppler ultrasound device, from which the peak systolic blood flow velocity, end-diastolic blood flow velocity, mean blood flow velocity, resistance index, pulsatility index, and systolic/diastolic ratio can be determined. In addition, neck sample features such as carotid intima-media thickness, carotid wall elasticity, degree of vascular stenosis, plaque position, and plaque size can be obtained from the blood vessel data.
The at least one human physiological sample feature includes at least one of blood pressure, heart rate, blood oxygen, age, sex, and body mass index; the at least one eye physiological sample feature includes at least one of intraocular pressure and visual axis length.
Further, after at least one neck sample feature, at least one eye sample feature and at least one human body physiological sample feature are obtained, at least one target sample feature is selected from the sample features.
It will be appreciated that, since the eye perfusion state is predicted from neck data, neck sample features are selected preferentially when choosing target sample features, followed by eye sample features and human physiological sample features.
As one alternative implementation, selecting at least one target sample feature from the neck, eye, and human physiological sample features may be implemented as: selecting at least one sample feature from each of the neck sample features, the human physiological sample features, and the eye physiological sample features to construct a plurality of feature subsets, and determining the sample feature set according to the importance of each feature subset to the eye perfusion state prediction model.
Optionally, after the feature subsets are constructed, each subset is fed to the eye perfusion state prediction model as input with its corresponding sample eye perfusion state as the label, yielding an eye perfusion output state for each subset; the loss is then computed from the output state and the sample state of each subset, producing one loss value per subset. Candidate loss functions include the cross-entropy loss, the L1 loss, the L2 loss, and the like.
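The candidate loss functions named above have standard forms; a minimal sketch (labels are one-hot encoded and all values are illustrative):

```python
import math

def cross_entropy(p_true, q_pred):
    """Cross-entropy between a one-hot sample state and predicted probabilities."""
    return -sum(p * math.log(q) for p, q in zip(p_true, q_pred) if p > 0)

def l1_loss(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def l2_loss(y_true, y_pred):
    """Mean squared error."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

# Example: three perfusion classes (low / normal / high), true class "low"
ce = cross_entropy([1, 0, 0], [0.5, 0.25, 0.25])   # = -ln(0.5)
```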
Other feature subset selection methods include, but are not limited to, forward search, backward search, bidirectional search, recursive feature elimination, and the like.
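As an illustration of the forward search just mentioned: start from an empty subset and greedily add whichever feature most improves a model-evaluation score, stopping when no addition helps. The score function and feature names below are hypothetical stand-ins for evaluating the prediction model on a subset:

```python
def forward_search(features, score):
    """Greedy forward feature-subset search."""
    selected, best = [], score([])
    remaining = list(features)
    while remaining:
        candidate, candidate_score = None, best
        for f in remaining:
            s = score(selected + [f])
            if s > candidate_score:
                candidate, candidate_score = f, s
        if candidate is None:          # no single addition improves the score
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best = candidate_score
    return selected

# Hypothetical score: only mean velocity and RI improve the model
gain = {"mean_velocity": 0.3, "RI": 0.2}
score = lambda subset: sum(gain.get(f, -0.01) for f in subset)
chosen = forward_search(["PSV", "mean_velocity", "RI", "plaque_size"], score)
```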
As another alternative implementation, selecting at least one target sample feature from the neck, eye, and human physiological sample features may be implemented as: determining a weight coefficient for each of the neck sample features, the human physiological sample features, and the eye physiological sample features, and selecting the at least one target sample feature whose weight coefficient exceeds a preset coefficient to form the sample feature set.
Optionally, the weight coefficient of each sample feature may be determined from that feature's correlation with the eye perfusion state. The number of features in the sample feature set may also be fixed in advance, in which case the features with the highest weight coefficients are selected, up to that number, to form the sample feature set.
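One way to realize a correlation-based weight coefficient is the Pearson correlation between each feature column and the perfusion-state labels, keeping features above the preset coefficient (the data, threshold, and feature names below are illustrative assumptions):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def select_by_weight(columns, labels, preset_coefficient=0.5):
    """Keep features whose |correlation with the perfusion state| exceeds the preset coefficient."""
    return [name for name, col in columns.items()
            if abs(pearson(col, labels)) > preset_coefficient]

labels = [0, 0, 1, 1]                                   # sample perfusion states
columns = {"RI":  [0.55, 0.60, 0.80, 0.85],             # tracks the labels
           "age": [30, 62, 45, 51]}                     # weakly related
chosen = select_by_weight(columns, labels)
```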
As a further alternative implementation, selecting at least one target sample feature from the neck, eye, and human physiological sample features may be implemented as: determining the feature state of each of the neck sample features, the eye sample features, and the human physiological sample features, and selecting at least one target sample feature whose feature state is consistent with the sample eye perfusion state to form the sample feature set.
For example, suppose two thresholds A and B are set for the mean blood flow velocity, with A greater than B, and the measured value is Z. A value greater than A or less than B is abnormal, while a value between B and A is normal.
It can be understood that sample features whose feature state is consistent with the sample eye perfusion state are relatively strongly correlated with the eye perfusion state; therefore, at least one such target sample feature can be selected from the neck, eye, and human physiological sample features to form the sample feature set.
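Applying the two-threshold rule from the example above, selection by state consistency can be sketched as follows (a binary normal/abnormal state and all threshold values are simplifying assumptions):

```python
def feature_state(value, upper, lower):
    """Above the upper threshold (A) or below the lower one (B) is abnormal;
    between the two is normal."""
    return "abnormal" if value > upper or value < lower else "normal"

def select_consistent(features, sample_state):
    """Keep the features whose own state agrees with the sample's labelled state."""
    return [name for name, (value, upper, lower) in features.items()
            if feature_state(value, upper, lower) == sample_state]

features = {
    "mean_velocity": (18.0, 60.0, 20.0),   # 18 < B=20     -> abnormal
    "RI":            (0.65, 0.80, 0.50),   # within range  -> normal
}
chosen = select_consistent(features, "abnormal")
```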
It should be noted that these ways of selecting target sample features may be combined arbitrarily or used alone; details are not repeated here.
In an alternative embodiment, constructing the set of target features from at least one target feature based on the neck data comprises: determining at least one neck feature in the neck data; and selecting at least one target feature from the at least one neck feature to obtain a target feature set consisting of the at least one target feature.
The target features may be selected in any of the three ways described above for target sample features.
As noted above, the target feature set may include human physiological features or eye physiological features in addition to neck features. The method therefore further includes: acquiring at least one human physiological feature and at least one eye physiological feature. Constructing the target feature set based on the neck data then comprises: determining at least one neck feature from the neck data, and selecting at least one target feature from the at least one neck feature, the at least one human physiological feature, and the at least one eye physiological feature to obtain the target feature set. In this case, the eye perfusion state prediction model is trained on at least one neck sample feature, at least one human physiological sample feature, at least one eye physiological sample feature, and the corresponding sample eye perfusion states.
Based on the same principle, the selection mode of the target characteristics can be any one of the three selection modes of the target sample characteristics
In an alternative embodiment, the at least one human physiological characteristic includes at least one of blood pressure, heart rate, blood oxygen, age, gender, and body mass index; the at least one ocular physiological characteristic comprises at least one of intraocular pressure and visual axis length; and the at least one neck characteristic includes at least one of peak systolic blood flow velocity, end diastolic blood flow velocity, mean blood flow velocity, impedance index, pulsatility index, systolic/diastolic ratio, carotid intima-media thickness, carotid wall elasticity, degree of vascular stenosis, plaque location, and plaque size.
In an alternative embodiment, the method further comprises: taking the at least one feature type corresponding to the sample feature set as the target feature type. In this case, constructing the target feature set composed of at least one target feature based on the neck data comprises: constructing, based on the neck data, a target feature set composed of the target features corresponding to the at least one target feature type.
It can be understood that the at least one target feature type corresponding to the sample feature set has already been determined during training of the eye perfusion state prediction model. Therefore, the at least one feature type corresponding to the sample feature set can be used as the target feature type, so that at least one target feature whose feature type matches a target feature type can be selected from the at least one neck feature, the at least one human physiological feature, and the at least one eye physiological feature to form the target feature set.
Fig. 2 is a block diagram of an apparatus for predicting an eye perfusion status according to an embodiment of the present disclosure. Referring to fig. 2, the apparatus includes an acquisition module 21, a construction module 22, and a prediction module 23.
The acquisition module 21 is configured to acquire neck data of a detected user detected by a medical detection device;
a construction module 22 for constructing a target feature set composed of at least one target feature based on the neck data;
a prediction module 23, configured to predict, by using an eye perfusion status prediction model, an eye perfusion status of the detected user based on the target feature set; the eye perfusion state prediction model is obtained based on sample neck data and corresponding sample eye perfusion state training.
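The acquisition → construction → prediction flow of these three modules can be sketched as follows. This is a minimal illustration under stated assumptions: the `PerfusionPredictor` class, the feature names, and the scikit-learn-style `predict` interface are invented for the example and are not part of the disclosure.

```python
# Hypothetical sketch of the module pipeline: construct a target feature set
# from acquired neck data, then predict the eye perfusion state with a
# previously trained model. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PerfusionPredictor:
    model: object                      # trained eye perfusion state prediction model
    target_feature_types: List[str]    # feature types fixed during training

    def construct_feature_set(self, neck_data: Dict[str, float]) -> List[float]:
        # Construction module: keep only the features whose type matches
        # one of the target feature types determined at training time.
        return [neck_data[t] for t in self.target_feature_types]

    def predict(self, neck_data: Dict[str, float]) -> str:
        # Prediction module: feed the target feature set to the model.
        features = self.construct_feature_set(neck_data)
        return self.model.predict([features])[0]
```

Any classifier exposing a `predict(list_of_feature_vectors)` method could be plugged in as `model`.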
Optionally, the building module 22 is specifically configured to:
determining at least one neck feature in the neck data;
and selecting at least one target feature from the at least one neck feature to obtain a target feature set formed by the at least one target feature.
Optionally, the building module 22 is further specifically configured to:
acquiring at least one human body physiological characteristic and at least one eye physiological characteristic;
constructing a set of target features comprised of at least one target feature based on the neck data comprises:
determining at least one neck feature in the neck data;
selecting at least one target feature from the at least one neck characteristic, the at least one human body physiological characteristic and the at least one eye physiological characteristic to obtain a target feature set formed by the at least one target feature; the eye perfusion state prediction model is obtained based on at least one neck sample characteristic, at least one human body physiological sample characteristic, at least one eye physiological sample characteristic and corresponding sample eye perfusion state training.
Optionally, the at least one human physiological characteristic comprises at least one of blood pressure, heart rate, blood oxygen, age, gender, body mass index; the at least one ocular physiological characteristic comprises at least one of intraocular pressure and visual axis length; the at least one cervical characteristic includes at least one of peak systolic blood flow velocity, end diastolic blood flow velocity, mean blood flow velocity, impedance index, pulsatility index, systolic/diastolic ratio, carotid intima-media thickness, carotid wall elasticity, degree of vascular stenosis, plaque location, and plaque size.
Optionally, the eye perfusion state prediction model is trained as follows:
acquiring sample neck data and a corresponding sample eye perfusion state;
constructing a sample feature set consisting of at least one target sample feature based on the sample neck data;
and training an eye perfusion state prediction model by using the sample feature set and the sample eye perfusion state corresponding to the sample feature set.
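The three training steps above can be sketched as follows. The disclosure does not name a concrete model, so a simple nearest-centroid classifier stands in for the eye perfusion state prediction model; the function and label names are illustrative assumptions.

```python
# Minimal training sketch: fit one centroid per sample eye perfusion state
# from the sample feature sets, and predict by nearest centroid. This stands
# in for the unspecified prediction model and is not the disclosed method.
from collections import defaultdict


def train_perfusion_model(sample_features, sample_states):
    """sample_features: feature vectors built from sample neck data;
    sample_states: the corresponding sample eye perfusion state labels."""
    sums = {}
    counts = defaultdict(int)
    for x, y in zip(sample_features, sample_states):
        if y not in sums:
            sums[y] = [0.0] * len(x)
        sums[y] = [s + v for s, v in zip(sums[y], x)]
        counts[y] += 1
    centroids = {y: [s / counts[y] for s in sums[y]] for y in sums}

    def predict(x):
        # Assign the state whose centroid is closest in squared distance.
        return min(centroids,
                   key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

    return predict
```

In practice any supervised classifier trained on the sample feature set and sample perfusion states would fill this role.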
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring sample neck data of a sample user detected and obtained by the medical detection equipment;
acquiring a sample eye perfusion state of the sample user based on perfusion magnetic resonance imaging obtained by detecting the sample user by using an eye detection device.
Optionally, the obtaining module is further configured to obtain at least one eye sample feature and at least one human physiological sample feature.
The building block 22 is specifically configured to:
determining at least one neck sample characteristic corresponding to the sample neck data;
selecting at least one target sample feature from the at least one neck sample feature, the at least one eye sample feature and the at least one human body physiological sample feature to obtain a sample feature set formed by the at least one target sample feature.
Optionally, the building module 22 is further specifically configured to:
respectively selecting at least one sample characteristic from at least one neck sample characteristic, the at least one human body physiological sample characteristic and the at least one eye physiological sample characteristic to construct a plurality of characteristic subsets;
and determining a sample feature set according to the importance degree of each feature subset to the eye perfusion state prediction model.
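The subset-importance selection described above can be sketched as an exhaustive search over candidate feature subsets, scored by a caller-supplied importance measure (e.g. the validation accuracy of a model trained on that subset). The function name and scoring interface are assumptions; the disclosure does not fix the importance measure.

```python
# Sketch: enumerate candidate feature subsets and keep the one that is most
# important to the eye perfusion state prediction model, as judged by
# score_fn. All names are illustrative assumptions.
from itertools import combinations


def best_feature_subset(feature_names, score_fn, min_size=1):
    """feature_names: candidate sample features; score_fn: maps a subset
    (tuple of names) to its importance for the prediction model."""
    candidates = [
        subset
        for k in range(min_size, len(feature_names) + 1)
        for subset in combinations(feature_names, k)
    ]
    return max(candidates, key=score_fn)
```

The exhaustive enumeration is exponential in the number of features, so a greedy or recursive-elimination search would replace it for large feature sets.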
Optionally, the building module 22 is further specifically configured to:
determining weighting coefficients corresponding to at least one neck sample characteristic, at least one human body physiological sample characteristic and at least one eye physiological sample characteristic respectively;
and selecting at least one target sample characteristic with the weight coefficient larger than a preset coefficient to form a sample characteristic set.
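The weight-coefficient selection described above can be sketched as a simple threshold filter; the feature names, weights, and the `select_by_weight` helper are invented for illustration.

```python
# Sketch: keep only the sample features whose weight coefficient exceeds a
# preset coefficient. Feature names and weights are illustrative assumptions.
def select_by_weight(features, weights, preset_coefficient):
    """features/weights: parallel dicts keyed by feature name."""
    return {name: value for name, value in features.items()
            if weights.get(name, 0.0) > preset_coefficient}
```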
Optionally, the building module 22 is further specifically configured to:
determining characteristic states corresponding to the at least one neck sample characteristic, the at least one eye sample characteristic and the at least one human body physiological sample characteristic respectively;
and selecting at least one target sample feature with the feature state consistent with the sample eye perfusion state to form a sample feature set.
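The feature-state consistency rule described above can be sketched as follows, assuming each sample feature has been discretized into a state label comparable with the sample eye perfusion state; the labels used here are illustrative assumptions.

```python
# Sketch: keep the sample features whose discretized feature state matches
# the sample eye perfusion state. State labels are illustrative assumptions.
def select_consistent_features(feature_states, sample_perfusion_state):
    """feature_states: dict mapping feature name -> feature state label."""
    return [name for name, state in feature_states.items()
            if state == sample_perfusion_state]
```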
Optionally, the building module 22 is further specifically configured to:
taking at least one feature type corresponding to the sample feature set as a target feature type;
the constructing a set of target features comprised of at least one target feature based on the neck data comprises:
and constructing a target feature set composed of target features corresponding to at least one target feature type based on the neck data.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In one possible design, the eye perfusion status prediction apparatus of the embodiment described above may be implemented as an electronic device which, as shown in fig. 3, may include a memory 301 and a processor 302;
the memory 301 stores one or more computer instructions for the processor 302 to invoke for execution.
The processor 302 is configured to:
acquiring neck data of a detected user detected by medical detection equipment;
constructing a target feature set composed of at least one target feature based on the neck data;
predicting and obtaining the eye perfusion state of the detected user based on the target feature set by using an eye perfusion state prediction model; the eye perfusion state prediction model is obtained based on sample neck data and corresponding sample eye perfusion state training.
The processor 302 may include one or more processors for executing computer instructions to perform all or part of the steps of the method described above. Of course, the processor may also be implemented as one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components configured to perform the above-described methods.
The memory 301 is configured to store various types of data to support operations at the terminal. The memory may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
Of course, the electronic device may also comprise other components, such as input/output interfaces, communicators, etc.
The input/output interface provides an interface between the processor and a peripheral interface module, which may be an output device, an input device, etc.
The communicator is configured to facilitate wired or wireless communication between the electronic device and other devices, and the like.
The electronic device may be a physical device or an elastic computing host provided by a cloud computing platform. For example, the electronic device may be a cloud server, in which case the processor, the memory, and the like may be basic server resources rented or purchased from the cloud computing platform.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as a memory comprising instructions, executable by the processor 302 of the electronic device shown in fig. 3 to perform the above-described method is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the eye perfusion status prediction method of fig. 1.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (14)
1. A method for predicting an ocular perfusion status, comprising:
acquiring neck data of a detected user detected by medical detection equipment;
constructing a target feature set composed of at least one target feature based on the neck data;
predicting and obtaining the eye perfusion state of the detected user based on the target characteristic set by using an eye perfusion state prediction model; the eye perfusion state prediction model is obtained based on sample neck data and corresponding sample eye perfusion state training.
2. The method of claim 1, wherein constructing a set of target features of at least one target feature based on the neck data comprises:
determining at least one neck feature in the neck data;
and selecting at least one target feature from the at least one neck feature to obtain a target feature set consisting of the at least one target feature.
3. The method of claim 1, further comprising:
acquiring at least one human body physiological characteristic and at least one eye physiological characteristic;
constructing a set of target features comprised of at least one target feature based on the neck data comprises:
determining at least one neck feature in the neck data;
selecting at least one target feature from the at least one neck characteristic, the at least one human body physiological characteristic and the at least one eye physiological characteristic to obtain a target feature set formed by the at least one target feature; the eye perfusion state prediction model is obtained based on at least one neck sample characteristic, at least one human body physiological sample characteristic, at least one eye physiological sample characteristic and corresponding sample eye perfusion state training.
4. The method of claim 3, wherein the at least one human physiological characteristic includes at least one of blood pressure, heart rate, blood oxygen, age, gender, body mass index; the at least one ocular physiological characteristic comprises at least one of intraocular pressure and visual axis length; the at least one neck feature comprises at least one of peak systolic blood flow velocity, end diastolic blood flow velocity, mean blood flow velocity, impedance index, pulsatility index, systolic/diastolic ratio, carotid intima-media thickness, carotid wall elasticity, degree of vessel stenosis.
5. The method of claim 1, wherein the eye perfusion status prediction model is trained as follows:
acquiring sample neck data and a corresponding sample eye perfusion state;
constructing a sample characteristic set consisting of at least one target sample characteristic based on the sample neck data;
and training an eye perfusion state prediction model by using the sample feature set and the sample eye perfusion state corresponding to the sample feature set.
6. The method of claim 5, wherein the obtaining sample neck data and corresponding sample eye perfusion status comprises:
acquiring sample neck data of a sample user detected and obtained by the medical detection equipment;
acquiring a sample eye perfusion state of the sample user based on perfusion magnetic resonance imaging obtained by detecting the sample user by using an eye detection device.
7. The method of claim 5, further comprising:
acquiring at least one eye sample characteristic and at least one human body physiological sample characteristic;
the constructing a sample feature set composed of at least one target sample feature based on the sample neck data comprises:
determining at least one neck sample characteristic corresponding to the sample neck data;
selecting at least one target sample feature from the at least one neck sample feature, the at least one eye sample feature and the at least one human body physiological sample feature to obtain a sample feature set formed by the at least one target sample feature.
8. The method of claim 7, wherein selecting at least one target sample feature from the at least one neck sample feature, the at least one eye sample feature, and the at least one human physiology sample feature to obtain a sample feature set composed of the at least one target sample feature comprises:
respectively selecting at least one sample characteristic from at least one neck sample characteristic, the at least one human body physiological sample characteristic and the at least one eye physiological sample characteristic to construct a plurality of characteristic subsets;
and determining a sample feature set according to the importance degree of each feature subset to the eye perfusion state prediction model.
9. The method of claim 7, wherein selecting at least one target sample feature among the at least one neck sample feature, at least one eye sample feature, and at least one human physiology sample feature to obtain a sample feature set consisting of the at least one target sample feature comprises:
determining a weighting coefficient corresponding to each of at least one neck sample characteristic, at least one human body physiological sample characteristic and at least one eye physiological sample characteristic;
and selecting at least one target sample characteristic with the weight coefficient larger than the preset coefficient to form a sample characteristic set.
10. The method of claim 7, wherein selecting at least one target sample feature from the at least one neck sample feature, the at least one eye sample feature, and the at least one human physiology sample feature to obtain a sample feature set composed of the at least one target sample feature comprises:
determining characteristic states corresponding to the at least one neck sample characteristic, the at least one eye sample characteristic and the at least one human body physiological sample characteristic respectively;
and selecting at least one target sample feature with the feature state consistent with the sample eye perfusion state to form a sample feature set.
11. The method of claim 7, further comprising:
taking at least one feature type corresponding to the sample feature set as a target feature type;
the constructing a set of target features comprised of at least one target feature based on the neck data comprises:
and forming a target feature set formed by target features corresponding to at least one target feature type based on the neck data.
12. An eye perfusion status prediction apparatus, comprising:
the acquisition module is used for acquiring neck data of a detected user detected by medical detection equipment;
the construction module is used for constructing a target feature set consisting of at least one target feature based on the neck data;
the prediction module is used for predicting and obtaining the eye perfusion state of the detected user based on the target characteristic set by utilizing an eye perfusion state prediction model; the eye perfusion state prediction model is obtained based on sample neck data and corresponding sample eye perfusion state training.
13. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the eye perfusion status prediction method of any one of claims 1-11.
14. A computer-readable storage medium, whose instructions, when executed by a processor of an electronic device, enable the electronic device to perform the eye perfusion status prediction method of any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211262660.8A CN115701878B (en) | 2022-10-14 | 2022-10-14 | Eye perfusion state prediction method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211262660.8A CN115701878B (en) | 2022-10-14 | 2022-10-14 | Eye perfusion state prediction method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115701878A true CN115701878A (en) | 2023-02-14 |
CN115701878B CN115701878B (en) | 2024-07-30 |
Family
ID=85162778
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211262660.8A Active CN115701878B (en) | 2022-10-14 | 2022-10-14 | Eye perfusion state prediction method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115701878B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101978375A (en) * | 2008-03-17 | 2011-02-16 | Koninklijke Philips Electronics N.V. | |
CN113506640A (en) * | 2021-08-17 | 2021-10-15 | 首都医科大学附属北京友谊医院 | Brain perfusion state classification device, method and equipment and model training device |
CN114533121A (en) * | 2022-02-18 | 2022-05-27 | 首都医科大学附属北京友谊医院 | Brain perfusion state prediction device, method and equipment and model training device |
CN114842972A (en) * | 2022-04-20 | 2022-08-02 | 平安国际智慧城市科技股份有限公司 | Method, device, electronic equipment and medium for determining user state |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101978375A (en) * | 2008-03-17 | 2011-02-16 | Koninklijke Philips Electronics N.V. | |
CN113506640A (en) * | 2021-08-17 | 2021-10-15 | 首都医科大学附属北京友谊医院 | Brain perfusion state classification device, method and equipment and model training device |
CA3159991A1 (en) * | 2021-08-17 | 2022-09-26 | Zhenchang Wang | Cerebral perfusion state classification apparatus, method and device, and model training apparatus |
CN114533121A (en) * | 2022-02-18 | 2022-05-27 | 首都医科大学附属北京友谊医院 | Brain perfusion state prediction device, method and equipment and model training device |
CN114842972A (en) * | 2022-04-20 | 2022-08-02 | 平安国际智慧城市科技股份有限公司 | Method, device, electronic equipment and medium for determining user state |
Also Published As
Publication number | Publication date |
---|---|
CN115701878B (en) | 2024-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101977645B1 (en) | Eye image analysis method | |
US11540931B2 (en) | Systems and methods for identifying personalized vascular implants from patient-specific anatomic data | |
CN106061387B (en) | For the system and method according to the specific anatomical image data prediction coronary plaque vulnerability of patient | |
JP6796117B2 (en) | Methods and systems for determining treatment by modifying the patient-specific geometric model | |
US20220367066A1 (en) | Systems and methods for anatomical modeling using information obtained from a medical procedure | |
CN105380598B (en) | Method and system for the automatic treatment planning for arteriarctia | |
US10169542B2 (en) | Systems and methods for automatically determining myocardial bridging and patient impact | |
CN104736046B (en) | System and method for number evaluation vascular system | |
US20170004280A1 (en) | Systems and methods for image processing for modeling changes in patient-specific blood vessel geometry and boundary conditions | |
RU2679572C1 (en) | Clinical decision support system based on triage decision making | |
JP2019532706A (en) | Method and system for visualization of at-risk cardiac tissue | |
EP3592242B1 (en) | Blood vessel obstruction diagnosis apparatus | |
CN117598666A (en) | Plaque vulnerability assessment in medical imaging | |
CA3159991C (en) | Cerebral perfusion state classification apparatus, method and device, and model training apparatus | |
CN114533121B (en) | Brain perfusion state prediction device, method and equipment and model training device | |
US11727570B2 (en) | Methods and systems for determining coronary hemodynamic characteristic(s) that is predictive of myocardial infarction | |
KR20190074477A (en) | Method for predicting cardio-cerebrovascular disease using eye image | |
CN115701878B (en) | Eye perfusion state prediction method and device and electronic equipment | |
KR102343796B1 (en) | Method for predicting cardiovascular disease using eye image | |
CN110200599A (en) | A kind of pulse wave detecting method, terminal device and system | |
KR20220106947A (en) | Method for predicting cardiovascular disease using eye image | |
CN111755104B (en) | Heart state monitoring method and system, electronic equipment and storage medium | |
LU503428B1 (en) | Method, Device and System for Diagnosing Pulmonary Embolism Based on non-contrast chest CT Images | |
CN118379417A (en) | Three-dimensional fundus image generation method, electronic apparatus, and storage medium | |
LALANDE et al. | PREDICTION OF THE EVOLUTION OF THE AORTIC DIAMETER ACCORDING TO THE THROMBUS SIGNAL FROM MR IMAGES ON SMALL ABDOMINAL AORTIC ANEURYSMS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||