CN113269160B - Intelligent identification system for colonoscope operational dilemmas based on eye movement features - Google Patents
Intelligent identification system for colonoscope operational dilemmas based on eye movement features
- Publication number
- CN113269160B CN202110798775.8A CN202110798775A
- Authority
- CN
- China
- Prior art keywords
- eye movement
- colonoscope
- visual field
- module
- constructing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Molecular Biology (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Eye Examination Apparatus (AREA)
Abstract
The invention discloses an intelligent identification system for colonoscope operational dilemmas based on eye movement features, comprising: an information acquisition module, for acquiring eye movement information of a subject observing the effective area of a display screen during simulated colonoscope operation; a feature construction module, for constructing, from the acquired eye movement information, an eye movement feature space with significant specificity for the two conditions of normal visual field and visual field loss; a feature enhancement module, for constructing a deep convolutional generative adversarial network and generating and expanding enhanced eye movement features from the real eye movement features under the two conditions; and a dilemma identification module, for constructing a long short-term memory (LSTM) neural network, forming a training set from the real and enhanced eye movement features under the two conditions, and training the network to obtain an intelligent identification model for colonoscope operational dilemmas.
Description
Technical Field
The invention relates to the technical field of artificial intelligence and pattern recognition combined with medical engineering, and in particular to an intelligent identification system for colonoscope operational dilemmas based on eye movement features.
Background
According to the World Health Organization's 2018 global cancer statistics, new colon cancer patients in China account for about 21% of the world total, and the incidence continues to rise at a rate of 1.3% per year. Colonoscopy is a simple, visual and minimally invasive clinical detection method that plays a vital role in colorectal cancer screening, early diagnosis and treatment, and comprehensive prevention. For high-risk colorectal cancer groups, popularizing regular screening and prevention can effectively reduce the morbidity and mortality of colorectal cancer in China. Currently, electronic colonoscopy is the first choice for detecting colorectal cancer and precancerous lesions in almost all countries and institutions. As the gold standard for colorectal cancer detection, colonoscopy has been widely applied in clinical anorectal diagnosis and treatment in China, greatly improving the diagnosis and treatment of intestinal tumors, and it is of great practical significance for the prevention and treatment of colorectal cancer.
With China's economic and social development, living standards continue to improve and the demand for health services keeps growing. In recent years the Chinese colonoscope market has grown steadily, reaching about 2.8 billion yuan in 2020. Medical institutions therefore have an increasingly urgent need for professionally trained colonoscopists, and a standardized colonoscope training and evaluation mode urgently needs to be explored and established. At present, endoscope virtual-reality simulation training systems have been introduced in China to improve the standardization and uniformity of colonoscopist training and evaluation. However, existing endoscope simulation training systems generally emphasize result evaluation while neglecting process guidance: only at the end of training does the system report overall parameters such as time consumed, insertion depth, gas insufflation volume and visual-field-loss proportion in an evaluation report, ignoring timely discovery of, and effective intervention in, the operational dilemmas the trainee encounters.
Disclosure of Invention
The invention aims to provide an intelligent identification system for colonoscope operational dilemmas based on eye movement features, so as to solve the problem that existing colonoscope simulation training cannot timely and accurately detect a trainee's operational dilemmas.
To solve the above technical problem, an embodiment of the present invention provides the following solutions:
An intelligent identification system for colonoscope operational dilemmas based on eye movement features, comprising:
an information acquisition module, configured to acquire eye movement information of a subject observing the effective area of a display screen during simulated colonoscope operation;
a feature construction module, configured to construct, from the acquired eye movement information, an eye movement feature space with significant specificity for the two conditions of normal visual field and visual field loss;
a feature enhancement module, configured to construct a deep convolutional generative adversarial network and to generate and expand, from the real eye movement features under the two conditions of normal visual field and visual field loss, enhanced eye movement features;
a dilemma identification module, configured to construct a long short-term memory (LSTM) neural network, form a training set from the real and enhanced eye movement features under the two conditions of normal visual field and visual field loss, and train the network to obtain an intelligent identification model for colonoscope operational dilemmas.
Preferably, the information acquisition module is specifically configured to establish an eye movement measurement index system covering time stamp, eye movement event type, fixation-point coordinates, saccade angle and binocular pupil size, wherein the eye movement event types include fixations and saccades; and to acquire, based on this index system and through an eye-tracking system, eye movement data of a trainee performing colonoscope operation on the endoscope simulation training system.
Preferably, the information acquisition module further comprises:
a preprocessing module, configured to preprocess the acquired eye movement data to obtain the trainee's eye movement information during simulated colonoscope operation; wherein:
the preprocessing module is specifically configured to apply wavelet-transform-based low-pass filtering to the acquired eye movement data, retaining signal components below 100 Hz; to remove, according to the fixation-point coordinates, eye movement data falling outside the effective colonoscope display area; and to automatically segment, according to the colonoscope video, the normal-visual-field and visual-field-loss segments of the trainee's operation.
Preferably, the feature construction module is configured to extract the real eye movement features during colonoscope operation from the preprocessed eye movement information and to construct an eye movement feature space.
Preferably, the eye movement features with significant specificity comprise: eye movement time-frequency features, saccade amplitude features, gaze region distribution features and pupil accommodation features; wherein:
the specific eye movement time-frequency features comprise: the segment lengths of normal visual field and visual field loss; and the saccade duration, number of saccades and number of fixations within a segment;
the specific saccade amplitude features comprise: the average saccade amplitude within a segment, and the cumulative frequency proportion of saccades with amplitude greater than 2.5 degrees;
the specific gaze region distribution features comprise: the cumulative frequency proportion of fixation-shift distances within [0, 75] pixels in a segment;
the specific pupil accommodation features comprise: within a segment, the binocular pupil accommodation amplitude, the cumulative frequency proportion of binocular pupil accommodation amplitudes within [55%, 100%], and the interval with the largest cumulative frequency proportion of binocular pupil accommodation amplitude; and the binocular pupil accommodation amplitudes during the saccades and fixations of the segment, respectively.
Preferably, the feature enhancement module is specifically configured to splice the real eye movement features with significant specificity in the constructed feature space into a multi-dimensional feature matrix; to construct an eye movement feature generator based on this matrix and fit random noise to preliminarily generate eye movement features; to construct an eye movement feature discriminator that compares the preliminarily generated features against the real features in the matrix, computes their loss function and feeds it back to the generator; and to optimize the generator weights according to the feedback, fit and discriminate the generated features again, iterating until eye movement features satisfying the discriminator are generated for each of the two conditions of normal visual field and visual field loss.
Preferably, the feature enhancement module further comprises:
an enhanced-feature visual verification module, configured to visually verify the validity and diversity of the generated features; wherein:
the enhanced-feature visual verification module is specifically configured to construct a t-distributed stochastic neighbor embedding (t-SNE) algorithm, map the real eye movement features and the generated (enhanced) eye movement features into a two-dimensional plane, and visually verify their distribution and overlap.
Preferably, the dilemma identification module is specifically configured to combine the real eye movement features under the two conditions of normal visual field and visual field loss with the enhanced eye movement features satisfying the discriminator to form the training set of the intelligent identification model for colonoscope operational dilemmas; to construct a long short-term memory (LSTM) neural network, input the training set into it and learn the mapping from eye movement features to operational dilemmas; and to optimize the network parameters, measure the difference between the output and the ground truth, and iterate until an identification result satisfying the loss-function requirement is output;
wherein the parameters include: number of training epochs, batch size, number of LSTM layers and number of hidden-layer nodes.
Preferably, the dilemma identification module further comprises:
a dilemma identification validity verification module, configured to verify the validity of intelligent identification of visual-field-loss segments; wherein:
the dilemma identification validity verification module is specifically configured to construct a receiver operating characteristic (ROC) curve and determine the evaluation accuracy by computing the area under the ROC curve.
Preferably, the dilemma identification validity verification module is specifically configured to construct a confusion matrix of the intelligent identification model for colonoscope operational dilemmas; to calculate the model's sensitivity and specificity from the confusion matrix; to plot the ROC curve with sensitivity as the vertical coordinate and the false positive rate as the horizontal coordinate, where false positive rate = 1 − specificity; and to determine the evaluation accuracy from the area under the ROC curve.
The technical solutions provided by the embodiments of the invention have at least the following beneficial effects:
the embodiment of the invention provides a colonoscope operation dilemma intelligent identification system based on eye movement characteristics aiming at the problem of 'attach importance to result evaluation and neglect process guidance' commonly existing in the current endoscope simulation training system.
The small-sample eye movement data in the embodiment of the invention is derived from the eye movements of trainees performing colonoscope operation on an endoscope simulation training system, recorded by an eye-tracking system. On this basis, after data preprocessing, the trainees' eye movement patterns under the two conditions of normal visual field and visual field loss are mined on the basis of anthropometric research, an eye movement feature space with significant specificity is constructed, eye movement features satisfying the sample-size and diversity requirements of the intelligent identification system are generated by expansion, and a validity verification method for the generated data is developed, providing a high-level feature analysis method together with high-quality, high-efficiency training-sample support for the identification system. Then, with the real small-sample features and the enhanced features as the training set, an intelligent identification model for colonoscope operational dilemmas based on a long short-term memory neural network is established and its identification validity verified, achieving accurate identification of colonoscope operational dilemmas under small-sample acquisition conditions.
The intelligent identification system for colonoscope operational dilemmas based on eye movement features can effectively improve the accuracy of eye movement feature description over the whole colonoscope operation process, expand the sample size and diversity of eye movement features, and automatically identify the operational dilemmas a trainee encounters during colonoscope training. It provides a high-quality eye-movement-feature foundation for intelligent colonoscopist training systems, and offers necessary technical support for building objective, quantitative and standardized colonoscope operation skill assessment and guidance systems in related fields, effectively assisting the intelligent and standardized development of endoscope training in China.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic structural diagram of an intelligent identification system for colonoscope operational dilemmas based on eye movement features according to an embodiment of the present invention;
FIG. 2 is a schematic overall flow chart of an intelligent identification system for colonoscope operational dilemmas based on eye movement features according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the application scenario and visual-field segment division of an intelligent identification system for colonoscope operational dilemmas based on eye movement features according to an embodiment of the present invention;
FIGS. 4A-4B are schematic diagrams of eye movement trajectories under normal visual field and visual field loss provided by embodiments of the present invention, and FIG. 4C is a schematic diagram of pupil accommodation size;
FIG. 5 is a structural diagram of the deep convolutional generative adversarial network and the long short-term memory neural network according to an embodiment of the present invention;
FIGS. 6A-6C are schematic diagrams of eye movement feature enhancement visualization verification results based on the t-SNE algorithm provided by embodiments of the present invention;
FIG. 7 is a receiver operating characteristic (ROC) curve for intelligent identification of colonoscope operational dilemmas based on eye movement features according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
An embodiment of the present invention provides an intelligent identification system for colonoscope operational dilemmas based on eye movement features. As shown in FIG. 1, the system comprises:
an information acquisition module, configured to acquire eye movement information of a subject observing the effective area of a display screen during simulated colonoscope operation;
a feature construction module, configured to construct, from the acquired eye movement information, an eye movement feature space with significant specificity for the two conditions of normal visual field and visual field loss;
a feature enhancement module, configured to construct a deep convolutional generative adversarial network and to generate and expand, from the real eye movement features under the two conditions of normal visual field and visual field loss respectively, enhanced eye movement features;
a dilemma identification module, configured to construct a long short-term memory (LSTM) neural network, form a training set from the real and enhanced eye movement features under the two conditions of normal visual field and visual field loss, and train the network to obtain an intelligent identification model for colonoscope operational dilemmas.
Further, the information acquisition module is specifically configured to establish an eye movement measurement index system covering time stamp, eye movement event type, fixation-point coordinates, saccade angle and binocular pupil size, wherein the eye movement event types include fixations and saccades; and to acquire, based on this index system and through an eye-tracking system, eye movement data of a trainee performing colonoscope operation on the endoscope simulation training system.
Further, the information acquisition module further comprises:
a preprocessing module, configured to preprocess the acquired eye movement data to obtain the trainee's eye movement information during simulated colonoscope operation; wherein:
the preprocessing module is specifically configured to apply wavelet-transform-based low-pass filtering to the acquired eye movement data, retaining signal components below 100 Hz; to remove, according to the fixation-point coordinates, eye movement data falling outside the effective colonoscope display area; and to automatically segment, according to the colonoscope video, the normal-visual-field and visual-field-loss segments of the trainee's operation.
Further, the feature construction module is configured to extract the real eye movement features during colonoscope operation from the preprocessed eye movement information and to construct an eye movement feature space.
Further, the eye movement features with significant specificity include: eye movement time-frequency features, saccade amplitude features, gaze region distribution features and pupil accommodation features; wherein:
the specific eye movement time-frequency features comprise: the segment lengths of normal visual field and visual field loss; and the saccade duration, number of saccades and number of fixations within a segment;
the specific saccade amplitude features comprise: the average saccade amplitude within a segment, and the cumulative frequency proportion of saccades with amplitude greater than 2.5 degrees;
the specific gaze region distribution features comprise: the cumulative frequency proportion of fixation-shift distances within [0, 75] pixels in a segment;
the specific pupil accommodation features comprise: within a segment, the binocular pupil accommodation amplitude, the cumulative frequency proportion of binocular pupil accommodation amplitudes within [55%, 100%], and the interval with the largest cumulative frequency proportion of binocular pupil accommodation amplitude; and the binocular pupil accommodation amplitudes during the saccades and fixations of the segment, respectively.
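As a concrete illustration of the saccade-amplitude features listed above, the sketch below computes a segment's average saccade amplitude and the cumulative frequency proportion of saccades exceeding 2.5 degrees (the threshold stated in the text); the function name and return convention are illustrative, not from the patent:

```python
def saccade_amplitude_features(amplitudes_deg):
    """Per-segment saccade amplitude features: mean amplitude, and the
    cumulative frequency proportion of saccades larger than 2.5 degrees."""
    if not amplitudes_deg:
        return 0.0, 0.0
    mean_amp = sum(amplitudes_deg) / len(amplitudes_deg)
    frac_large = sum(a > 2.5 for a in amplitudes_deg) / len(amplitudes_deg)
    return mean_amp, frac_large

mean_amp, frac_large = saccade_amplitude_features([1.0, 2.0, 3.0, 6.0])
print(mean_amp, frac_large)  # → 3.0 0.5
```

The same cumulative-proportion pattern applies to the gaze-region feature (fixation shifts within [0, 75] pixels) and to the pupil-accommodation intervals.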
Further, the feature enhancement module is specifically configured to splice the real eye movement features with significant specificity in the constructed feature space into a multi-dimensional feature matrix; to construct an eye movement feature generator based on this matrix and fit random noise to preliminarily generate eye movement features; to construct an eye movement feature discriminator that compares the preliminarily generated features against the real features in the matrix, computes their loss function and feeds it back to the generator; and to optimize the generator weights according to the feedback, fit and discriminate the generated features again, iterating until eye movement features satisfying the discriminator are generated for each of the two conditions of normal visual field and visual field loss.
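The generator/discriminator loop described above can be sketched in miniature. This is a hedged, dense-layer illustration in NumPy (the patent's networks are deep and convolutional, and the weight-update step is omitted); all sizes and weight shapes are toy assumptions chosen only to show how the two losses are computed and fed back:

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, NOISE_DIM, HID = 8, 4, 16  # toy sizes, not the patent's architecture

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: random noise -> synthetic eye-movement feature vector
G_W1 = rng.normal(0, 0.1, (NOISE_DIM, HID))
G_W2 = rng.normal(0, 0.1, (HID, FEAT_DIM))
# Discriminator: feature vector -> probability that it is a real feature
D_W1 = rng.normal(0, 0.1, (FEAT_DIM, HID))
D_W2 = rng.normal(0, 0.1, (HID, 1))

def generate(z):
    return np.tanh(z @ G_W1) @ G_W2

def discriminate(x):
    return sigmoid(np.tanh(x @ D_W1) @ D_W2)

real = rng.normal(0.0, 1.0, (32, FEAT_DIM))            # stand-in for real eye-movement features
fake = generate(rng.normal(0.0, 1.0, (32, NOISE_DIM)))  # preliminarily generated features
# Binary cross-entropy losses that would be fed back to update each network
d_loss = -np.mean(np.log(discriminate(real) + 1e-9) + np.log(1.0 - discriminate(fake) + 1e-9))
g_loss = -np.mean(np.log(discriminate(fake) + 1e-9))
print(fake.shape)  # → (32, 8)
```

In the patent's scheme this adversarial iteration is run separately for the normal-visual-field and visual-field-loss feature sets until the discriminator is satisfied.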
Further, the feature enhancement module further comprises:
an enhanced-feature visual verification module, configured to visually verify the validity and diversity of the generated features; wherein:
the enhanced-feature visual verification module is specifically configured to construct a t-distributed stochastic neighbor embedding (t-SNE) algorithm, map the real eye movement features and the generated (enhanced) eye movement features into a two-dimensional plane, and visually verify their distribution and overlap.
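The map-to-2D-and-inspect idea can be demonstrated without a t-SNE library. The sketch below uses a plain PCA projection via SVD as a dependency-free stand-in for t-SNE (an explicit substitution, not the patent's method); the real and enhanced feature matrices are synthetic placeholders:

```python
import numpy as np

def project_2d(features):
    """Project feature vectors to 2-D for visual inspection of distribution
    and overlap. (The patent uses t-SNE; PCA shown here as a stand-in.)"""
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # top principal directions
    return X @ Vt[:2].T

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, (50, 8))       # placeholder real eye-movement features
enhanced = rng.normal(0.1, 1.0, (50, 8))   # placeholder generated (enhanced) features
pts = project_2d(np.vstack([real, enhanced]))
print(pts.shape)  # → (100, 2)
```

Plotting the first 50 rows against the last 50 would then show how well the enhanced features cover, and mix with, the real ones.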
Further, the dilemma identification module is specifically configured to combine the real eye movement features under the two conditions of normal visual field and visual field loss with the enhanced eye movement features satisfying the discriminator to form the training set of the intelligent identification model for colonoscope operational dilemmas; to construct a long short-term memory (LSTM) neural network, input the training set into it and learn the mapping from eye movement features to operational dilemmas; and to optimize the network parameters, measure the difference between the output and the ground truth, and iterate until an identification result satisfying the loss-function requirement is output;
wherein the parameters include: number of training epochs, batch size, number of LSTM layers and number of hidden-layer nodes.
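To make the LSTM building block concrete, here is a single LSTM time step in NumPy, unrolled over a short eye-movement feature sequence. Dimensions, initialization and the absence of a classifier head are illustrative assumptions; the final hidden state would feed a normal-vs-lost-field output layer:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step with input (i), forget (f), output (o) and candidate (g)
    gates stacked in z. Shapes: x (d_in,), h/c (d_h,), W (4*d_h, d_in),
    U (4*d_h, d_h), b (4*d_h,)."""
    d_h = h.shape[0]
    z = W @ x + U @ h + b
    i, f, o, g = z[:d_h], z[d_h:2*d_h], z[2*d_h:3*d_h], z[3*d_h:]
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    h_new = sig(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(2)
d_in, d_h, T = 8, 16, 5  # feature dim, hidden-layer nodes, sequence length (toy values)
W = rng.normal(0, 0.1, (4 * d_h, d_in))
U = rng.normal(0, 0.1, (4 * d_h, d_h))
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(T):  # unroll over an eye-movement feature sequence
    h, c = lstm_step(rng.normal(0.0, 1.0, d_in), h, c, W, U, b)
print(h.shape)  # → (16,)
```

Epoch count, batch size, layer count and hidden size — the parameters listed above — are exactly what a training loop around this cell would tune.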
Further, the dilemma identification module further comprises:
a dilemma identification validity verification module, configured to verify the validity of intelligent identification of visual-field-loss segments; wherein:
the dilemma identification validity verification module is specifically configured to construct a receiver operating characteristic (ROC) curve and determine the evaluation accuracy by computing the area under the ROC curve.
Further, the dilemma identification validity verification module is specifically configured to construct a confusion matrix of the intelligent identification model for colonoscope operational dilemmas; to calculate the model's sensitivity and specificity from the confusion matrix; to plot the ROC curve with sensitivity as the vertical coordinate and the false positive rate as the horizontal coordinate, where false positive rate = 1 − specificity; and to determine the evaluation accuracy from the area under the ROC curve.
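The confusion-matrix step above reduces to simple counting. The sketch below computes sensitivity and specificity from predicted labels (lost visual field coded as the positive class 1); function names are illustrative:

```python
def confusion(y_true, y_pred):
    """Counts for the 2x2 confusion matrix (positive class = lost visual field)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def sensitivity_specificity(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    # ROC plots sensitivity against the false positive rate = 1 - specificity
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0]   # lost-field vs normal-field segments
y_pred = [1, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # → 0.6666666666666666 0.6666666666666666
```

Sweeping the model's decision threshold and plotting (1 − specificity, sensitivity) pairs traces the ROC curve whose area gives the evaluation accuracy.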
In the intelligent identification system for colonoscope operational dilemmas based on eye movement features provided by the embodiment of the invention, the information acquisition module acquires the eye movement information of a trainee observing the effective area of the display screen during simulated colonoscope operation; the feature construction module constructs, from the acquired information, an eye movement feature space with significant specificity for the two conditions of normal visual field and visual field loss; the feature enhancement module constructs a deep convolutional generative adversarial network and generates and expands enhanced features from the real small-sample eye movement features under the two conditions; and the dilemma identification module constructs a long short-term memory neural network, forms a training set from the real and enhanced eye movement features under the two conditions, and trains the network to obtain the intelligent identification model for colonoscope operational dilemmas.
Thus, on the basis of the eye movement data collected during small-sample colonoscope operation, the quantity and diversity of eye movement feature samples are enhanced using the deep convolutional generative adversarial network; the training set of the identification model is built by randomly shuffling the real and enhanced eye movement features together; and an intelligent identification model for colonoscope operational dilemmas based on a long short-term memory neural network is established, realizing intelligent and accurate identification of the trainee's operational dilemmas. This addresses evaluation over the whole colonoscope operation process while alleviating the lack of standardized, unified methods for colonoscope training and evaluation.
Fig. 2 is a flowchart of an implementation of a colonoscope operation predicament intelligent identification system based on eye movement features according to an embodiment of the present invention, including:
S101, collecting eye movement data of a trainer;
in this embodiment, S101 specifically includes: establishing a measurement index system covering time stamp, eye movement event type (fixation/saccade), fixation point coordinates, saccade angle and binocular pupil size; based on this eye movement measurement index system, eye movement data of a trainer performing colonoscope operation on the endoscope simulation training system are acquired through the eye tracking system, as shown in fig. 3, providing the necessary data basis for refined, high-level eye movement feature description.
S102, preprocessing the collected eye movement data;
it should be noted that, in this embodiment, the foregoing S102 specifically includes the following processes:
1. performing wavelet-transform-based low-pass filtering on the eye movement data acquired by the acquisition module, retaining signal components below 100 Hz;
2. removing, according to the fixation point coordinates, eye movement data falling outside the effective display area of the colonoscope;
3. automatically segmenting, according to the colonoscope video information, the normal-visual-field and visual-field-loss segments in the trainer's operation process.
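As an illustration of preprocessing steps 1 and 2, the sketch below uses a one-level Haar wavelet decomposition (zeroing the detail coefficients) as a stand-in for the wavelet-based low-pass filter, and a simple rectangle test for the effective display area. The patent does not specify the wavelet family, decomposition depth, sampling rate, or screen geometry; the 1920x1080 area and the Haar choice are assumptions for this example.

```python
import math

def haar_lowpass(signal):
    """One-level Haar wavelet decomposition that keeps only the
    approximation (low-frequency) coefficients and reconstructs.
    A stand-in for the patent's wavelet low-pass step; the wavelet
    family, levels and 100 Hz cutoff are not specified there."""
    if len(signal) % 2:                  # pad to even length
        signal = list(signal) + [signal[-1]]
    approx = [(a + b) / math.sqrt(2)
              for a, b in zip(signal[::2], signal[1::2])]
    out = []
    for coeff in approx:                 # reconstruct with details zeroed
        v = coeff / math.sqrt(2)
        out.extend([v, v])
    return out

def inside_display(points, x_range=(0, 1920), y_range=(0, 1080)):
    """Step 2: drop gaze samples outside the colonoscope display area.
    The 1920x1080 effective area is a hypothetical example value."""
    return [(x, y) for x, y in points
            if x_range[0] <= x <= x_range[1]
            and y_range[0] <= y <= y_range[1]]
```

The Haar low-pass reduces each sample pair to its average, which halves the signal bandwidth per decomposition level; matching the 100 Hz cutoff exactly would require choosing levels against the (unstated) tracker sampling rate.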
S103, calculating eye movement features during colonoscope simulation training from the preprocessed eye movement data, performing hypothesis tests on the two data groups (normal visual field and visual field loss), screening out eye movement features with significant specificity, and constructing the eye movement feature space;
it should be noted that, in this embodiment, the screening of the eye movement characteristics with significant specificity in S103 includes the following specific steps:
For the eye movements during colonoscope operation, as shown in figs. 4A-4C, a detailed analysis is performed on four aspects: eye movement time-frequency features, saccade amplitude features, fixation region distribution features, and pupil accommodation features. Independent-sample t-tests and analysis of variance (ANOVA) with a 95% confidence interval are applied to the differences between the normal-visual-field group and the visual-field-loss group. Through small-sample analysis (normal-visual-field group: 77 eye movement segments; visual-field-loss group: 51 eye movement segments), the eye movement features significant for operation predicaments are obtained and used to construct the eye movement feature space, providing the necessary anthropometric basis for the eye-movement-based colonoscope operation predicament intelligent identification system. The statistical results show that the t-tests are consistent with the ANOVA results, and the eye-movement-feature measurement analysis shows that:
1. Eye movement time-frequency features with inter-group specificity, including: segment duration under normal/lost visual field, and saccade duration, number of saccades, and number of fixations within a segment. Compared with the normal-visual-field condition, the duration of visual-field-loss segments is significantly shorter (12.53 s ± 1.45 s vs. 22.97 s ± 2.01 s, p < 0.001); the saccade duration within a segment is significantly reduced (5.49 s ± 1.00 s vs. 10.78 s ± 1.44 s, p = 0.003), as are the number of saccades (52.06 ± 7.91 vs. 101.19 ± 9.36, p < 0.001) and the number of fixations (23.69 ± 2.77 vs. 40.94 ± 3.89, p < 0.001).
2. Saccade amplitude features with inter-group specificity, including: the mean saccade amplitude within a segment and the cumulative frequency percentage of saccades greater than 2.5 degrees. Compared with the normal visual field, the mean saccade amplitude is significantly greater when the visual field is lost (2.48° ± 0.19° vs. 1.45° ± 0.08°, p < 0.001); the cumulative frequency percentages of the two groups reach their maximum significant difference when the saccade amplitude exceeds 2.5 degrees (46.60% ± 2.96% vs. 25.12% ± 1.83%, p < 0.001).
3. Fixation region distribution features with inter-group specificity, including: the cumulative frequency percentage of gaze transfer distances within [0, 75] pixels in a segment. Compared with the normal condition, this percentage is significantly smaller when the visual field is lost (63.60% ± 3.02% vs. 82.53% ± 1.68%, p < 0.001).
4. Pupil accommodation features with inter-group specificity, including: the accommodation amplitude of the binocular pupils within a segment, the cumulative frequency percentage of binocular pupil accommodation amplitudes within [55%, 100%], the interval with the maximum cumulative frequency percentage of binocular pupil accommodation amplitude, and the accommodation amplitudes of the binocular pupils during saccades and fixations, respectively, within a segment.
Compared with the normal-visual-field condition, the accommodation amplitude of both pupils when the visual field is lost is significantly smaller during fixations (left eye: 47.61% ± 1.58% vs. 65.16% ± 1.39%, p < 0.001; right eye: 47.28% ± 1.47% vs. 63.31% ± 1.60%, p < 0.001), during saccades (left eye: 46.22% ± 1.52% vs. 63.50% ± 1.49%, p < 0.001; right eye: 46.80% ± 1.41% vs. 61.75% ± 1.63%, p < 0.001), and over whole segments (left eye: 46.96% ± 1.54% vs. 64.38% ± 1.43%, p < 0.001; right eye: 46.99% ± 1.45% vs. 62.52% ± 1.60%, p < 0.001), with the left and right eyes accommodating essentially consistently.
The cumulative frequency percentage of binocular pupil accommodation amplitudes within [55%, 100%] is significantly lower when the visual field is lost than under the normal visual field (left eye: 25.88% ± 4.03% vs. 72.07% ± 3.21%, p < 0.001; right eye: 26.53% ± 3.80% vs. 68.01% ± 3.45%, p < 0.001).
The interval with the maximum cumulative frequency of binocular pupil accommodation amplitude is about [45%, 50%] (interval code 10) when the visual field is lost, versus about [65%, 70%] (interval code 13) under the normal visual field; that is, the high-frequency peak interval of pupil accommodation is significantly smaller when the visual field is lost (left eye: 10.05 ± 0.33 vs. 13.71 ± 0.31, p < 0.001; right eye: 9.93 ± 0.33 vs. 13.43 ± 0.36, p < 0.001).
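The specific features above reduce to straightforward computations; the sketch below shows the mean saccade amplitude, the cumulative frequency percentage above 2.5 degrees, and the Welch's t statistic underlying the between-group tests. The sample amplitudes are invented for illustration, and a full p-value would additionally require a t-distribution CDF (e.g. from scipy.stats), which is omitted here.

```python
import math
from statistics import mean, variance

def saccade_amplitude_features(amplitudes, threshold=2.5):
    """Mean saccade amplitude in a segment and the cumulative
    frequency (%) of saccades greater than the threshold."""
    above = sum(1 for a in amplitudes if a > threshold)
    return mean(amplitudes), 100.0 * above / len(amplitudes)

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples with unequal
    variances, as in the normal vs. lost visual-field comparison."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)
    return (mean(group_a) - mean(group_b)) / math.sqrt(va / na + vb / nb)

# hypothetical per-segment mean saccade amplitudes (degrees)
lost   = [2.6, 2.3, 2.9, 2.2, 2.5, 2.7]
normal = [1.4, 1.5, 1.3, 1.6, 1.5, 1.4]
m, pct = saccade_amplitude_features(lost)
t = welch_t(lost, normal)   # large positive t: lost-view amplitudes larger
```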
S104, establishing an eye movement feature enhancement model and algorithm based on deep convolutional generative adversarial networks (DCGANs);
it should be noted that, in this embodiment, the step S104 specifically includes the following steps:
1. splicing the real specific eye movement features to form a multi-dimensional feature matrix;
2. constructing an eye movement feature generator according to the multi-dimensional feature matrix, fitting random noise to preliminarily generate eye movement features;
3. constructing an eye movement feature discriminator, discriminating and comparing the preliminarily generated eye movement features with the real eye movement features, calculating their loss function, and feeding it back to the generator;
4. optimizing the generator parameters, fitting and discriminating the preliminary results again, and iterating until eye movement features satisfying the discriminator are generated. The features are thus generalized while remaining consistent within each group, effectively expanding the small-sample eye movement feature data volume and providing the necessary training data basis for the subsequent colonoscope operation predicament intelligent identification model.
It should be noted that, in this embodiment, as shown in the upper half of fig. 5 (where Linear is a linear transformation, Reshape is a matrix reshaping, Deconv is deconvolution, ReLU (Rectified Linear Unit) is a linear rectification activation function, and Tanh is the hyperbolic tangent activation function), eye movement features are generated with a deep convolutional generative adversarial network model. The model consists of a generator and a discriminator: the generator converts an input noise vector into eye movement features through fractionally-strided (transposed) convolutions, while the discriminator outputs, through strided convolutions, the confidence that a feature is a real eye movement feature. Through the adversarial game between generator and discriminator, the distribution of the generated eye movement data approaches the distribution of the real eye movement data.
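A faithful DCGAN needs a deep-learning framework, but the generator-discriminator game described above can be illustrated with a minimal 1-D GAN in plain Python: a linear generator, a logistic discriminator, and hand-derived gradient steps. The target distribution (a scalar "feature" around 4.0), the learning rate, and the iteration count are all invented for this sketch.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical 1-D "eye movement feature": real samples cluster around 4.0.
def real_sample():
    return random.gauss(4.0, 0.5)

a, b = 1.0, 0.0   # generator G(z) = a*z + b, starts far from the real data
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr = 0.02

for _ in range(3000):
    x_real = real_sample()
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    # Discriminator step: gradient ascent on log D(x_real) + log(1 - D(x_fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1.0 - d_real) - d_fake)
    # Generator step: gradient ascent on log D(G(z)) (non-saturating loss)
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_x = (1.0 - d_fake) * w
    a += lr * grad_x * z
    b += lr * grad_x

gen_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000.0
```

After training, the generated samples cluster near the real-data mean, mirroring how the DCGAN pulls the generated eye movement feature distribution toward the real one.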
S105, constructing an enhanced eye movement characteristic visualization verification model and algorithm based on t-SNE;
in this embodiment, S105 specifically includes:
The t-SNE algorithm maps the probability distribution of the high-dimensional eye movement features to a low-dimensional space, so that the similarity and difference between real and generated eye movement features can be observed and the usability of the generated features visually assessed. The experimental results (figs. 6A to 6C) show that the normal-visual-field and visual-field-loss groups of real eye movement features partly overlap yet separate into distinguishable clusters, indicating that the selected specific features have both similarity and inter-group difference; the distribution of the enhanced eye movement features is approximately similar to that of the real features, so the enhanced data can assist the real data in expanding the training sample set. The t-SNE-based validity verification algorithm for eye movement feature generation is as follows.
Input: a high-dimensional eye movement feature data set jointly composed of the real eye movement features X0 of the normal-visual-field group, the real eye movement features X1 of the visual-field-loss group, the enhanced eye movement features X2 of the normal-visual-field group, and the enhanced eye movement features X3 of the visual-field-loss group.
Output: the dimension-reduced real eye movement features Y0 of the normal-visual-field group and Y1 of the visual-field-loss group, and the dimension-reduced enhanced eye movement features Y2 of the normal-visual-field group and Y3 of the visual-field-loss group.
Step 3: under a given perplexity, convert the Euclidean distances between high-dimensional eye movement features x_i and x_j into conditional probabilities p(j|i) representing their similarity:
p(j|i) = exp(-||x_i - x_j||^2 / 2σ_i^2) / Σ_{k≠i} exp(-||x_i - x_k||^2 / 2σ_i^2);
Step 4: perform joint probability assignment, p_ij = (p(j|i) + p(i|j)) / 2N;
Step 5: randomly initialize Y based on a normal distribution;
calculate the joint probability distribution in the low-dimensional space with a Student-t kernel:
q_ij = (1 + ||y_i - y_j||^2)^(-1) / Σ_{k≠l} (1 + ||y_k - y_l||^2)^(-1);
update iteratively:
Y^(t) = Y^(t-1) + η ∂C/∂Y + α(t) (Y^(t-1) - Y^(t-2)),
where Y^(t) denotes the solution at iteration t, η is the learning rate, α(t) the momentum, and C = KL(P||Q) is the Kullback-Leibler divergence (loss function).
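Steps 3 and 4 of the algorithm (the high-dimensional conditional and joint probabilities) can be sketched as follows; for brevity, a fixed Gaussian bandwidth replaces the usual per-point binary search that matches the given perplexity, which is an assumption of this example.

```python
import math

def joint_probabilities(X, sigma=1.0):
    """High-dimensional similarities p_ij for t-SNE (steps 3-4):
    Gaussian conditional probabilities p(j|i), then symmetrised joint
    probabilities. A fixed bandwidth sigma replaces the per-point
    search that matches a target perplexity."""
    n = len(X)
    sq = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        kernel = [0.0 if j == i
                  else math.exp(-sq(X[i], X[j]) / (2 * sigma ** 2))
                  for j in range(n)]
        norm = sum(kernel)
        for j in range(n):
            P[i][j] = kernel[j] / norm          # conditional p(j|i)
    # symmetrise: p_ij = (p(j|i) + p(i|j)) / (2n), so the p_ij sum to 1
    return [[(P[i][j] + P[j][i]) / (2 * n) for j in range(n)]
            for i in range(n)]
```

Nearby feature vectors receive large p_ij and distant ones near-zero values, which is exactly the structure the low-dimensional embedding Y is optimized to preserve.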
S106, constructing a colonoscope operation predicament intelligent identification model and algorithm based on a long short-term memory (LSTM) neural network;
It should be noted that, in this embodiment, step S106 is shown in the lower half of fig. 5 (where S (state) is the input feature state) and includes the following steps:
1. randomly mixing the real eye movement features of the normal-visual-field and visual-field-loss groups with the enhanced eye movement features to form the training set of the colonoscope operation predicament intelligent identification model;
2. constructing an LSTM-based colonoscope operation predicament intelligent identification method, inputting the training set into the network, and learning the mapping from eye movement features to predicament identification;
3. optimizing the identification network parameters, measuring the difference between the output and the true result, learning and optimizing again, and iterating until an identification result satisfying the loss function is output, thereby realizing accurate, intelligent identification of colonoscope operation predicaments and remedying the lack of intelligent, accurate predicament identification methods.
The eye-movement-feature-based colonoscope operation predicament intelligent identification algorithm is as follows.
Input:
DCGAN parameters: real eye movement sample set; real eye movement sample set data dimensionality; noise data dimensionality; number of hidden-layer nodes; number of iterations; learning rate; random-number noise; batch size; size of the generated eye movement sample set.
LSTM parameters: input eye movement data dimensionality; time step; number of iterations; batch size; number of hidden-layer nodes; number of hidden layers; current cell output; current cell input; current cell state; learning rate; gradient penalty coefficient; random-number noise.
Output: the classification of the normal-visual-field and visual-field-loss conditions from the eye movement features.
Step 2: defining a loss function of the generator G and the discriminator D;
optimizing the weight of the discriminator D by using an Adam optimizer;
optimizing the weight of the generator G using an Adam optimizer;
End for;
Step 5: input the real eye movement sample sets of the normal-visual-field and visual-field-loss groups and the generated eye movement sample set into the LSTM, and initialize the LSTM memory state;
Step 6: iterate according to the configured number of iterations nEpoches:
feed the previous cell output h(t-1) and the current cell input x(t) into the forget gate, determining which information from the previous cell is discarded;
feed h(t-1) and x(t) into the tanh layer, creating a new candidate vector C~(t) as the new information to be added to the current cell;
feed h(t-1) and x(t) into the input gate, determining which new information is stored in the current cell;
calculate the current cell state C(t) based on the forget gate, the candidate layer, the input gate, and the previous cell state;
feed h(t-1) and x(t) into the output gate, determining which states of the current cell need to be output;
pass the current cell state C(t) through a tanh layer and multiply it with the output-gate activation to generate the current cell output h(t);
pass the output through a softmax layer to obtain the normal-visual-field / visual-field-loss classification result;
calculate the deviation of the classification result with the cross-entropy loss function and optimize the parameters.
End for;
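The gate sequence in the steps above is the standard LSTM cell update. A minimal single-cell forward step in plain Python, with a scalar hidden state and a made-up weight layout (one weight row per gate over the concatenated previous output and current input), looks like this:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step, following the gate order in the algorithm:
    forget gate -> candidate (tanh) -> input gate -> new cell state ->
    output gate -> new hidden output. W/b hold one row per gate over
    the concatenated [h_prev, x_t]; sizes here are toy values."""
    z = h_prev + x_t                            # list concatenation
    dot = lambda row: sum(wi * zi for wi, zi in zip(row, z))
    f = sigmoid(dot(W['f']) + b['f'])           # forget gate
    g = math.tanh(dot(W['g']) + b['g'])         # candidate vector
    i = sigmoid(dot(W['i']) + b['i'])           # input gate
    c_t = f * c_prev + i * g                    # new cell state
    o = sigmoid(dot(W['o']) + b['o'])           # output gate
    h_t = o * math.tanh(c_t)                    # new hidden output
    return h_t, c_t
```

A real identification model would stack such cells over the time steps of an eye movement feature sequence and learn the weights by back-propagation, which a framework (e.g. a deep-learning library's LSTM layer) would normally handle.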
S107, constructing the receiver operating characteristic (ROC) curve and verifying the effectiveness of the predicament intelligent identification model and algorithm through accuracy, precision and sensitivity;
it should be noted that, in this embodiment, the experimental result is shown in fig. 7, and the above step S107 specifically includes the following steps:
1. constructing a confusion matrix according to the predicament identification results output by the LSTM network, and recording the numbers of true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). The confusion matrix of the identification model takes the following form:
wherein TP denotes the number of normal-visual-field segments predicted as normal; FN denotes the number of normal-visual-field segments predicted as visual-field loss; FP denotes the number of visual-field-loss segments predicted as normal; and TN denotes the number of visual-field-loss segments predicted as visual-field loss.
2. Calculating the sensitivity and specificity of the colonoscope operation predicament intelligent identification model, and drawing the ROC curve with sensitivity as the ordinate and the false positive rate (1 - specificity) as the abscissa. The curve graphically reflects the relationship between sensitivity and specificity under different threshold values. Sensitivity (true positive rate, TPR) is the proportion predicted correctly among all results whose true value is the normal-visual-field class:
TPR = TP / (TP + FN);
specificity (true negative rate, TNR) is the proportion predicted correctly among all results whose true value is the visual-field-loss class:
TNR = TN / (TN + FP);
and the false positive rate = 1 - specificity = FP / (FP + TN).
3. Calculating the area under the ROC curve (AUC) to determine the accuracy of the assessment: the larger the AUC value, the higher the identification accuracy.
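The confusion-matrix counts, sensitivity/specificity, and AUC described in steps 1-3 can be computed as below; the label convention (1 = normal visual field as the positive class) follows the confusion-matrix description above, and the rank-based AUC formula is an equivalent alternative to integrating a drawn ROC curve.

```python
def confusion_counts(y_true, y_pred):
    """y_true/y_pred: 1 = normal visual field (positive), 0 = visual field loss."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fn, fp, tn

def sensitivity_specificity(tp, fn, fp, tn):
    """TPR = TP/(TP+FN), TNR = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores_pos, scores_neg):
    """Rank-based AUC: the probability that a positive scores above a
    negative (ties counted one half); equal to the area under the ROC
    curve swept over all thresholds."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```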
In summary, the colonoscope operation predicament intelligent identification system based on eye movement features provided by the embodiment of the invention is an intelligent auxiliary system that identifies colonoscope operation predicaments on the basis of small-sample eye movement data acquisition. The small-sample eye movement information is derived from an eye tracking system. After constructing the eye movement measurement index system and preprocessing the data, eye movement feature spaces are constructed for the two conditions of normal visual field and visual field loss, and eye movement features with significant inter-group specificity are obtained through statistical analysis. The small-sample eye movement features are effectively enhanced with a deep convolutional generative adversarial network model to obtain labeled eye movement feature samples meeting the sample-quantity and diversity requirements of the intelligent identification model, and the validity of the enhanced samples is visually verified. A training sample set is then built by randomly mixing real and enhanced eye movement features, and a colonoscope operation predicament intelligent identification model based on the LSTM neural network is constructed, realizing intelligent, accurate identification of a trainer's colonoscope operation predicaments, with quantitative verification and evaluation on the test samples.
The system realizes high-level analysis and description of the eye movement features during a trainer's colonoscope operation, and at the same time intelligently identifies the trainer's eye movement patterns during normal operation and when encountering predicaments. It thereby provides the necessary technical support for timely intervention, refined skill guidance, and future virtual-reality-based intelligent operation guidance, and serves the intelligent, standardized construction of colonoscopy physician skill training.
The embodiment of the invention provides an intelligent predicament identification system for the whole process of colonoscope operation, aiming at the problem that 'result evaluation is emphasized and process guidance is ignored' generally existing in the current endoscope simulation training system. In the process of colonoscope simulation training, the specific visual mode of the trainer under different training conditions is discovered through eye movement analysis of the trainer, and the operation dilemma of the trainer is automatically recognized in real time. On one hand, a high-quality eye movement characteristic foundation is provided for the development of an intelligent training system of a colonoscopy physician; on the other hand, the system provides necessary technical support for objectively, quantitatively and standard colonoscope operation skill assessment and guidance system construction in the related field.
The colonoscope operation predicament intelligent identification system based on eye movement features provided by the embodiment of the invention lies at the intersection of human behavior science, medicine and information science. It can be widely applied to endoscope simulation training, clinical diagnosis and treatment, and related fields; by providing quantitative, precise eye movement behavior description and an intelligent identification system, it positively promotes endoscopy skill training and the development of new methods, theories and technologies for precise clinical endoscopic diagnosis and treatment in China.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. An intelligent colonoscope operational dilemma identification system based on eye movement characteristics, comprising:
the information acquisition module: the system is used for acquiring eye movement information of a subject observing an effective area of a display screen in the process of colonoscope simulation operation;
a characteristic construction module: the system is used for constructing an eye movement characteristic space with remarkable specificity aiming at two conditions of normal visual field and lost visual field of the colonoscope according to the collected eye movement information;
a feature enhancement module: used for constructing a deep convolutional generative adversarial network and generating and expanding, from the real eye movement features under the two conditions of normal visual field and visual field loss, the enhanced eye movement features;
a predicament identification module: used for constructing a long short-term memory neural network, forming a training set from the real and enhanced eye movement features under the two conditions of normal visual field and visual field loss, and training the network to obtain the colonoscope operation predicament intelligent identification model.
2. The colonoscope operation predicament intelligent identification system according to claim 1, wherein the information acquisition module is specifically configured to establish an eye movement measurement index system covering time stamp, eye movement event type, fixation point coordinates, saccade angle and binocular pupil size, wherein the eye movement event types include fixations and saccades; and to acquire, based on the eye movement measurement index system, eye movement data of a trainer performing colonoscope operation on the endoscope simulation training system through the eye tracking system.
3. The colonoscope operation predicament intelligent identification system based on eye movement features according to claim 2, wherein the information acquisition module further comprises:
a preprocessing module, used for preprocessing the acquired eye movement data to obtain the eye movement information of the trainer during simulated colonoscope operation; wherein
the preprocessing module is specifically configured to perform wavelet-transform-based low-pass filtering on the acquired eye movement data, retaining signal components below 100 Hz; to remove, according to the fixation point coordinates, eye movement data falling outside the effective display area of the colonoscope; and to automatically segment, according to the colonoscope video information, the normal-visual-field and visual-field-loss segments in the trainer's operation process.
4. The system according to claim 3, wherein the feature construction module is configured to extract real eye movement features during the colonoscope operation according to the pre-processed eye movement information to construct the eye movement feature space.
5. The colonoscope operation predicament intelligent identification system based on eye movement features according to claim 1, wherein the eye movement features with significant specificity comprise: eye movement time-frequency features, saccade amplitude features, fixation region distribution features and pupil accommodation features; wherein
the specific eye movement time-frequency features include: segment duration under normal and lost visual field; saccade duration, number of saccades and number of fixations within a segment;
the specific saccade amplitude features include: the mean saccade amplitude within a segment and the cumulative frequency percentage of saccade amplitudes greater than 2.5 degrees;
the specific fixation region distribution features include: the cumulative frequency percentage of gaze transfer distances within [0, 75] pixels in a segment;
the specific pupil accommodation features include: the accommodation amplitude of the binocular pupils within a segment, the cumulative frequency percentage of binocular pupil accommodation amplitudes within [55%, 100%], the interval with the maximum cumulative frequency percentage of binocular pupil accommodation amplitude, and the accommodation amplitudes of the binocular pupils during saccades and fixations, respectively, within a segment.
6. The system according to claim 1, wherein the feature enhancement module is specifically configured to: concatenate the real eye movement features with significant specificity in the constructed eye movement feature space into a multi-dimensional feature matrix; construct an eye movement feature generator according to the multi-dimensional feature matrix and fit random noise to preliminarily generate eye movement features; construct an eye movement feature discriminator, discriminate and compare the preliminarily generated eye movement features with the real eye movement features in the multi-dimensional feature matrix, calculate their loss function and feed it back to the generator; and optimize the generator weights according to the feedback, fit and discriminate the preliminarily generated features again, and iterate until eye movement features satisfying the discriminator are generated for each of the two conditions of normal visual field and visual field loss.
7. The colonoscope operation predicament intelligent identification system based on eye movement features according to claim 6, wherein the feature enhancement module further comprises:
an enhanced-feature visual verification module, used for visually verifying the validity and diversity of the generated features; wherein
the enhanced-feature visual verification module is specifically configured to construct a t-distributed stochastic neighbor embedding (t-SNE) algorithm, map the real eye movement features and the generated enhanced eye movement features onto a two-dimensional plane, and visually verify their distribution and overlap.
8. The system according to claim 6, wherein the predicament identification module is specifically configured to: randomly mix the real eye movement features under the two conditions of normal visual field and visual field loss with the enhanced eye movement features satisfying the discriminator to form the training set of the colonoscope operation predicament intelligent identification model; construct a long short-term memory neural network, input the training set into it, and learn the mapping from eye movement features to operation predicaments; and optimize the neural network parameters, measure the difference between the output and the true result, and iterate the learning until an identification result satisfying the loss function is output;
wherein the parameters include: the number of training iterations, the batch size, the number of layers of the long short-term memory neural network, and the number of hidden-layer nodes.
9. The colonoscope operation predicament intelligent identification system based on eye movement features according to claim 8, wherein the predicament identification module further comprises:
a predicament identification validity verification module, used for verifying the validity of the intelligent identification of visual-field-loss segments; wherein
the predicament identification validity verification module is specifically configured to construct the receiver operating characteristic (ROC) curve and determine the accuracy of the assessment by calculating the area under the ROC curve.
10. The system according to claim 9, wherein the predicament identification validity verification module is specifically configured to: construct the confusion matrix of the colonoscope operation predicament intelligent identification model; calculate, according to the confusion matrix, the sensitivity and specificity of the model; draw the ROC curve with sensitivity as the ordinate and the false positive rate as the abscissa; and determine the estimated accuracy according to the area under the ROC curve, wherein the false positive rate = 1 - specificity.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110798775.8A CN113269160B (en) | 2021-07-15 | 2021-07-15 | Colonoscope operation predicament intelligent identification system based on eye movement characteristics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113269160A CN113269160A (en) | 2021-08-17 |
CN113269160B true CN113269160B (en) | 2021-10-12 |
Family
ID=77236566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110798775.8A Active CN113269160B (en) | 2021-07-15 | 2021-07-15 | Colonoscope operation predicament intelligent identification system based on eye movement characteristics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113269160B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113762426A (en) * | 2021-11-09 | 2021-12-07 | 紫东信息科技(苏州)有限公司 | Gastroscope visual field lost frame detection method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105573500A (en) * | 2015-12-22 | 2016-05-11 | 王占奎 | Intelligent AR (augmented reality) eyeglass equipment controlled through eye movement |
CN108230426A (en) * | 2018-02-07 | 2018-06-29 | 深圳市唯特视科技有限公司 | A kind of image generating method based on eye gaze data and image data set |
CN111949131A (en) * | 2020-08-17 | 2020-11-17 | 陈涛 | Eye movement interaction method, system and equipment based on eye movement tracking technology |
US10884494B1 (en) * | 2020-01-10 | 2021-01-05 | Microsoft Technology Licensing, Llc | Eye tracking device calibration |
Also Published As
Publication number | Publication date |
---|---|
CN113269160A (en) | 2021-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6577607B2 (en) | Automatic feature analysis, comparison, and anomaly detection | |
Dantcheva et al. | Show me your face and I will tell you your height, weight and body mass index | |
Valstar et al. | Fully automatic recognition of the temporal phases of facial actions | |
CN107403142B (en) | A kind of detection method of micro- expression | |
CN105559802A (en) | Tristimania diagnosis system and method based on attention and emotion information fusion | |
US20200364868A1 (en) | Biomarker determination using optical flows | |
CN110786849B (en) | Electrocardiosignal identity recognition method and system based on multi-view discriminant analysis | |
CN110189324B (en) | Medical image processing method and processing device | |
EP3832663A1 (en) | Diagnostic support system and diagnostic support method | |
Yan et al. | Measuring dynamic micro-expressions via feature extraction methods | |
Zhao et al. | Transferable self-supervised instance learning for sleep recognition | |
Chen et al. | Hybrid facial image feature extraction and recognition for non-invasive chronic fatigue syndrome diagnosis | |
CN113269160B (en) | Colonoscope operation predicament intelligent identification system based on eye movement characteristics | |
Zhang et al. | Efficient 3D dental identification via signed feature histogram and learning keypoint detection | |
CN111317448A (en) | Method and system for analyzing visual space cognition | |
Li et al. | Image understanding from experts' eyes by modeling perceptual skill of diagnostic reasoning processes | |
Xu et al. | Application of artificial intelligence technology in medical imaging | |
CN114220543A (en) | Body and mind pain index evaluation method and system for tumor patient | |
CN113485555A (en) | Medical image reading method, electronic equipment and storage medium | |
CN111275754B (en) | Face acne mark proportion calculation method based on deep learning | |
Mao et al. | A novel method of human identification based on dental impression image | |
JP6201520B2 (en) | Gaze analysis system and method using physiological indices | |
Carrillo-de-Gea et al. | Detection of normality/pathology on chest radiographs using LBP | |
CN112885435B (en) | Method, device and system for determining image target area | |
CN112289444A (en) | Method and device for determining potentially important information of patient |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 20210817 Assignee: Beijing tangrenxiang Technology Co.,Ltd. Assignor: University OF SCIENCE AND TECHNOLOGY BEIJING Contract record no.: X2023980034565 Denomination of invention: An Intelligent Identification System for Colonoscopy Operation Difficulties Based on Eye Movement Features Granted publication date: 20211012 License type: Common License Record date: 20230410 |