CN116486458A - Airway assessment method based on image recognition technology

Airway assessment method based on image recognition technology

Info

Publication number: CN116486458A
Authority: CN (China)
Prior art keywords: face, airway, recognition technology, image recognition, image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202310472339.0A
Other languages: Chinese (zh)
Inventors: 闵苏, 王杰, 张加强, 谢克亮, 熊利泽, 马京, 贾东兴, 卢卫国, 唐盛
Current/Original Assignee: Hangzhou Zhihuiyuan Medical Technology Co., Ltd.
Priority date: 2023-04-27; Filing date: 2023-04-27; Publication date: 2023-07-25
Application filed by Hangzhou Zhihuiyuan Medical Technology Co., Ltd.
Priority to CN202310472339.0A


Classifications

    • G06V 40/161 — Human faces: detection; localisation; normalisation
    • G06V 40/168 — Human faces: feature extraction; face representation
    • G06V 40/172 — Human faces: classification, e.g. identification
    • G06N 3/0464 — Neural networks: convolutional networks [CNN, ConvNet]
    • G06V 10/766 — Image or video recognition using pattern recognition or machine learning: regression, e.g. by projecting features on hyperplanes
    • G06V 10/82 — Image or video recognition using pattern recognition or machine learning: neural networks
    • G06V 20/70 — Scenes: labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V 2201/03 — Indexing scheme: recognition of patterns in medical or anatomical images


Abstract

The invention discloses an airway assessment method based on image recognition technology, belonging to the technical field of clinical medicine. The method comprises: acquiring a frontal face image, a profile face image, and a frontal maximum-mouth-opening image, and performing face determination on each. The invention acquires target images in real time; recognizes the face and computes facial feature points; calculates mouth opening, neck mobility, and jaw distance, and assesses small mandible from facial feature-point proportions; predicts the Mallampati classification and predicts and classifies dental abnormalities; and comprehensively analyzes these data to predict difficult airways. By applying a convolutional neural network to clinical difficult-airway assessment in the form of a mobile application, the method helps and guides physicians to make an objective airway assessment of patients, so that they can prepare accordingly before establishing an artificial airway, improving the success rate of intubation.

Description

Airway assessment method based on image recognition technology
Technical Field
The invention relates to the technical field of clinical medicine, in particular to an airway assessment method based on an image recognition technology.
Background
Difficult endotracheal intubation occurs in approximately 6% of general anesthesia cases. Difficult tracheal intubation is life-threatening: an analysis from the American Society of Anesthesiologists indicates that difficult tracheal intubation accounts for 17% of abnormal respiratory events, 85% of which result in brain injury or death. The greater the degree of difficulty, the greater the risk of brain damage or death. Pre-anesthesia airway assessment is therefore critical.
Preoperative airway assessment can identify difficult airways early and reduce the occurrence of unexpected difficult airways; it is also a precondition for correctly managing a difficult airway and making adequate preparation. Some risk factors for a difficult airway can be observed from the patient's appearance, such as congenital craniomaxillofacial deformity; oral and maxillofacial deformity or defect caused by trauma, infection, or tumor; microstomia due to post-burn scar adhesion; chin-chest adhesion; abnormal anatomy near the airway after surgery or radiotherapy; temporomandibular joint rigidity; obesity; short neck; small mandible; high larynx; macroglossia; and the like. A multi-modal assessment method is recommended for risk assessment of the airway and tracheal intubation: applying a convolutional neural network to clinical difficult-airway assessment in the form of a mobile application helps and guides physicians to make an objective airway assessment of patients, so that they can prepare accordingly before establishing an artificial airway, improving the success rate of intubation.
Disclosure of Invention
The invention aims to solve the above problems in the prior art by providing an airway assessment method based on image recognition technology.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
an airway assessment method based on image recognition technology, comprising: acquiring a frontal face image, a profile face image, and a frontal maximum-mouth-opening image, and performing face determination on each; recognizing and outputting feature points of the frontal and profile faces, and from them calculating neck mobility, mouth opening, jaw distance, and small mandible; identifying the Mallampati classification and dental abnormalities; and comprehensively analyzing the data obtained from neck mobility, mouth opening, jaw distance, and small mandible together with the identified and predicted Mallampati classification and dental abnormalities, then training a logistic regression model against clinical data to judge whether the airway is a difficult airway.
Preferably, the face determination includes indicia such as eyes, nose, mouth, and face contours.
Preferably, neck mobility is calculated through face pose evaluation: the geometric relationship between the face, the camera, and world coordinates is computed to obtain spatial rotation angles, which are used both to measure certain physical parameters directly and to serve indirectly as compensation parameters.
Preferably, the directly measured physical parameters include neck mobility and static head orientation.
Preferably, the calculation of mouth opening, jaw distance, and small mandible may respectively incorporate the Delaunay triangulation algorithm and the Voronoi diagram algorithm commonly used in face-beautification algorithms.
Preferably, the Mallampati classification is identified by combining anesthesia consensus guidelines with the general clinical anesthesia assessment scales used in hospitals, yielding the following four grades of the Mallampati classification: Grade I: the faucial pillars, soft palate, and uvula are visible; Grade II: the faucial pillars and soft palate are visible, but the uvula is obscured by the base of the tongue; Grade III: only the soft palate is visible, indicating difficult intubation; Grade IV: the soft palate is not visible, indicating difficult intubation.
Preferably, the dental abnormality includes the length of the upper incisors and the occlusal relationship between the upper and lower incisors when the mouth is closed in a natural state, wherein the dental abnormality can be preliminarily predicted using an object-detection algorithm.
Preferably, the logistic regression model is built with the TensorFlow framework by interleaving four BatchNormalization layers with four Dropout layers and adding an output layer and an input layer.
Compared with the prior art, the invention provides an airway assessment method based on an image recognition technology, which has the following beneficial effects:
according to the airway assessment method based on the image recognition technology, a target image is acquired in real time, a face and face characteristic points are recognized and calculated, the opening degree is calculated through the face characteristic point proportion, the neck activity degree is calculated, the jaw distance and the small mandible are calculated, then the marvelin classification and the prediction classification of abnormal teeth are predicted, the data are comprehensively analyzed and used for predicting difficult airways, and a convolutional neural network is applied to the assessment of clinical difficult airways in a mobile application program mode to help and guide doctors to make objective airway assessment on patients, so that the doctors make corresponding preparations before establishing artificial airways, and the success rate of intubation is improved.
Drawings
FIG. 1 is a system block diagram of the airway assessment method based on image recognition technology according to the present invention;
FIG. 2 is a functional flowchart of the airway assessment method based on image recognition technology according to the present invention;
FIG. 3 is a Mallampati classification flowchart of the airway assessment method based on image recognition technology according to the present invention;
FIG. 4 is a schematic diagram of the technical route of the airway assessment method based on image recognition technology according to the present invention;
FIG. 5 is a 68-point facial feature diagram of the airway assessment method based on image recognition technology according to the present invention;
FIG. 6 is a 5-point human body feature diagram of the airway assessment method based on image recognition technology;
FIG. 7 is a Delaunay triangulation diagram of the airway assessment method based on image recognition technology according to the present invention;
FIG. 8 is a Voronoi diagram of the airway assessment method based on image recognition technology;
FIG. 9 is a Mallampati grading chart of the airway assessment method based on image recognition technology.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
In the description of the present invention, it should be understood that the terms "upper," "lower," "front," "rear," "left," "right," "top," "bottom," "inner," "outer," and the like indicate or are based on the orientation or positional relationship shown in the drawings, merely to facilitate description of the present invention and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Referring to FIGS. 1-9, an airway assessment method based on image recognition technology comprises: acquiring a frontal face image, a profile face image, and a frontal maximum-mouth-opening image, and performing face determination on each, wherein the face determination includes marks such as the eyes, nose, mouth, and face contour; recognizing and outputting feature points of the frontal and profile faces, and from them calculating neck mobility, mouth opening, jaw distance, and small mandible; identifying the Mallampati classification and dental abnormalities; and comprehensively analyzing the data obtained from neck mobility, mouth opening, jaw distance, and small mandible together with the identified and predicted Mallampati classification and dental abnormalities, then training a logistic regression model against clinical data to judge whether the airway is a difficult airway.
In the present invention, face detection means finding the position of a face in a picture, i.e., outlining the face, as a digital camera does automatically when photographing. Face alignment (locating the face key points) then automatically finds the marking feature positions on the detected face, such as the eyes, nose, mouth, and face contour.
Image data annotated with face key points are input and the face is extracted. Because face sizes differ, affine transformation is used to map the face key points into a unit space, unifying the scale and coordinate system. The face key points of the data are averaged to obtain an initial face shape, and residual calculation is performed to fit the face key points starting from this initial shape.
Pixels are randomly sampled within the range of the initial key points to serve as candidate feature pixels. Each feature pixel selects the closest initial key point as its anchor, and the offset is calculated. The current pixel is brought close to the initial key point (the mean key-point position) through a rotated, translated, and scaled coordinate system, i.e., the squared distance between the current pixel and the initial key point is minimized to obtain the optimal transformation tform. Applying tform to the offset and adding its position information yields the feature pixel of the current key point.
After the feature pixels are obtained, a residual tree is constructed and the deviation between the current key points and the target key points is computed. Several split points are selected from the feature pixels using an annealing method, dividing left and right subtrees, and the split that minimizes the deviation is chosen as optimal. The samples are partitioned and the current key-point positions are updated from the average residual of the samples. Returning to the previous step, feature key points are reselected and the next residual tree is fitted; finally, the results of all residual trees are combined to obtain the key-point positions.
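The cascaded residual-tree fitting described above corresponds to the ensemble-of-regression-trees approach implemented in common landmark libraries. As an illustration only (the patent does not name a library), a minimal sketch using dlib's pretrained 68-point shape predictor; the model path and image file below are assumptions:

    # Minimal sketch (assumed library and file names): detect a face and
    # extract 68 landmarks with dlib's regression-tree shape predictor.
    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

    img = cv2.imread("front_face.jpg")                 # assumed input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for rect in detector(gray):                        # detected face rectangles
        shape = predictor(gray, rect)                  # cascade of residual trees
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        # 'points' now holds the 68 key-point coordinates used in later steps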
In a preferred scheme, neck mobility can be calculated through face pose evaluation: the geometric relationship between the face, the camera, and world coordinates is computed to obtain spatial rotation angles, used both to measure certain physical parameters directly and to serve indirectly as compensation parameters.
In the invention, face pose estimation refers to calculating the face orientation of a person in an actual three-dimensional space according to a two-dimensional face image. Three rotation angles (pitch, yaw, roll) representing azimuth are output, where pitch represents pitch angle (rotation angle about x-axis), yaw represents yaw angle (rotation angle about y-axis), and roll represents roll angle (rotation angle about z-axis);
Here, OpenCV's solvePnP() function is used to relate the world coordinate system (UVW), the camera coordinate system (XYZ), the image center coordinate system (uv), and the pixel coordinate system (xy), yielding a rotation matrix from which the Euler angles are solved.
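A minimal sketch of this step, assuming the 2D landmark pixel coordinates are available; the generic 3D model points and the camera intrinsics below are illustrative assumptions, not values from the patent:

    import cv2
    import numpy as np

    # Assumed generic 3D reference points of a face model (arbitrary units)
    MODEL_POINTS = np.array([
        (0.0, 0.0, 0.0),           # nose tip
        (0.0, -330.0, -65.0),      # chin
        (-225.0, 170.0, -135.0),   # left eye outer corner
        (225.0, 170.0, -135.0),    # right eye outer corner
        (-150.0, -150.0, -125.0),  # left mouth corner
        (150.0, -150.0, -125.0),   # right mouth corner
    ])

    def head_pose(image_points, w, h):
        """image_points: 6x2 pixel coords matching MODEL_POINTS; w, h: image size."""
        focal = w  # common approximation: focal length ~ image width
        camera_matrix = np.array([[focal, 0, w / 2],
                                  [0, focal, h / 2],
                                  [0, 0, 1]], dtype=np.float64)
        dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
        ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                      camera_matrix, dist_coeffs)
        R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
        sy = np.hypot(R[0, 0], R[1, 0])
        pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))  # about x-axis
        yaw = np.degrees(np.arctan2(-R[2, 0], sy))        # about y-axis
        roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # about z-axis
        return pitch, yaw, roll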
as shown in fig. 5, the key points of the face are calculated to obtain the angle α of the mandible (space vector (A, B, C)), and then the face pose is estimated to obtain ψ,γ and the actual angle θ, then the angle cosine is:
the distance of each feature point, L actual pixel distance, L is the distance of the display pixel on the image, can be obtained by the following formulas:
after face posture data are obtained, a series of guiding measures are designed to carry out overall evaluation on the neck activity, and the following scheme is set:
taking a sitting position or a standing position, and taking the following actions in sequence in front of head-on vision of the two eyes:
s1 buckling: ordering the testee to use the chin to touch the chest, and estimating the activity degree of the cervical vertebra (the normal cervical vertebra can flex by 35-45 degrees);
s2, stretching: ordering the testee to lean the head as much as possible (normal back extension 35-45 °);
s3 side bending: the testee is ordered to touch the right shoulder by using the right ear, touch the left shoulder by using the left ear, and the distances from the normal two ears to the same side shoulder peak are equal (the lateral deflection is about 45 degrees, the two shoulders are required to be equal in height in advance, and the shoulders can not be lifted during the action);
s4, rotating: the method comprises the steps that a person to be tested is ordered to contact left and right shoulders respectively by using the chin, but the shoulders cannot be lifted to contact the chin, each side is normally rotated by 60-80 degrees, and after the system gives an acousto-optic prompt to a user, extremum values of 3 rotation angles (all angles are respectively used as final input data) are collected and recorded.
Face pose is evaluated and the geometric relationship between the face, the camera, and world coordinates is computed to obtain spatial rotation angles, used both to measure certain physical parameters directly and to serve indirectly as compensation parameters. The directly measured physical parameters include neck mobility and static head orientation; measuring neck mobility additionally requires the patient to move the head as instructed, performing the four actions of flexion, extension, lateral flexion, and rotation, and the extrema of the three spatial rotation angles in the different states are calculated. The image pixel distance is calibrated against the actual distance through spatial-angle compensation, and the mapping between pixel distance and actual distance is established through methods such as a scale reference and comparison.
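As an illustrative sketch of recording these extrema (assuming a per-frame angle stream from a head_pose-style function as above; all names are assumptions):

    def record_extrema(angle_stream):
        """angle_stream: iterable of (pitch, yaw, roll) tuples captured while
        the subject performs one guided action (flexion, extension, lateral
        flexion, or rotation). Returns the per-axis extrema used as inputs."""
        lo = [float("inf")] * 3
        hi = [float("-inf")] * 3
        for angles in angle_stream:
            for i, a in enumerate(angles):
                lo[i] = min(lo[i], a)
                hi[i] = max(hi[i], a)
        return {"pitch": (lo[0], hi[0]),
                "yaw": (lo[1], hi[1]),
                "roll": (lo[2], hi[2])}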
In a preferred approach, the calculation of mouth opening, jaw distance, and small mandible may respectively incorporate the Delaunay triangulation algorithm and the Voronoi diagram algorithm commonly used in face-beautification algorithms.
In the present invention, key-point data processing comprises calculating distances between image key points and compensating for the spatial rotation angle. Compensation is needed because the human body is only dynamically stationary: even when the face is held still relative to the camera, a deviation angle remains, and the resulting error is generally greater than 1%. In collecting side-face data, the feature points of a full-profile image show obvious errors. The first idea was to train a separate model on full-profile data alone, but in actual testing the feature points proved to be heavily disturbed. The side-face data therefore also use spatial-rotation-angle compensation, with face pose evaluation performed for rotation angles in the range 0°-80°.
For the frontal face, voice instructions guide the patient's cooperation, and the number of pixels spanned by the mouth opening is calculated on the image. For the profile, two line segments are fitted to the dense feature key points of the chin region, and their pixel lengths and included angle are calculated; the mandibular distance and mandibular angle can be computed in the same way to assist in judging small mandible (micrognathia);
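A minimal sketch of fitting the two chin line segments and measuring their included angle (splitting the chin contour into halves is an assumption for illustration):

    import numpy as np

    def mandible_angle(points):
        """points: (x, y) chin-contour landmarks from the profile image.
        Fits one least-squares line to each half of the contour and returns
        the included angle between the two fitted lines, in degrees."""
        pts = np.asarray(points, dtype=float)
        half = len(pts) // 2
        m1, _ = np.polyfit(pts[:half, 0], pts[:half, 1], 1)   # slope of first segment
        m2, _ = np.polyfit(pts[half:, 0], pts[half:, 1], 1)   # slope of second segment
        return np.degrees(np.arctan(abs((m2 - m1) / (1 + m1 * m2))))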
for data comparison, a graph (comparison of body surface measurement data of a normal airway patient and three-dimensional model measurement related data) in "analysis of anatomical difference of three-dimensional finite element model of difficult airway patient" is introduced for each big hospital expert, wherein the chin distance is smaller than 6cm ", and the small mandible is introduced (comparison of body surface measurement data of a difficult airway patient and three-dimensional model measurement related data):
table 1 comparison of body surface measurement data and three-dimensional lateral measurement related data of a patient with a normal airway
Table 1Comparison of measurement data between normal airway
s urface measurement group and three-dimensional model group
Table 2 comparison of difficult airway patient body surface measurement data and three-dimensional lateral measurement related data
Table 2Comparison of measurement data between difficult airway
surface measurement group and three-dimensional model group
The above tables show that the mandibular distance and mandibular angle of difficult-airway patients differ markedly from those of normal patients;
in general clinical consultation, expert doctors generally adopt 3 transverse fingers as a standard for judging whether the opening degree is qualified, namely, the probability of difficult airways of patients with the opening degree not larger than 3 transverse fingers is relatively high, and in the case of using the standard for machine vision, the quantitative 3 transverse fingers are about 4.6-6.4em (adults), so that the behavior is obviously unreasonable, and most people cannot be applicable, such as old people and children;
further, using other face data close to the mouth as comparison data, testing the data such as the distance between eyes and nose bridge, considering the interference of external conditions, replacing the original 3 transverse fingers by adopting the eye distance, solving the eye distance and the opening distance on the basis of the rotation angle compensation of the key points of the face, judging the opening degree, and obtaining the value of the opening degree (the distance between upper lips and lower lips)/the horizontal distance between eyes (the average distance between left eyes and right eyes) between 0.92 and 1.84 and the average value between 1.41 in the data test of thousands of cases, carrying out logistic regression data analysis by combining the final possible airway evaluation result, and adopting 1.38 as the judgment zero boundary point;
in order to calculate more characteristic parameters (opening degree, jaw distance, small mandible, etc.), the Delaunay triangulation algorithm and the voronoi diagram algorithm commonly used in the beauty algorithm are introduced. The Delaunay triangulation algorithm is an optimal triangulation based on face images and feature point coordinates, has uniqueness and regionality, can use the characteristics of the Delaunay triangulation algorithm to obtain a more general comparison reference (divisor of pixel distance), so that the obtained pixel distance is not converted into an actual distance to be compared and calculated, and the Delaunay triangle is also a parameter with information coding and can be added as a feature parameter of a subsequent neural network.
In another preferred embodiment, the Mallampati classification is identified by combining anesthesia consensus guidelines with the general clinical anesthesia assessment scales used in hospitals, yielding the following four grades of the Mallampati classification: Grade I: the faucial pillars, soft palate, and uvula are visible; Grade II: the faucial pillars and soft palate are visible, but the uvula is obscured by the base of the tongue; Grade III: only the soft palate is visible, indicating difficult intubation; Grade IV: the soft palate is not visible, indicating difficult intubation. The dental abnormality includes the length of the upper incisors and the occlusal relationship between the upper and lower incisors when the mouth is closed in a natural state; such abnormalities can be preliminarily predicted using an object-detection algorithm.
In the present invention, dental abnormality mainly covers several conditions that contribute to a difficult airway, such as the length of the upper incisors and the relationship between the upper and lower incisors when the mouth is closed in a natural state; because mature face feature-point algorithms do not cover these structures, an object-recognition algorithm is adopted for preliminary prediction;
and combining anesthesia consensus with a general clinical anesthesia assessment table in hospitals can result in the following class 4 classification of the Mallampati scale:
stage I: the pharyngisthmus bow, the soft jaw and the jaw drop can be seen;
stage II: the pharyngisthmus arch, the soft jaw, but the jaw drop is covered by the tongue root;
dish grade: only the soft jaw is visible, indicating difficult intubation;
grade IV: the soft jaw is also not visible, indicating a difficult cannula.
These four classifications have distinct characteristics. For the experiment, more than a thousand samples of different ages and sexes were collected for annotation and training, and experienced expert physicians were invited to classify them before annotation to ensure sample accuracy. Given the obvious image characteristics, common object-detection models (SSD, Faster R-CNN, YOLO) were tested; test-set accuracy exceeded 95%, meeting the requirements of medical device standards for intelligent software recognition. To achieve real-time photographing and recognition on the mobile phone platform, a yolov4-tiny object-detection network trained under the darknet architecture is used as the pre-trained network, quantized with Tencent's ncnn framework, and finally deployed on the embedded platform.
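The patent's deployment path is a darknet-trained yolov4-tiny quantized with Tencent's ncnn on the embedded side. Purely as an illustrative alternative for desktop testing (not the patent's pipeline), the same darknet weights can be loaded with OpenCV's DNN module; the file names are assumptions:

    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")  # assumed files
    out_names = net.getUnconnectedOutLayersNames()

    def detect(image):
        """Run the detector on a BGR image; returns the raw YOLO output tensors."""
        blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        return net.forward(out_names)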
Whether acquisition is performed by a mobile phone or a fixed camera (with a limited field of view), or even by a tracking pan-tilt camera, interference from the three-dimensional structure of the whole mouth on the region near the soft palate in the image (including illumination, shape, and position) cannot be avoided during shooting. What is observed visually differs from what is captured, which ultimately affects the result. Compared with these two acquisition schemes, the adaptability of human-eye observation plays a key role in the acquisition process. In summary, the following three optimization schemes are proposed:
a1, providing visual feedback of a patient, enabling the patient to adjust the posture to a certain extent, and avoiding the influence caused by camera fixation and view angle limitation;
a2, improving the brightness, contrast and saturation of the preview image, so that a collector and a patient can observe the target position better;
and A3, performing voice broadcasting and text prompting on the basis of identifying the March classification in real time.
In a preferred scheme, the logistic regression model is built with the TensorFlow framework by interleaving four BatchNormalization layers with four Dropout layers, and adding an output layer and an input layer.
In the present invention, the TensorFlow framework is used for data processing and prediction because of its good compatibility with the Android system. On the data side, apart from the Mallampati classification, which is multi-class, the remaining variables are continuous or binary; the Mallampati classification is one-hot encoded separately, producing four binary features. On the model side, four BatchNormalization (normalization) layers are interleaved with four Dropout layers on the TensorFlow framework, and an output layer and an input layer (Dense(1)) are added to form the logistic regression model. The model is then quantized to suit the processors of Android devices. In pure integer quantization, some operators may have no integer implementation and exist only as floating-point operators; to ensure the conversion proceeds smoothly, integer quantization with floating-point fallback (using the default floating-point input/output) can be used.
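A minimal sketch of the stated architecture and its conversion (four BatchNormalization layers interleaved with four Dropout layers, Dense(1) output, quantization with floating-point fallback); the input width and dropout rate are assumptions:

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_model(num_features=12):  # input width is an assumption
        model = tf.keras.Sequential([tf.keras.Input(shape=(num_features,))])
        for _ in range(4):             # interleave BatchNormalization and Dropout
            model.add(layers.BatchNormalization())
            model.add(layers.Dropout(0.2))  # rate is an assumption
        model.add(layers.Dense(1, activation="sigmoid"))  # logistic output
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_model()

    # Quantization with floating-point fallback for on-device deployment
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()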
The model's data sources are mainly judgments against the standard data thresholds, supplemented by expert chief-physician judgments as input samples. The input samples are optimized and screened after initial processing, and the data are cleaned using a data-distribution model to avoid interference from extreme values in sampling. The processed data are split 7:3 into a training set and a test set; with 1032 training samples, the average loss = 0.0256 and accuracy = 96.56%.
Some of the physical parameters mentioned above were grounded mainly in the related literature consulted in earlier work, giving the measurement data medical significance. Certain standards are introduced to initially establish a difficult-airway analysis model, with independent and cascaded prediction over the physical parameters. After a certain amount of data has accumulated, the data are propagated through the network to analyze feature nodes, computing characteristic physical parameters that may be strongly correlated with difficult airways.
In addition, it should be noted that the airway assessment method based on image recognition technology in this embodiment is suitable for an embedded control system with camera acquisition. Frontal and profile face images, the maximum-mouth-opening image, and the Mallampati grading image are acquired and recognized by the camera, and mouth opening, jaw distance, small mandible, dental abnormality, and neck mobility are calculated from these recognitions. This information helps physicians integrate their findings and serves a guiding and prompting role; preferably, the recognized and calculated data on mouth opening, jaw distance, small mandible, dental abnormality, and neck mobility are comprehensively analyzed to determine whether the airway is a difficult airway.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment by instructing related hardware through a computer program, where the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so forth. It should be noted that the content included in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdictions; for example, in certain jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
In the foregoing, only the preferred embodiments of the present invention are described above, but the protection scope of the present invention is not limited thereto, and any person skilled in the art should be able to apply equivalent substitutions or alterations to the technical solution and the inventive concept thereof within the scope of the present invention.

Claims (8)

1. An airway assessment method based on image recognition technology is characterized by comprising the following steps:
acquiring a frontal face image, a profile face image, and a frontal maximum-mouth-opening image, and performing face determination on each;
recognizing and outputting feature points of the frontal and profile faces, and from them calculating neck mobility, mouth opening, jaw distance, and small mandible;
identifying and predicting the Mallampati classification and dental abnormalities;
and comprehensively analyzing the data obtained from the neck mobility, mouth opening, jaw distance, and small mandible together with the identified and predicted Mallampati classification and dental abnormalities, and further training a logistic regression model against clinical data to judge whether the airway is a difficult airway.
2. The method of airway assessment based on image recognition technology of claim 1, wherein face determination comprises:
eye, nose, mouth, and face contours, etc.
3. The airway assessment method based on image recognition technology according to claim 1, wherein the neck mobility is calculated through face pose evaluation: the geometric relationship between the face, the camera, and world coordinates is computed to obtain spatial rotation angles, which are used both to measure certain physical parameters directly and to serve indirectly as compensation parameters.
4. An airway assessment method based on image recognition technology according to claim 3, wherein the directly measured physical parameters include:
neck mobility and static head orientation.
5. The airway assessment method based on image recognition technology according to claim 1, wherein the calculation of the mouth opening, jaw distance, and small mandible may respectively incorporate the Delaunay triangulation algorithm and the Voronoi diagram algorithm commonly used in face-beautification algorithms.
6. The airway assessment method based on image recognition technology according to claim 1, wherein identifying the Mallampati classification can combine anesthesia consensus guidelines with the general clinical anesthesia assessment scales used in hospitals to obtain the following four grades of the Mallampati classification:
Grade I: the faucial pillars, soft palate, and uvula are visible;
Grade II: the faucial pillars and soft palate are visible, but the uvula is obscured by the base of the tongue;
Grade III: only the soft palate is visible, indicating difficult intubation;
Grade IV: the soft palate is not visible, indicating difficult intubation.
7. The airway assessment method based on image recognition technology according to claim 1, wherein the dental abnormality comprises:
the length of the upper incisors and the occlusal relationship between the upper and lower incisors when the mouth is closed in a natural state, wherein
the dental abnormality can be preliminarily predicted using an object-detection algorithm.
8. The airway assessment method based on image recognition technology according to claim 1, wherein the logistic regression model is built with the TensorFlow framework by interleaving four BatchNormalization layers with four Dropout layers, and adding an output layer and an input layer.
CN202310472339.0A 2023-04-27 2023-04-27 Airway assessment method based on image recognition technology Pending CN116486458A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310472339.0A CN116486458A (en) 2023-04-27 2023-04-27 Airway assessment method based on image recognition technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310472339.0A CN116486458A (en) 2023-04-27 2023-04-27 Airway assessment method based on image recognition technology

Publications (1)

Publication Number Publication Date
CN116486458A true CN116486458A (en) 2023-07-25

Family

ID=87224848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310472339.0A Pending CN116486458A (en) 2023-04-27 2023-04-27 Airway assessment method based on image recognition technology

Country Status (1)

Country Link
CN (1) CN116486458A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117238509A (en) * 2023-11-15 2023-12-15 首都医科大学宣武医院 Difficult airway assessment system and assessment method based on common camera data
CN117238509B (en) * 2023-11-15 2024-02-27 首都医科大学宣武医院 Difficult airway assessment system and assessment method based on common camera data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination