CN114145844B - Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm - Google Patents

Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm

Info

Publication number
CN114145844B
CN114145844B (application CN202210123800.7A)
Authority
CN
China
Prior art keywords
data
model
deep learning
learning algorithm
artificial intelligence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210123800.7A
Other languages
Chinese (zh)
Other versions
CN114145844A (en)
Inventor
武正浩
马玉丹
武其春
张今
马宏
索亮
韩雪妃
谷越
刘明雷
刘文哲
蔡宇晴
高艳
张哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shuzhi Yuanyu Artificial Intelligence Technology Co ltd
Original Assignee
Beijing Shuzhi Yuanyu Artificial Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shuzhi Yuanyu Artificial Intelligence Technology Co ltd filed Critical Beijing Shuzhi Yuanyu Artificial Intelligence Technology Co ltd
Priority to CN202210123800.7A priority Critical patent/CN114145844B/en
Publication of CN114145844A publication Critical patent/CN114145844A/en
Application granted granted Critical
Publication of CN114145844B publication Critical patent/CN114145844B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/101 Computer-aided simulation of surgical operations
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2065 Tracking using image or pattern recognition

Abstract

The invention relates to artificial intelligence technology and discloses a laparoscopic surgery artificial intelligence cloud auxiliary system based on a deep learning algorithm, comprising the following steps: S1, data preparation: selecting representative images from a large number of collected laparoscopic surgery videos as training data, capturing pictures and adding labels to them; S2, training on the labeled data to obtain a deep learning algorithm model that detects the key elements of the operation in real time; S3, analyzing the detection effect of the model, adding label data again for the parts that were missed or falsely detected, and repeatedly reinforcing the training; S4, once the detection precision of the model has been raised to an ideal range, performing the operation under the guidance of the model, storing the data in a cloud server in real time, and analyzing the cloud data to realize optimization and iteration of the system. By incorporating cloud computing technology, continuous optimization and upgrading of the system's functions are achieved through real-time transmission and unified storage and analysis of data.

Description

Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an artificial intelligence cloud auxiliary system for laparoscopic surgery based on a deep learning algorithm.
Background
Minimally invasive surgery has developed for 30 years and has reached a plateau at the technical level. Laparoscopic surgery, once an emerging technology, is gradually becoming the new "traditional surgery". Surgery is increasingly taking on a pattern of multidisciplinary, multi-technology combination, high-quality large-scale clinical research is continuously being promoted, and new scientific and technological concepts are continuously being upgraded and iterated. Against this background, the center of gravity of minimally invasive surgery is oriented more and more toward practical problems, and its development is directed toward digital surgery, high-tech surgery and the like.
By means of an AI medical auxiliary system, the problem of unbalanced urban and rural medical resources can be alleviated to a certain extent.
Disclosure of Invention
The invention aims to upgrade traditional laparoscopic equipment into digital laparoscopic equipment with an artificial intelligence cloud auxiliary system, and provides a laparoscopic surgery artificial intelligence cloud auxiliary system based on a deep learning algorithm, remedying the defects that existing laparoscopic surgery depends excessively on the surgeon's personal judgment, that the equipment runs in isolation, and that advanced technologies such as cloud computing and the Internet of Things are not employed.
The aim of the invention is realized by the following technical scheme: an artificial intelligence cloud auxiliary system for laparoscopic surgery based on a deep learning algorithm, which operates according to the following steps:
step one, data preparation: selecting representative images from a large number of collected laparoscopic surgery videos as training data, capturing pictures and adding labels to them;
step two, training on the labeled data to obtain a deep learning algorithm model that detects the key elements of the operation in real time;
step three, analyzing the detection effect of the model, adding label data again for the parts that were missed or falsely detected, and repeatedly reinforcing the training;
step four, once the detection precision of the model has been raised to an ideal range, performing the operation under the guidance of the model, storing the data in a cloud server in real time, and analyzing the cloud data to realize optimization and iteration of the system.
The representative images include:
videos that are not damaged, videos in which the surgical procedure is not abnormal, and videos in which the patient has no congenital organ anomaly.
The label includes:
labels for each organ area, labels for the recommended incision (knife-entry) area, and labels for the danger warning area.
The step of adding labels comprises the following sub-steps:
A1, dividing the representative images into three stages: before, during and after lesion excision;
A2, selecting 5,000 surgery videos for each stage and capturing 10 pictures from each video;
A3, adding organ area labels, recommended incision area labels and danger warning area labels to the 50,000 pictures of each of the three stages.
The training step comprises:
setting the file paths; preprocessing the data; installing the YOLOR algorithm dependencies; preparing pre-trained weights for the YOLOR algorithm; developing an over-fitted model; observing the validation-set loss to find the optimal number of training epochs; and reshuffling the data and training the optimal model with that number of epochs.
The surgical key elements include:
the real-time position of each organ, the recommended incision position, and the prediction of and warning against dangerous situations.
The deep learning algorithm model comprises:
an image classification model, an image segmentation model, a target detection model and a key-point detection model.
The cloud data include:
surgical video images, the organ positions detected by the system in real time, the recommended incision positions given by the system in real time, the dangerous-situation predictions and warnings made by the system, and a Boolean value combination recording whether the system issued a warning in advance of a surgical accident.
The data preprocessing step comprises:
graying, geometric transformation, and image enhancement.
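As a minimal sketch of this preprocessing chain, the following assumes OpenCV is used (the patent names only the operations, not a library); the rotation angle and enhancement parameters are illustrative.

```python
# Minimal preprocessing sketch (assumption: OpenCV; parameters are illustrative).
import cv2
import numpy as np

def preprocess(frame: np.ndarray, angle_deg: float = 5.0) -> np.ndarray:
    # Graying: collapse the BGR frame to a single luminance channel.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Geometric transformation: a small rotation about the image centre.
    h, w = gray.shape
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    rotated = cv2.warpAffine(gray, m, (w, h))
    # Image enhancement: contrast-limited adaptive histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(rotated)
```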
The Boolean value combination comprises:
B1, no accident occurred during the operation and the system issued no danger warning;
B2, no accident occurred during the operation and the system had issued a danger warning;
B3, an accident occurred during the operation and the system issued no danger warning;
B4, an accident occurred during the operation and the system had issued a danger warning.
The application has the following beneficial effects:
1. Through a large amount of clinical surgery data and the organ area labels, recommended incision area labels and danger warning area labels completed under expert guidance, the invention trains a deep learning algorithm model that detects the key elements of the operation in real time, thereby assisting doctors in making more accurate judgments during the operation and issuing predictions and warnings before danger occurs, which reduces the occurrence of laparoscopic surgery accidents.
2. The invention uploads the surgery data, the system's real-time detection data, the system's real-time recommendation data, the system's danger warning data, and whether the system issued a warning before an accident occurred to the cloud server, and continuously improves the auxiliary effect of the system by continuously analyzing missed detections and false detections.
3. When a laparoscopic operation is performed with the assistance of the invention, expert-level recommended incision guidance and advance danger warnings can be obtained in real time.
4. The invention can be used for medical education, helping young surgeons obtain expert-level guidance at the start of their careers and develop good operating habits.
5. The invention can be embedded in innovative medical instruments, and the continuously accumulated cloud data can promote the development of new digital medical surgical instruments.
Drawings
FIG. 1 is a schematic overall flow chart of the present invention.
Fig. 2 is a schematic diagram of a data annotation process.
FIG. 3 is a schematic diagram of a model training process.
FIG. 4 is a diagram of a model structure of the YOLOR algorithm.
FIG. 5 is a comparison of the YOLOR algorithm with other target detection algorithms.
Fig. 6 is a cloud function diagram of the system.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments of the present application provided below in connection with the appended drawings is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application. The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the invention relates to a laparoscopic surgery artificial intelligence cloud auxiliary system based on a deep learning algorithm, and a specific data preparation and data annotation manner thereof is as shown in fig. 2:
More than 5,000 cases of laparoscopic surgery videos are collected; damaged videos, videos in which the surgical procedure is abnormal due to human factors, and videos of patients with congenital organ ectopia are removed. All remaining videos are then divided into three stages (early, middle and late, i.e., before, during and after lesion excision) according to the phase of the operation. Subsequently, 5,000 samples are randomly selected from the early-stage videos, 5,000 from the middle-stage videos, and 5,000 from the late-stage videos. Next, 10 pictures are captured from each sample, yielding 50,000 pictures per stage and 150,000 pictures in total. Finally, under expert guidance, organ area labels, recommended incision area labels and danger warning area labels are added to the pictures.
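A minimal sketch of the frame-sampling step described above, assuming OpenCV is used for video decoding; the choice of 10 evenly spaced frames follows the text, while the function name and paths are illustrative.

```python
# Sketch of the frame-sampling step: take 10 evenly spaced frames from one
# selected surgery video (assumption: OpenCV; paths and count are illustrative).
import cv2

def sample_frames(video_path: str, n_frames: int = 10):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(n_frames):
        # Seek to an evenly spaced frame index and decode it.
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / max(n_frames, 1)))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```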
As shown in fig. 3, the specific training mode is as follows:
setting the file paths; performing graying, geometric transformation and image enhancement on the data in sequence; installing the YOLOR algorithm dependencies; preparing pre-trained weights for the YOLOR algorithm; developing an over-fitted model; observing the validation-set loss to find the optimal number of training epochs; and reshuffling the data and training the optimal model with that number of epochs.
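The "train long, watch the validation loss, then retrain with the best epoch count" idea can be sketched as follows; the training and evaluation routines are placeholders supplied by the caller, since the patent does not disclose the actual YOLOR training interface.

```python
# Sketch of "over-train, watch validation loss, then retrain for the best epoch
# count". The callables are placeholders for the YOLOR training/evaluation
# routines, which are not disclosed here.
from typing import Callable

def find_best_epoch(train_one_epoch: Callable[[int], None],
                    validation_loss: Callable[[], float],
                    max_epochs: int = 300) -> int:
    """Deliberately over-train and return the epoch with the lowest validation loss."""
    best_epoch, best_loss = 0, float("inf")
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(epoch)
        loss = validation_loss()
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
    return best_epoch

# The data are then reshuffled and a fresh model is trained for exactly
# best_epoch epochs to obtain the model that is deployed.
```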
As shown in fig. 4, the YOLOR algorithm innovatively proposes the concepts of explicit knowledge and implicit knowledge; by expanding the error term of the objective function into different configurations, the model can learn deeper data details and thus obtain better prediction performance.
y = f_θ(x) + ε

The above equation is the objective function of conventional network training, where x is the observed value, θ is the set of parameters of the neural network, f_θ denotes the operation of the neural network, ε is the error term, and y is the target of the given task. The training process minimizes ε as far as possible so that f_θ(x) comes infinitely close to y. However, because the error term ε of the conventional objective function is not subdivided, the model is limited to completing only one type of target detection at a time. For example, in a road-traffic application an algorithm can detect pedestrian and vehicle targets in real time, but it cannot at the same time distinguish, say, male from female pedestrians or domestic from imported vehicles; such information often exceeds the scope of the explicit representation, yet humans recognize it easily through subconscious perception, which is called implicit knowledge.

Experience learned subconsciously is encoded and stored in the human brain. Using this rich experience as a large database, humans can process data efficiently even when it has not been seen in advance or differs only slightly from what has been seen before.

y = f_θ(x) + ε + g_φ(ε_ex(x), ε_im(z))

The above equation is the innovative objective function in YOLOR. By modeling the error with explicit knowledge and implicit knowledge separately, a richer combined error term ε + g_φ(ε_ex(x), ε_im(z)) is generated, and the training process of the multi-purpose network is then guided by minimizing it, where ε_ex and ε_im are respectively the error modeling of the observed value x and the subconscious code z, and g_φ is a task-specific operation that combines or selects information from the explicit knowledge and the implicit knowledge.
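In the spirit of that formulation, the sketch below shows one way implicit knowledge can be realized: a small learnable tensor that does not depend on the input image is fused with the explicit features by addition and channel-wise multiplication. This follows the general idea of the YOLOR paper's implicit representations and is an assumption for illustration, not the patent's or the official implementation.

```python
# Hedged PyTorch sketch of implicit knowledge: a learnable tensor independent of
# the input image is fused with explicit features by addition and channel-wise
# multiplication (illustrative, not the official YOLOR code).
import torch
import torch.nn as nn

class ImplicitAdd(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Learned from the training objective, not computed from the image.
        self.implicit = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.implicit

class ImplicitMul(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.implicit = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.implicit

# Usage: refine a detection head's feature map with implicit knowledge.
features = torch.randn(1, 256, 20, 20)
refined = ImplicitMul(256)(ImplicitAdd(256)(features))
```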
As shown in fig. 5, YOLOR increased the speed by 88% compared to YOLOv4 when the same target detection accuracy was achieved.
After the detection precision of the model has been raised to an ideal range through the training data and the YOLOR algorithm, the model is deployed and the operation is performed under its guidance.
As shown in fig. 6, all data from the surgical process are uploaded to the cloud, including the surgical video images, the organ positions detected by the system in real time, the recommended incision positions given by the system in real time, the dangerous-situation predictions and warnings made by the system, and a Boolean value combination recording whether the system issued a warning in advance when a surgical accident occurred.
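A sketch of one possible per-operation cloud record follows, assuming a simple serializable structure; the field names are illustrative and are not specified by the patent.

```python
# Sketch of one per-operation cloud record (field names are illustrative; the
# patent lists the contents but not a concrete schema).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SurgeryCloudRecord:
    video_uri: str                                             # surgical video images
    organ_boxes: List[Dict] = field(default_factory=list)      # organ positions detected in real time
    incision_boxes: List[Dict] = field(default_factory=list)   # recommended incision positions
    danger_events: List[Dict] = field(default_factory=list)    # predicted/warned danger situations
    accident_occurred: bool = False                            # did a surgical accident happen?
    warning_issued: bool = False                               # did the system warn in advance?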
Next, the system automatically classifies the cloud data into the following four cases: no accident occurred during the operation and the system issued no danger warning; no accident occurred and the system had issued a danger warning; an accident occurred and the system issued no danger warning; an accident occurred and the system had issued a danger warning.
The system is then optimized and upgraded in a targeted way according to the different Boolean value combinations:
Data for which no accident occurred and the system issued no danger warning are converted directly into data assets and stored for later scientific research and teaching.
For data where no accident occurred but the system had issued a danger warning in advance, the accuracy of the warning is analyzed and handled accordingly: if the warning was accurate, the data are converted directly into data assets and stored; if it was a false alarm, the cause of the false alarm is identified, the system is optimized and upgraded, and the false-alarm rate is continuously reduced through iteration.
For data where an accident occurred but the system issued no advance danger warning, the operation process is replayed, the cause of the missed warning is identified, the system is optimized and upgraded, and the missed-warning rate is continuously reduced through iteration.
Data for which an accident occurred and the system had issued an advance danger warning are converted directly into data assets and stored for later scientific research and teaching.
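The four-way triage of cloud data described above can be sketched as follows; the handling strings paraphrase the text and the function name is illustrative.

```python
# Sketch of the four-way triage of cloud records described above; the handling
# strings paraphrase the text and the function name is illustrative.
def triage(accident_occurred: bool, warning_issued: bool) -> str:
    if not accident_occurred and not warning_issued:
        return "B1: archive directly as a data asset for research and teaching"
    if not accident_occurred and warning_issued:
        return "B2: audit warning accuracy; archive if correct, else trace and fix the false alarm"
    if accident_occurred and not warning_issued:
        return "B3: replay the operation, find the cause of the missed warning, upgrade the system"
    return "B4: archive directly as a data asset for research and teaching"

print(triage(accident_occurred=True, warning_issued=False))
```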
Each module in the system of the present invention implements the corresponding step of the method of the present invention and specifically includes the corresponding operational steps of that method.
The foregoing is illustrative of the preferred embodiments of this invention, and it is to be understood that the invention is not limited to the precise form disclosed herein and that various other combinations, modifications, and environments may be resorted to, falling within the scope of the concept as disclosed herein, either as described above or as apparent to those skilled in the relevant art. And that modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A laparoscopic surgery artificial intelligence cloud auxiliary system based on a deep learning algorithm, characterized in that the system performs the following auxiliary method:
step one, data preparation: selecting representative images from a large number of collected laparoscopic surgery videos as training data, capturing pictures and adding labels to them;
step two, training on the labeled data with the YOLOR algorithm to obtain a deep learning algorithm model for real-time detection of three surgical key elements: the position of each organ, the recommended incision position, and the danger area warning;
step three, analyzing the detection effect of the model, adding label data again for the parts that were missed or falsely detected, and repeatedly applying the YOLOR algorithm to reinforce the training; the objective function in the YOLOR algorithm is y = f_θ(x) + ε + g_φ(ε_ex(x), ε_im(z)); the error is modeled with explicit knowledge and implicit knowledge respectively to generate a richer combined error term ε + g_φ(ε_ex(x), ε_im(z)), and the training process of the multi-purpose network is then guided by minimizing it, wherein ε_ex and ε_im are respectively the error modeling of the observed value x and the subconscious code z, and g_φ is a task-specific operation for combining or selecting information from the explicit knowledge and the implicit knowledge;
step four, once the detection precision of the model has been raised to an ideal range, performing the operation under the guidance of the model, storing the data in a cloud server in real time, and analyzing the cloud data to realize optimization and iteration of the system.
2. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm, characterized in that the representative images include:
videos that are not damaged, videos in which the surgical procedure is not abnormal, and videos in which the patient has no congenital organ anomaly.
3. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm, characterized in that the labels include:
labels for each organ area, labels for the recommended incision area, and labels for the danger warning area.
4. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm, characterized in that the step of adding labels comprises:
A1, dividing the representative images into three stages: before, during and after lesion excision;
A2, selecting 5,000 surgery videos for each stage and capturing 10 pictures from each video;
A3, adding organ area labels, recommended incision area labels and danger warning area labels to the 50,000 pictures of each of the three stages.
5. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm, characterized in that the training step comprises:
setting the file paths; preprocessing the data; installing the YOLOR algorithm dependencies; preparing pre-trained weights for the YOLOR algorithm; developing an over-fitted model; observing the validation-set loss to find the optimal number of training epochs; and reshuffling the data and training the optimal model with that number of epochs.
6. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm, characterized in that the deep learning algorithm model comprises:
an image classification model, an image segmentation model, a target detection model and a key-point detection model.
7. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm, characterized in that the cloud data include:
surgical video images, the organ positions detected by the system in real time, the recommended incision positions given by the system in real time, the dangerous-situation predictions and warnings made by the system, and a Boolean value combination recording whether the system issued a warning in advance of a surgical accident.
8. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm, characterized in that the data preprocessing step comprises:
graying, geometric transformation, and image enhancement.
9. The laparoscopic surgery artificial intelligence cloud auxiliary system based on the deep learning algorithm, characterized in that the Boolean value combination comprises:
B1, no accident occurred during the operation and the system issued no danger warning;
B2, no accident occurred during the operation and the system had issued a danger warning;
B3, an accident occurred during the operation and the system issued no danger warning;
B4, an accident occurred during the operation and the system had issued a danger warning.
CN202210123800.7A 2022-02-10 2022-02-10 Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm Active CN114145844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210123800.7A CN114145844B (en) 2022-02-10 2022-02-10 Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210123800.7A CN114145844B (en) 2022-02-10 2022-02-10 Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm

Publications (2)

Publication Number Publication Date
CN114145844A CN114145844A (en) 2022-03-08
CN114145844B true CN114145844B (en) 2022-06-10

Family

ID=80450317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210123800.7A Active CN114145844B (en) 2022-02-10 2022-02-10 Laparoscopic surgery artificial intelligence cloud auxiliary system based on deep learning algorithm

Country Status (1)

Country Link
CN (1) CN114145844B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114601560B (en) * 2022-05-11 2022-08-19 中国科学院深圳先进技术研究院 Minimally invasive surgery assisting method, device, equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020096889A1 (en) * 2018-11-05 2020-05-14 Medivators Inc. Assessing endoscope channel damage using artificial intelligence video analysis
CN109754007A (en) * 2018-12-27 2019-05-14 武汉唐济科技有限公司 Peplos intelligent measurement and method for early warning and system in operation on prostate
EP4064994A4 (en) * 2020-03-21 2024-05-15 Smart Medical Systems Ltd Artificial intelligence detection system for mechanically-enhanced topography
CN111709941B (en) * 2020-06-24 2023-05-09 上海迪影科技有限公司 Lightweight automatic deep learning system and method for pathological image
CN111798439A (en) * 2020-07-11 2020-10-20 大连东软教育科技集团有限公司 Medical image quality interpretation method and system for online and offline fusion and storage medium
CN112614573A (en) * 2021-01-27 2021-04-06 北京小白世纪网络科技有限公司 Deep learning model training method and device based on pathological image labeling tool
CN112932663B (en) * 2021-03-02 2021-10-22 成都与睿创新科技有限公司 Intelligent auxiliary system for improving safety of laparoscopic cholecystectomy
CN112966772A (en) * 2021-03-23 2021-06-15 之江实验室 Multi-person online image semi-automatic labeling method and system
CN113813053A (en) * 2021-09-18 2021-12-21 长春理工大学 Operation process analysis method based on laparoscope endoscopic image

Also Published As

Publication number Publication date
CN114145844A (en) 2022-03-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant