WO2022145999A1 - Artificial Intelligence-Based Cervical Cancer Screening Service System - Google Patents
Artificial intelligence-based cervical cancer screening service system
- Publication number: WO2022145999A1 (PCT/KR2021/020091)
- Authority: WO (WIPO, PCT)
- Prior art keywords
- reading
- cervical cancer
- unit
- cervical
- image
- Prior art date
Classifications
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/00016—Operational features of endoscopes characterised by signal transmission using wireless means
- A61B1/00039—Operational features of endoscopes provided with input arrangements for the user
- A61B1/00045—Operational features of endoscopes provided with output arrangements; display arrangement
- A61B1/00055—Operational features of endoscopes provided with output arrangements for alerting the user
- A61B1/303—Endoscopes for the vagina, i.e. vaginoscopes
- G16H15/00—ICT specially adapted for medical reports, e.g. generation or transmission thereof
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- the present invention relates to a cervical cancer screening service system, and more particularly, to an artificial intelligence-based cervical cancer screening service system.
- Cervical cancer, the second most common cancer in women worldwide, is a cancer that can be diagnosed early through regular checkups. However, the error rate of existing screening methods is high, so there is a need for a screening method that can compensate for this.
- The present invention has been proposed to solve the above problems of the previously proposed methods. By mounting a first reading model pre-trained to read cervical images in a cervical cancer diagnosis camera device, its purpose is to provide an AI-based cervical cancer screening service system in which the AI reading results can be checked with the camera device alone, helping cervical cancer screening even in regions where the Internet environment is poor.
- Another purpose is to provide an AI-based cervical cancer screening service system in which the cervical cancer diagnosis camera device can transmit the cervical image directly to the server, so that a reading can be conveniently requested and a reading report provided by a reading expert, and in which the server provides the AI reading result derived by applying the second reading model to the reading expert to assist in reading the cervical image, which can increase the accuracy of the final reading result.
- Another purpose is to provide an AI-based cervical cancer screening service system that includes a central server that processes reading requests and a plurality of regional servers located at preset regional bases, so that a final reading report by a reading expert can be provided as quickly and efficiently as possible even in countries with slow Internet line speeds.
- An artificial intelligence-based cervical cancer screening service system for achieving the above object,
- a cervical cancer diagnosis camera device that mounts a first reading model pre-trained to read cervical images, photographs the cervix, uses the captured cervical image as an input to the first reading model, and outputs an AI reading result;
- a server that receives a reading request including the cervical image from the cervical cancer diagnosis camera device, requests a reading expert to read it, and provides a final reading report to a user of the cervical cancer diagnosis camera device,
- The cervical cancer diagnosis camera device includes:
- a camera unit for obtaining an image of the cervix by photographing the cervix
- a first reading unit storing the first reading model, receiving the cervical image obtained from the camera unit, and predicting and outputting an AI reading result from the first reading model
- a touch panel unit for outputting a real-time image captured by the camera unit and an AI reading result of the first reading unit, and receiving a user input signal
- It is characterized in that it includes a communication unit for transmitting a read request including the cervical image to the server.
- Preferably, the server includes:
- a data generating unit that detects a cervical region from the cervical images classified according to the lesion criteria and generates and stores annotation data
- a learning unit configured to learn a second reading model based on deep learning to predict a reading result by understanding the relationship between the cervical region image and the lesion reference in the cervical image by using the annotation data as training data;
- a second reading unit for outputting an AI reading result based on artificial intelligence by applying the cervical image received from the cervical cancer diagnosis camera device to the second reading model learned by the learning unit;
- a reading requesting unit that transmits the cervical image and the AI reading result of the second reading unit to a reading expert to request a final reading
- a result providing unit that receives the final reading report from the reading expert and provides the result to the user of the cervical cancer diagnosis camera device.
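The server-side flow just listed (second reading model, reading request to an expert, final report back to the user) can be sketched end to end. Every class and function name below is an illustrative assumption; the model and the expert are stubs, not the patent's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReadRequest:
    """A reading request sent from the diagnosis camera device."""
    request_id: str
    cervical_image: bytes
    user_opinion: Optional[str] = None

@dataclass
class FinalReport:
    request_id: str
    ai_result: dict      # probabilities from the second reading model
    expert_finding: str  # final reading issued by the human expert

def second_reading(image: bytes) -> dict:
    """Stub for the server's deep-learning second reading model."""
    return {"negative": 0.23, "positive": 0.47, "biopsy": 0.30}

def expert_reading(req: ReadRequest, ai_result: dict) -> str:
    """Stand-in for the reading expert, who sees the image and the AI result
    before issuing the final reading (here simply the top AI class)."""
    return max(ai_result, key=ai_result.get)

def handle_request(req: ReadRequest) -> FinalReport:
    """Reading-request unit + result-providing unit glued together."""
    ai = second_reading(req.cervical_image)
    finding = expert_reading(req, ai)
    return FinalReport(req.request_id, ai, finding)
```

In the real system the expert step would be asynchronous (a notification to the reading expert's device), but the data flow is the same.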
- More preferably, the first reading unit can predict and output AI reading results in an embedded state, without separate communication with the server.
- More preferably, the first reading unit,
- The cervical cancer diagnosis camera device may further include a control unit that, when the test is terminated, transmits the read request to the server through the communication unit if the AI reading result output from the first reading unit has the highest positive probability or differs from the user's opinion.
- the first reading model and the second reading model include:
- The communication unit may transmit the cervical image to an application installed in the user device.
- More preferably, the server is configured to include a central server that provides a final reading report according to the reading request, and a plurality of regional servers located at preset regional bases,
- the communication unit of the cervical cancer diagnosis camera device transmits the read request to the regional server at the nearest base among the regional servers,
- and the regional server receiving the read request may transmit the read request to the central server, and receive and provide the final read report according to the read request.
- Even more preferably, the server may further include a payment processing unit that manages the user's points and deducts points when a read request is received from the cervical cancer diagnosis camera device.
- According to the AI-based cervical cancer screening service system proposed in the present invention, the first reading model pre-trained to read cervical images is mounted on the cervical cancer diagnosis camera device, so that the AI reading results can be checked with the camera device alone, helping cervical cancer screening even in areas where the Internet environment is poor.
- In addition, the cervical cancer diagnosis camera device can transmit the cervical image directly to the server and request a reading, so a reading can be conveniently requested and a reading report received from a reading expert, maximizing the convenience of the medical staff; and since the server provides the AI reading result derived by the second reading model to assist the reading expert, the accuracy of the final reading result can be increased.
- In addition, according to the AI-based cervical cancer screening service system proposed in the present invention, by configuring a central server that processes reading requests and a plurality of regional servers located at preset regional bases, it is possible to provide a final reading report by a reading expert as quickly and efficiently as possible even in countries with slow Internet line speeds.
- FIG. 1 is a diagram showing the configuration of an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating a detailed configuration of a cervical cancer diagnosis camera device in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating a manufacturing process of a camera unit in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 4 is a diagram illustrating a comparison between an image of a cervix and an image of a cervix with reduced light reflection in the AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 5 is a diagram illustrating a detailed configuration of a server in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 6 is a diagram illustrating, for example, annotation data stored by a data generator in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 7 is a diagram illustrating, for example, a configuration of a detection model in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 8 is a view showing, for example, a detection result by a detection model of an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 9 is a diagram illustrating, for example, a configuration of a reading model in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- FIG. 10 is a diagram illustrating, for example, a reading result of a classification model in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- the AI-based cervical cancer screening service system may include a cervical cancer diagnosis camera device 100 and a server 200 , and a user device 300 and a reading expert device 400 may be further included.
- The cervical cancer diagnosis camera device 100 is equipped with a first reading model pre-trained to read cervical images; it photographs the cervix, feeds the captured cervical image to the first reading model, and can output an AI reading result. More specifically, the cervical cancer diagnosis camera device 100 is a device that acquires a cervical image by photographing the cervix and can output the AI reading result by itself. A detailed configuration of the cervical cancer diagnosis camera device 100 will be described later with reference to FIG. 2.
- The server 200 may receive a reading request including a cervical image from the cervical cancer diagnosis camera device 100, request a reading expert to read it, and provide a final reading report to the user of the cervical cancer diagnosis camera device 100. That is, when it is difficult for the medical staff to be convinced by the AI reading result of the first reading model mounted on the cervical cancer diagnosis camera device 100 alone, a reading may be requested from the server 200 as an additional measure and a final reading report may be provided.
- The server 200 includes an independent second reading model different from the first reading model mounted on the cervical cancer diagnosis camera device 100, and by providing the AI reading result derived by the second reading model to the reading expert, it can support the judgment of the reading expert and increase the accuracy of the final reading result.
- Since the server 200, unlike the cervical cancer diagnosis camera device 100, can utilize abundant computational resources and big data, it can use a highly accurate second reading model to provide a highly reliable AI reading result to the reading expert, even if that model requires heavy computation. A detailed configuration of the server 200 will be described later with reference to FIG. 5.
- the server 200 may include a central server that provides a final reading report according to a reading request and a plurality of regional servers located in preset regional bases.
- That is, the cervical cancer diagnosis camera device 100 transmits a read request to the regional server at the nearest base among the regional servers, and the regional server receiving the read request transmits the read request to the central server and can receive and provide the final read report according to the read request.
- Overseas medical staff using the cervical cancer diagnosis camera device 100 may transmit a cervical image to the regional server at the nearest of the plurality of regional bases, access that regional server, and check the final reading report.
- domestic reading experts can respond to requests from many foreign countries using only reading devices and programs.
- That is, by configuring the server 200 with a central server that processes read requests and a plurality of regional servers located at preset regional bases, a read request from a country with a slow Internet line speed can be processed as quickly and efficiently as possible and a final reading report can be provided.
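As a rough illustration of "nearest regional base" routing, the sketch below picks the regional server closest to the device by great-circle distance. The server names and coordinates are hypothetical assumptions, not disclosed in the patent.

```python
import math

# Hypothetical regional-server registry: name -> (latitude, longitude).
REGIONAL_SERVERS = {
    "asia-east": (37.57, 126.98),
    "africa-west": (6.52, 3.38),
    "latam": (-23.55, -46.63),
}

def _haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_regional_server(device_lat: float, device_lon: float) -> str:
    """Return the name of the regional server closest to the camera device."""
    return min(
        REGIONAL_SERVERS,
        key=lambda name: _haversine_km(device_lat, device_lon, *REGIONAL_SERVERS[name]),
    )
```

A device in, say, southeastern Korea would resolve to the East Asian base and send its read request there, which then relays to the central server.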
- the user device 300 may be a terminal of a user that performs cervical cancer screening using the cervical cancer diagnosis camera device 100 .
- the user may check the final reading report provided from the server 200 through an application or web program installed in the user device 300 .
- the reading expert device 400 may be a reading expert's terminal that receives a reading request from the server 200 , reads a cervical image, prepares a final reading report, and transmits it to the server 200 .
- the reading expert may install a reading program in the reading expert device 400 and receive and respond to a reading request from the server 200 .
- the user device 300 and the reading expert device 400 may be implemented as various electronic devices capable of Internet communication.
- The electronic device may include at least one of a smartphone, a tablet PC (personal computer), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a media box, a game console, an electronic dictionary, and a wearable device. The wearable device may be at least one of an accessory type (e.g., a watch, ring, bracelet, anklet, necklace, eyeglasses, contact lenses, or head-mounted device (HMD)), a fabric- or garment-integrated type (e.g., electronic clothing), a body-mounted type (e.g., a skin pad or tattoo), or an implantable circuit. The electronic device is not limited to the aforementioned devices and may be a combination of two or more of them.
- The cervical cancer diagnosis camera device 100 of the AI-based cervical cancer screening service system may be configured to include a camera unit 110, a first reading unit 120, a touch panel unit 130, and a communication unit 140, and may further include a control unit 150, a handle unit 160, and an alarm unit 170.
- the camera unit 110 may acquire an image of the cervix by photographing the cervix.
- The camera unit 110 may include a high-resolution Time-of-Flight (ToF) sensor. That is, a three-dimensional image of the structure of the cervix can be confirmed using the camera unit 110 to which the ToF sensor is applied, and information on the surface tissue of the cervix can be obtained, which is well suited to discriminating lesions on the surface of the cervix.
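A ToF sensor estimates per-pixel depth from the round-trip time of emitted light, which is what yields the three-dimensional structure mentioned above. A minimal sketch of the underlying relation (illustrative only, not part of the patent) is:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Depth from a direct time-of-flight measurement: d = c * t / 2.

    The emitted pulse travels to the surface and back, so the one-way
    distance is half the total path.
    """
    return C * round_trip_time_s / 2.0
```

For example, a round-trip time of 2 ns corresponds to a surface roughly 30 cm from the sensor, which is the working-distance regime of an intracavity camera.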
- The camera unit 110 of the cervical cancer diagnosis camera device 100 may be configured to include a camera lens 111 and a polarization filter unit 112. The camera unit 110 may be configured with the camera lens 111 for photographing the cervix at the distal end of the camera module, and may further include the polarization filter unit 112 to reduce light reflection and improve the quality of the cervical image.
- In cervical cancer screening, the presence or absence of white epithelial lesions and the severity of the lesions are read and graded by a gynecologic oncologist.
- The mucus is removed using a syringe or 3-5% acetic acid (CH₃COOH) to prevent the diagnostic field of view from being obstructed by light reflection.
- In addition, a method such as adjusting the color tone of the camera or using stereo photography may be used to reduce light reflection.
- As in the image shown on the right side of FIG. 4, the white light reflection is significantly reduced, so that a highly reliable cervical image can be obtained.
- The polarization filter unit 112 is attached to the camera lens 111 located at the distal end of the camera module, and may be composed of a polarizing film made of a coated negative film material. As shown in FIG. 3, the polarizing filter unit 112 may be configured by attaching a polarizing film to the camera lens 111 using a UV adhesive, and may be configured by attaching a plurality of overlapping polarizing films to the camera lens 111.
- A polarizing film made of a coated negative film material has the advantages of adjustable size, light weight, and low unit cost, and, being a flexible material, it is easily attached to the camera lens 111 and integrated with the camera unit 110.
- When a plurality of polarizing films are overlapped, the polarization angle can be adjusted and the color and angle can be changed according to the amount of overlap, so that the polarization efficiency can be improved by 50% or more.
- a linear polarizer filter (PL) or a circular polarizer filter (CPL) may be used as the polarizing film.
- The polarizing filter unit 112 may further include, between the camera lens 111 and the polarizing film, a long-pass filter that transmits long wavelengths.
- The camera unit 110 may be configured to further include, at its distal end, a light source composed of a high-brightness LED for irradiating light onto the object being photographed. A high-brightness white LED and RGB LEDs can be used, and a standard light source can be provided so that clear images with consistent primary colors can be taken.
- The first reading unit 120 may store the first reading model, receive the cervical image acquired from the camera unit 110, and predict and output the AI reading result from the first reading model. In this case, since the first reading unit 120 uses a cervical image in which light reflection is reduced by the polarization filter unit 112, reading accuracy can be greatly improved. In addition, the first reading unit 120 may predict and output the AI reading result in an embedded state, and may output the probabilities of negative, positive, and biopsy-required, respectively. For example, it can output negative 23%, positive 47%, biopsy required 30%, and so on.
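The negative / positive / biopsy-required percentages described above are the typical output of a three-class softmax head. The sketch below shows only the general form; the logits, class names, and rounding are assumptions for illustration, not details disclosed in the patent.

```python
import numpy as np

CLASSES = ("negative", "positive", "biopsy_required")

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the class logits."""
    z = logits - np.max(logits)   # shift for stability
    e = np.exp(z)
    return e / e.sum()

def ai_reading(logits) -> dict:
    """Map raw model logits to the percentage display shown on the touch panel."""
    probs = softmax(np.asarray(logits, dtype=float))
    return {c: round(float(p) * 100) for c, p in zip(CLASSES, probs)}
```

Calling `ai_reading([0.0, 1.0, 0.5])` yields a dictionary whose largest entry is `positive`, mirroring a display like "negative 23%, positive 47%, biopsy required 30%".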
- The first reading unit 120 is equipped with a pre-trained first reading model to predict the presence or absence of cervical cancer from a cervical image, and can derive and output the reading result in an embedded state without separate communication with the server 200 or the like. That is, the training of the first reading model is handled by the server 200 or similar, and the trained model is pre-loaded into the first reading unit 120 at the time of shipment or installed using wired/wireless communication to enable embedded prediction; if necessary, the first reading model may be updated through wired/wireless communication.
- The first reading model is an artificial intelligence model trained using a large number of cervical images labeled as cervical cancer, precancerous stage, negative, and so on, and may be based on an artificial neural network such as a convolutional neural network (CNN) or a recurrent neural network (RNN), or on a random forest classifier.
- Since the first reading model needs to output the AI reading result on the cervical cancer diagnosis camera device 100, which has limited computational resources, it may be a lightweight model that uses fewer computational resources, and it may have been trained using transfer learning. Transfer learning is the reuse of an already trained model for a new problem; because it starts from a trained model, it has the advantage that deep neural networks can be trained with relatively little data. This is useful for most real-world problems, where millions of labeled examples for training a complex model are usually not available.
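The transfer-learning idea described above, keep a pretrained feature extractor frozen and train only a small new head on limited data, can be sketched with a toy stand-in. The random "backbone", the synthetic data, and all hyperparameters here are illustrative assumptions, not the patent's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained, frozen feature extractor (e.g., a CNN backbone).
# In real transfer learning these weights come from a model trained on a large
# dataset; here they are random and simply held fixed during training.
W_FROZEN = rng.normal(size=(64, 8))

def extract_features(x: np.ndarray) -> np.ndarray:
    """Frozen backbone: its weights are never updated."""
    return np.tanh(x @ W_FROZEN)

def train_head(x: np.ndarray, y: np.ndarray, epochs: int = 200, lr: float = 0.5):
    """Train only a small logistic-regression head on top of frozen features."""
    feats = extract_features(x)
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
        grad = p - y                                # dBCE/dlogit
        w -= lr * feats.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy stand-ins for feature vectors of "negative" and "positive" images:
# two well-separated clusters, 20 samples each.
x = np.vstack([rng.normal(-1.0, 0.3, size=(20, 64)),
               rng.normal(+1.0, 0.3, size=(20, 64))])
y = np.array([0] * 20 + [1] * 20)

w, b = train_head(x, y)
preds = (1.0 / (1.0 + np.exp(-(extract_features(x) @ w + b))) > 0.5).astype(int)
accuracy = (preds == y).mean()
```

Only `w` and `b` (a handful of parameters) are learned; `W_FROZEN` stays fixed, which is why the head can be fit with just 40 labeled samples.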
- the touch panel unit 130 may output a real-time image captured by the camera unit 110 and an AI reading result of the first reader 120 , and receive a user input signal.
- a 6-inch touch panel may be applied, the direction may be adjusted by a tilt method, and a manipulation signal including a focus position touch input of the camera unit 110 may be input.
- the communication unit 140 may transmit a read request including the cervical image to the server 200 .
- The communication network used by the communication unit 140 may include a wired network such as a local area network (LAN), a wide area network (WAN), or a value added network (VAN), or a wireless network such as a mobile radio communication network or a satellite communication network.
- the communication unit 140 may transmit various signals and data together while transmitting a cervical image with reduced light reflection to the server 200 according to a manipulation signal input through the touch panel unit 130 . Therefore, when the reading result of the first reading unit 120 output to the touch panel unit 130 does not match the user's opinion, or when a biopsy is required, the user may transmit the cervical image to the server 200 and request an image reading or a biopsy.
- the communication unit 140 may transmit the cervical image to an application installed in the user device 300 . Accordingly, the user may check the cervical image captured by the cervical cancer diagnosis camera device 100 through the application.
- the controller 150 terminates the test when the AI reading result output from the first reading unit 120 has the highest negative probability and matches the user's opinion. When the positive probability is highest, or the result differs from the user's opinion, a read request may be transmitted to the server 200 through the communication unit 140 .
- the first reading unit 120 may output the probabilities of negative, positive, and the need for a biopsy, and the control unit 150 may receive the opinion of a user such as medical staff through the touch panel unit 130 , compare the AI reading result with the user's opinion, and determine and process whether to request a reading from the server 200 .
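- the control unit's decision rule can be sketched as follows. This is an illustrative sketch only (the function and label names are ours, not the patent's): the examination ends only when "negative" has the highest AI probability and the user agrees; otherwise a read request is sent to the server.

```python
def should_request_reading(ai_probs, user_opinion):
    """ai_probs: label -> probability; user_opinion: the medical staff's label."""
    ai_top = max(ai_probs, key=ai_probs.get)
    if ai_top == "negative" and user_opinion == "negative":
        return False   # terminate the test on the device
    return True        # positive most likely, or AI and user disagree

# Example: positive has the highest probability, so a read request is sent.
should_request_reading(
    {"negative": 0.23, "positive": 0.47, "biopsy_required": 0.30}, "negative")
```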
- the handle unit 160 allows the cervix to be photographed in a handheld manner. That is, as with the cervical cancer diagnosis camera device 100 shown in FIG. 1 , the user can hold the handle unit 160 and take a picture while checking the real-time cervical image output to the touch panel unit 130 . A shooting button may also be provided on the handle so that the user can comfortably shoot with one hand.
- the camera unit 110 may include a hand-shake correction function and an auto-focus function, and may compensate for even slight shake to stably acquire a cervical image in a handheld manner.
- the alarm unit 170 may generate an alarm when a problem occurs. That is, when an error occurs in the cervical cancer diagnosis camera device 100 , an alarm sound may be generated or an alarm may be output to the touch panel unit 130 , and an alarm may be transmitted to the server 200 to enable integrated control.
- the server 200 of the AI-based cervical cancer screening service system may include a data generating unit 210 , a learning unit 220 , a second reading unit 230 , a read request unit 240 , and a result providing unit 250 , and may further include a payment processing unit 260 .
- the data generator 210 may detect a cervix region from the cervix images classified according to the lesion criteria to generate and store annotation data.
- the data generator 210 may collect a large number of cervical images and classify them into categories such as negative, cervical cancer positive, and precancerous stages. Since cervical images contain unnecessary regions other than the cervix, which may affect accurate cervical cancer diagnosis, the unnecessary regions can be removed so that annotation data for diagnosis covers only the cervical region.
- FIG. 6 is a diagram illustrating, for example, annotation data stored by the data generator 210 in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- the data generator 210 of the AI-based cervical cancer screening service system according to an embodiment of the present invention selects the cervical region in the cervical image as a region of interest (RoI) in the form of a box to collect annotation data.
- the data generator 210 may evaluate and verify the quality of the collected cervical image data to construct generalized, high-quality learning data that excludes low-quality data.
- the low-quality data can be stored in a separate database and used in the future as learning data for classifying technically defective images.
- the learning unit 220 may train the second reading model based on deep learning, using the annotation data as training data, to predict the reading result by learning the relationship between the cervical region image and the lesion criteria. At this time, the learning unit 220 learns to detect the cervical region in the cervix image using the annotation data generated and stored by the data generating unit 210 , and learns to predict the reading result using the labels classified according to the lesion criteria.
- the second reading unit 230 applies the cervical image received from the cervical cancer diagnosis camera device 100 to the second reading model learned by the learning unit 220 to output the AI reading result based on artificial intelligence.
- the second reading model is an artificial intelligence model trained using a large number of cervical images labeled with cervical cancer, precancerous stage, negative, and the like, and may be based on an artificial neural network such as a CNN (Convolutional Neural Network) or RNN (Recurrent Neural Network), or on a random forest classifier.
- since the second reading model is trained by the learning unit 220 and operated in the server 200 , where the second reading unit 230 performs prediction, it can use more computational resources than the first reading model, and because real-time operation is relatively less critical, it can be configured with an emphasis on accuracy. Accordingly, the second reading model may be a deep learning model with many layers, and a compressed and/or lightweight version of the second reading model may serve as the first reading model. The server 200 may also upgrade the second reading model by periodically performing additional training on newly collected cervical images.
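- the patent does not specify how the server model is compressed into the on-device model; as one illustrative possibility (an assumption, not the patent's method), magnitude pruning zeroes the smallest weights so the lighter model can run on the device:

```python
def prune_weights(weights, keep_ratio=0.5):
    """Zero all but the largest-magnitude `keep_ratio` fraction of weights."""
    k = max(1, int(len(weights) * keep_ratio))
    # threshold at the k-th largest absolute value; ties are kept
    threshold = sorted(abs(w) for w in weights)[-k]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Keeps the two largest-magnitude weights, zeroes the rest.
prune_weights([0.9, -0.05, 0.4, 0.01], keep_ratio=0.5)
```

In practice the pruned model would then be fine-tuned to recover accuracy before deployment on the camera device.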
- the reading requesting unit 240 may transmit the cervical image and the AI reading result of the second reading unit 230 to a reading expert to request a final reading. That is, when a reading expert is requested to read, the AI reading result of the second reader 230 may be provided to assist the reading expert in decision making.
- the result providing unit 250 may receive the final reading report from the reading expert and provide it to the user of the cervical cancer diagnosis camera device 100 .
- the result providing unit 250 may provide the final reading report through an application or web program installed in the user device 300 .
- the payment processing unit 260 may manage the user's points, and may deduct the points upon receiving a read request from the cervical cancer diagnosis camera device 100 .
- when a user of the cervical cancer diagnosis camera device 100 abroad requests a reading, it may be difficult to use the same payment methods as in Korea, so payment can be processed using points instead.
- the first reading model and the second reading model may include a detection model for detecting a cervical region in the cervical image, and a classification model for predicting an AI reading result from the cervical region detected by the detection model. That is, the first reading model or the second reading model may be a single model that performs both detection and reading, or may comprise a detection model specialized for cervical region detection and a classification model for classifying and reading the image.
- the first reading model and the second reading model of the AI-based cervical cancer screening service system may use a detection model implemented with RetinaNet; depending on the embodiment, Faster R-CNN may be used instead. That is, starting from a model such as RetinaNet or Faster R-CNN, a detection model specialized for cervical position detection can be configured by tuning the hyperparameters and using cervical images with marked regions of interest as training data.
- the detection models constituting the first reading model and the second reading model are trained using annotation data as shown in FIG. 6 , so that unnecessary regions are removed from the cervix image and the position of the cervix can be extracted as a region of interest (RoI).
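- once the detector returns an RoI box, extracting the cervical region for the classifier reduces to a crop. A minimal sketch with a hypothetical box format (x1, y1, x2, y2), not the patent's actual pipeline:

```python
def crop_roi(image, box):
    """image: 2-D list of pixel values; box: (x1, y1, x2, y2) from the detector."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

# Toy 10x10 "image" where pixel value encodes its position (row*10 + col).
img = [[r * 10 + c for c in range(10)] for r in range(10)]
roi = crop_roi(img, (2, 3, 5, 6))   # 3x3 region passed on to the classifier
```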
- FIG. 8 is a diagram illustrating, for example, a detection result by a detection model of an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- a red box indicates a region of interest detected by a detection model of an AI-based cervical cancer screening service system according to an embodiment of the present invention
- a green box indicates standard annotation data.
- the learned detection model is specialized for cervical position detection, so that unnecessary parts can be removed from the image and only the region of interest can be extracted with high accuracy.
- FIG. 9 is a diagram illustrating, for example, a configuration of a reading model in an AI-based cervical cancer screening service system according to an embodiment of the present invention.
- the first reading model and the second reading model of the AI-based cervical cancer screening service system according to an embodiment of the present invention may use a classification model implemented with ResNet; depending on the embodiment, InceptionResNet may be used instead.
- the classification model may be first trained to distinguish between positive and negative, and then additionally trained to distinguish between positive and negative cases requiring a biopsy. The classification model can be tested through k-fold cross validation.
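- k-fold cross validation partitions the data into k folds, each fold serving once as the validation set while the rest is used for training. A minimal sketch of the index splitting (ours, not the patent's implementation):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, val_indices) pairs for k-fold cross validation."""
    fold_size = n_samples // k
    indices = list(range(n_samples))
    for i in range(k):
        val = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, val

# 10 samples, 5 folds: each fold of 2 samples is validated exactly once.
splits = list(k_fold_splits(10, 5))
```

The model is then trained k times and the validation scores are averaged, which gives a more stable performance estimate than a single train/validation split.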
- the classification model may be trained using transfer learning.
- Transfer learning reuses an already-trained model for a new problem. Because the model has already learned general features, a deep neural network can be trained with relatively little data, which is useful for most real-world problems, where millions of labeled examples are usually not available to train a complex model from scratch.
- by using such transfer learning to derive an artificial intelligence classification model optimized for domestic patients from a model already trained with a large amount of data, more precise and highly accurate reading results can be obtained.
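- the core mechanic of transfer learning can be sketched as follows. This is an illustrative toy (not the patent's actual model): a "pretrained" feature extractor is kept frozen and only a new classification head is trained on a small dataset; the extractor here is a hand-written stand-in.

```python
import math

def frozen_features(x):
    # frozen pretrained layer: its parameters are never updated
    return [x[0] + x[1], x[0] - x[1]]

def train_head(data, labels, lr=0.5, epochs=500):
    # logistic-regression head trained with plain SGD on the log loss
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = frozen_features(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1 / (1 + math.exp(-z))
            g = p - y                      # dLoss/dz for the log loss
            w = [w[0] - lr * g * f[0], w[1] - lr * g * f[1]]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = frozen_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]                      # toy labels: label equals x[0]
w, b = train_head(data, labels)
```

Only the head's weights change during training, which is why comparatively little labeled data suffices.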
- the classification model constituting the first reading model and the second reading model may classify the RoI detected by the detection model and output a reading result. More specifically, the first reading model and the second reading model may output the probabilities of negative, positive, and the need for a biopsy. For example, the output may be negative 23%, positive 47%, and biopsy required 30%. In this case, since the classification model uses a cervical image with reduced light reflection, reading accuracy can be greatly improved.
- the collected cervical images were classified and labeled as normal or abnormal; a classification model was created by training on a total of 7,657 labeled images (normal: 2,829, abnormal: 5,028) and using 1,965 images (normal: 708, abnormal: 1,257) for model validation. The ResNet-50 algorithm, a deep learning architecture specialized for classification, was optimized for the cervical data and trained.
- the generated classification model showed a precision of 91.72%, a recall of 78.24%, and an F1 score of 84.44%, and analysis through the ROC curve recorded an AUC (Area Under the Curve) value of 95%.
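- as a quick consistency check (ours, not part of the patent), the reported F1 score follows from the reported precision and recall, since F1 is their harmonic mean:

```python
# F1 = 2 * P * R / (P + R), the harmonic mean of precision and recall
precision, recall = 0.9172, 0.7824   # reported precision and recall
f1 = 2 * precision * recall / (precision + recall)
# f1 * 100 is approximately 84.44, matching the reported F1 score
```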
- FIG. 10 is a diagram illustrating, for example, a reading result of a classification model in an AI-based cervical cancer screening service system according to an embodiment of the present invention. As shown in FIG. 10 , it can be confirmed that the classification model of the AI-based cervical cancer screening service system according to an embodiment of the present invention shows excellent performance in predicting the reading result by classifying normal and abnormal images.
- since the first reading model, pre-trained to read cervical images, is mounted on the cervical cancer diagnosis camera device 100 , the AI reading result can be checked with the device alone, aiding cervical cancer screening even in areas where the Internet environment is poor.
- the cervical cancer diagnosis camera device 100 can directly transmit the cervical image to the server 200 to request a reading, so the user can conveniently request a reading and receive a reading report from a reading expert, maximizing convenience for the medical staff. In addition, the server 200 can provide the AI reading result derived by applying the second reading model to the reading expert, increasing the accuracy of the final reading result.
- by configuring a central server that processes read requests and a plurality of regional servers located at preset regional bases, the AI-based cervical cancer screening service system proposed in the present invention can provide a final reading report by a reading expert as quickly and efficiently as possible, even in countries where Internet line speeds are slow.
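- the routing described above can be sketched as follows. This is an illustrative sketch (region names and the one-dimensional distance are hypothetical): the device sends its read request to the nearest regional server, which relays it to the central server and returns the final reading report.

```python
def nearest_region(device_location, regions):
    """Pick the regional server closest to the device (toy 1-D distance)."""
    return min(regions, key=lambda r: abs(r["location"] - device_location))

regions = [
    {"name": "region-a", "location": 10},
    {"name": "region-b", "location": 50},
]
# A device at location 37 is routed to region-b, which relays to the
# central server and forwards the final reading report back.
chosen = nearest_region(37, regions)
```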
- the present invention may include a computer-readable medium including program instructions for performing operations implemented in various communication terminals.
- the computer-readable medium may include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROM and DVD, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- Such a computer-readable medium may include program instructions, data files, data structures, etc. alone or in combination.
- the program instructions recorded on the computer-readable medium may be specially designed and configured to implement the present invention, or may be known and available to those skilled in the art of computer software.
- it may include not only machine language code such as that generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
Claims (8)
- A cervical cancer screening service system, comprising: a cervical cancer diagnosis camera device (100) equipped with a first reading model pre-trained to read cervical images, which photographs the cervix and outputs an AI reading result by feeding the captured cervical image to the first reading model; and a server (200) which receives a read request including the cervical image from the cervical cancer diagnosis camera device (100), requests a reading from a reading expert, and provides a final reading report to the user of the cervical cancer diagnosis camera device (100), wherein the cervical cancer diagnosis camera device (100) comprises: a camera unit (110) which photographs the cervix to acquire a cervical image; a first reading unit (120) which stores the first reading model, receives the cervical image acquired by the camera unit (110), and predicts and outputs an AI reading result from the first reading model; a touch panel unit (130) which outputs the real-time image captured by the camera unit (110) and the AI reading result of the first reading unit (120), and receives a user input signal; and a communication unit (140) which transmits a read request including the cervical image to the server (200), the system being an artificial intelligence-based cervical cancer screening service system.
- The system of claim 1, wherein the server (200) comprises: a data generating unit (210) which detects the cervical region in cervical images classified according to lesion criteria and generates and stores annotation data; a learning unit (220) which trains a second reading model based on deep learning, using the annotation data as training data, to predict the reading result by learning the relationship between the cervical region image in the cervical image and the lesion criteria; a second reading unit (230) which applies the cervical image received from the cervical cancer diagnosis camera device (100) to the second reading model trained by the learning unit (220) and outputs an AI reading result based on artificial intelligence; a read request unit (240) which transmits the cervical image and the AI reading result of the second reading unit (230) to a reading expert to request a final reading; and a result providing unit (250) which receives the final reading report from the reading expert and provides it to the user of the cervical cancer diagnosis camera device (100).
- The system of claim 2, wherein the first reading unit (120) predicts and outputs the AI reading result in an embedded state.
- The system of claim 2, wherein the first reading unit (120) outputs the probabilities of negative, positive, and the need for a biopsy, and the cervical cancer diagnosis camera device (100) further comprises a control unit (150) which terminates the test when the AI reading result output from the first reading unit (120) has the highest negative probability and matches the user's opinion, and transmits the read request to the server (200) through the communication unit (140) when the AI reading result has the highest positive probability or differs from the user's opinion.
- The system of claim 2, wherein the first reading model and the second reading model each comprise: a detection model which detects the cervical region in the cervical image; and a classification model which predicts the AI reading result from the cervical region detected by the detection model.
- The system of claim 2, further comprising a user device (300) of a user who performs cervical cancer screening using the cervical cancer diagnosis camera device (100), wherein the result providing unit (250) provides the final reading report through an application or web program installed on the user device (300), and the communication unit (140) transmits the cervical image to the application installed on the user device (300).
- The system of claim 2, wherein the server (200) comprises a central server which provides the final reading report in response to the read request and a plurality of regional servers located at preset regional bases, the communication unit (140) of the cervical cancer diagnosis camera device (100) transmits the read request to the regional server at the nearest base, and the regional server that receives the read request transmits the read request to the central server and receives and provides the final reading report in response.
- The system of claim 7, wherein the server (200) further comprises a payment processing unit (260) which manages a user's points and deducts the points upon receiving the read request from the cervical cancer diagnosis camera device (100).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112023013249A BR112023013249A2 (pt) | 2020-12-30 | 2021-12-30 | Sistema de serviço de triagem de câncer cervical baseado em inteligência artificial |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020200187157A KR102462975B1 (ko) | 2020-12-30 | 2020-12-30 | 인공지능 기반의 자궁경부암 검진 서비스 시스템 |
KR10-2020-0187157 | 2020-12-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022145999A1 true WO2022145999A1 (ko) | 2022-07-07 |
Family
ID=82260899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2021/020091 WO2022145999A1 (ko) | 2020-12-30 | 2021-12-30 | 인공지능 기반의 자궁경부암 검진 서비스 시스템 |
Country Status (3)
Country | Link |
---|---|
KR (1) | KR102462975B1 (ko) |
BR (1) | BR112023013249A2 (ko) |
WO (1) | WO2022145999A1 (ko) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116843956A (zh) * | 2023-06-14 | 2023-10-03 | 长江大学 | 一种宫颈病理图像异常细胞识别方法、系统及存储介质 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117940070A (zh) * | 2022-08-18 | 2024-04-26 | 美迪科诶爱有限公司 | 心电图判读服务提供方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100107266A (ko) * | 2009-03-25 | 2010-10-05 | 주식회사 케이티 | 원격 진료를 위한 장치 및 방법 |
KR102056847B1 (ko) * | 2018-10-16 | 2019-12-17 | 김태희 | 자궁경부 자동판독 및 임상의사결정지원시스템 기반의 원격 자궁경부암 검진 시스템 |
KR20200038120A (ko) * | 2018-10-02 | 2020-04-10 | 한림대학교 산학협력단 | 위 내시경 이미지의 딥러닝을 이용하여 위 병변을 진단하는 장치 및 방법 |
JP2020098370A (ja) * | 2018-12-17 | 2020-06-25 | 廣美 畑中 | 医用画像をaiの判断で症状度合いごと画像に表示する診断方法。 |
KR102155381B1 (ko) * | 2019-09-19 | 2020-09-11 | 두에이아이(주) | 인공지능 기반 기술의 의료영상분석을 이용한 자궁경부암 판단방법, 장치 및 소프트웨어 프로그램 |
-
2020
- 2020-12-30 KR KR1020200187157A patent/KR102462975B1/ko active IP Right Grant
-
2021
- 2021-12-30 WO PCT/KR2021/020091 patent/WO2022145999A1/ko active Application Filing
- 2021-12-30 BR BR112023013249A patent/BR112023013249A2/pt unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20100107266A (ko) * | 2009-03-25 | 2010-10-05 | 주식회사 케이티 | 원격 진료를 위한 장치 및 방법 |
KR20200038120A (ko) * | 2018-10-02 | 2020-04-10 | 한림대학교 산학협력단 | 위 내시경 이미지의 딥러닝을 이용하여 위 병변을 진단하는 장치 및 방법 |
KR102056847B1 (ko) * | 2018-10-16 | 2019-12-17 | 김태희 | 자궁경부 자동판독 및 임상의사결정지원시스템 기반의 원격 자궁경부암 검진 시스템 |
JP2020098370A (ja) * | 2018-12-17 | 2020-06-25 | 廣美 畑中 | 医用画像をaiの判断で症状度合いごと画像に表示する診断方法。 |
KR102155381B1 (ko) * | 2019-09-19 | 2020-09-11 | 두에이아이(주) | 인공지능 기반 기술의 의료영상분석을 이용한 자궁경부암 판단방법, 장치 및 소프트웨어 프로그램 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116843956A (zh) * | 2023-06-14 | 2023-10-03 | 长江大学 | 一种宫颈病理图像异常细胞识别方法、系统及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
KR102462975B1 (ko) | 2022-11-08 |
BR112023013249A2 (pt) | 2023-10-10 |
KR20220097585A (ko) | 2022-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022145999A1 (ko) | 인공지능 기반의 자궁경부암 검진 서비스 시스템 | |
WO2020032555A1 (en) | Electronic device and method for providing notification related to image displayed through display and image stored in memory based on image analysis | |
WO2020235966A1 (ko) | 예측된 메타데이터를 이용하여 의료 영상을 처리하는 장치 및 방법 | |
WO2017135564A1 (ko) | 전자장치, 휴대단말기 및 그 제어방법 | |
WO2018110994A1 (en) | Optical lens assembly and method of forming image using the same | |
WO2016018108A1 (en) | Apparatus and method for enhancing accuracy of a contactless body temperature measurement | |
WO2019039771A1 (en) | ELECTRONIC DEVICE FOR STORING DEPTH INFORMATION IN RELATION TO AN IMAGE BASED ON DEPTH INFORMATION PROPERTIES OBTAINED USING AN IMAGE, AND ITS CONTROL METHOD | |
WO2019088555A1 (ko) | 전자 장치 및 이를 이용한 눈의 충혈도 판단 방법 | |
WO2023182727A1 (en) | Image verification method, diagnostic system performing same, and computer-readable recording medium having the method recorded thereon | |
WO2020171512A1 (en) | Electronic device for recommending composition and operating method thereof | |
WO2017090833A1 (en) | Photographing device and method of controlling the same | |
WO2021006522A1 (ko) | 딥 러닝 모델을 활용한 영상 진단 장치 및 그 방법 | |
WO2017010628A1 (en) | Method and photographing apparatus for controlling function based on gesture of user | |
WO2019107981A1 (en) | Electronic device recognizing text in image | |
WO2019083227A1 (en) | MEDICAL IMAGE PROCESSING METHOD, AND MEDICAL IMAGE PROCESSING APPARATUS IMPLEMENTING THE METHOD | |
WO2021054518A1 (ko) | 인공지능 기반 기술의 의료영상분석을 이용한 자궁경부암 진단방법, 장치 및 소프트웨어 프로그램 | |
WO2022114731A1 (ko) | 딥러닝 기반 비정상 행동을 탐지하여 인식하는 비정상 행동 탐지 시스템 및 탐지 방법 | |
EP3440593A1 (en) | Method and apparatus for iris recognition | |
WO2016182090A1 (ko) | 안경형 단말기 및 이의 제어방법 | |
WO2019208915A1 (ko) | 외부 장치의 자세 조정을 통해 복수의 카메라들을 이용하여 이미지를 획득하는 전자 장치 및 방법 | |
WO2023136409A1 (en) | Technique for identifying dementia based on mixed tests | |
WO2023022537A1 (ko) | Ai 기반 차량 디스크 불량 검출 시스템 | |
WO2020256325A1 (en) | Electronic device and method for providing function by using corneal image in electronic device | |
WO2020231156A1 (en) | Electronic device and method for acquiring biometric information using light of display | |
WO2018174431A1 (ko) | 옵티칼 렌즈 어셈블리 및 이를 포함한 전자 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21915797 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2301003997 Country of ref document: TH |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112023013249 Country of ref document: BR |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01E Ref document number: 112023013249 Country of ref document: BR Free format text: APRESENTE NOVA FOLHA DO RESUMO ADAPTADA AO ART. 22 INCISO I DA INSTRUCAO NORMATIVA/INPI/NO 31/2013, UMA VEZ QUE O CONTEUDO ENVIADO NA PETICAO NO 870230057308 DE 30/06/2023 ENCONTRA-SE FORA DA NORMA EM RELACAO AO TITULO, NAO APRESENTANDO O TITULO EM DESTAQUE. A EXIGENCIA DEVE SER RESPONDIDA EM ATE 60 (SESSENTA) DIAS DE SUA PUBLICACAO E DEVE SER REALIZADA POR MEIO DA PETICAO GRU CODIGO DE SERVICO 207. |
|
ENP | Entry into the national phase |
Ref document number: 112023013249 Country of ref document: BR Kind code of ref document: A2 Effective date: 20230630 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21915797 Country of ref document: EP Kind code of ref document: A1 |