KR20180123810A - Data enrichment processing technology and method for decoding x-ray medical image - Google Patents
Data enrichment processing technology and method for decoding x-ray medical image
- Publication number
- KR20180123810A KR1020170057979A KR20170057979A
- Authority
- KR
- South Korea
- Prior art keywords
- image
- neural network
- data
- line
- deep neural
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5205—Devices using data or image processing specially adapted for radiation diagnosis involving processing of raw data to produce diagnostic data
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/001—
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
Abstract
Description
The present invention relates to a computing device, and more particularly to a client device incorporating a neural network, and to a system including the same.
Computers today provide machine vision, or object recognition, which offers users a variety of useful tools. Object recognition typically relies on algorithms that include a neural network: a user device can use the neural network to recognize objects contained in an input image. In general, a neural network is trained to recognize objects using training images, and this recognition process can achieve higher discriminative power as more training images are used.
In general, neural networks comprise systems of interconnected "neurons". Neural networks compute values from inputs and, as a result of their adaptive nature, can perform pattern recognition as well as machine learning.
A neural network for image recognition requires memory- and processing-intensive data operations for both training and inference, so a large amount of computation is needed. In particular, the weight values require memory for storage and processing while these computational steps are performed.
As one would expect, the performance of a neural network improves as the size of the training data set increases. Unfortunately, mobile devices such as smartphones have limited memory and processing capacity, so these increasingly popular devices generally cannot use image recognition technology. Accordingly, a method is needed for improving the performance of a neural network on a computing device with limited resources.
It is an object of the present invention to provide an image analysis and transformation apparatus incorporating a neural network capable of performing object recognition with limited resources, and a system including the same.
According to an aspect of the present invention, a client device incorporating a neural network includes a processor, a memory that stores the neural network received from a server system, and an input device that receives the neural network, wherein the neural network is trained for the client device on the server system.
In an embodiment, the input device captures an image and stores the image input data in the memory.
In an embodiment, the device further includes a multilayer perceptron classifier that maps the image input data.
A learning-data storage device using images includes an input unit that receives a plurality of images, a control unit that builds a repository based on the received images, and a database in which data classified per object is stored. The control unit includes a detection unit that detects object features in the received images; a classification unit that groups the detected images and tags the grouped images; a chronological sorting unit that sorts the tagged images by date; and a DB construction unit that receives, from the database, the multimedia data corresponding to the dates of the sorted images and groups them.
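The grouping, tagging, and chronological-sorting pipeline described above can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation; all names here (`ImageRecord`, `build_database`, the `rib`/`spine` tags) are illustrative assumptions.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ImageRecord:
    object_tag: str   # tag assigned by the classification unit (hypothetical)
    year: int         # date used by the chronological sorting unit

def build_database(records):
    """Group tagged images by object, then sort each group chronologically."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec.object_tag].append(rec)      # grouping + tagging
    for tag in groups:
        groups[tag].sort(key=lambda r: r.year)  # chronological sorting
    return dict(groups)

db = build_database([
    ImageRecord("rib", 2016), ImageRecord("spine", 2015), ImageRecord("rib", 2014),
])
print([r.year for r in db["rib"]])  # -> [2014, 2016]
```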
In an embodiment, the training method of the neural network includes one of backpropagation and autoencoding.
According to another aspect of the present invention, a server system includes an input device that receives a training image; a neural network comprising at least two layer pairs, each layer pair consisting of a convolutional layer and a subsampling layer; and a multilayer perceptron classifier. The neural network performs quantization of temporary weights in the convolutional layer, generates a temporary feature map in the subsampling layer in response to an input applied to the convolutional layer, performs quantization of weights in the multilayer perceptron classifier, and generates a classification output in response to the temporary feature map applied to the quantized-weight multilayer perceptron.
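The server-side pipeline described above can be sketched numerically: quantize the convolutional weights, convolve, subsample to a temporary feature map, then classify with a quantized multilayer perceptron. This is a minimal NumPy sketch under stated assumptions — the layer sizes, the 8-level uniform quantizer, and the two-class output are illustrative, not taken from the patent.

```python
import numpy as np

def quantize(w, levels=8):
    """Uniform quantization of a weight array to a fixed number of levels."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

def conv2d_valid(img, k):
    """Plain 'valid' 2-D convolution (correlation form)."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
kernel = quantize(rng.standard_normal((3, 3)))         # quantized conv weights
fmap = conv2d_valid(img, kernel)                       # convolutional layer
fmap = fmap[::2, ::2]                                  # subsampling layer (stride 2)
w_mlp = quantize(rng.standard_normal((fmap.size, 2)))  # quantized MLP weights
scores = fmap.reshape(-1) @ w_mlp                      # classification output
print(scores.shape)  # -> (2,)
```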
According to an embodiment of the present invention, a client device incorporating a neural network capable of performing object recognition with limited resources, and a system including it, can be provided.
FIG. 1 is a block diagram illustrating a neural network according to an embodiment of the invention.
FIG. 2 is an exemplary diagram illustrating the data structure of a sample convolutional neural network used to perform a simulation according to the present invention.
Types of neural networks include those with one or two layers of unidirectional logic, those with complex multiple inputs, those with feedback loops in many directions, and those with many layers. In general, such systems use programmed algorithms to determine the control and organization of their functions. Most systems use "weights" (which can be expressed as values) to change the throughput parameters and the various connections to the neurons. Neural networks can learn spontaneously from previous training completed through the use of sets of training data.
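The "weights" mentioned above can be made concrete with a single neuron: it computes a value from its inputs via weighted connections, and learning amounts to adjusting those weight values from training data. A minimal sketch (the sigmoid activation and the specific numbers are illustrative assumptions):

```python
import numpy as np

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of inputs passed through a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, 0.1])   # connection weights, expressed as values
print(neuron(x, w) > 0.5)       # dot = 0.2 - 0.3 + 0.2 = 0.1 > 0 -> True
```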
By stacking multiple layers of convolution units, the convolution operation can find, step by step, contours from combinations of pixels, shapes from combinations of contours, and objects from combinations of shapes. In a convolutional neural network, the pooling operation finds which local pattern exists in which region of data such as an image; repeating this process reveals the characteristic features of the data.
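The pooling operation described above can be shown in isolation: a 2x2 max pool keeps the strongest local response in each region, which is how repeated convolution-plus-pooling stages summarize pixels into contours, shapes, and objects. A minimal sketch (the 2x2 window is an illustrative choice):

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling: keep the largest value in each 2x2 block."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.array([[1, 2, 0, 1],
              [3, 4, 1, 0],
              [0, 1, 8, 2],
              [1, 0, 3, 5]])
print(max_pool2x2(a))
# -> [[4 1]
#     [1 8]]
```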
101: Sensor node
102: Data preprocessing
103: Exploratory data analysis
104: Machine learning
105: Complex event processing
106: Report
107: Neural network
301: Input image
402: Convolution layer
403: SBSA layer
406: Result image
407: Neural network
Claims (9)
Extracting feature vectors from the data and image frames; generating a deep neural network (DNN) model by setting the feature vectors as inputs of the deep neural network, setting the feature vectors extracted from the target frame as outputs, and training the weights of the deep neural network based on each feature vector set as the input and output; and analyzing the learned data based on the generated deep neural network (DNN) model.
The image conversion software according to claim 1, wherein the input device captures an image and stores image input data in the memory.
The image conversion software according to claim 1, further comprising a multilayer perceptron classifier that maps the image input data.
The image conversion software according to claim 1, wherein the neural network comprises a convolutional neural network.
The image conversion software according to claim 1, wherein the neural network generates a feature map.
The image conversion software according to claim 5, wherein the feature map comprises a plurality of weight values derived from an input image.
A packet loss restoration method comprising: estimating the phase and log power spectra of a lost packet based on the deep neural network; and restoring the lost packet by inverse Fourier transforming the estimated phase and log power spectra.
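The restoration step in the claim above can be sketched numerically: given a phase and log power spectrum for a lost packet, the time-domain samples are recovered by an inverse Fourier transform. In this minimal sketch the "estimated" spectrum is taken from the true packet, standing in for the DNN's estimate; the packet length and signal are illustrative assumptions.

```python
import numpy as np

packet = np.sin(np.linspace(0, 4 * np.pi, 64))    # "lost" packet of samples
spec = np.fft.fft(packet)
phase = np.angle(spec)                            # phase (assumed DNN-estimated)
log_power = np.log(np.abs(spec) ** 2 + 1e-12)     # log power spectra

# Inverse Fourier transform of the spectrum rebuilt from phase + log power
magnitude = np.sqrt(np.exp(log_power))
restored = np.fft.ifft(magnitude * np.exp(1j * phase)).real

print(np.allclose(restored, packet, atol=1e-5))   # -> True
```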
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170057979A KR20180123810A (en) | 2017-05-10 | 2017-05-10 | Data enrichment processing technology and method for decoding x-ray medical image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170057979A KR20180123810A (en) | 2017-05-10 | 2017-05-10 | Data enrichment processing technology and method for decoding x-ray medical image |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20180123810A true KR20180123810A (en) | 2018-11-20 |
Family
ID=64568594
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020170057979A KR20180123810A (en) | 2017-05-10 | 2017-05-10 | Data enrichment processing technology and method for decoding x-ray medical image |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20180123810A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102043672B1 (en) * | 2019-03-29 | 2019-11-12 | 주식회사 딥노이드 | System and method for lesion interpretation based on deep learning |
KR102186893B1 (en) | 2019-10-15 | 2020-12-04 | 주식회사 리드브레인 | Medical image processing system using artificial intelligence based curriculum learning method and medical diagnosis and treatment system |
KR20210012233A (en) | 2019-07-24 | 2021-02-03 | 가톨릭대학교 산학협력단 | Method and apparatus for converting contrast enhanced image and non-enhanced image using artificial intelligence |
US11193884B2 (en) | 2018-07-02 | 2021-12-07 | The Research Foundation For The State University Of New York | System and method for structural characterization of materials by supervised machine learning-based analysis of their spectra |
WO2022119325A1 (en) * | 2020-12-01 | 2022-06-09 | 서울대학교병원 | Sleep apnea diagnostic auxiliary system using simple skull x-ray image and method for providing diagnostic auxiliary information using same |
WO2023054800A1 (en) * | 2021-09-30 | 2023-04-06 | 고려대학교 산학협력단 | Medical data split learning system, control method for same, and recording medium for performing method |
2017
- 2017-05-10 KR KR1020170057979A patent/KR20180123810A/en unknown