WO2022055023A1 - IoT integrated intelligent image analysis platform system capable of smart object recognition - Google Patents
IoT integrated intelligent image analysis platform system capable of smart object recognition
- Publication number: WO2022055023A1 (PCT/KR2020/016228)
- Authority: WIPO (PCT)
- Prior art keywords: image data, image, processing unit, platform system, unit
- Prior art date: 2020-09-14
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Definitions
- The present invention relates to an IoT integrated intelligent image analysis system capable of smart object recognition, and more particularly, to an IoT integrated intelligent image analysis platform system capable of object recognition that predicts and prevents abnormal situations such as failures or accidents by considering both non-image data and image data collected through IoT functions.
- Monitoring systems, which watch a specific place through monitoring means and enable countermeasures or after-the-fact confirmation when an abnormality is found, have been introduced in various places where security matters, such as entrances, parking lots, buildings, industrial sites, and residential areas. They have been shown to improve security, simplify access control, and lower crime rates.
- Such systems typically record the surveillance video through a DVR (Digital Video Recorder).
- A person can easily recognize people, objects, scenes, and visual details when looking at a photo or video.
- The goal of object recognition technology is to teach computers to do what humans can do, such as understanding what is contained in an image.
- Object recognition, which allows a computer to analyze and interpret the visual information from which a person receives the most information, is a computer vision technology that identifies objects in images or video; built on deep learning and machine learning algorithms, it is a key capability.
- Object recognition using machine learning algorithms is applied in various fields such as video surveillance, face recognition, robot control, IoT, autonomous driving, manufacturing, and security.
- (Patent Document 1) Republic of Korea Patent No. 10-0980586 (registered on August 31, 2010)
- An object of the present invention is to provide an IoT integrated intelligent image analysis platform system capable of object recognition that can smartly predict and prevent abnormal situations such as failures or accidents by considering both non-image data and image data collected through IoT functions.
- To this end, the IoT integrated intelligent image analysis platform system integrates and analyzes image data and non-image data, and comprises: an image data acquisition unit that acquires at least one piece of image data; a non-image data acquisition unit that acquires at least one piece of non-image data; an image data processing unit that analyzes the image data; a non-image data processing unit that analyzes the non-image data; and an integrated data determination unit that makes the final determination of an abnormal situation when the image data processing unit or the non-image data processing unit determines, from the image data or the non-image data, that the situation is abnormal. The image data processing unit recognizes an object from the acquired image data and estimates the state of the object, the authenticity of the object, or an action event of the object.
- In analyzing the non-image data, the non-image data processing unit defines as an abnormal event a case in which the measured value of the non-image data is outside the data range of a normal situation, and determines an abnormal situation in consideration of whether the abnormal event has occurred, its occurrence time, and a predefined occurrence count per unit time.
- When the non-image data processing unit determines an abnormal situation, the image data processing unit is controlled to determine whether an abnormality is present; if the image data processing unit also determines an abnormal situation, the situation is finally determined to be abnormal.
- The image data processing unit may further include an object processing unit that processes a function of recognizing an object from the acquired image data, and a user learning setting unit 302 that provides the user with functions related to machine learning of image data.
- The object processing unit may further include an object authenticity identification unit that extracts the object from the image data and determines whether it is forged; an object state recognition unit that estimates the state of the object from the image data; and an object action recognition unit that estimates an action event of the object from the image data.
- The object authenticity identification unit extracts an image from the image data, analyzes the colors of the pixels constituting the extracted image, extracts a desired color from the analyzed colors, and then derives, through a genuineness determination algorithm, the probability that the object is genuine.
- The object state recognition unit extracts an image from the image data, filters the image, analyzes the colors of the pixels of the filtered image, and derives a ratio for each color to estimate the degree of deterioration; it also estimates the surface roughness through pre-processing of the image, thereby estimating whether the object is damaged.
- The object behavior recognition unit detects an object from the image data and, by learning through a neural network to classify the type of the detected object into pre-machine-learned labels, estimates the object's action event.
- The IoT integrated intelligent image analysis platform system of the present invention uses a deep learning algorithm to detect and classify objects when a specific event (wandering, intrusion, fire, abandonment, collapse, fighting, etc.) occurs in the image; by raising an alarm and storing the analysis information in a database, it has the advantage of enabling continuous monitoring and accident prevention.
- FIG. 1 is a block diagram showing the overall configuration of an IoT integrated intelligent image analysis platform system according to an embodiment of the present invention.
- FIG. 2 is a detailed block diagram showing the internal configuration of the image data processing unit of FIG. 1.
- FIG. 3 is a block diagram illustrating a surface damage analysis function of an IoT integrated intelligent image analysis platform system according to an embodiment of the present invention.
- FIG. 4 is a data flow diagram illustrating a damage detection algorithm analysis and result confirmation process of the IoT integrated intelligent image analysis platform method according to an embodiment of the present invention.
- FIG. 5 is a block diagram illustrating a genuine/fake analysis function of an IoT integrated intelligent image analysis platform system according to an embodiment of the present invention.
- FIG. 6 is a data flow diagram illustrating a process of analyzing a true/fake algorithm and confirming a result of the IoT integrated intelligent image analysis platform method according to an embodiment of the present invention.
- Referring to the drawings, FIG. 1 is a block diagram showing the overall configuration of an IoT integrated intelligent image analysis platform system according to an embodiment of the present invention, and FIG. 2 is a detailed block diagram showing the internal configuration of the image data processing unit of FIG. 1.
- The IoT integrated intelligent image analysis platform system of the present invention may include an analysis server 1000 connected to a manager mobile terminal 700, a manager client 750, an IoT imaging device 800, and an IoT non-image sensor 900.
- The IoT non-image sensor 900 collects non-image data and may be, for example, one of various sensors such as a temperature sensor, a humidity sensor, or an illuminance sensor.
- The manager mobile terminal 700 may receive the results of the object recognition and object state analysis performed by the analysis server 1000 on the collected image data and non-image data; it may also collect image data through its photo-taking function on behalf of the IoT imaging device 800 and transmit it to the analysis server 1000 to request object recognition and analysis.
- The manager client 750 receives the results of the object recognition and object state analysis (e.g., surface roughness or damage) performed by the analysis server 1000 on the collected image data and non-image data, and can generate statistical data and reports that reinterpret the analysis results; as a pre-processing step before object analysis, it can also provide the analysis server 1000 with the necessary input variables or analysis range designations for use in object analysis.
- The analysis server 1000 receives the image data and non-image data and performs object analysis. To this end, it may include an image data acquisition unit 100, a non-image data acquisition unit 200, an image data processing unit 300, and a non-image data processing unit 400.
- The image data acquisition unit 100 acquires at least one piece of image data from the IoT imaging device 800, and the non-image data acquisition unit 200 acquires at least one piece of non-image data from the IoT non-image sensor 900.
- The image data may be an object image acquired from a camera; a single-frame object image (photograph) is also regarded as image data.
- The image data processing unit 300 may perform functions such as recognizing an object from the acquired image data, determining the state of the object, and determining whether the object is authentic.
- The non-image data processing unit 400 analyzes non-image data such as sensed data. A case in which the measured (sensed) value of the non-image data is outside the data range of a normal situation is defined as an abnormal event, and an abnormal situation can be determined by considering whether the abnormal event has occurred, its occurrence time, and a predefined occurrence count per unit time, as in the sketch below.
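As an illustration of this rule, the following is a minimal Python sketch; the normal range, window length, and count threshold are hypothetical values chosen for the example, not parameters disclosed by the invention.

```python
from collections import deque
import time

class AbnormalEventDetector:
    """Flags an abnormal situation when out-of-range measurements occur
    more than `max_count` times within a unit time window."""

    def __init__(self, normal_range=(0.0, 80.0), max_count=3, window_sec=60.0):
        self.normal_range = normal_range  # data range of a normal situation
        self.max_count = max_count        # predefined occurrence count per unit time
        self.window_sec = window_sec      # unit time window (seconds)
        self.events = deque()             # occurrence times of abnormal events

    def feed(self, value, now=None):
        """Feed one sensed value; return True when an abnormal situation is judged."""
        now = time.time() if now is None else now
        low, high = self.normal_range
        if not (low <= value <= high):    # measured value outside the normal range
            self.events.append(now)       # record the occurrence time
        while self.events and now - self.events[0] > self.window_sec:
            self.events.popleft()         # discard events older than the window
        return len(self.events) > self.max_count
```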
- When the image data processing unit 300 or the non-image data processing unit 400 determines an abnormal situation, the integrated data determination unit 500 can make the final determination. Specifically, when the non-image data processing unit 400 determines an abnormal situation, the integrated data determination unit 500 controls the image data processing unit 300 to check for an abnormality based on the image data whose acquisition location and/or time is identical or closest to those of the event; if the image data processing unit 300 also determines an abnormal situation, the situation is finally determined to be abnormal, as sketched below.
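A minimal sketch of this cross-check follows, assuming Python; the record structure, distance weights, and function names are illustrative assumptions, not the invention's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class ImageRecord:
    timestamp: float                # acquisition time (seconds since epoch)
    location: Tuple[float, float]   # (x, y) position of the IoT imaging device
    frame: object                   # the image payload

def nearest_image(records: List[ImageRecord], event_time: float,
                  event_loc: Tuple[float, float],
                  time_weight: float = 1.0, dist_weight: float = 1.0) -> ImageRecord:
    """Select the image whose acquisition time/location is identical or
    closest to those of the non-image abnormal event."""
    def score(r: ImageRecord) -> float:
        dt = abs(r.timestamp - event_time)
        dx = ((r.location[0] - event_loc[0]) ** 2 +
              (r.location[1] - event_loc[1]) ** 2) ** 0.5
        return time_weight * dt + dist_weight * dx
    return min(records, key=score)

def confirm_abnormal(records, event_time, event_loc,
                     analyze_image: Callable[[object], bool]) -> bool:
    """Final decision: abnormal only when the image-side analysis agrees."""
    candidate = nearest_image(records, event_time, event_loc)
    return analyze_image(candidate.frame)
```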
- The image data processing unit 300 includes an object processing unit 301 that performs object-related functions on the image data, and an image/learning database 303 that labels and stores image/learning data.
- The object processing unit 301 performs functions of recognition, identification, and pattern confirmation related to an object in the image data.
- The object processing unit 301 may include an object authenticity identification unit 3011 that extracts an object from the image data and determines whether it is forged (e.g., whether the recognized object is a synthesized fake image), an object state recognition unit 3012 that can recognize and estimate the state of the object, and an object behavior recognition unit 3013 that can recognize and estimate an action event of the object (e.g., whether a fight has occurred due to violent actions between objects).
- The object authenticity identification unit 3011 extracts an image from the acquired image data and distinguishes the real from the fake object (or product) by deep-learning-based image processing; for this distinction, results may be derived by analyzing the colors of the image data, the color ratio, the text included in the image, the surface texture, and the like.
- To this end, the color information of each pixel of the image is analyzed and classified, and a desired color is extracted from the analyzed color information. A K-means clustering algorithm or the like can be used, together with a library such as OpenCV. The clustering algorithm, based on the similarity between data, minimizes the variance within clusters; the color ratio within the item is then identified from the clustered colors and can be extracted using OpenCV. By learning the color ratios of the genuine article, the difference between a real image and a fake image can be distinguished according to the extracted color ratio, as in the sketch below.
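The following is a minimal sketch of this color-ratio extraction with OpenCV's K-means; the cluster count and the comparison against genuine reference ratios are illustrative choices, not the invention's exact procedure.

```python
import cv2
import numpy as np

def color_ratios(image_bgr: np.ndarray, k: int = 5):
    """Cluster pixel colors with K-means and return each cluster's share,
    i.e. the color ratio within the item."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=k)
    return centers.astype(np.uint8), counts / counts.sum()

def ratio_distance(ratios: np.ndarray, genuine_ratios: np.ndarray) -> float:
    """Crude real/fake cue: compare sorted ratios against ratios learned
    from genuine reference images (cluster order is not stable)."""
    return float(np.abs(np.sort(ratios) - np.sort(genuine_ratios)).sum())
```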
- The GAN algorithm can extract the surface material characteristics of an item through a learning scheme based on competition between a generator and a discriminator. The generator produces replica images, and through mutual feedback and learning between the genuine model and the fake model using differences in surface material, the accuracy of the counterfeit reading algorithm, a learning model for identifying genuine products, can be increased; with the improved model, a fake image can be identified when it is input. A minimal sketch of this adversarial scheme follows.
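To make the adversarial scheme concrete, here is a minimal training-step sketch, assuming PyTorch; the 64x64 RGB input, layer sizes, and all names are illustrative assumptions rather than the invention's actual counterfeit-reading model.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Produces 64x64 RGB replica images from random noise."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Learns to tell genuine surface texture from replicas; once trained,
    its output can be used to score how genuine an input image looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 1, 8, 1, 0), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).view(-1)

def train_step(G, D, real, opt_g, opt_d, z_dim=100):
    """One round of mutual feedback between generator and discriminator."""
    bce = nn.BCELoss()
    b = real.size(0)
    ones, zeros = torch.ones(b), torch.zeros(b)
    fake = G(torch.randn(b, z_dim, 1, 1))
    # Discriminator step: genuine images vs. generated replicas.
    loss_d = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator step: try to fool the discriminator.
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```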
- The object authenticity identification unit 3011 also recognizes the label attached to the product (object) to guarantee authenticity; to determine whether the object is genuine or fake, the text information on the label attached to the object must be recognized as well.
- The object authenticity identification unit 3011 identifies an item using a CNN algorithm; the CNN can extract features unique to the genuine product from the identified item and may be used as a learning model for reading illegal copies, as sketched below.
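A minimal classifier sketch follows, assuming PyTorch and a 64x64 RGB input; the architecture, class count, and names are illustrative assumptions, not the invention's actual model.

```python
import torch
import torch.nn as nn

class GenuineCNN(nn.Module):
    """Small CNN that classifies a 64x64 product image as genuine or fake."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32x32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16x16
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)) # -> 8x8
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

# The softmax over the logits yields the probability that the object is
# genuine, as described in the claims.
probs = torch.softmax(GenuineCNN()(torch.randn(1, 3, 64, 64)), dim=1)
```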
- The object state recognition unit 3012 extracts an image from the acquired image data and estimates the state of the object (or product), for example its aged state, by deep-learning-based image processing; for such estimation, results may be derived by analyzing the color, surface roughness, and so on of the image data.
- For example, the object state recognition unit 3012 analyzes and extracts the deteriorated parts from a turbine blade captured in the image data and indicates their ratio, estimating the degree of deterioration of the turbine blade and predicting its replacement cycle. This makes it possible to prevent accidents caused by missed replacement timing, and to avoid unnecessarily replacing turbine blades that are in a normal state because of an incorrect estimate.
- Specifically, illuminance or contrast is filtered using OpenCV or the like, a specific color is extracted and binarized, and the ratio can then be calculated. The color information of each pixel of the entire image is analyzed through a K-means algorithm or the like (color classification), after which a specific color ratio is derived from the entire image (color extraction). Through these color classification and extraction steps, the degree of deterioration (damage) of the object can be estimated efficiently, as in the sketch below.
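A minimal sketch of this filter-binarize-ratio pipeline with OpenCV follows; the HSV band (a rust-like hue) is a hypothetical stand-in for the deterioration color of interest.

```python
import cv2
import numpy as np

def deterioration_ratio(image_bgr, hsv_low=(5, 60, 60), hsv_high=(25, 255, 255)):
    """Estimate the deterioration (damage) ratio as the share of pixels
    whose color falls within a target band."""
    blurred = cv2.GaussianBlur(image_bgr, (5, 5), 0)   # illuminance/contrast filtering
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))  # binarization
    return cv2.countNonZero(mask) / mask.size          # ratio over the whole image
```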
- The surface roughness of the object can be analyzed by pre-processing the entire image and grasping the contour of the surface constituting the object through the Canny edge algorithm. For example, when a crack occurs on the surface of a turbine blade or its roughness increases, corresponding edges appear; by detecting these and calculating the volume or area, the surface roughness of the object can be estimated, as sketched below.
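The following Canny-based sketch illustrates such a roughness estimate; the thresholds and the edge-density/area proxies are illustrative choices, not the invention's exact formula.

```python
import cv2

def surface_roughness_score(image_bgr, t1=50, t2=150):
    """Proxy for surface roughness: cracks and rough textures produce
    more edge pixels and larger contour areas."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)           # pre-processing
    edges = cv2.Canny(gray, t1, t2)                    # surface contours
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    edge_area = sum(cv2.contourArea(c) for c in contours)
    edge_density = cv2.countNonZero(edges) / edges.size
    return edge_density, edge_area
```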
- The surface roughness and deterioration ratio of the object in the image included in the image data are calculated, and the calculated results (surface roughness, degree of deterioration) are provided to the user in a visual form.
- The user/administrator, who previously could estimate the state of an object only with the naked eye, can thus assess it together with numerical data (illuminance, deterioration, and other color information), so the state of the object (e.g., whether replacement is necessary) can be estimated more clearly.
- The degree of damage can be determined by masking the damaged area using R-CNN and calculating the volume of the masked area; the volume can be calculated by decomposing the coordinates into tetrahedra. Depending on the application, the degree of damage may instead be determined by calculating the area, as in the sketch below.
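An area-based sketch follows, assuming torchvision's pretrained Mask R-CNN as a stand-in (in practice the model would be fine-tuned on labeled damage masks); the tetrahedron-based volume computation is not reproduced here.

```python
import torch
import torchvision

def damaged_area_ratio(image, score_thresh=0.5):
    """Mask candidate damage regions with Mask R-CNN and measure the masked
    area as a fraction of the image. `image` is a FloatTensor (3, H, W) in [0, 1]."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] > score_thresh
    if not keep.any():
        return 0.0
    masks = out["masks"][keep] > 0.5          # (N, 1, H, W) boolean masks
    damaged = masks.any(dim=0).sum().item()   # pixels covered by any mask
    return damaged / (image.shape[1] * image.shape[2])
```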
- The object behavior recognition unit 3013 extracts an image from the acquired image data, detects an object (or product) by deep-learning-based image processing, and classifies the object from the image to estimate a specific behavior of the object (e.g., wandering, intrusion, fire, abandonment, falling, fighting). For such estimation, results can be derived through section cutting of the image data, image processing, object detection, image classification, and so on.
- For example, a person object is detected from image data obtained through a surveillance camera or the like using YOLO 604 or the like, and the detected person's action (event) is estimated by machine-learned image label classification (e.g., a Convolutional Neural Network (CNN) algorithm).
- Accordingly, the present invention can be used effectively to prevent crimes such as violence, arson, abuse, and kidnapping in a specific space, or to enhance security.
- More specifically, an image may be extracted from the image data, and an object may be detected from the image through a deep learning algorithm; a model such as YOLO v3 can be used for this. By classifying the detections with a neural network such as a CNN, it is possible to determine whether successive images of the object should be classified as, for example, wandering behavior or arson behavior, as in the sketch below.
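The following detection sketch assumes the ultralytics package with a recent YOLO model as a stand-in for the YOLO v3 mentioned above; the behavior classifier that would consume the crops (e.g., a CNN like the one sketched earlier) is omitted.

```python
from ultralytics import YOLO  # assumption: ultralytics as the YOLO implementation

model = YOLO("yolov8n.pt")    # illustrative stand-in for YOLO v3

def person_crops(frame_bgr):
    """Detect person objects in a frame; each crop can then be fed to a CNN
    behavior classifier (wandering, intrusion, fire, abandonment, ...)."""
    result = model(frame_bgr)[0]
    crops = []
    for box in result.boxes:
        if int(box.cls) == 0:                     # COCO class 0 = "person"
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            crops.append(frame_bgr[y1:y2, x1:x2])
    return crops
```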
- The user learning setting unit 302 works with images extracted from the acquired image data, or with the image/learning database 303 that stores training data, to generate the learning data needed for deep learning. It provides a tracking function and convenient management for various users when labeling, supports evaluation of training models, and offers an image editing function (e.g., image cropping) for regions or frames containing desired objects; an interworking program (app) may be provided for this.
- For example, when the user learning setting unit 302 detects a person object from image data obtained through a surveillance camera or the like and estimates the detected person's action (event) through image classification, such as whether the person is fighting, the object must be tracked efficiently; the functions above can effectively increase the accuracy of such machine learning results.
- The user learning setting unit 302 may use an image processing library such as OpenCV to provide an object tracking function for a labeled object, or, for accurate estimation of the object's behavior, may provide an image frame division function that splits video captured by an IoT imaging device 800 such as a CCTV camera into frame units and stores them as images in the image/learning database 303, as sketched below.
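A minimal sketch of both functions with OpenCV follows; the file paths and frame-sampling interval are illustrative, and the CSRT tracker requires the opencv-contrib-python build.

```python
import cv2

def split_video_to_frames(video_path, out_dir, every_n=5):
    """Split CCTV footage into frame images for the image/learning database."""
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:                    # keep every n-th frame
            cv2.imwrite(f"{out_dir}/frame_{idx:06d}.jpg", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

def track_labeled_object(video_path, init_bbox):
    """Track a labeled object across frames with the CSRT tracker."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    tracker = cv2.TrackerCSRT_create()            # from opencv-contrib-python
    tracker.init(frame, init_bbox)                # init_bbox = (x, y, w, h)
    boxes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)
        boxes.append(bbox if found else None)
    cap.release()
    return boxes
```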
- Referring to FIG. 4, the manager mobile terminal 700 acquires an object image using its photo-taking function and transmits the image data including the acquired object image to the analysis server 1000 through a wired/wireless communication network such as the Internet, an intranet, or an LTE network (S10, S12).
- The analysis server 1000 stores the transmitted image data (photographs) in the database 303, and the manager client 750 can request and retrieve the image data from the analysis server 1000 to specify the analysis range (S14, S16).
- An analysis range may be designated for each piece of data; after the analysis range is designated, it may be transmitted to the analysis server 1000 with a request for analysis by, for example, a damage detection algorithm (S18, S20). The damage detection algorithm may be any algorithm capable of detecting the degree of damage to the object, such as the K-means algorithm or Canny edge algorithm described above.
- The analysis server 1000 derives the analysis result using the damage detection algorithm, generates the roughness and damage analysis results by deep learning, and stores them in the database 303 (S22, S24).
- The manager mobile terminal 700 may request the illuminance and damage analysis results from the analysis server 1000 through an interworking manager app or the like, and receive and confirm the results from the analysis server 1000 (S26, S28, S30).
- The analysis results can also be provided to the manager client 750.
- An illuminance-change image list (including the labeling and deterioration information of each image) or a search function for each measurement image point can be provided. The management program may include a function for comparing the image analysis results against the detection algorithm through an analysis chart, or for providing results for each measurement image point.
- A normal reference value is set according to the quantified surface roughness ratio in the damage analysis result, and the result is classified as normal, replacement recommended, or replacement needed; in the latter cases, an alarm may be provided to the manager client 750, as in the sketch below.
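As a sketch of this three-level classification (the threshold values are hypothetical, not disclosed by the invention):

```python
def blade_status(roughness_ratio, recommend_at=0.10, replace_at=0.25):
    """Classify the quantified surface-roughness ratio against a normal
    reference value."""
    if roughness_ratio >= replace_at:
        return "replacement needed"      # triggers an alarm to the manager client 750
    if roughness_ratio >= recommend_at:
        return "replacement recommended"
    return "normal"
```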
- FIG. 6 is a data flow diagram illustrating a process of analyzing a true/fake algorithm and confirming a result of the IoT integrated intelligent image analysis platform method according to an embodiment of the present invention.
- Referring to FIG. 6, an object image whose authenticity needs to be checked is acquired, and the image data including the acquired object image is transmitted to the analysis server 1000 through a wired/wireless communication network such as the Internet, an intranet, or an LTE network (S50, S52).
- The analysis server 1000 stores the transmitted image data (photographs) in the database 303, and the manager client 750 can request and retrieve the image data from the analysis server 1000 to specify the analysis range (S54, S56).
- The manager client 750 can designate an analysis range for each piece of data and, after designating the analysis range, transmit it to the analysis server 1000 with a request for analysis by, for example, a genuine/fake reading algorithm (S58, S60); the genuine/fake reading algorithm may be a neural network algorithm such as the CNN, RNN, or GAN described above.
- The analysis server 1000 derives the analysis result for the designated data range using the genuine/fake reading algorithm; the reading result is generated through deep-learning analysis of the surface texture and trademark pattern and is stored in the database 303 (S62, S64).
- The manager mobile terminal 700 may request the genuine/fake reading result from the analysis server 1000 through an interworking manager app or the like, and receive and confirm the result from the analysis server 1000 (S66, S68, S70).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Claims (8)
- An IoT integrated intelligent image analysis platform system capable of smart object recognition, which integrates and analyzes image data and non-image data, the system comprising: an image data acquisition unit that acquires at least one piece of image data; a non-image data acquisition unit that acquires at least one piece of non-image data; an image data processing unit that analyzes the image data; a non-image data processing unit that analyzes the non-image data; and an integrated data determination unit that makes the final determination of an abnormal situation when the image data processing unit or the non-image data processing unit determines, from the image data or the non-image data, that the situation is abnormal, wherein the image data processing unit recognizes an object from the acquired image data and estimates the state of the object, the authenticity of the object, or an action event of the object.
- The system of claim 1, wherein, in analyzing the non-image data, the non-image data processing unit defines as an abnormal event a case in which a measured value of the non-image data is outside the data range of a normal situation, and determines an abnormal situation in consideration of whether the abnormal event has occurred, its occurrence time, and a predefined occurrence count per unit time.
- The system of claim 2, wherein, when the non-image data processing unit determines an abnormal situation, the integrated data determination unit controls the image data processing unit to determine whether an abnormality is present, based on image data whose location and/or time is identical or closest to the location and/or time acquired by the image data acquisition unit, and finally determines an abnormal situation if the image data processing unit determines an abnormal situation.
- The system of any one of claims 1 to 3, wherein the image data processing unit further comprises: an object processing unit that processes a function of recognizing an object from the acquired image data; and a user learning setting unit that provides the user with functions related to machine learning of the image data.
- The system of claim 4, wherein the object processing unit further comprises: an object authenticity identification unit that extracts an object from the image data and determines whether it is forged; an object state recognition unit that estimates the state of the object from the image data; and an object behavior recognition unit that estimates an action event of the object from the image data.
- The system of claim 5, wherein the object authenticity identification unit extracts an image from the image data, analyzes the colors of the pixels constituting the extracted image, extracts a desired color from the analyzed colors, and then derives, through a genuineness determination algorithm, the probability that the object is genuine.
- The system of claim 5, wherein the object state recognition unit extracts an image from the image data, filters the image, analyzes the colors of the pixels of the filtered image, derives a ratio for each color from the image to estimate the degree of deterioration, and estimates the surface roughness through pre-processing of the image, thereby estimating whether the object is damaged.
- The system of claim 5, wherein the object behavior recognition unit detects an object from the image data and, by learning through a neural network to classify the type of the detected object into pre-machine-learned labels, estimates an action event of the object.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0117503 | 2020-09-14 | ||
KR1020200117503A KR102263512B1 (ko) | 2020-09-14 | 2020-09-14 | IoT integrated intelligent image analysis platform system capable of smart object recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022055023A1 true WO2022055023A1 (ko) | 2022-03-17 |
Family
ID=76377968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/016228 WO2022055023A1 (ko) | 2020-09-14 | 2020-11-18 | IoT integrated intelligent image analysis platform system capable of smart object recognition |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102263512B1 (ko) |
WO (1) | WO2022055023A1 (ko) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102667443B1 (ko) * | 2021-06-16 | 2024-05-22 | Dong-eui University Industry-Academia Cooperation Foundation | Parking management method for residential areas |
KR102541221B1 (ko) | 2022-11-08 | 2023-06-13 | Kyungwoo Systech Co., Ltd. | Mobile intelligent CCTV system including an artificial-intelligence-based image recognition platform |
KR102541212B1 (ko) | 2022-11-08 | 2023-06-13 | Youngshin Co., Ltd. | Embedded image recognition integrated safety control platform including an artificial-intelligence-based image recognition system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100881230B1 (ko) * | 2008-08-27 | 2009-02-09 | Sangsang Dome Co., Ltd. | High-precision counterfeit discrimination system using stereo images |
JP2011021951A (ja) * | 2009-07-14 | 2011-02-03 | Mitsubishi Heavy Ind Ltd | Corrosion environment monitoring apparatus and method |
KR101772916B1 (ko) * | 2016-12-30 | 2017-08-31 | Hanyang University ERICA Industry-Academia Cooperation Foundation | Method and system for detecting cracks in tunnel lining surfaces using captured images and an AI-algorithm-based automated crack detection program |
KR20190098105A (ko) * | 2019-08-02 | 2019-08-21 | LG Electronics Inc. | Smart home monitoring apparatus and method |
KR102058452B1 (ko) * | 2019-06-28 | 2019-12-23 | Gaon Platform Co., Ltd. | IoT convergence intelligent image analysis platform system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100980586B1 (ko) | 2010-05-07 | 2010-09-06 | SLT Co., Ltd. | Intelligent video security and crime prevention method and system using single or multiple cameras |
- 2020
- 2020-09-14 KR KR1020200117503A patent/KR102263512B1/ko active IP Right Grant
- 2020-11-18 WO PCT/KR2020/016228 patent/WO2022055023A1/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
KR102263512B1 (ko) | 2021-06-10 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20953425; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20953425; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14/09/2023) |