WO2021020866A1 - Image analysis system and method for remote monitoring - Google Patents

Image analysis system and method for remote monitoring

Info

Publication number
WO2021020866A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
images
event
monitored
Prior art date
Application number
PCT/KR2020/009954
Other languages
English (en)
Korean (ko)
Other versions
WO2021020866A9 (fr)
Inventor
심재술
Original Assignee
(주)유디피
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by (주)유디피 filed Critical (주)유디피
Publication of WO2021020866A1
Publication of WO2021020866A9

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G06T7/001 Industrial image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Definitions

  • The present invention relates to an image analysis system and method for remote monitoring, and more particularly to a system and method that analyze, on the basis of deep learning, objects identified in images received periodically, rather than in real time, from a camera unit located at a remote site, and that provide an event when an object to be monitored is detected, thereby reducing the time and cost of analyzing the monitored object.
  • The present invention applies a deep-learning-based image analysis method to images received from a remotely located camera that transmits very few frames per second, so that the object to be monitored can be detected easily in those images and an event can be provided; it also aims to shorten the time required for deep-learning-based video analysis, thereby increasing the efficiency and reliability of the video monitoring system.
  • A further object of the present invention is to reduce system configuration cost by supporting reliable detection of the object to be monitored even when a low-cost camera with a very small number of transmission frames per second is used.
  • An image analysis system for remote monitoring according to the present invention includes: an image collection unit that receives a plurality of images from a camera unit; an object extraction unit that analyzes each of the plurality of images according to a preset image analysis algorithm and, when an object in which movement has occurred is detected, extracts an object region for that object from the plurality of images; an image synthesis unit that generates a composite image by combining one or more object regions extracted by the object extraction unit into a single image; a deep learning unit that identifies the object to be monitored by analyzing the composite image through a deep learning algorithm in which a pattern for the object to be monitored has been learned in advance; and an event determination unit that receives object information on the identified object from the deep learning unit and determines that an event has occurred when the object information satisfies a preset event occurrence condition.
  • The object information includes the type of the object and its degree of similarity to the object to be monitored, and the event determination unit may determine that an event has occurred when the type of the object according to the object information matches a type preset for the object to be monitored and the degree of similarity is equal to or greater than a preset reference value.
  • The system may further include an event notification unit that, when the event determination unit determines that an event has occurred, generates event information and outputs it or transmits it to a preset external device.
  • The object extraction unit may generate a median image by synthesizing the plurality of images and, using the difference image between each of the plurality of images and the median image, extract the object region of the moving object from each image in which that object is detected.
  • The object extraction unit may extract the object region from a specific image in which the object is detected along the outline of the region determined to be the object, and the image synthesis unit may combine the one or more object regions extracted by the object extraction unit into one composite image by solving a preset bin packing problem.
  • The plurality of images may be images corresponding to an object detected either by a sensor configured in the camera unit or by image analysis performed by the camera unit itself.
  • An image analysis method for remote monitoring, performed by a monitoring server that communicates with a camera unit through a communication network, includes: receiving a plurality of images from the camera unit; analyzing each of the plurality of images according to a preset image analysis algorithm and, when an object in which movement has occurred is detected, extracting an object region for that object from the plurality of images; generating a composite image by combining the extracted one or more object regions into one image; identifying the object to be monitored by analyzing the composite image through a deep learning algorithm in which a pattern for the predetermined object to be monitored has been learned; and receiving object information on the identified object and determining that an event has occurred when the object information satisfies a preset event occurrence condition.
  • According to the present invention, when a camera unit at a remote location transmits a plurality of images to a monitoring server upon detecting an object, those images may arrive as snapshots at a frame rate too low for a real-time image analysis algorithm to identify the object to be monitored, owing to the long data transmission distance and the low performance of the camera unit. Even so, the monitoring server separates and extracts only the object regions from the transmitted images and synthesizes them into a single image for analysis, so the camera unit located at the remote site can be a low-cost camera; this reduces system configuration cost while guaranteeing the reliability of the object analysis results.
  • In addition, rather than analyzing the entire area of each of the plurality of images received from the camera unit through a deep learning algorithm, the monitoring server separates only the object areas in which movement has occurred and analyzes a single composite image synthesized from them, which greatly shortens the analysis time the deep learning algorithm needs for object identification. Through this, even when many camera units communicate with the monitoring server, rapid event determination based on object identification is supported, and as the number of camera units grows, the number of monitoring servers and the hardware performance needed to accommodate them can be kept low, reducing system configuration cost.
  • FIG. 1 is a block diagram of an image analysis system for remote monitoring according to an embodiment of the present invention.
  • FIG. 2 is a detailed configuration diagram of a monitoring server configuring an image analysis system for remote monitoring according to an embodiment of the present invention.
  • FIGS. 3 to 5 are diagrams illustrating the process by which the monitoring server extracts object areas and generates a composite image according to an embodiment of the present invention.
  • FIGS. 6 and 7 are diagrams illustrating the operation of the monitoring server for identifying the object to be monitored and generating an event according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of an image analysis method for remote monitoring according to an embodiment of the present invention.
  • FIG. 1 is a configuration diagram of an image analysis system for remote monitoring according to an embodiment of the present invention; as shown, the system may include a camera unit 10 located at a remote site and a monitoring server 100 that communicates with it through a communication network.
  • The camera unit 10 may be configured as an IP (Internet Protocol) camera.
  • The camera unit 10 may include a sensor unit, such as a passive infrared (PIR) sensor, or may perform its own image analysis; when it detects an object that satisfies a preset condition through the sensing signal of the sensor unit or through that image analysis, it may transmit to the monitoring server 100 a plurality of images in which the object is detected, based on the object detection time point.
  • Each of the plurality of images may consist of a single frame, that is, a snapshot.
  • Because the distance to the monitoring server 100 makes it difficult for the camera unit 10 to transmit the large amount of data of a real-time high-definition video, the camera unit 10 may, taking into account the data size of the images, its own data transmission speed, and the network environment, periodically generate images composed of a very small number of snapshots per second during the time an object is detected and transmit them to the monitoring server 100.
  • For example, the camera unit 10 may generate and transmit images at a rate of two snapshot frames per second, as sketched below.
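  As an illustration only, the following is a minimal camera-side sketch of this periodic snapshot transmission, assuming OpenCV for capture and an HTTP POST as the transport; the object_detected helper (a PIR trigger or self image analysis) and the server URL scheme are hypothetical, since the patent does not specify a transport protocol:

```python
import time

import cv2
import requests  # assumed transport; the patent only says images are sent over a network

def send_snapshots_while_detected(capture_url, server_url, fps=2):
    """Camera-side sketch: while an object is detected, send ~fps snapshot frames per second."""
    cap = cv2.VideoCapture(capture_url)
    while object_detected():  # hypothetical helper: PIR sensing signal or self image analysis
        ok, frame = cap.read()
        if not ok:
            break
        _, jpeg = cv2.imencode(".jpg", frame)  # one snapshot frame
        requests.post(server_url, data=jpeg.tobytes(),
                      headers={"Content-Type": "image/jpeg"})
        time.sleep(1.0 / fps)  # e.g. two snapshot frames per second
    cap.release()
```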
  • The monitoring server 100 may receive a plurality of images from the camera unit 10 according to the detection of one or more objects.
  • The monitoring server 100 detects and extracts moving objects from the plurality of periodically transmitted snapshot-style images, rather than from a real-time video, identifies the object to be monitored among the one or more objects appearing in those images, and can generate an event when a monitored object that satisfies a preset event condition is identified.
  • A moving person or vehicle, for example, may be set as the object to be monitored in the monitoring server 100.
  • In other words, the monitoring server 100 can easily and accurately identify the object to be monitored from batches of 10 frames or fewer, rather than from a real-time video of several tens of frames per second, and provide an event for it; through this, even if the camera unit 10 located at the remote site is a low-cost, low-performance camera, the monitoring server 100 can still monitor the target object easily, which reduces system configuration cost and increases system reliability.
  • FIG. 2 is a configuration diagram of the monitoring server 100; as shown, it may include an image collection unit 110, an object extraction unit 120, an image synthesis unit 130, a deep learning unit 140, an event determination unit 150, and an event notification unit 160.
  • At this time, at least one of the image collection unit 110, the object extraction unit 120, the image synthesis unit 130, the deep learning unit 140, the event determination unit 150, and the event notification unit 160 may be configured as a control unit that controls the monitoring server 100, and the other components of the monitoring server 100 may be controlled by that control unit.
  • The control unit executes the overall control function of the monitoring server 100 using the programs and data stored in the monitoring server 100.
  • The control unit may include RAM, ROM, a CPU, a GPU, and a bus, and the RAM, ROM, CPU, and GPU may be connected to one another through the bus.
  • FIGS. 3 to 5 are diagrams illustrating the process by which the monitoring server 100 extracts object areas and generates a composite image according to an embodiment of the present invention.
  • The image collection unit 110 may receive a plurality of images from the camera unit 10 located at a remote site through a communication network.
  • The image collection unit 110 may store the plurality of images in the DB 101 included in the monitoring server 100.
  • The object extraction unit 120 may receive the plurality of images from the image collection unit 110, detect an object in which movement has occurred in the plurality of images, and extract the object region of the detected object from the plurality of images.
  • To this end, the object extraction unit 120 may analyze each of the plurality of images received from the image collection unit 110 through a preset image analysis algorithm to detect an object in which movement has occurred.
  • As the image analysis algorithm, the object extraction unit 120 may apply a difference image method, a Mixture of Gaussians (MOG) algorithm based on a Gaussian Mixture Model (GMM), a codebook algorithm, or the like.
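  As an illustration, here is a minimal sketch of such motion-based extraction using OpenCV's MOG2 background subtractor; the library choice, the thresholds, and the bounding-rectangle crops are assumptions (the patent itself extracts along the object's outline rather than a bounding box, as described further below):

```python
import cv2

# MOG2 is an OpenCV implementation of GMM-based background subtraction,
# one of the algorithm families the patent names (MOG/GMM).
subtractor = cv2.createBackgroundSubtractorMOG2(history=10, detectShadows=True)

def extract_object_regions(snapshots):
    """Return (frame_index, crop, (x, y)) tuples for moving objects in a snapshot batch."""
    regions = []
    for idx, frame in enumerate(snapshots):
        mask = subtractor.apply(frame)
        # Keep only definite foreground pixels (255); MOG2 marks shadows as 127.
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            if cv2.contourArea(contour) > 100:  # skip specks of noise (assumed threshold)
                x, y, w, h = cv2.boundingRect(contour)
                regions.append((idx, frame[y:y + h, x:x + w], (x, y)))
    return regions
```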
  • For example, the object extraction unit 120 may extract an object region corresponding to a specific object from each of the images, among the plurality of images, in which that specific object is detected.
  • However, because the plurality of images have a very low frame rate, errors may occur when the object extraction unit 120 applies the image analysis algorithm to them to detect moving objects.
  • To address this, the object extraction unit 120 may generate a median image by synthesizing the plurality of images, detect moving objects through the difference image between each of the plurality of images and the median image, and extract the object area of the object from each image, among the plurality of images, in which a moving object is detected.
  • In this way, the object extraction unit 120 can prevent errors when analyzing a plurality of images with a very small number of frames per second.
  • Alternatively, the object extraction unit 120 may prevent errors by generating, for each of the plurality of images, an analysis target image consisting of its horizontal and vertical edges, and detecting the object from the differences between these analysis target images; a sketch of the median-image approach follows.
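  A minimal numpy sketch of the median-image differencing described above; the grayscale input, the array shapes, and the threshold value are illustrative assumptions:

```python
import numpy as np

def median_difference_masks(snapshots, threshold=25):
    """Detect motion in a low-frame-rate batch by differencing against a median image.

    snapshots: list of grayscale frames (uint8 arrays of identical shape).
    Returns one boolean motion mask per frame.
    """
    stack = np.stack(snapshots).astype(np.int16)
    median_image = np.median(stack, axis=0)  # pixel-wise median over the whole batch
    diffs = np.abs(stack - median_image)     # difference image for each frame
    return [d > threshold for d in diffs]    # motion wherever the difference is large
```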
  • The object extraction unit 120 may detect one or more objects in each of the plurality of images.
  • The image synthesis unit 130 may interwork with the object extraction unit 120 to collect the object areas extracted for each object from the plurality of images and combine them into a single image.
  • In other words, the image synthesis unit 130 may combine the one or more object regions extracted by the object extraction unit 120 from the plurality of images into one composite image by solving a preset bin packing problem.
  • For example, as shown in FIG. 4, when the object extraction unit 120 detects an object related to a moving vehicle by applying the predetermined image analysis algorithm to the plurality of images collected by the image collection unit 110, it may extract an object region for each such object from the plurality of images.
  • At this time, the object extraction unit 120 may also detect meaningless objects, such as the shadow of the vehicle to be monitored or light reflected from the vehicle, as moving objects together with the object to be monitored, and may extract object areas for such meaningless objects as well.
  • In this case, the object region may be extracted from the specific image in which the moving object is detected along the outline of the area determined to be the moving object.
  • That is, if a bounding box were used when extracting an object-related object area, noise such as shadow or light changes attached to the object to be monitored would enlarge the bounding box of the monitored object more than necessary; to avoid this, the object extraction unit 120 can extract only the region related to the moving object roughly along its outline, without using a bounding box.
  • The image synthesis unit 130 may collect the one or more object regions extracted from the plurality of images by the object extraction unit 120 and combine them into one image.
  • That is, the image synthesis unit 130 may apply the one or more object regions to a preset bin-packing algorithm as shown in FIG. 5(a) and, as shown in FIG. 5(b), generate a composite image in which the one or more object regions are combined into one image.
  • In this case, the object extraction unit 120 may provide the image synthesis unit 130 with the position of the object within each image in which the object was detected, and the image synthesis unit 130 may also record the position of the object in its original image in the composite image.
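  The following is a minimal sketch of combining extracted regions into one composite image with a simple shelf-style packing heuristic, a stand-in for whatever bin-packing variant an implementation would actually use; the canvas width, the function names, and the metadata layout are assumptions:

```python
import numpy as np

def pack_regions(regions, canvas_width=640):
    """Pack object-region crops into one composite image, shelf by shelf.

    regions: (frame_index, crop, (x, y)) tuples, as produced by the extraction sketch.
    Returns the composite image plus metadata mapping every placed crop back to its
    source frame and original position, mirroring the position recording described above.
    Crops wider than the canvas are not handled in this sketch.
    """
    # Sort tallest-first so each shelf wastes less vertical space.
    regions = sorted(regions, key=lambda r: r[1].shape[0], reverse=True)
    placements, shelf_x, shelf_y, shelf_h = [], 0, 0, 0
    for src, crop, orig_xy in regions:
        h, w = crop.shape[:2]
        if shelf_x + w > canvas_width:  # row is full: start a new shelf below
            shelf_y += shelf_h
            shelf_x, shelf_h = 0, 0
        placements.append((shelf_x, shelf_y, src, crop, orig_xy))
        shelf_x += w
        shelf_h = max(shelf_h, h)
    canvas = np.zeros((shelf_y + shelf_h, canvas_width, 3), dtype=np.uint8)
    metadata = []
    for x, y, src, crop, orig_xy in placements:
        canvas[y:y + crop.shape[0], x:x + crop.shape[1]] = crop
        metadata.append({"composite_xy": (x, y), "source_frame": src, "source_xy": orig_xy})
    return canvas, metadata
```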
  • FIGS. 6 and 7 are exemplary diagrams of the operation of the monitoring server 100 for identification of the object to be monitored and the event generation process according to the embodiment of the present invention.
  • The deep learning unit 140 may receive the composite image from the image synthesis unit 130, analyze it through a deep learning algorithm in which a pattern for the object to be monitored has been learned in advance, and identify the object to be monitored in the composite image.
  • At this time, the deep learning unit 140 may continuously train the deep learning algorithm on the composite images generated from the images provided whenever the camera unit 10 detects an object, and through this training the deep learning algorithm can learn the pattern of the object to be monitored.
  • In addition, the deep learning unit 140 may output the object information for each object identified in the composite image through the deep learning algorithm via an output unit connected to the monitoring server 100 or configured separately.
  • The deep learning unit 140 may also receive, through the user interface unit 170, feedback information in which a user selects the object to be monitored from among the output object information, and may correct the deep learning algorithm based on that feedback so that the pattern of the object to be monitored is learned while reducing identification errors.
  • Through this, the deep learning unit 140 may train the deep learning algorithm on the pattern of an object to be monitored, such as a person or a vehicle.
  • The deep learning algorithm is preferably a Regions with Convolutional Neural Networks (R-CNN) model, but is not limited thereto, and various neural network models may be applied.
  • The user interface unit 170 may be included in the monitoring server 100.
  • In other words, the deep learning unit 140 analyzes the one or more object regions included in the composite image through the deep learning algorithm, identifies the object to be monitored among the objects corresponding to those object regions, generates, for each object area identified as the object to be monitored, object information including the object type of the monitored object and its degree of similarity to that type, and may then provide the object information to the event determination unit 150.
  • For example, when a specific object identified in correspondence with a specific object area among the one or more object areas included in the composite image is a person-related monitoring object, the deep learning unit 140 may set the object type to person and generate object information including the degree of similarity to a person for that specific object identified as a monitoring target object.
  • Likewise, when the object identified in correspondence with a specific object area is a vehicle-related monitoring object, the deep learning unit 140 may set the object type to vehicle and generate object information including the degree of similarity to a vehicle.
  • In addition, the composite image may include, for each object area, location information on the position of that object area (or of the object) in its source image, and the deep learning unit 140 may include, in the object information for a specific object identified as the object to be monitored, the location information of the object area corresponding to that object.
  • The deep learning unit 140 may match the object information generated for each object area identified as the object to be monitored with the corresponding object area in the composite image, add it to the composite image, and provide the composite image including the object information to the event determination unit 150.
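  To make the step from detections to object information concrete, here is a minimal sketch using torchvision's pretrained Faster R-CNN (torchvision 0.13 or later assumed) as a stand-in for the R-CNN-family detector the patent prefers; the helper name identify_objects, the COCO label mapping, and the use of the detection score as the degree of similarity are all assumptions:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
COCO_PERSON, COCO_CAR = 1, 3  # label ids in the pretrained model's COCO scheme

def identify_objects(composite_image, metadata):
    """Return object info dicts (type, similarity, location) for a composite image."""
    with torch.no_grad():
        detections = model([to_tensor(composite_image)])[0]
    infos = []
    for label, score, box in zip(detections["labels"],
                                 detections["scores"], detections["boxes"]):
        if label == COCO_PERSON:
            obj_type = "person"
        elif label == COCO_CAR:
            obj_type = "vehicle"
        else:
            continue  # not a monitored type
        infos.append({"type": obj_type,
                      "similarity": float(score),  # detection score as similarity
                      "box": box.tolist(),         # position within the composite
                      "metadata": metadata})       # mapping back to source frames
    return infos
```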
  • The event determination unit 150 receives the object information on the identified object from the deep learning unit 140 and, when the object information satisfies a preset event occurrence condition, may determine that an event has occurred in correspondence with the plurality of images transmitted from the camera unit 10.
  • For example, when, among the one or more pieces of object information provided from the deep learning unit 140, there is object information whose object type matches a type preset for the object to be monitored, such as a person or vehicle, and whose degree of similarity to that person or vehicle is equal to or greater than a preset reference value, the event determination unit 150 may determine that the preset event condition is satisfied and that an event has occurred in correspondence with the camera unit 10 that transmitted the plurality of images.
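  A minimal sketch of this type-and-similarity check; the field names and the 0.8 reference value are illustrative assumptions:

```python
MONITORED_TYPES = {"person", "vehicle"}  # object types preset as monitoring targets
SIMILARITY_THRESHOLD = 0.8               # preset reference value (assumed)

def event_occurred(object_infos):
    """Return the object infos that satisfy the preset event occurrence condition."""
    return [info for info in object_infos
            if info["type"] in MONITORED_TYPES
            and info["similarity"] >= SIMILARITY_THRESHOLD]

# Example: two detections from the deep learning unit, one below the reference value.
detections = [
    {"type": "vehicle", "similarity": 0.93},
    {"type": "person", "similarity": 0.42},
]
if event_occurred(detections):
    print("event: monitored object detected")  # only the vehicle qualifies
```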
  • The event notification unit 160 may interwork with the event determination unit 150 to generate event information when the event determination unit 150 determines that an event has occurred, and may output it through the output unit.
  • At this time, the event notification unit 160 may include the plurality of images corresponding to the event in the event information and output them through the output unit.
  • The event notification unit 160 may also transmit the event information to a preset external device through a communication network.
  • In addition, the event notification unit 160 may, in conjunction with the event determination unit 150, identify the object information that satisfies the event occurrence condition and, based on the location information included in that object information, generate, for each image among the plurality of images in which the monitored object appears, a version in which the location of the object to be monitored is marked with a preset mark, and then include those marked images in the event information before transmitting it.
  • The monitoring server 100 may communicate with a plurality of different camera units 10 through a communication network.
  • The image collection unit 110 configured in the monitoring server 100 may allocate a different channel to each of the plurality of camera units 10 and receive a plurality of images for each channel.
  • Through this, the monitoring server 100 may distinguish the plurality of camera units 10 by channel and individually determine, for each of the plurality of camera units 10, whether an event has occurred as described above; a minimal sketch of this per-channel flow follows.
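  The following sketch ties the earlier sketches together per channel; the channel scheme, the notify helper, and the idea of running the whole pipeline once per received batch are assumptions rather than the patent's prescribed design:

```python
from collections import defaultdict

# One snapshot queue per channel, i.e. per camera unit (assumed allocation scheme).
channel_queues = defaultdict(list)

def on_images_received(channel_id, images):
    """Collect a camera unit's snapshot batch, then run the pipeline for that channel only."""
    channel_queues[channel_id].extend(images)
    batch = channel_queues.pop(channel_id)
    regions = extract_object_regions(batch)        # object extraction unit sketch
    composite, metadata = pack_regions(regions)    # image synthesis unit sketch
    infos = identify_objects(composite, metadata)  # deep learning unit sketch
    hits = event_occurred(infos)                   # event determination unit sketch
    if hits:
        notify(channel_id, hits, batch)            # event notification unit (assumed helper)
```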
  • As described above, according to the present invention, when a camera unit located at a remote site detects an object in an image, the monitoring server determines whether that object is the object to be monitored and provides an event accordingly. Even when the long data transmission distance and the low performance of the camera unit mean that the images arrive as snapshots at a frame rate too low for a real-time image analysis algorithm to identify the object to be monitored, the monitoring server can easily determine through a deep learning algorithm whether the object detected by the camera unit is the object to be monitored, by separating and extracting only the object areas from the plurality of transmitted images and combining them into one image.
  • Furthermore, rather than analyzing the entire area of each of the plurality of images received from the camera unit through a deep learning algorithm, the monitoring server separates only the object areas in which movement has occurred and analyzes a single composite image synthesized from them, so the analysis time required for object identification by the deep learning algorithm can be greatly shortened. Through this, even when many camera units communicate with the monitoring server, rapid event determination based on object identification is supported, and as the number of camera units increases, the number of monitoring servers and the hardware performance needed to accommodate them can be reduced, lowering system configuration cost.
  • FIG. 8 is a flowchart illustrating an image analysis method for remote monitoring of a monitoring server communicating with a camera unit through a communication network according to an embodiment of the present invention.
  • The monitoring server 100 may receive a plurality of images from the camera unit 10 (S1).
  • The monitoring server 100 analyzes each of the plurality of images according to a preset image analysis algorithm (S2) and, when an object in which movement has occurred is detected (S3), may extract an object region for that object from the plurality of images (S4).
  • The monitoring server 100 may generate a composite image by combining the extracted one or more object regions into one image (S5).
  • The monitoring server 100 may identify the object to be monitored by analyzing the composite image through a deep learning algorithm in which a pattern for the object to be monitored has been learned in advance (S6).
  • The monitoring server 100 may receive object information on the identified object, determine that an event has occurred when the object information satisfies a preset event occurrence condition, and output event information according to the event occurrence or transmit the event information to a preset external device (S7).
  • The components described above may be implemented as CMOS-based logic circuitry, as firmware, as software, or as a combination thereof, for example using transistors, logic gates, and electronic circuits in the form of various electrical structures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image analysis system and method for remote monitoring, whereby objects identified in images received periodically, rather than in real time, from a camera unit positioned at a remote location can be analyzed on the basis of deep learning so as to provide an event when an object to be monitored is detected. The time required to analyze the object to be monitored can thus be shortened and costs can be reduced.
PCT/KR2020/009954 2019-07-31 2020-07-28 Image analysis system and method for remote monitoring WO2021020866A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190093150A KR102247359B1 (ko) 2019-07-31 2019-07-31 Image analysis system and method for remote monitoring
KR10-2019-0093150 2019-07-31

Publications (2)

Publication Number Publication Date
WO2021020866A1 true WO2021020866A1 (fr) 2021-02-04
WO2021020866A9 WO2021020866A9 (fr) 2021-04-01

Family

ID=74230394

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/009954 WO2021020866A1 (fr) 2019-07-31 2020-07-28 Image analysis system and method for remote monitoring

Country Status (2)

Country Link
KR (1) KR102247359B1 (fr)
WO (1) WO2021020866A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537009A (zh) * 2021-06-30 2021-10-22 上海晶赞融宣科技有限公司 Home isolation supervision system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102305467B1 (ko) * 2021-05-12 2021-09-30 씨티씨 주식회사 Deep-learning-based distributed landslide detection method
KR102305468B1 (ko) * 2021-05-12 2021-09-30 씨티씨 주식회사 Deep-learning-based distributed landslide detection system
KR102586144B1 (ko) * 2021-09-23 2023-10-10 주식회사 딥비전 Method and apparatus for tracking hand movement using deep learning
KR102348233B1 (ko) * 2021-12-03 2022-01-07 새빛이앤엘 주식회사 Video surveillance apparatus using contrast optimization of CCTV video

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150117772A1 (en) * 2013-10-24 2015-04-30 TCL Research America Inc. Video object retrieval system and method
KR101789690B1 (ko) * 2017-07-11 2017-10-25 (주)블루비스 System and method for providing deep-learning-based security services
KR101932009B1 (ko) * 2017-12-29 2018-12-24 (주)제이엘케이인스펙션 Image processing apparatus and method for multiple object detection
KR101954717B1 (ko) * 2018-10-22 2019-03-06 주식회사 인텔리빅스 High-speed-analysis image processing apparatus and driving method thereof
KR101937272B1 (ko) * 2012-09-25 2019-04-09 에스케이 텔레콤주식회사 Apparatus and method for detecting an event from a plurality of captured images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101930049B1 (ko) 2014-11-28 2018-12-17 한국전자통신연구원 Apparatus and method for parallel image analysis based on objects of interest
JP2019003565A (ja) * 2017-06-19 2019-01-10 コニカミノルタ株式会社 Image processing apparatus, image processing method, and image processing program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101937272B1 (ko) * 2012-09-25 2019-04-09 에스케이 텔레콤주식회사 Apparatus and method for detecting an event from a plurality of captured images
US20150117772A1 (en) * 2013-10-24 2015-04-30 TCL Research America Inc. Video object retrieval system and method
KR101789690B1 (ko) * 2017-07-11 2017-10-25 (주)블루비스 System and method for providing deep-learning-based security services
KR101932009B1 (ko) * 2017-12-29 2018-12-24 (주)제이엘케이인스펙션 Image processing apparatus and method for multiple object detection
KR101954717B1 (ko) * 2018-10-22 2019-03-06 주식회사 인텔리빅스 High-speed-analysis image processing apparatus and driving method thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537009A (zh) * 2021-06-30 2021-10-22 上海晶赞融宣科技有限公司 Home isolation supervision system
CN113537009B (zh) * 2021-06-30 2024-02-13 上海晶赞融宣科技有限公司 Home isolation supervision system

Also Published As

Publication number Publication date
WO2021020866A9 (fr) 2021-04-01
KR102247359B1 (ko) 2021-05-04
KR20210014988A (ko) 2021-02-10

Similar Documents

Publication Publication Date Title
WO2021020866A1 (fr) Système et procédé d'analyse d'images pour surveillance à distance
WO2014051337A1 (fr) Appareil et procédé pour détecter un événement à partir d'une pluralité d'images photographiées
WO2013115470A1 (fr) Système et procédé de contrôle intégré utilisant une caméra de surveillance destinée à un véhicule
WO2016171341A1 (fr) Système et procédé d'analyse de pathologies en nuage
WO2017074005A1 (fr) Système de surveillance à sélection automatique de cctv, et serveur de gestion de surveillance à sélection automatique de cctv et procédé de gestion
WO2017115905A1 (fr) Système et procédé de reconnaissance de pose de corps humain
WO2017090892A1 (fr) Caméra de génération d'informations d'affichage à l'écran, terminal de synthèse d'informations d'affichage à l'écran (20) et système de partage d'informations d'affichage à l'écran le comprenant
WO2012124852A1 (fr) Dispositif de caméra stéréo capable de suivre le trajet d'un objet dans une zone surveillée, et système de surveillance et procédé l'utilisant
WO2019124635A1 (fr) Procédé orienté syntaxe de détection d'une intrusion d'objet dans une vidéo comprimée
WO2016099084A1 (fr) Système de fourniture de service de sécurité et procédé utilisant un signal de balise
WO2018151503A2 (fr) Procédé et appareil destinés à la reconnaissance de gestes
WO2021100919A1 (fr) Procédé, programme et système pour déterminer si un comportement anormal se produit, sur la base d'une séquence de comportement
KR20190038137A (ko) 채널별 객체 검출 최적화를 위한 영상분석 방법 및 서버장치
WO2014061922A1 (fr) Appareil et procédé permettant de détecter le sabotage d'une caméra au moyen d'une image de contour
CN112487891B (zh) 一种应用于电力作业现场的视觉智能动态识别模型构建方法
WO2012137994A1 (fr) Dispositif de reconnaissance d'image et son procédé de surveillance d'image
WO2021167374A1 (fr) Dispositif de recherche vidéo et système de caméra de surveillance de réseau le comprenant
WO2016064107A1 (fr) Procédé et appareil de lecture vidéo sur la base d'une caméra à fonctions de panoramique/d'inclinaison/de zoom
WO2011043498A1 (fr) Appareil intelligent de surveillance d'images
WO2022186426A1 (fr) Dispositif de traitement d'image pour classification automatique de segments, et son procédé de commande
WO2023158068A1 (fr) Système et procédé d'apprentissage pour améliorer le taux de détection d'objets
WO2023128186A1 (fr) Système et procédé de sécurité d'image basés sur un sous-titrage vidéo multimodal
WO2022019601A1 (fr) Extraction d'un point caractéristique d'un objet à partir d'une image ainsi que système et procédé de recherche d'image l'utilisant
WO2013022170A1 (fr) Procédé de détermination de problème de connexion et appareil de détermination de problème de connexion pour dispositif d'entrée d'image
WO2019124634A1 (fr) Procédé orienté syntaxe de suivi d'objet dans une vidéo comprimée

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20847546

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30.06.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20847546

Country of ref document: EP

Kind code of ref document: A1