WO2024075950A1 - Method and system for generating an edge AI model for edge CCTV - Google Patents

Method and system for generating an edge AI model for edge CCTV

Info

Publication number
WO2024075950A1
Authority
WO
WIPO (PCT)
Prior art keywords
edge
cctv
model
analysis result
data set
Prior art date
Application number
PCT/KR2023/010760
Other languages
English (en)
Korean (ko)
Inventor
양창모
김동칠
서경은
오세호
Original Assignee
한국전자기술연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자기술연구원
Publication of WO2024075950A1

Classifications

    • G06V 10/774 — Image or video recognition or understanding using pattern recognition or machine learning; generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 20/00 — Machine learning
    • G06V 20/40 — Scenes; scene-specific elements in video content
    • G06V 20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • H04N 21/44 — Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream
    • H04N 21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 — CCTV systems for receiving images from a plurality of remote sources

Definitions

  • the present invention relates to a method and system for generating an edge AI model for edge CCTV.
  • Conventionally, CCTV without a video analysis function was mainly used. That is, in the conventional video security service, video captured by CCTV was transmitted to a VMS (Video Management System) control center, and video analysis was then performed at the VMS control center.
  • In a video security service using edge CCTV, video analysis is performed by an edge module mounted on the CCTV and the results are transmitted to the VMS control center, which reduces the video analysis burden on the VMS control center.
  • However, there is a problem that once the edge AI model for video analysis is mounted on the edge module, the mounted AI model cannot be changed or modified.
  • An embodiment of the present invention automatically generates a learning data set and an edge AI model for edge CCTV based on the video analysis results of the edge CCTV and the precise video analysis results of the VMS, and applies the generated optimal edge AI model to the edge CCTV.
  • According to an embodiment, the method for generating an edge AI model for edge CCTV includes: receiving a first CCTV video analysis result analyzed by at least one edge CCTV among a plurality of edge CCTVs; generating a second CCTV video analysis result by applying a video analysis AI model to the first CCTV video analysis result; generating a learning data set based on event occurrence time information, video clips, and the second video analysis result corresponding to the first CCTV video analysis result; and training and generating an edge AI model for the edge CCTV based on the learning data set.
  • In the step of receiving a first CCTV video analysis result analyzed by at least one edge CCTV among the plurality of edge CCTVs, a first CCTV video analysis result including at least one of an object detection result and a predetermined event detection result may be received from the edge CCTV.
  • In the step of generating a learning data set based on object and event occurrence time information, video clips, and the second video analysis result corresponding to the first CCTV video analysis result, the learning data set may be created by setting the second video analysis result as GT (Ground Truth) information.
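  • As a non-limiting illustration, the construction of such a learning data set, with the second (precise) video analysis result set as Ground Truth, might be sketched as follows. All function and field names here are hypothetical and are not taken from the disclosure:

```python
# Hypothetical sketch: pairing an edge CCTV's first analysis result (inputs)
# with the VMS's more precise second analysis result (used as Ground Truth).
def build_training_sample(first_result, second_result):
    return {
        "input": {
            "video_clip": first_result["video_clip"],
            "event_time": first_result["event_time"],
            "objects": first_result.get("objects", []),
        },
        # The second video analysis result serves as the GT label.
        "ground_truth": second_result,
    }

first = {"video_clip": "cam01_0001.mp4", "event_time": 12.5, "objects": ["person"]}
second = {"objects": ["person", "bag"], "event": "loitering"}
sample = build_training_sample(first, second)
```

A training loop would then consume such samples as (input, GT) pairs.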
  • In the step of training and generating an edge AI model for the edge CCTV based on the learning data set, the edge AI model may be trained and generated when preset time information is satisfied and the size of the learning data set satisfies a predetermined size or more.
  • Some embodiments of the present invention may further include: searching for and selecting an edge AI model to be applied to at least one edge CCTV among the plurality of edge CCTVs; and distributing the selected edge AI model to the edge CCTV.
  • In the step of searching for and selecting an edge AI model to be applied to at least one edge CCTV among the plurality of edge CCTVs, the edge AI model to apply to the edge CCTV may be searched for and selected based on the analysis function and installation size information of the edge AI model.
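  • A minimal sketch of such a search-and-select step, filtering candidate models by analysis function and installation size, could look like the following; the catalogue format and the smallest-model tie-breaking rule are assumptions, not part of the disclosure:

```python
# Hypothetical sketch: select an edge AI model whose analysis function matches
# the request and whose installation size fits the target edge CCTV.
def select_model(models, required_function, max_size_mb):
    candidates = [
        m for m in models
        if required_function in m["functions"] and m["size_mb"] <= max_size_mb
    ]
    # Assumed tie-breaking policy: prefer the smallest qualifying model.
    return min(candidates, key=lambda m: m["size_mb"]) if candidates else None

models = [
    {"name": "fire_v2", "functions": ["fire"], "size_mb": 120},
    {"name": "multi_v1", "functions": ["fire", "loitering"], "size_mb": 80},
    {"name": "multi_v3", "functions": ["fire", "loitering"], "size_mb": 200},
]
chosen = select_model(models, "loitering", max_size_mb=150)
```

Here `chosen` would be the 80 MB model, the only loitering-capable model within the size limit besides the 200 MB one.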
  • According to another embodiment, the edge AI model generation system for edge CCTV includes a communication unit for transmitting and receiving data to and from a plurality of edge CCTVs, a memory storing a program for training and generating an edge AI model for the edge CCTV, and a processor that, as the program stored in the memory is executed, upon receiving through the communication unit a first CCTV video analysis result analyzed by at least one edge CCTV among the plurality of edge CCTVs, inputs the first CCTV video analysis result into a video analysis AI model to generate a second CCTV video analysis result, constructs a learning data set, and trains and generates the edge AI model.
  • the processor may receive a first CCTV image analysis result including at least one of an object detection result and a predetermined event detection result from the edge CCTV.
  • the processor may generate the learning data set by setting the second image analysis result as GT (Ground Truth) information.
  • the processor may train and generate the edge AI model when preset time information is satisfied and when the size of the learning data set satisfies a predetermined size or more.
  • the processor may search for and select an edge AI model to be applied to at least one edge CCTV among the plurality of edge CCTVs based on the analysis function and installation size information of the edge AI model, and may deploy the selected edge AI model to the edge CCTV.
  • Figure 1 is a block diagram showing the configuration of an edge AI model generation system according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of the preparation steps of the edge AI model generation method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of detailed steps in the edge AI model generation method according to an embodiment of the present invention.
  • Hereinafter, the edge AI model generation system 100 for edge CCTV according to an embodiment of the present invention will be described with reference to FIG. 1.
  • Figure 1 is a block diagram showing the configuration of an edge AI model creation system 100 according to an embodiment of the present invention.
  • the edge AI model creation system 100 includes an input unit 110, a communication unit 120, a display unit 130, a memory 140, and a processor 150. At this time, the edge AI model creation system in the present invention may be a VMS.
  • the input unit 110 generates input data in response to user input of the edge AI model creation system 100.
  • user input may include search and selection input of the edge AI model for CCTV, and other control inputs such as termination.
  • the input unit 110 includes at least one input means.
  • For example, the input unit 110 may include a keyboard, keypad, dome switch, touch panel, touch key, mouse, menu button, etc.
  • the communication unit 120 transmits and receives data to and from a plurality of CCTVs, and also communicates with external devices such as servers and data collection devices to transmit and receive data.
  • This communication unit 120 may include both a wired communication module and a wireless communication module.
  • the wired communication module can be implemented as a power line communication device, telephone line communication device, home cable (MoCA), Ethernet, IEEE1294, integrated wired home network, and RS-485 control device.
  • The wireless communication module may be composed of modules implementing functions such as WLAN (wireless LAN), Bluetooth, HDR WPAN, UWB, ZigBee, Impulse Radio, 60 GHz WPAN, Binary-CDMA, wireless USB, wireless HDMI, 5G (5th generation communication), LTE-A (long term evolution-advanced), LTE (long term evolution), and Wi-Fi (wireless fidelity).
  • the display unit 130 displays display data according to the operation of the edge AI model creation system 100.
  • the display unit 130 may display information about a plurality of CCTVs, a list of edge AI models corresponding to each CCTV, information about each edge AI model, and configuration information of a learning data set, etc. on the screen.
  • For example, the display unit 130 may include a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a micro electro mechanical systems (MEMS) display, and an electronic paper display.
  • the display unit 130 may be combined with the input unit 110 and implemented as a touch screen.
  • the memory 140 stores programs for learning and creating edge AI models for edge CCTV.
  • the memory 140 is a general term for non-volatile storage devices and volatile storage devices that continue to retain stored information even when power is not supplied.
  • For example, the memory 140 may include NAND flash memory such as compact flash (CF) cards, secure digital (SD) cards, memory sticks, solid-state drives (SSD), and micro SD cards, magnetic computer storage devices such as hard disk drives (HDD), and optical disc drives such as CD-ROM and DVD-ROM.
  • The processor 150 may execute software such as a program to control at least one other component (e.g., a hardware or software component) of the edge AI model creation system 100, and may perform various data processing or calculations.
  • When the processor 150 receives the first CCTV video analysis result analyzed by at least one edge CCTV among the plurality of edge CCTVs through the communication unit, it inputs the first CCTV video analysis result into the video analysis AI model to generate the second CCTV video analysis result.
  • The processor 150 then constructs a learning data set based on the event occurrence time information, video clip, and second video analysis result corresponding to the first CCTV video analysis result, and trains and generates an edge AI model for the edge CCTV based on the learning data set.
  • The processor 150 may use at least one of machine learning, neural network, or deep learning algorithms as an artificial intelligence algorithm to generate the edge AI model.
  • For example, the neural network may include a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), and a Recurrent Neural Network (RNN).
  • Hereinafter, the edge AI model generation method performed by the edge AI model generation system 100 according to an embodiment of the present invention will be described.
  • FIG. 2 is a flowchart of the preparation steps of the edge AI model generation method according to an embodiment of the present invention.
  • the administrator installs edge CCTV at a specific location (S105).
  • Here, edge CCTV refers to a security terminal in which an edge AI module that performs video analysis is mounted on a CCTV that performs video capture. This edge CCTV generates the first CCTV video analysis result, which is the result of capturing video and analyzing the captured video.
  • information about the edge CCTV may include a unique ID, location, performance information, IP address, etc. for the installed edge CCTV.
  • After step S110, the system waits for the user's input (S115), and when the user's input is received, the input information is determined (S120). At this time, if the user's input information is 'Finish' (S120-Finish), the entire process is terminated (S125).
  • FIG. 3 is a flowchart of detailed steps in the edge AI model generation method according to an embodiment of the present invention.
  • If the user's input information is determined to be 'apply edge AI model' in step S120 (S120-apply edge AI model), the edge AI model to be applied to the edge CCTV is searched for and selected (S130).
  • For example, in step S130, an edge AI model can be searched for according to the analysis function of the edge AI model, and an edge AI model can be selected based on the search results.
  • As another example, in step S130, an edge AI model can be searched for according to the size of the edge AI model, and an edge AI model can be selected based on the search results.
  • As another example, in step S130, all edge AI models stored in the edge AI model creation system can be searched, and an edge AI model can be selected based on the search results.
  • the selected edge AI model is distributed to the corresponding edge CCTV (S135), and the edge CCTV that has received the distributed edge AI model applies it and operates (S140).
  • Then, the first CCTV video analysis result is generated by at least one edge CCTV among the plurality of edge CCTVs (S145), and the first CCTV video analysis result is received from the edge CCTV (S150).
  • the first CCTV image analysis result may include an object detection result.
  • As another example, the first CCTV video analysis result may include event detection results such as fire, flooding, assault, intrusion, loitering, etc.
  • As another example, the object detection results and event detection results may be combined and provided as a single piece of information.
  • a second CCTV video analysis result is generated using the video analysis AI model for the first CCTV video analysis result (S155).
  • Here, the second CCTV video analysis result refers to a result analyzed more precisely than the first CCTV video analysis result, using a computer with higher-performance specifications.
  • a learning data set is generated based on the event occurrence time information corresponding to the first CCTV video analysis result, the video clip, and the second video analysis result (S160).
  • At this time, a learning data set may be formed by defining the second video analysis result as GT (Ground Truth). Accordingly, the learning data set is configured by setting the video clips, objects, and event occurrence time information as input data and the second video analysis result as output data, and an edge AI model for the edge CCTV can be trained and generated based on the learning data set (S165).
  • Step S165 can train and generate an edge AI model when at least one of the following is satisfied: preset time information is satisfied, or the size of the learning data set satisfies a predetermined size or more.
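  • The trigger condition of step S165 above can be sketched as a simple predicate; the parameter names are hypothetical:

```python
# Hypothetical sketch of the (re)training trigger in step S165: training starts
# when at least one condition holds -- a preset time has been reached, or the
# learning data set has grown to a predetermined size or more.
def should_train(now, scheduled_time, dataset_size, min_dataset_size):
    time_ok = now >= scheduled_time
    size_ok = dataset_size >= min_dataset_size
    return time_ok or size_ok
```

A scheduler in the VMS could evaluate this predicate periodically and launch training when it first returns true.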
  • the first CCTV image analysis result and the corresponding second CCTV image analysis result may be matched and managed (first case).
  • As another example, a plurality of first CCTV video analysis results determined to correspond to the same object or event can be grouped.
  • In this case, a second CCTV video analysis result can be generated using the video analysis AI model for the grouped first CCTV video analysis results. At this time, the grouped first CCTV video analysis results and the second CCTV video analysis result can be matched and managed (second case).
  • When the learning data set consists of second CCTV video analysis results generated from first CCTV video analysis results grouped as in the second case, the quality of the learning data set is maintained while the edge AI model can be trained faster than in the first case.
  • Here, the plurality of first CCTV video analysis results determined to correspond to the same object or event are generated by edge CCTVs placed at different angles or positions in the same place. Therefore, the first CCTV video analysis results obtained by the plurality of edge CCTVs installed in the same place can be applied simultaneously when training the edge AI model.
  • That is, in the first case one second CCTV video analysis result is generated from one first CCTV video analysis result, whereas in the second case the second CCTV video analysis result is generated in a many-to-one manner, which has the advantage of faster training.
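  • The grouping of the second case can be sketched as follows, assuming (hypothetically) that results are grouped by place and detected event; the key choice and record format are illustrative only:

```python
# Hypothetical sketch of the "second case": first CCTV analysis results from
# several co-located edge CCTVs that detected the same object/event are grouped,
# so that one second (precise) analysis result is produced per group
# (many-to-one) instead of one per camera.
from collections import defaultdict

def group_first_results(first_results):
    groups = defaultdict(list)
    for r in first_results:
        # Assumed grouping key: same place and same detected event.
        groups[(r["place"], r["event"])].append(r)
    return groups

results = [
    {"camera": "cam01", "place": "lobby", "event": "intrusion"},
    {"camera": "cam02", "place": "lobby", "event": "intrusion"},
    {"camera": "cam03", "place": "gate", "event": "fire"},
]
groups = group_first_results(results)
```

Here the two lobby intrusion results would share a single precise second analysis, while the gate fire result gets its own.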
  • a weight may be set and assigned to each second CCTV image analysis result.
  • For example, the first and second CCTV video analysis results in the first case may be compared with those in the second case, and a weight may be assigned to each second CCTV video analysis result in the second case according to the degree of agreement.
  • The weight for each second CCTV video analysis result may be variably set and assigned. That is, even if the number of overlapping sections in which the same object or event is detected in the video clips of the respective edge CCTVs is the same in a first group and a second group, if the length of the overlapping sections in the first group is longer, a greater weight can be given to the second CCTV video analysis result corresponding to the first group.
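  • A minimal sketch of such variable weighting, under the (assumed) rule that the weight grows with the total length of the overlapping detection sections, could be:

```python
# Hypothetical sketch of variable weighting: with the same number of overlapping
# detection sections, the group whose sections are longer in total receives the
# greater weight for its second CCTV analysis result. The linear scaling here
# is an assumption; any monotonic function of total overlap length would do.
def overlap_weight(overlap_sections):
    # Each section is a (start, end) pair in seconds.
    return sum(end - start for start, end in overlap_sections)

g1 = [(0.0, 4.0), (10.0, 15.0)]   # two sections, total length 9.0 s
g2 = [(0.0, 1.0), (10.0, 12.0)]   # two sections, total length 3.0 s
w1, w2 = overlap_weight(g1), overlap_weight(g2)
# Same section count, but group 1 overlaps longer, so it gets the larger weight.
```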
  • Next, it is determined whether there is a user input (S170); when it is determined that there is no user input (S170-NO), the process returns to step S135. At this time, if a new edge AI model was not created in step S165, steps S135 and S140 are omitted, and the process can be performed starting from step S145.
  • When it is determined that there is a user input (S170-YES), the process returns to step S120 and performs an operation according to the corresponding input information.
  • In the above description, steps S105 to S170 may be further divided into additional steps or combined into fewer steps, depending on the implementation of the present invention. Additionally, some steps may be omitted or the order between steps may be changed as needed. Even for content omitted here, the descriptions of FIG. 1 and of FIGS. 2 and 3 are mutually applicable.
  • the embodiments of the present invention described above may be implemented as a program (or application) and stored in a medium in order to be executed in conjunction with a server, which is hardware.
  • In order for the computer to read the program and execute the methods implemented in the program, the above-mentioned program may include code written in a computer language, such as C, C++, JAVA, or machine language, that the processor (CPU) of the computer can read through the device interface of the computer. These codes may include functional codes related to functions that define the functions necessary for executing the methods, and control codes related to the execution procedures necessary for the computer's processor to execute those functions according to predetermined procedures.
  • Furthermore, these codes may include memory reference-related codes indicating at which location (address) in the computer's internal or external memory the additional information or media required for the computer's processor to execute the above functions should be referenced.
  • In addition, if the computer's processor needs to communicate with any other remote computer or server to execute the functions, the code may further include communication-related codes regarding how the computer's communication module should communicate with the remote computer or server and what information or media should be transmitted and received during communication.
  • the storage medium refers to a medium that stores data semi-permanently and can be read by a device, rather than a medium that stores data for a short period of time, such as a register, cache, or memory.
  • examples of the storage medium include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc., but are not limited thereto. That is, the program may be stored in various recording media on various servers that the computer can access or on various recording media on the user's computer. Additionally, the medium may be distributed to computer systems connected to a network, and computer-readable code may be stored in a distributed manner.
  • the steps of the method or algorithm described in connection with embodiments of the present invention may be implemented directly in hardware, implemented as a software module executed by hardware, or a combination thereof.
  • The software module may reside in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any type of computer-readable recording medium well known in the art to which the present invention pertains.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for generating an edge AI model for edge CCTV. The method comprises the steps of: receiving a first CCTV image analysis result analyzed from at least one edge CCTV among a plurality of edge CCTVs; generating a second CCTV image analysis result using an image analysis AI model on the basis of the first CCTV image analysis result; generating a learning data set on the basis of the second image analysis result, a video clip, and event occurrence time information corresponding to the first CCTV image analysis result; and training and generating an edge AI model for the edge CCTV on the basis of the learning data set.
PCT/KR2023/010760 2022-10-07 2023-07-25 Method and system for generating an edge AI model for edge CCTV WO2024075950A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220129119A KR20240049048A (ko) 2022-10-07 2022-10-07 엣지 cctv를 위한 엣지 ai 모델 생성 방법 및 시스템
KR10-2022-0129119 2022-10-07

Publications (1)

Publication Number Publication Date
WO2024075950A1 true WO2024075950A1 (fr) 2024-04-11

Family

ID=90608638

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/010760 WO2024075950A1 (fr) 2022-10-07 2023-07-25 Method and system for generating an edge AI model for edge CCTV

Country Status (2)

Country Link
KR (1) KR20240049048A (fr)
WO (1) WO2024075950A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101850286B1 (ko) * 2017-06-27 2018-04-19 한국기술교육대학교 산학협력단 딥 러닝 기반 cctv용 영상 인식 방법
KR102152237B1 (ko) * 2020-05-27 2020-09-04 주식회사 와치캠 상황 분석 기반의 cctv 관제 방법 및 시스템
KR20210050889A (ko) * 2019-10-29 2021-05-10 주식회사 디비엔텍 Cctv 관제 시스템 업데이트 방법 및 cctv 관제 시스템 업데이트 장치
KR20210133503A (ko) * 2020-04-29 2021-11-08 주식회사 디케이아이테크놀로지 인공지능형 토탈 보안관제 서비스시스템
KR20220055512A (ko) * 2020-10-26 2022-05-04 한국전자기술연구원 딥러닝 모델의 가변적 운용이 가능한 인공지능 cctv

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210096405A (ko) 2020-01-28 2021-08-05 한국전자통신연구원 사물 학습모델 생성 장치 및 방법

Also Published As

Publication number Publication date
KR20240049048A (ko) 2024-04-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23875020

Country of ref document: EP

Kind code of ref document: A1