WO2021261656A1 - Apparatus and system for providing a security monitoring service based on edge computing, and operating method thereof - Google Patents

Apparatus and system for providing a security monitoring service based on edge computing, and operating method thereof

Info

Publication number
WO2021261656A1
Authority
WO
WIPO (PCT)
Prior art keywords
deep learning
information
security monitoring
image
detection
Prior art date
Application number
PCT/KR2020/011457
Other languages
English (en)
Korean (ko)
Inventor
박주영
이동식
이원경
Original Assignee
주식회사 자비스넷
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 자비스넷
Publication of WO2021261656A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608 Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • the present invention relates to a system for providing a security service and a method for operating the same, and more specifically, to a security monitoring service providing apparatus, system, and operating method thereof that improve monitoring performance and system efficiency by providing a security service based on edge computing.
  • various security systems are provided that provide security for the area to be monitored through image analysis and object identification of images received from remote cameras for security in the area to be monitored.
  • various services are provided, such as notifying the manager whether there is a danger and recording an image according to the detection of the object, thereby increasing the convenience of managing the area to be monitored.
  • a security monitoring system detects a monitoring target located in the monitoring target area through a sensor located in that area and, when a monitoring target is detected, captures the monitoring target with a camera in conjunction with the detection, so that the monitoring target detected through the sensor can be confirmed through the captured image.
  • a recent security monitoring system is set to operate so that an alarm is provided to the administrator only when a monitoring target object is identified in the image captured by the camera after an intrusion signal is generated through the sensor; this is intended to minimize false alarms in the security system.
  • the area to be monitored generally has a low-light environment. For this reason, even in recent security systems, when the sensor detects the object to be monitored, the object often does not appear clearly in the image generated by the camera operating in low light. As a result, alarms are raised for objects that are not monitoring targets, and reports are frequently omitted even when a monitoring target object appears, because the object cannot be identified through the video; the false alarm rate therefore cannot be significantly improved.
  • a deep learning analysis device or server is often located at a separate remote site connected through an external network. Accordingly, object classification and accurate event detection are possible only after the image information has been analyzed by the deep learning analysis server, so not only computing cost but also time, the most critical resource in a security monitoring system, is consumed.
  • the present invention was devised to solve the above problems. An object of the present invention is to provide an edge computing-based security monitoring service providing apparatus, system, and operating method thereof, in which the security monitoring device installed in the field is configured as an intelligent security monitoring device to build an artificial intelligence-based local video monitoring system, while only the tracking data requiring deep learning analysis is distributed to edge computing devices for processing.
  • a method for operating an intelligent security monitoring apparatus comprising: collecting image information for security monitoring; acquiring tracking data for tracking an object of the image information according to an initial analysis process for the image information; generating deep learning distributed processing request data based on the tracking data and the image information; and transmitting the deep learning distributed processing request data to a deep learning distributed processing device.
  • an apparatus for solving the above problems, in an intelligent security monitoring apparatus, includes: an image information collecting unit for collecting image information for security monitoring; an object tracking unit configured to acquire tracking data for tracking an object of the image information according to an initial analysis process for the image information; and a distributed data processing unit for generating deep learning distributed processing request data based on the tracking data and the image information, and transmitting the deep learning distributed processing request data to a deep learning distributed processing apparatus.
  • an intelligent camera device comprising: a camera unit for capturing a real-time image to obtain; an image information stabilization unit for stabilizing the real-time image received from the camera unit into image information for security monitoring; an object tracking unit configured to acquire tracking data for tracking an object of the image information according to an initial analysis process for the image information; and a distributed data processing unit for generating deep learning distributed processing request data based on the tracking data and the image information, and transmitting the deep learning distributed processing request data to one or more deep learning distributed processing devices.
  • the method according to an embodiment of the present invention for solving the above problems may be implemented as a program for executing the method in a computer and a recording medium in which the program is recorded.
  • a security monitoring device installed in the field is configured as an intelligent security monitoring device to construct a primary AI-based video monitoring system, while only the monitoring data requiring deep learning analysis is processed as distributed data by separate deep learning distributed processing devices, improving both monitoring performance and system efficiency.
  • FIG. 1 is a conceptual diagram schematically illustrating an entire system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating in more detail the configuration and connection relationship of an intelligent monitoring device and a deep learning distributed server according to an embodiment of the present invention.
  • FIG. 3 is a block diagram illustrating a detection module of an event detection unit according to an embodiment of the present invention.
  • FIG. 4 is a ladder diagram for explaining the overall system operation according to an embodiment of the present invention.
  • FIG. 5 is a conceptual diagram illustrating an intelligent camera device-based system according to another embodiment of the present invention.
  • FIG. 6 is a block diagram illustrating in more detail the configuration and connection relationship of a deep learning analysis system implemented with an intelligent camera device according to an embodiment of the present invention.
  • references to processors, control units, or similar concepts should not be construed as referring exclusively to hardware capable of executing software, and should be understood to implicitly include, without limitation, digital signal processor (DSP) hardware, read-only memory (ROM) for storing software, random-access memory (RAM), and non-volatile memory. Other well-known hardware may also be included.
  • FIG. 1 is a conceptual diagram schematically illustrating an entire system according to an embodiment of the present invention.
  • the entire system includes an intelligent security monitoring device 100, a user terminal 200, a deep learning distributed processing device 300, a relay server 400, a control service server ( 500 ), one or more camera devices 600 , and one or more sensor devices 700 .
  • the camera device 600 and the sensor device 700 may be disposed in a security monitoring area, and may be connected to the intelligent security monitoring device 100 through a local monitoring network.
  • the camera device 600 and the sensor device 700 each photograph the security monitoring area or sense a detection signal generated in or around the security monitoring area to obtain image information and sensor information, and transmit the obtained image information and sensor information to the intelligent security monitoring device 100.
  • the intelligent security monitoring device 100 may collect, store, and manage the image information and sensor information received from the camera device 600 and the sensor device 700, may detect one or more security monitoring events generated in the security monitoring area based on the image information and sensor information, and may perform monitoring service processing corresponding to a detected event.
  • the monitoring service processing may be exemplified by an alarm message transmission service to the user terminal 200 corresponding to the generated event, an alarm notification function operation or cancellation service, a status information transmission service to the control service server 500 , and the like.
  • the intelligent security monitoring device 100 may be connected to the relay server 400, the control service server 500, or the user terminal 200 through a wired/wireless network, and may be provided with one or more wired/wireless communication modules for performing communication through the wired/wireless network. Various well-known communication methods may be applied to the wired/wireless communication network.
  • the intelligent security monitoring apparatus 100 may include one or more information processing modules and information analysis modules for detecting an event by analyzing image information and sensor information.
  • the intelligent security monitoring apparatus 100 may detect a predetermined first monitoring event based on the initial analysis information produced by its information processing and analysis, whereas for a second monitoring event, which requires more accurate object detection and classification, separate deep learning analysis may be processed in conjunction with one or more deep learning distributed processing devices 300.
  • the first monitoring event and the second monitoring event may be grouped according to whether deep learning analysis is applied. The first monitoring event may be exemplified by events quickly detected from the initial analysis of sensor information and image information, such as an intrusion event, an unauthorized camera change event, or a fire occurrence event. The second monitoring event may be exemplified by events accompanied by accurate object detection and classification information according to deep learning analysis, such as a dynamic object tracking event, a trip wire event, a wandering object detection event, a traffic direction violation detection event, an unauthorized object detection event, an unauthorized moving object detection event, an access counting and statistics event, a crowd density detection event, or an unusual behavior detection event.
  • the intelligent security monitoring apparatus 100 first transmits the initial analysis information according to the occurrence of the first monitoring event to the control service server 500 or the user terminal 200 through the relay server 400. Then, according to the result of deep learning analysis of the video information within a certain time range related to the first monitoring event, a second monitoring event may be detected as detailed analysis information for the first monitoring event, and the second monitoring event information may be provided to the control service server 500 or the user terminal 200.
  • the intelligent security monitoring device 100 can not only quickly perform the basic monitoring service processing according to the first monitoring event, but also, because only the deep learning analysis of distributed data, for data already initially analyzed and processed in the intelligent security monitoring device 100, is requested from and processed by a separate specialized deep learning distributed processing device 300, the occurrence of the second monitoring event, which is provided only when more accurate object classification is possible, can also be quickly detected through the distributed processing of the data.
  • the intelligent security monitoring device 100 configures distributed data corresponding to image information requiring deep learning analysis for the second event detection processing, and transmits a deep learning analysis processing request to the deep learning distributed processing device 300 .
  • the intelligent security monitoring device 100 may perform initial analysis processing, capable of detecting a first monitoring event, on the collected image information, and may obtain from the initial analysis information tracking data for tracking an object corresponding to the image information.
  • the initial analysis processing may include at least one of image stabilization processing, background modeling processing, shape calculation processing, element connection processing, and object tracking processing.
  • the tracking data may, for example, be obtained from the foreground pixel information extracted from the image information.
  • the intelligent security monitoring device 100 may generate deep learning distributed processing request data based on the tracking data and the image information, and transmit the deep learning distributed processing request data to the deep learning distributed processing device.
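As a concrete illustration of what such deep learning distributed processing request data might carry, the sketch below bundles tracking data with a reference to the associated image information and serializes it for transmission. The field names (`camera_id`, `bbox`, `image_ref`, and so on) are illustrative assumptions; the patent does not fix a wire format.

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class TrackedObject:
    # Illustrative fields only; the patent does not specify a schema.
    label: int             # unique label from connected-component labeling
    bbox: List[int]        # x, y, width, height in pixels
    velocity: List[float]  # per-frame displacement of the object

@dataclass
class DeepLearningRequest:
    camera_id: str
    frame_timestamp: float
    tracking_data: List[TrackedObject] = field(default_factory=list)
    image_ref: str = ""    # reference to the stored image segment

    def to_json(self) -> str:
        # asdict() recursively converts the nested dataclasses.
        return json.dumps(asdict(self))
```

A request built this way can be posted to whichever deep learning distributed processing device is selected, and the same structure can carry the analysis result back.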
  • the deep learning distributed processing device 300 may be composed of one or more devices connected through a network to perform edge computing; according to the deep learning distributed processing request data, it can process deep learning analysis of the tracking data and the image information, and deliver deep learning analysis result information to the intelligent security monitoring device 100. More specifically, the deep learning analysis result information may include object detection information and object classification information corresponding to the tracked object.
  • such object detection information and object classification information may be, for example, dynamic object information identified according to a pattern, such as moving or operating in a specific direction, from pre-accumulated image information, and the intelligent security monitoring device 100 may detect a second monitoring event corresponding to the image information based on the object detection information, the object classification information, and the tracking data.
  • the deep learning distributed processing device 300 processes the deep learning image analysis according to the deep learning distributed processing request data received from the intelligent security monitoring device 100, and delivers the result information to the intelligent security monitoring device 100; furthermore, the deep learning image analysis information can also be transmitted to the relay server 400.
  • based on the deep learning distributed processing request data, the deep learning distributed processing device 300 can process deep learning analysis to improve problems that increase the false alarm rate: in a generally low-illumination environment, when a monitoring target is detected through a sensing signal of a sensor, an object detected during analysis of the generated video is difficult to identify, so an event may be generated and reported for an object other than the monitoring target, or a report may be omitted even though a monitoring target has been detected.
  • the deep learning distributed processing device 300 continuously learns, through deep learning, the characteristics of the monitoring target object as it appears under the environmental conditions of the monitoring target area, increasing identification accuracy. At the same time, it continuously learns the image characteristics of images in which the monitoring target object was identified, determines the optimal images for easy object identification, and selects and provides images in which the monitoring target object is accurately identified. This supports accurately distinguishing the monitoring target object from other objects and obtaining deep learning analysis result information that lowers the false alarm rate and prevents missed reports when a monitoring target is detected, and the analysis result information can be provided to the intelligent security monitoring device 100 or the relay server 400.
  • the relay server 400 may be a server that collects, stores, and manages video surveillance analysis data from the intelligent security monitoring devices 100 installed in various places, together with the deep learning image analysis results received from the deep learning distributed processing device 300, and relays the video monitoring analysis data to the user terminal 200 and the control service server 500.
  • the relay server 400 may be a server that indexes the deep learning image analysis information corresponding to the event detection data received from the intelligent security monitoring device 100, and transmits the deep learning image analysis information corresponding to the event detection data to the pre-registered user terminal 200.
  • the event detection data and deep learning image analysis information may be transmitted to the control service server 500, and the control service server 500 may perform various control service processing based on the event detection data and deep learning image analysis information, such as accident occurrence reporting, fire occurrence reporting, user notification, monitoring device control, and real-time monitoring, and may transmit service performance information to the user terminal 200.
  • FIG. 2 is a block diagram illustrating in more detail the configuration and connection relationship of an intelligent monitoring device and a deep learning distributed server according to an embodiment of the present invention.
  • the intelligent security monitoring apparatus 100 may include an image information collection unit 110, a background modeling unit 120, a shape calculation processing unit 130, an element connection processing unit 140, an object tracking unit 150, a viewpoint converting unit 160, an object classification processing unit 170, and a distributed data processing unit 190.
  • the distributed data processing unit 190 may be connected to the deep learning distributed processing apparatus 300.
  • the deep learning distributed processing apparatus 300 may include a deep learning-based image information analysis unit 310 , an object detection unit 320 , and an object classification processing unit 330 .
  • the image information collecting unit 110 collects the security monitoring area sensing information received from the sensor device 700 and the captured image data of the security monitoring area received from the camera device 600, and may stabilize the captured image data into image information through shake correction.
  • the stabilization process may be exemplified by dividing an image captured by the camera into blocks, extracting feature points, calculating motion vectors of the feature points, and correcting image shake to obtain a stable image.
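A greatly simplified sketch of such shake correction: instead of per-feature-point motion vectors, this illustrative version estimates a single global translation between consecutive frames by exhaustive block search and shifts the current frame to cancel it. The search window size and the use of one global vector are assumptions for brevity.

```python
import numpy as np

def estimate_global_motion(prev, curr, search=4):
    """Estimate a global (dy, dx) motion vector: the shift minimising the
    sum of squared differences between the central region of the previous
    frame and the correspondingly shifted region of the current frame."""
    h, w = prev.shape
    # Central region, so every candidate shift stays inside the frame.
    core = prev[search:h - search, search:w - search].astype(float)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[search + dy:h - search + dy,
                        search + dx:w - search + dx].astype(float)
            err = np.sum((core - cand) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def stabilize(prev, curr):
    """Shift the current frame to cancel the estimated camera shake."""
    dy, dx = estimate_global_motion(prev, curr)
    return np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
```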
  • the background modeling unit 120 may model the background region corresponding to the image information that has been stabilized.
  • a plurality of camera devices 600 may be provided, and the background modeling unit 120 may model a background of an image for each image area captured by each camera device 600 .
  • the background modeling unit 120 may identify an object corresponding to the foreground according to the background modeling and calculate background area data to be tracked.
  • the background modeling unit 120 may generate a background probabilistic model from the image data based on the well-known Gaussian Mixture Model so as to accurately detect an object in a complex environment. This makes it possible to create background area data that reflects many variables, such as changes in lighting, objects added to or removed from the background, backgrounds with motion such as swaying branches or fountains, and areas with high traffic.
  • the background modeling unit 120 may construct a background model per pixel according to a statistical probability for time t through background subtraction processing.
  • the background modeling unit 120 may perform background subtraction processing by subtracting the background information constructed according to the Gaussian mixture model from the image information of the current frame, and by accumulating and updating the background information into the per-pixel background model according to the MOG (Mixture of Gaussians) method.
  • foreground pixel information remains in the image information from which the background has been subtracted. This may be referred to as a blob image, and the blob image can also be described as binary map data corresponding to the foreground pixels.
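The background subtraction step above can be sketched with a single running Gaussian per pixel. This is a deliberate simplification of the Mixture-of-Gaussians model described in the text, which keeps several weighted Gaussians per pixel; the learning rate and threshold values are illustrative assumptions.

```python
import numpy as np

class SingleGaussianBackground:
    """Per-pixel running Gaussian background model: a simplified stand-in
    for the MOG (Mixture of Gaussians) model, kept to one Gaussian per
    pixel for clarity."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)               # per-pixel mean
        self.var = np.full(first_frame.shape, 15.0 ** 2)    # per-pixel variance
        self.alpha = alpha  # learning rate for accumulating the background
        self.k = k          # foreground threshold in standard deviations

    def apply(self, frame):
        """Subtract the background model; return a binary foreground map
        (the 'blob image') and update the model at background pixels."""
        frame = frame.astype(float)
        diff = frame - self.mean
        # Foreground where the pixel deviates more than k sigma.
        fg = (diff ** 2) > (self.k ** 2) * self.var
        bg = ~fg
        # Accumulate and update the model only where the scene is background.
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return fg.astype(np.uint8)
```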
  • the shape calculation processing unit 130 may perform shape calculation processing to determine shape region information from binary map data of foreground pixel information that is subtracted and output according to the background modeling by the background modeling unit 120 .
  • the shape region information may be grouping information for grouping a plurality of pixels into one or more shape structure elements.
  • the shape calculation processing unit 130 may determine the shape region information by applying one or more morphological calculation filters to the binary map data of the foreground pixel information.
  • the one or more morphological operation filters may include a binary erosion operation filter and a binary dilation operation filter, each of which expands or reduces bright regions in the binary map data in proportion to the size of a predetermined morphological structuring element. When the dilation operation filter is applied, dark areas smaller than the structuring element are removed; when the erosion operation filter is applied, bright areas smaller than the structuring element are removed, while at the same time the size of the larger areas that are not removed is correspondingly increased or reduced.
  • the shape calculation processing unit 130 may also perform opening and closing operations, using a binary open filter and a binary close filter, to remove only small areas while maintaining the size of regions larger than the structuring element.
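The morphological filters described above can be sketched in a few lines of numpy. This illustrative version assumes a square structuring element of side `k` on boolean maps; the unit may of course use other structuring elements.

```python
import numpy as np

def binary_dilate(img, k=3):
    """Binary dilation with a k x k square structuring element: a pixel
    becomes True if any pixel under the element is True (fills small
    dark holes, grows bright regions)."""
    pad = k // 2
    p = np.pad(img, pad)  # pad with False
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def binary_erode(img, k=3):
    """Binary erosion: a pixel stays True only if every pixel under the
    structuring element is True (removes bright specks, shrinks regions)."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def binary_open(img, k=3):
    """Opening = erosion then dilation: removes bright regions smaller
    than the structuring element while restoring the size of larger ones."""
    return binary_dilate(binary_erode(img, k), k)
```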
  • the element connection processing unit 140 acquires the image information corresponding to the shape structure elements specified by the shape calculation processing unit 130, links and classifies the shape structure elements into independent object connection regions, and may perform connected component labeling processing that allocates a unique label value to each connected object connection region. This connected component labeling process can be used effectively, especially for binary map data.
  • the element connection processing unit 140 may classify the object connection regions by unique label value, and determine and output characteristic values of each object region, such as its size, position, direction, and circumference.
  • the element connection processing unit 140 may apply a recursive algorithm or a sequential algorithm to perform the connected component labeling processing and classify the object connection regions.
  • the sequential algorithm may be an algorithm for determining the label of the current pixel by searching the labels of the upper pixel and the left pixel of the pixel data of each foreground pixel.
  • the neighboring pixels including the upper pixel and the left pixel may be pixels that have already been processed in the labeling process.
  • if neither neighboring pixel is a labeled foreground pixel, the element connection processing unit 140 may allocate a new label value to the current pixel.
  • if only one of the neighboring pixels is a labeled foreground pixel, the element connection processing unit 140 may assign that pixel's label value to the current pixel.
  • if both neighboring pixels are foreground pixels and have the same label value, the element connection processing unit 140 may assign that same label value to the current pixel.
  • if both neighboring pixels are foreground pixels but have different label values, the element connection processing unit 140 may allocate the smaller of the two label values as the label value of the current pixel, and register the two labels as equivalent labels in the equivalence table.
  • information on labels to be merged into the same region may be stored in the equivalence table.
  • the element connection processing unit 140 may assign the same label to all pixels of each object connection area in the second labeling process using the equivalence table.
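The sequential two-pass labeling described above, including the equivalence table, can be sketched as follows. This illustrative version uses 4-connectivity (upper and left neighbors only, as in the text) and a simple union-find structure as the equivalence table.

```python
import numpy as np

def find(parent, x):
    """Follow the equivalence table to the representative (root) label."""
    while parent[x] != x:
        x = parent[x]
    return x

def two_pass_label(binary):
    """Two-pass connected-component labeling. Pass 1 assigns provisional
    labels from the upper and left neighbors and records equivalences;
    pass 2 rewrites each pixel with its representative label."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]          # parent[i] == i means label i is its own root
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                # No labeled neighbor: allocate a new label.
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            elif up and left:
                # Both neighbors labeled: take the smaller root and
                # record the equivalence of the two labels.
                a, b = find(parent, up), find(parent, left)
                labels[y, x] = min(a, b)
                parent[max(a, b)] = min(a, b)
            else:
                # Exactly one labeled neighbor: inherit its label.
                labels[y, x] = up or left
    # Second pass: replace every provisional label by its root.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(parent, labels[y, x])
    return labels
```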
  • the element connection processing unit 140 may output the object connection region information obtained by the connected component labeling process together with the blob image data including the foreground pixel data; the object connection region information may include, for each object connection region, characteristic values such as its size, position, direction, and circumference.
  • the object tracking unit 150 may acquire tracking data for tracking object information in the image information, based on the blob image data and the object connection region information.
  • the tracking data may include list information of the objects found and tracked in the image. More specifically, the object tracking unit 150 may cumulatively match the blob image at time t against the blob image at time t-1 to identify the object corresponding to each object connection region and include it in the list; tracking processing using a Kalman filter method may be exemplified.
  • the Kalman filter is a filter that estimates the state of a linear dynamic system by recursively processing measurements accumulated over time.
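A minimal constant-velocity Kalman filter of the kind that could track a blob centroid from frame to frame. The state layout `[x, y, vx, vy]`, the one-frame time step, and the noise magnitudes are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal constant-velocity Kalman filter for one tracked object.
    State is [x, y, vx, vy]; only the position is measured, as when a
    blob centroid is observed in each frame."""
    def __init__(self, x, y, q=1e-3, r=1.0):
        self.s = np.array([x, y, 0.0, 0.0])       # state estimate
        self.P = np.eye(4) * 10.0                 # estimate covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0         # x += vx, y += vy (dt = 1)
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0         # measure position only
        self.Q = np.eye(4) * q                    # process noise
        self.R = np.eye(2) * r                    # measurement noise

    def step(self, zx, zy):
        # Predict the next state from the constant-velocity model.
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured centroid.
        z = np.array([zx, zy])
        y = z - self.H @ self.s                   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]                         # filtered position
```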
  • the object tracking unit 150 may output object tracking data identified in the image information.
  • the object tracking data may include size information and movement speed information of each identified object along with list information of the object.
  • the viewpoint converting unit 160 converts the size information and movement speed information of each object in the above-described object tracking data into actual values according to the 3D viewpoint conditions of each camera device. That is, the tracking data of the object tracking unit 150 contains information on the size and speed of each object, but only as relative values between pixels; the viewpoint converting unit 160 can therefore convert the pixel coordinates of the image into coordinates in real space, performing the conversion using camera characteristic information, such as angle of view and height, specified in advance through calibration.
  • the viewpoint converting unit 160 may convert size information and speed information of objects listed in the object tracking data into size information and speed information in real space.
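The pixel-to-real-space conversion described above is commonly implemented with a ground-plane homography obtained from calibration; the matrix values and frame rate below are purely illustrative assumptions, not calibration data from the patent.

```python
import numpy as np

# Illustrative homography mapping image pixels to ground-plane metres;
# in practice H comes from calibration with the camera's angle of view and height.
H = np.array([[0.02, 0.0,   -5.0],
              [0.0,  0.05,   0.0],
              [0.0,  0.001,  1.0]])

def pixel_to_world(u, v):
    """Project a pixel coordinate onto the ground plane via homography H."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

def real_speed(p0, p1, fps=30.0):
    """Convert a per-frame pixel displacement into metres per second."""
    x0, y0 = pixel_to_world(*p0)
    x1, y1 = pixel_to_world(*p1)
    return float(np.hypot(x1 - x0, y1 - y0) * fps)

# An object moving 2 pixels per frame near the image centre.
speed = real_speed((320, 240), (322, 240))
print(speed > 0)
```

Object size can be converted the same way, by projecting two bounding-box corners and measuring their ground-plane distance.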
  • the object classification processing unit 170 may perform a process of allocating classification information to the object identified in the object tracking data based on a preset classification criterion.
  • it may be more efficient for the process of setting the classification criterion itself, or of accurately determining whether an object corresponds to the classification criterion, to be handled by a separate deep learning distributed processing device 300 rather than by the intelligent security monitoring device 100 installed in the local monitoring network.
  • the object classification processing unit 170 may, through the distributed data processing unit 190, transmit a deep learning analysis processing request including the tracking data and image information to the deep learning distributed processing apparatus 300 for object classification processing, receive analysis result information from the deep learning distributed processing apparatus 300, and perform object classification processing using the received analysis result information.
  • the distributed data processing unit 190 generates deep learning distributed processing request data based on the image information and the tracking data from the initial analysis obtained through the viewpoint converting unit 160, transmits the deep learning distributed processing request data to the deep learning distributed processing device, and may receive the deep learning-based image information analysis result from the deep learning distributed processing device 300 and transfer it to the object classification processing unit 170.
  • the deep learning distributed processing apparatus 300 may include a deep learning-based image information analysis unit 310 that builds neural network data from image information trained in advance according to the deep learning method, an object detection unit 320 that detects object information corresponding to the distributed processing request data using the neural network data, and an object classification processing unit 330 that determines classification information of the detected object information.
  • a neural network according to a deep learning algorithm based on a DNN (Deep Neural Network) may be set, and the neural network may be composed of an input layer, one or more hidden layers, and an output layer.
  • a neural network other than a DNN may be applied; for example, a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN) may be applied.
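The layer structure described above (input layer, hidden layers, output layer) can be sketched as a minimal feed-forward pass; the layer sizes, ReLU activations, and softmax output are illustrative choices, not details from the patent.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Forward pass: input layer -> hidden layers (ReLU) -> output layer."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:   # hidden layers use ReLU
            x = relu(x)
    # softmax over the output layer produces class scores
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),   # input -> hidden 1
          (rng.normal(size=(8, 8)), np.zeros(8)),   # hidden 1 -> hidden 2
          (rng.normal(size=(3, 8)), np.zeros(3))]   # hidden 2 -> output (3 classes)
probs = forward(rng.normal(size=4), layers)
print(round(float(probs.sum()), 6))   # class probabilities sum to 1
```

Training adjusts the weight matrices `W` by back-propagating the output error, as described for the analysis unit 310 below.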
  • the deep learning-based image information analysis unit 310 of the deep learning distributed processing device 300 analyzes one or more normal images obtained from the image information through a deep learning-based neural network to identify a monitoring target object, and a normal image in which the monitoring target object has been identified may be stored.
  • the deep learning-based image information analysis unit 310 may identify a monitoring target object from one or more specific images obtained from the learning image information; when a specific object other than the monitoring target object is identified, the similarity between the specific image and one or more of the normal images is calculated, and if the similarity is greater than or equal to a preset reference value, error information between the output value processed based on the neural network for the specific image and the object information may be generated, and the parameters configuring the neural network may be adjusted based on the error information through a preset back-propagation algorithm.
  • histogram matching may be exemplified as the similarity comparison method between images.
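Histogram matching as an image-similarity measure can be sketched as a normalized histogram intersection; the bin count and the intersection metric itself are illustrative assumptions.

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=32):
    """Compare two grayscale images by the intersection of their
    normalized intensity histograms (1.0 = identical distributions)."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())

a = np.full((64, 64), 128, dtype=np.uint8)   # uniform mid-gray image
b = a.copy()
print(histogram_similarity(a, b))   # identical images -> 1.0
```

The result would be compared against the preset reference value to decide whether to generate error information for back-propagation.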
  • the deep learning-based image information analysis unit 310 may perform the learning process by adjusting, based on the error information and through the back-propagation algorithm, the weights of the connection strengths between the input layer, the one or more hidden layers, and the output layer constituting the neural network, so that the identification error of the monitoring target object is minimized.
  • the deep learning-based image information analysis unit 310 can update the object information by repeatedly learning, through the neural network, the images continuously received from the intelligent security monitoring device 100 or other various imaging devices, and by adapting to changes in object characteristics appearing in the images according to environmental changes in the monitoring target area (e.g., changes in illuminance or obstacles), it can accurately identify the object corresponding to the parameters for each object characteristic included in the object information.
  • the object detection unit 320 may detect one or more objects identified from the tracking data and image information obtained from the deep learning distributed processing request data based on the learned neural network data, and the object classification processing unit 330 may generate object classification information including parameters for various object characteristics, such as size, color distribution, and outline at each location on the image for each object, and may transmit the generated object classification information to the distributed data processing unit 190 of the intelligent security monitoring device 100.
  • the deep learning distributed processing apparatus 300 may include a separate user input unit or communication unit, and based on control information related to user input through the user input unit or received from the outside through the communication unit, may set any one of the one or more objects classified by the object classification processing unit 330 as a monitoring target object and store the corresponding setting information.
  • the intelligent security monitoring device 100 can receive only the results of high-performance deep learning operations through the distributed data analysis processing provided by the deep learning distributed processing device 300, allowing the event detection unit 180 to operate quickly. By combining the rapid data processing of the intelligent security monitoring device 100 itself with the distribution of only the data that requires complex deep learning analysis, it is possible to secure both the accuracy of deep learning image analysis and the speed of data processing without mounting high-performance analysis equipment on the intelligent security monitoring device 100.
  • this distributed data processing can be handled by an edge computing method in which the intelligent security monitoring device 100 and the deep learning distributed processing device 300 are interlocked; the deep learning distributed processing device 300 can be an edge device that provides fast service processing by collecting and analyzing only the minimum data from the distributed data processing unit 190, minimizing data processing delay, and can be built with a distributed open architecture capable of providing this.
  • deep learning-based image analysis can increase the accuracy of identification of the monitored object through repeated learning of the monitored object, as well as adapt to changes in the characteristics of the monitored object appearing in the video or image that reflect the environmental characteristics of the monitored area.
  • the deep learning algorithm is optimized according to the characteristics of the monitored object appearing in the surveillance target area, so that object identification accuracy can be improved for surveillance targets photographed through cameras generally operated in low-light environments due to the operational characteristics of security systems; through this, objects other than the monitoring target object can also be accurately classified, helping to prevent false alarms caused by erroneously judging a non-monitored object as the monitoring target.
  • the event detection unit 180 detects one or more abnormal events obtained according to at least one of the collected sensor information, the tracking data of the image information, and the object classification information transmitted from the distributed data processing unit 190 through the object classification processing unit 170, and may transmit the detected event information to the monitoring service processing unit 185.
  • the monitoring service processing unit 185 may perform a predetermined monitoring service process according to each event information, which will be described in more detail with reference to FIG. 3 .
  • FIG. 3 is a block diagram illustrating a detection module of an event detection unit according to an embodiment of the present invention.
  • the event detection unit 180 can more accurately classify moving objects in an image into a person performing a specific action, a vehicle of a specific type, and the like, with the help of the deep learning distributed processing device 300.
  • the monitoring service processing unit 185 may process and control a notification service to the user terminal 200 suited to each detected event, a transmission service to the relay server 400, request service processing, and the like.
  • the event detection unit 180 may perform dynamic object tracking in the image information from the tracking data and object classification information, and may detect one or more events according to the dynamic object tracking information.
  • the event detection unit 180 maps the tracking data of the object tracking unit 150 to the deep learning-based object classification information received from the distributed data processing unit 190, and uses it to individually track all moving objects within the camera's field of view (Field of View). This makes it possible to identify the moment an object passes a trip wire or a region of interest, and further, by analyzing the movement trajectory of each object, to output detection or statistical information used for analysis of customer flow in a store.
  • the event detection unit 180 may include a trip wire detection unit.
  • the trip wire detection unit may set a virtual boundary line and process intrusion detection for perimeter protection or intrusion detection within a specific area through the dynamic object tracking.
  • the trip wire detection unit can also count objects passing (intruding) over the boundary line; it detects when an object (person, vehicle, or other object) passes over the ground across the trip wire, and can determine whether an event has occurred by checking, through tracking of the movement direction (bidirectional or unidirectional), whether the object passed (intruded) over the trip wire while moving from a first direction to a second direction.
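The first-direction-to-second-direction crossing check described above can be sketched by tracking which side of the virtual boundary line an object's centroid lies on, using the sign of a 2D cross product; the coordinates and direction convention are illustrative assumptions.

```python
def side_of_line(p, a, b):
    """Sign of the cross product: which side of trip wire a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed_tripwire(prev_pos, cur_pos, a, b, direction="both"):
    """True if an object moved from one side of the wire to the other.

    direction: "both" (bidirectional), or "forward" (unidirectional:
    only crossings from the negative side to the positive side count).
    """
    s0, s1 = side_of_line(prev_pos, a, b), side_of_line(cur_pos, a, b)
    if s0 == 0 or s1 == 0 or (s0 > 0) == (s1 > 0):
        return False          # stayed on one side (or exactly on the wire)
    if direction == "forward":
        return s0 < 0 < s1    # first-direction -> second-direction only
    return True

# Vertical wire from (5, 0) to (5, 10); object moves left-to-right across it.
wire_a, wire_b = (5, 0), (5, 10)
print(crossed_tripwire((3, 5), (7, 5), wire_a, wire_b))   # True
print(crossed_tripwire((3, 5), (4, 5), wire_a, wire_b))   # False
```

Counting is then a matter of incrementing a counter each frame the function returns True for a tracked object.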
  • the event detection unit 180 may include an intrusion detection unit.
  • the intrusion detection unit may perform region-of-interest-based intrusion detection, such as boundary-based intrusion detection using the trip wire detection unit, or detecting an intrusion when object movement is detected in a preset region of interest.
  • the trip wire method may be preferable for a perimeter, a sea or coast, the air (sky), and an entrance, while intrusion detection using a region of interest may be desirable inside a building.
  • the intrusion detection unit may simultaneously perform intrusion detection processing based on both the trip wire and the region of interest.
  • the event detection unit 180 may include a wandering object detection unit.
  • the wandering object detection unit may detect a wandering object, such as a person or a vehicle, to obtain information on the wandering object event, and transmit the wandering object event information to the monitoring service processing unit 185 .
  • the wandering object detection unit may be used to observe major security facilities, security boundary areas, high-value storage facilities, and the like.
  • the event detection unit 180 may detect, through the wandering object detection unit, whether an object such as an outsider or an external vehicle has wandered around a restricted access area for more than a certain period of time, and may detect an object wandering beyond that time as a wandering object.
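The dwell-time logic described above can be sketched as follows; the restricted area, the timestamped track format, and the 60-second threshold are illustrative assumptions, not values from the patent.

```python
def detect_loitering(track, area, min_seconds=60.0):
    """track: list of (timestamp, (x, y)); area: (xmin, ymin, xmax, ymax).

    Returns True if the object stayed continuously inside the restricted
    area for at least min_seconds (the threshold is an assumed policy value).
    """
    inside_since = None
    for ts, (x, y) in track:
        inside = area[0] <= x <= area[2] and area[1] <= y <= area[3]
        if inside:
            if inside_since is None:
                inside_since = ts        # just entered the area
            if ts - inside_since >= min_seconds:
                return True              # wandered long enough: flag it
        else:
            inside_since = None          # left the area: reset the timer
    return False

area = (0, 0, 10, 10)
track = [(t, (5, 5)) for t in range(0, 120, 5)]   # stays 115 s inside
print(detect_loitering(track, area))               # True
```

A flagged object would then become a subject of intensive monitoring, as the following bullet describes.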
  • a wandering person or vehicle has the potential to cause an accident, so it becomes a subject of intensive monitoring, and through this, an accident can be prevented in advance.
  • the event detection unit 180 may include a traffic direction violation detection unit.
  • the traffic direction violation (Wrong Direction) detection unit may detect an object violating the traffic direction in an area where a traffic direction must be observed, such as an airport.
  • the event detection unit 180 may detect a traffic direction violation object, and such detection may also operate for a theater or performance hall in which entry and exit directions are designated differently.
  • the traffic direction violation detection unit may also process wrong-way driving detection, detecting a vehicle driving in reverse as a traffic direction violation object.
  • the event detection unit 180 may include an unattended object detection unit.
  • the unattended object (Unattended Object) detection unit detects an object (package, bag, cart, etc.) left by someone in a predefined region of interest that remains stationary even after a specified time has elapsed, and the detected unattended object information may be transmitted to the monitoring service processing unit 185. This makes it possible to detect and prevent the deliberate leaving of explosives for terrorism, particularly in airports, terminals, station platforms, and major event venues.
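The stationary-object check described above can be sketched by watching whether a tracked object stays within a small movement tolerance for a set number of frames; the pixel tolerance and frame threshold are illustrative assumptions.

```python
def detect_unattended(history, max_move=2.0, dwell_frames=300):
    """history: list of (frame_idx, (x, y)) for a tracked object in the ROI.

    Flags the object if it has remained essentially stationary (moved less
    than max_move pixels from its anchor point) for dwell_frames consecutive
    frames. Both thresholds are illustrative assumptions.
    """
    if len(history) < 2:
        return False
    f0, p0 = history[0]
    for f, (x, y) in history:
        if abs(x - p0[0]) > max_move or abs(y - p0[1]) > max_move:
            f0, p0 = f, (x, y)   # it moved: restart the stationary window
    last_f = history[-1][0]
    return last_f - f0 >= dwell_frames

bag = [(i, (100.0, 200.0)) for i in range(400)]   # a bag left still for 400 frames
print(detect_unattended(bag))                      # True
```

In a full pipeline the history would come from the object tracking data, restricted to objects whose owner (the person who left them) has moved away.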
  • the event detection unit 180 may include an unauthorized moving object detection unit.
  • the unauthorized moving object detection unit may detect the disappearance of pre-designated objects from the video in places where theft frequently occurs, such as museums, exhibitions, high-end displays, airports, stores, and warehouses, and may transmit the obtained event information to the monitoring service processing unit 185.
  • the event detection unit 180 may include an access counting and statistics unit.
  • the access counting (Object Counting) and statistics unit may acquire statistical information on the number of visitors entering and leaving public facilities such as department stores, shopping centers, museums, exhibitions, and theaters, and transmit it to the monitoring service processing unit 185.
  • the monitoring service processing unit 185 may provide real-time counting and occupant statistics for each time period to the control service server 500 through the user terminal 200 or the relay server 400, and the user managing the statistical information can use it as basic information for store customer management and store layout management.
  • the user terminal 200 may output statistical information for comparing the sales compared to the number of customers obtained based on the visitor statistics information.
  • the access counting and statistics unit may perform a process of automatically calculating the number of vehicles passing per lane on a specific road, and the monitoring service processing unit 185 may provide statistical data on the number of vehicles passing per time period to the user terminal 200, the relay server 400, or the control service server 500.
  • the event detection unit 180 may include a crowd density detection unit.
  • the crowd density detection unit may detect an event that develops to an overcrowd scale exceeding a designated crowd density within a preset region of interest.
  • the event detection unit 180 may check the current density state in real time from the object tracking data and classification information, and may transmit event information corresponding to the density state to the monitoring service processing unit 185 .
  • the monitoring service processing unit 185 may transmit the density information based on the event information to the user terminal 200 or the relay server 400 or the control service server 500 .
  • the manager can check the crowding information and decide whether to open additional doors or checkout counters when stores, airports, or theaters become overcrowded, and, in places where protests and assemblies are frequent, decide whether to deploy additional security personnel.
  • Crowd density counting may use a method of counting all people classified within the region of interest at a certain time.
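Counting all classified people inside the region of interest, as described above, can be sketched as follows; the overcrowding threshold in people per square metre is an assumed policy value, not one from the patent.

```python
def crowd_density(objects, roi, area_m2, overcrowd_per_m2=4.0):
    """Count classified people inside a region of interest and compare
    the density against an assumed overcrowding threshold (people / m^2).

    objects: list of (class_label, (x, y)); roi: (xmin, ymin, xmax, ymax);
    area_m2: real-world area of the ROI (from the viewpoint conversion).
    """
    xmin, ymin, xmax, ymax = roi
    count = sum(1 for cls, (x, y) in objects
                if cls == "person" and xmin <= x <= xmax and ymin <= y <= ymax)
    density = count / area_m2
    return count, density, density >= overcrowd_per_m2

objects = [("person", (2, 2)), ("person", (3, 4)), ("car", (5, 5)),
           ("person", (50, 50))]          # last person is outside the ROI
count, density, alarm = crowd_density(objects, roi=(0, 0, 10, 10), area_m2=1.0)
print(count, alarm)   # 2 people in the ROI, density 2.0 -> no alarm at 4.0/m^2
```

When `alarm` becomes True, the event would be forwarded to the monitoring service processing unit 185 as described above.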
  • the event detection unit 180 may include a specific behavior detection unit.
  • the specific behavior (Suspicious Behavior) detection unit may detect a person performing unusual behavior, such as a person who has slipped or tripped, people fighting, or a person running.
  • the event detection unit 180 may utilize the deep learning analysis information and object classification information of the deep learning distributed processing device 300, and the deep learning distributed processing device 300 may, through neural network learning of image information, perform classification processing for specific behaviors such as slip-and-fall detection, fighting-person detection, and running-person detection (Running).
  • the event detection unit 180 can generate an event through detection of a slipping or tripping person and transmit it to the monitoring service processing unit 185, which can be effectively used for monitoring the elderly in a silver town or nursing home.
  • an elderly person who has fallen may be in danger of losing their life if someone able to help does not arrive immediately; therefore, the monitoring service processing unit 185 may transmit corresponding notification request information or control request information to the user terminal 200 or the control service server 500.
  • the event detection unit 180 may include a camera tampering detection unit.
  • the camera tampering detection unit can detect tampering with the camera, such as a third party short-circuiting the camera cable, defocusing by unauthorized lens manipulation, covering the camera lens with a hand or another object, or arbitrarily rotating the lens in a different direction; the deep learning distributed processing device 300 may pre-process image learning for each such situation and build it into neural network data.
  • FIG. 4 is a ladder diagram for explaining the overall system operation according to an embodiment of the present invention.
  • the user terminal 200 performs pre-user registration processing in the intelligent security monitoring apparatus 100 and the control service server 500 ( S101 ).
  • user registration processing may include, for example, registering the user information and mobile phone information of an administrator using the intelligent security monitoring device 100; the user terminal 200 serves as an input/output device that receives and outputs the processing information of the intelligent security monitoring device 100 and the control service processing information of the control service server 500, or transmits input information of the user terminal 200 to the intelligent security monitoring device 100 or the control service server 500, and various electronic devices such as computers, smartphones, tablet PCs, and navigation devices may be exemplified.
  • the intelligent security monitoring device 100 collects and stabilizes image information collected from one or more camera devices (S103).
  • the intelligent security monitoring device 100 performs background modeling of the stabilized image information (S105), performs shape calculation processing for detecting shape element regions in the foreground binary data from which the background image has been subtracted according to the background modeling (S107), performs element connection processing for labeling by connecting shape element regions into object connection regions (S109), and performs object tracking processing for composing an object list corresponding to the object connection regions (S111).
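The pipeline of steps S105 to S109 (background modeling, foreground extraction, and element connection labeling) can be sketched as follows; the running-average background model, the difference threshold, and 4-connectivity are illustrative choices, not the patent's specific algorithms.

```python
import numpy as np
from collections import deque

def update_background(bg, frame, alpha=0.05):
    """Running-average background model (alpha is an assumed learning rate)."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    """Binary foreground: pixels that differ from the background model (S105/S107)."""
    return (np.abs(frame - bg) > thresh).astype(np.uint8)

def label_components(mask):
    """4-connected component labeling of the binary mask via BFS (S109)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not labels[i, j]:
                count += 1                   # new object connection region
                q = deque([(i, j)])
                labels[i, j] = count
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

bg = np.zeros((8, 8))
frame = bg.copy()
frame[1:3, 1:3] = 255      # one moving object
frame[5:7, 5:7] = 255      # another moving object
mask = foreground_mask(bg, frame)
labels, n = label_components(mask)
print(n)   # two separate object connection regions
```

The labeled regions then feed the object tracking of step S111, one list entry per connection region.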
  • the intelligent security monitoring device 100 processes the object tracking information, including the object list in the image information obtained through the initial analysis of steps S103 to S111 described above, together with the image information into deep learning distributed processing request data, and transmits it to the deep learning distributed processing device 300.
  • the deep learning distributed processing device 300 applies the object tracking information and image information of the deep learning distributed processing request data to the pre-built neural network data, performs deep learning analysis-based object detection processing (S115), and performs classification processing by identifying the pattern of the detected object (S117).
  • the deep learning distributed processing apparatus 300 responds to the object detection information and classification information according to the deep learning-based analysis to the intelligent security monitoring apparatus 100 (S119).
  • the object detection information and classification information may include information used for event detection by the above-described event detection unit 180, for example, various detection information and classification pattern information such as the type and pattern of an object passing a trip wire, the type and pattern of objects underlying intrusion detection or crowd density detection, the types and behavior patterns of objects corresponding to specific behaviors, and the types and patterns of objects indicating camera tampering detection.
  • the intelligent security monitoring apparatus 100 determines event detection (S123).
  • the sensor information of the above-described sensor device 700 may also be used for event detection determination.
  • the intelligent security monitoring apparatus 100 may transmit an event notification message to the user terminal 200 (S125) or may transmit event detection data to the relay server 400 (S127).
  • the deep learning distributed processing device 300 may transfer the image analysis information to the relay server 400 in advance to be stored and managed (S121), and this may be used to provide the user terminal 200 with deep learning image analysis information corresponding to the event detection data (S129).
  • the user terminal 200 may output a notification interface including the event analysis information based on deep learning together with the event notification message (S131).
  • the user may input processing information for the intelligent security monitoring device 100 appropriate for alert, release, alarm, etc. according to the type of event (S133), and the input processing information is transmitted to the intelligent security monitoring device 100 or the control service server 500.
  • the intelligent security monitoring apparatus 100 performs processing according to the user input information, and transmits the processing result information to the user terminal 200 (S135).
  • the relay server 400 receives control service request information according to user input from the intelligent security monitoring device 100 or the user terminal 200 (S137), and may transmit a control service request based on the received request information and the event detection data to a pre-specified control service server 500 (S139).
  • the control service server 500 processes a control service according to the control service request, such as an emergency dispatch service to the location of the intelligent security monitoring device 100, a fire or intrusion report service, a real-time video monitoring service, a surrounding notification service, or a control service for the intelligent security monitoring device 100 (S141), and transmits control processing data including processing result information to the intelligent security monitoring device 100 or the user terminal 200 (S143, S145).
  • FIG. 5 is a conceptual diagram for explaining an intelligent camera device-based system according to another embodiment of the present invention, and FIG. 6 is a block diagram showing in more detail the configuration and connection relationships of a deep learning analysis system implemented with intelligent camera devices according to an embodiment of the present invention.
  • the intelligent security monitoring device 100 may be an intelligent camera device connected to a security network, with the camera unit 105 directly built in so that it functions as a security camera. Accordingly, the intelligent security monitoring device 100 according to an embodiment of the present invention may be an intelligent IP camera that performs edge computing; it may be a first intelligent camera device 100 having an analysis module that performs one or more of the above-described image information stabilization, background modeling, shape calculation processing, element connection processing, object tracking, viewpoint transformation, object classification processing, and event detection processing, while the remaining analysis modules may be distributed to another, second intelligent camera device 300.
  • the second intelligent camera device 300 may be another security camera in which a separate camera unit 305 is directly embedded, and may include at least one of the components of the deep learning distributed processing apparatus 300 described above, including the deep learning-based image information analysis unit 310, the object detection unit 320, and the object classification processing unit 330.
  • the security monitoring system can be built as an edge computing network including the first intelligent camera device 100 and the second intelligent camera device 300 as an intelligent IP camera that performs edge computing.
  • in this way, a security monitoring network can be built that outperforms existing intelligent video monitoring devices while minimizing the cost of building the computing network and equipment.
  • the method according to the present invention described above may be produced as a program to be executed by a computer and stored in a computer-readable recording medium.
  • Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices.
  • the computer-readable recording medium is distributed in a network-connected computer system, so that the computer-readable code can be stored and executed in a distributed manner.
  • functional programs, codes, and code segments for implementing the method can be easily inferred by programmers in the art to which the present invention pertains.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present invention relates to an operating method of an intelligent security monitoring apparatus, the method comprising the steps of: collecting image information for security monitoring; acquiring tracking data for tracking an object in the image information according to initial analysis processing of the image information; generating deep learning distributed processing request data based on the tracking data and the image information; and transmitting the deep learning distributed processing request data to one or more deep learning distributed processing apparatuses.
PCT/KR2020/011457 2020-06-25 2020-08-27 Appareil et système de fourniture d'un service de surveillance de sécurité sur la base de l'informatique en périphérie de réseau, et son procédé de fonctionnement WO2021261656A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0077811 2020-06-25
KR1020200077811A KR102397837B1 (ko) 2020-06-25 2020-06-25 엣지 컴퓨팅 기반 보안 감시 서비스 제공 장치, 시스템 및 그 동작 방법

Publications (1)

Publication Number Publication Date
WO2021261656A1 true WO2021261656A1 (fr) 2021-12-30

Family

ID=79281461

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/011457 WO2021261656A1 (fr) 2020-06-25 2020-08-27 Appareil et système de fourniture d'un service de surveillance de sécurité sur la base de l'informatique en périphérie de réseau, et son procédé de fonctionnement

Country Status (2)

Country Link
KR (1) KR102397837B1 (fr)
WO (1) WO2021261656A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114783047A (zh) * 2022-03-10 2022-07-22 慧之安信息技术股份有限公司 基于边缘计算的室内吸烟检测方法和装置
CN114821273A (zh) * 2022-03-10 2022-07-29 慧之安信息技术股份有限公司 基于边缘计算的天文望远镜设备智能化方法和装置
CN114821936A (zh) * 2022-03-21 2022-07-29 慧之安信息技术股份有限公司 基于边缘计算的违法犯罪行为检测方法和装置
CN115103110A (zh) * 2022-06-10 2022-09-23 慧之安信息技术股份有限公司 基于边缘计算的家庭智能监控方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008523641A (ja) * 2004-12-06 2008-07-03 三菱電機株式会社 入力画像シーケンスを安全に処理する方法及びシステム
KR20180138558A (ko) * 2018-10-10 2018-12-31 에스케이텔레콤 주식회사 객체 검출을 위한 영상분석 서버장치 및 방법
KR102099687B1 (ko) * 2020-01-29 2020-04-10 주식회사 원테크시스템 영상 촬영 장치 및 시스템
KR20200055812A (ko) * 2018-11-08 2020-05-22 전자부품연구원 딥러닝 기반 이상 행위 인지 장치 및 방법
KR102126197B1 (ko) * 2020-01-29 2020-06-24 주식회사 카카오뱅크 비식별화된 이미지를 이용한 신경망 학습 방법 및 이를 제공하는 서버

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101726315B1 (ko) * 2015-05-28 2017-04-12 (주)지인테크 이벤트 알림 기능이 있는 네트워크 기반 감시 시스템 및 네트워크 기반 감시 카메라
KR20190035186A (ko) * 2017-09-26 2019-04-03 주식회사 바이캅 딥 러닝 기법을 활용한 지능형 무인 보안 시스템
KR102021441B1 (ko) * 2019-05-17 2019-11-04 정태웅 인공지능을 이용한 영상 기반의 실시간 침입 감지 방법 및 감시카메라

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008523641A (ja) * 2004-12-06 2008-07-03 三菱電機株式会社 Method and system for securely processing an input image sequence
KR20180138558A (ko) * 2018-10-10 2018-12-31 에스케이텔레콤 주식회사 Image analysis server apparatus and method for object detection
KR20200055812A (ko) * 2018-11-08 2020-05-22 전자부품연구원 Apparatus and method for deep-learning-based abnormal behavior recognition
KR102099687B1 (ko) * 2020-01-29 2020-04-10 주식회사 원테크시스템 Image capturing apparatus and system
KR102126197B1 (ko) * 2020-01-29 2020-06-24 주식회사 카카오뱅크 Neural network training method using de-identified images and server providing the same

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114783047A (zh) * 2022-03-10 2022-07-22 慧之安信息技术股份有限公司 Indoor smoking detection method and device based on edge computing
CN114821273A (zh) * 2022-03-10 2022-07-29 慧之安信息技术股份有限公司 Intelligentization method and device for astronomical telescope equipment based on edge computing
CN114783047B (zh) * 2022-03-10 2024-04-19 慧之安信息技术股份有限公司 Indoor smoking detection method and device based on edge computing
CN114821936A (zh) * 2022-03-21 2022-07-29 慧之安信息技术股份有限公司 Method and device for detecting illegal and criminal behavior based on edge computing
CN115103110A (zh) * 2022-06-10 2022-09-23 慧之安信息技术股份有限公司 Home intelligent monitoring method based on edge computing

Also Published As

Publication number Publication date
KR102397837B1 (ko) 2022-05-16
KR20220000172A (ko) 2022-01-03

Similar Documents

Publication Publication Date Title
WO2021261656A1 (fr) Apparatus and system for providing a security monitoring service based on edge computing, and operating method thereof
KR101877294B1 (ko) Intelligent crime-prevention CCTV system enabling complex situation configuration and automatic situation recognition through the definition of multiple basic behavior patterns based on the organic relationships among objects, regions, and the events objects cause
US20070122000A1 (en) Detection of stationary objects in video
KR20220000226A (ko) System for providing an edge-computing-based intelligent security monitoring service
Davies et al. A progress review of intelligent CCTV surveillance systems
KR102107957B1 (ko) CCTV monitoring system and method for detecting intrusion on building exterior walls
KR20220000216A (ko) Apparatus for providing an intelligent security monitoring service based on distributed deep learning processing
JP3942606B2 (ja) Change detection device
KR20160093253A (ko) Image-based abnormal flow detection method and system
KR101863846B1 (ko) Method and system for event detection and provision of on-site photo information
KR20220000209A (ko) Recording medium storing an operating program for an intelligent security monitoring device based on distributed deep learning processing
KR20220000175A (ko) Operating method of an apparatus for providing an edge-computing-based intelligent security monitoring service
KR20220000424A (ko) Edge-computing-based intelligent security surveillance camera system
KR20220064213A (ko) Operating program for a security monitoring device
KR20220000204A (ko) Operating program for an intelligent security monitoring device based on distributed deep learning processing
KR20220000202A (ko) Operating method of an intelligent security monitoring device based on distributed deep learning processing
KR20220000184A (ko) Recording medium storing an operating program for an apparatus providing an edge-computing-based intelligent security monitoring service
KR20220000181A (ko) Operating program for an apparatus providing an edge-computing-based intelligent security monitoring service
KR20220000221A (ko) Edge-computing-based intelligent security surveillance camera device
KR20220000189A (ko) Apparatus for providing an edge-computing-based intelligent security monitoring service
KR101653820A (ko) Thermal-detection-based object and situation detection system
Prabhakar et al. An efficient approach for real time tracking of intruder and abandoned object in video surveillance system
KR102397839B1 (ko) Caption sensor device based on image analysis of an AI autonomous image sensor, and operating method thereof
KR20210158037A (ko) Method for capturing the position and motion of fast-moving objects in a video surveillance system
KR20220031316A (ko) Recording medium storing a program for providing an active security control service

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20942001

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 20942001

Country of ref document: EP

Kind code of ref document: A1