WO2022145712A1 - Method, device, and non-transitory computer-readable recording medium for analyzing a visitor based on an image in an edge computing environment - Google Patents
Method, device, and non-transitory computer-readable recording medium for analyzing a visitor based on an image in an edge computing environment
- Publication number
- WO2022145712A1 (PCT/KR2021/016654)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- detection data
- appearance
- detection
- present
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 19
- 238000001514 detection method Methods 0.000 claims abstract description 93
- 238000013528 artificial neural network Methods 0.000 claims abstract description 25
- 230000010354 integration Effects 0.000 claims description 6
- 239000000284 extract Substances 0.000 claims description 5
- 238000004590 computer program Methods 0.000 claims description 2
- 238000004891 communication Methods 0.000 description 35
- 230000006870 function Effects 0.000 description 16
- 230000000875 corresponding effect Effects 0.000 description 8
- 238000005516 engineering process Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 238000000605 extraction Methods 0.000 description 5
- 230000008569 process Effects 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 2
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000013140 knowledge distillation Methods 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000013138 pruning Methods 0.000 description 1
- 238000013139 quantization Methods 0.000 description 1
- 239000013585 weight reducing agent Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/771—Feature selection, e.g. selecting representative features from a multi-dimensional feature space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- the present invention relates to a method, a device, and a non-transitory computer-readable recording medium for analyzing a visitor based on an image in an edge computing environment.
- the present inventor(s) have devised a technique in which a device present at the client end in an edge computing environment integratedly generates various data regarding the location and appearance of a visitor included in an image captured in an offline space, so that information about visitors to the offline space can be obtained on site.
- An object of the present invention is to solve all of the problems of the prior art described above.
- Another object of the present invention is to extract feature data from a captured image of an offline space, generate detection data on the location and appearance of an object included in the captured image from the feature data using an artificial neural network-based detection model, and integrate the detection data on the location and appearance of a target object, thereby generating various data on the entry and exit and the demographic information of visitors included in the image captured in the offline space.
- Another object of the present invention is to generate integrated detection data on the location and appearance of a visitor using a lightweight detection model on an edge computing device rather than on a server, thereby saving the time required for communication between the device and the server or for advanced analysis on the server, and making it possible to immediately obtain information on visitors' entry and exit, as well as their demographic information, at the site where the edge computing device is installed (i.e., the offline space).
- Another object of the present invention is to generate detection data about a visitor using only the resources of an edge computing device, without transmitting an image of the visitor to an external server, thereby reducing the risk of issues arising from such transmission.
- a representative configuration of the present invention for achieving the above object is as follows.
- According to one aspect of the present invention, there is provided a method for analyzing a visitor based on an image in an edge computing environment, the method comprising the steps of: extracting feature data from a captured image of an offline space; generating, using an artificial neural network-based detection model, detection data on the position and appearance of an object included in the captured image from the feature data; and integrating detection data on the position and appearance of a target object.
- According to another aspect of the present invention, there is provided a device for analyzing a visitor based on an image in an edge computing environment, the device comprising: a feature extractor that extracts feature data from a captured image of an offline space; an information detector that generates, using an artificial neural network-based detection model, detection data on the location and appearance of an object included in the captured image from the feature data; and a data integrator that integrates detection data on the location and appearance of a target object.
- According to the present invention, the time required for communication between the device and the server or for advanced analysis on the server is saved, and information on visitors' entry and exit, as well as visitors' demographic information, is immediately available at the site where the edge computing device is installed (that is, the offline space).
- FIG. 1 is a diagram illustrating a schematic configuration of an entire system for analyzing a visitor based on an image in an edge computing environment according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating in detail an internal configuration of a device according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating in detail an internal configuration of an object recognition management unit according to an embodiment of the present invention.
- FIG. 1 is a diagram illustrating a schematic configuration of an entire system for analyzing a visitor based on an image in an edge computing environment according to an embodiment of the present invention.
- the entire system may include a communication network 100 , a server 200 , and a device 300 .
- the communication network 100 may be configured regardless of communication mode, such as wired or wireless communication, and may be configured as any of various communication networks, including a local area network (LAN), a metropolitan area network (MAN), and a wide area network (WAN).
- the communication network 100 as used herein may be the well-known Internet or World Wide Web (WWW).
- the communication network 100 is not necessarily limited thereto, and may include a known wired/wireless data communication network, a known telephone network, or a known wired/wireless television communication network in at least a part thereof.
- the communication network 100 may implement, in at least a part thereof, a conventional wireless data communication scheme such as Wi-Fi communication, Wi-Fi Direct communication, Long Term Evolution (LTE) communication, 5G communication, Bluetooth communication (including Bluetooth Low Energy (BLE) communication), infrared communication, or ultrasonic communication.
- the communication network 100 may be an optical communication network that implements at least a part of a conventional communication method such as LiFi (Light Fidelity).
- the server 200 is a device capable of communicating with the device 300, to be described later, through the communication network 100, and may perform a function of obtaining various data transmitted from the device 300 and transmitting various data necessary for the operation of the device 300 to the device 300.
- the device 300 is a digital device capable of communicating with the server 200 or another system (not shown) through the communication network 100, and may perform functions of extracting feature data from a captured image of an offline space, generating detection data on the location and appearance of an object included in the captured image from the feature data using an artificial neural network-based detection model, and integrating the detection data on the location and appearance of a target object.
- any digital device equipped with a memory means and a microprocessor having arithmetic capability can be adopted as the device 300 according to an embodiment of the present invention.
- the device 300 according to an embodiment of the present invention may refer to a device itself capable of capturing an image (e.g., a commercial security camera, an IP camera, etc.), but may also refer to a device (e.g., a smartphone, a tablet, a PC, etc.) that can be connected (or coupled) to such a capturing device by wire and/or wirelessly.
- the device 300 according to the present invention may include an application (not shown) supporting the function according to the present invention.
- Such an application may be downloaded from an external application distribution server (not shown).
- at least a part of the application may be replaced with a hardware device or a firmware device capable of performing substantially the same or equivalent function as the application, if necessary.
- FIG. 2 is a diagram illustrating in detail an internal configuration of a device 300 according to an embodiment of the present invention.
- the device 300 includes an object recognition management unit 310, an object tracking management unit 320, an access determination management unit 330, a communication unit 340, and a control unit 350, where the object recognition management unit 310 may include a feature extraction unit 311, an information detection unit 312, and a data integration unit 313.
- at least some of the object recognition management unit 310, the object tracking management unit 320, the access determination management unit 330, the communication unit 340, and the control unit 350 may be program modules that communicate with an external system (not shown).
- Such a program module may be included in the device 300 in the form of an operating system, an application program module, or other program modules, and may be physically stored in various known storage devices. Also, such a program module may be stored in a remote storage device capable of communicating with the device 300 . Meanwhile, such a program module includes, but is not limited to, routines, subroutines, programs, objects, components, data structures, etc. that perform specific tasks or execute specific abstract data types according to the present invention.
- the object recognition management unit 310 may perform a function of generating integrated detection data on the location and appearance of an object (mainly a visitor) included in an image captured in an offline space (e.g., a store, an office, a school, a performance hall, a stadium, etc.).
- the object recognition management unit 310 may count the number of visitors entering and exiting the offline space by analyzing the captured image, and may estimate the visitors' demographic information (i.e., information that can be estimated from a visitor's appearance).
- in order to perform analysis using an artificial neural network-based model requiring a large amount of computation, the object recognition management unit 310 may utilize the computing resources of an auxiliary computing device provided separately from the device 300 according to the present invention.
- the captured image to be analyzed may be collected from a separate image capturing device (e.g., a commercial security camera, an IP camera, etc.) installed in the offline space, or from an image capturing module provided in the device 300 according to the present invention.
- the captured image as described above may be sampled at a predetermined rate (e.g., 10 fps) or sampled when the motion found in the captured image (i.e., the difference between adjacent frames) is greater than or equal to a predetermined level, and the sampled captured image may be transmitted to the object recognition management unit 310.
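- The following is a minimal sketch, in Python with OpenCV, of the sampling step described above. The motion threshold, the fixed sampling interval, and the function name are illustrative assumptions, not values taken from the present disclosure.

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 12.0   # assumed mean absolute pixel difference that counts as "motion"
SAMPLE_EVERY_N = 3        # assumed fixed-rate fallback (every 3rd frame of a 30 fps stream, about 10 fps)

def sample_frames(video_path):
    """Yield frames at a fixed rate or whenever inter-frame motion is large."""
    cap = cv2.VideoCapture(video_path)
    prev_gray, index = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        motion = 0.0 if prev_gray is None else float(np.mean(cv2.absdiff(gray, prev_gray)))
        if index % SAMPLE_EVERY_N == 0 or motion >= MOTION_THRESHOLD:
            yield index, frame          # frame forwarded to the object recognition stage
        prev_gray, index = gray, index + 1
    cap.release()
```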
- the object recognition management unit 310 may include a feature extraction unit 311 , an information detection unit 312 , and a data integration unit 313 .
- the feature extraction unit 311 may perform a function of extracting feature data from a captured image for an offline space.
- the feature extractor 311 may receive a frame of any size constituting a captured image and output feature data in the form of a tensor.
- the feature extraction unit 311 may use an artificial neural network (mainly a deep neural network)-based model as a means for extracting feature data from a captured image.
- such an artificial neural network may be implemented based on a well-known structure such as Deep Layer Aggregation (DLA) or a residual neural network (ResNet).
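- As a hedged illustration of such a feature extractor, the sketch below truncates a torchvision ResNet-18 (one of the well-known structures mentioned above) before its classification head, so that an input frame yields a feature-map tensor. The choice of ResNet-18 and the input size are assumptions made only for this example.

```python
import torch
import torchvision

backbone = torchvision.models.resnet18(weights=None)
# Keep everything up to the last convolutional stage; drop average pooling and the fc layer.
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

frame = torch.randn(1, 3, 480, 640)           # one sampled frame as a (batch, channel, H, W) tensor
with torch.no_grad():
    feature_map = feature_extractor(frame)     # tensor of shape (1, 512, 15, 20) for this input size
print(feature_map.shape)
```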
- the information detection unit 312 performs a function of generating detection data on the location and appearance of an object included in the captured image from the feature data using an artificial neural network-based detection model.
- the detection data regarding the position of the object may include detection data on the objectness score of a bounding box corresponding to the object (that is, the likelihood that the bounding box corresponds to an actual object), the width, the height, and the center offset of the bounding box, as well as detection data on the position of the object's foot.
- the detection data regarding the appearance of the object may include detection data on demographic information, such as the age and gender of the object (i.e., the visitor), that can be detected from the visitor's appearance and usefully used for marketing.
- detection data related to an object's age, gender, etc. may be anonymized.
- an artificial neural network-based detection model may be trained to detect certain attributes about a visitor from feature data, for example, based on an artificial neural network such as a Fully Convolutional Network (FCN).
- detection data generated as a result of analyzing the feature data with the artificial neural network-based detection model may be generated based on a feature map, and accordingly a plurality of such detection data may be correlated with one another via the feature map (or coordinates on the feature map).
- the information detection unit 312 may generate detection data on the location and appearance of an object by using two or more artificial neural network-based detection models.
- the artificial neural network-based detection model may include a first detection model that generates some of the detection data on the position and appearance of the object and a second detection model that generates the remaining part of that detection data.
- the artificial neural network-based detection model used in the information detection unit 312 may be separated from or integrated with each other as needed or according to the attribute to be detected.
- for example, the artificial neural network-based detection model used in the information detection unit 312 may include a detection model that generates, based on one feature map, detection data for one of the various attributes of an object (that is, the objectness score, width, height, and center offset of a bounding box corresponding to the object, the position of the object's foot, the object's gender, and the object's age).
- as another example, the artificial neural network-based detection model used in the information detection unit 312 may include a detection model that generates together, based on one feature map, detection data for two or more of the various attributes of an object (i.e., the objectness score, width, height, and center offset of a bounding box corresponding to the object, the position of the object's foot, the object's gender, and the object's age).
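- As an informal illustration of the "two or more attributes from one feature map" variant described above, the sketch below (in PyTorch) attaches several small convolutional heads to a shared feature map, one per attribute. The channel counts, head names, and number of age classes are assumptions made for this example and are not taken from the present disclosure.

```python
import torch
import torch.nn as nn

class MultiHeadDetector(nn.Module):
    """Predicts several per-location attributes from one shared feature map."""
    def __init__(self, in_channels: int = 512, num_age_classes: int = 8):
        super().__init__()
        def head(out_channels: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(128, out_channels, kernel_size=1),
            )
        self.objectness = head(1)        # objectness score of the bounding box
        self.box_size = head(2)          # width and height of the bounding box
        self.center_offset = head(2)     # center offset of the bounding box
        self.foot_position = head(2)     # position (offset) of the object's foot
        self.gender = head(1)            # gender as a value between 0 and 1
        self.age = head(num_age_classes) # score vector for each age class

    def forward(self, feature_map: torch.Tensor) -> dict:
        return {
            "objectness": torch.sigmoid(self.objectness(feature_map)),
            "box_size": self.box_size(feature_map),
            "center_offset": self.center_offset(feature_map),
            "foot_position": self.foot_position(feature_map),
            "gender": torch.sigmoid(self.gender(feature_map)),
            "age": self.age(feature_map),
        }
```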
- the data integrator 313 may perform a function of integrating detection data on the location and appearance of a target object.
- the data integration unit 313 may integrate the detection data on the location and appearance of the target object by assigning at least a portion of that detection data to the target object, using at least one coordinate on the feature map underlying the detection data as a medium.
- for example, when the objectness score of a bounding box corresponding to a target object is greater than or equal to a predetermined level and the corresponding bounding box is located at first coordinates on the feature map, the data integration unit 313 according to an embodiment of the present invention may determine that the target object is located at the first coordinates on the feature map. Accordingly, the data integrator 313 according to an embodiment of the present invention may assign pixel values corresponding to the first coordinates to the target object, using the first coordinates on the feature map as a medium.
- the pixel values that can be assigned to the target object may include the width of the bounding box, the height of the bounding box, the center offset of the bounding box, the position of the target object's foot, the gender of the target object (a value between 0 and 1), the age of the target object (a score vector for each class), and the like.
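- Continuing the hypothetical MultiHeadDetector sketch above, the snippet below illustrates the integration idea: wherever the objectness score exceeds a threshold, the per-attribute pixel values at the same feature-map coordinate are gathered into one record for that target object. The threshold value and the output format are assumptions for illustration only.

```python
import torch

OBJECTNESS_THRESHOLD = 0.5  # assumed value; the disclosure only says "a predetermined level"

def integrate_detections(outputs: dict, threshold: float = OBJECTNESS_THRESHOLD) -> list:
    objectness = outputs["objectness"][0, 0]               # (H, W) map for a single frame
    ys, xs = torch.nonzero(objectness >= threshold, as_tuple=True)
    objects = []
    for y, x in zip(ys.tolist(), xs.tolist()):
        objects.append({
            "coordinate": (y, x),                           # the mediating feature-map coordinate
            "box_size": outputs["box_size"][0, :, y, x].tolist(),        # width, height
            "center_offset": outputs["center_offset"][0, :, y, x].tolist(),
            "foot_position": outputs["foot_position"][0, :, y, x].tolist(),
            "gender": float(outputs["gender"][0, 0, y, x]),              # value between 0 and 1
            "age_scores": outputs["age"][0, :, y, x].softmax(dim=0).tolist(),
        })
    return objects
```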
- note that the artificial neural network technology that can be used in the present invention is not necessarily limited to that described above, and may be changed or expanded within the scope capable of achieving the object of the present invention. For example, feature data may be extracted or detection data may be generated using artificial neural network technologies such as Region-based Convolutional Neural Networks (R-CNN), You Only Look Once (YOLO), and Single Shot multibox Detector (SSD).
- in order to operate smoothly on the device 300, which has relatively limited computational resources in an edge computing environment, the artificial neural network-based extraction model or detection model that can be used in the present invention may be a model made lightweight by a lightweight algorithm such as pruning, quantization, or knowledge distillation, and the lightweight model as above may be generated in the server 200 or an external system (not shown) and distributed to the device 300.
- the lightweight algorithm according to an embodiment of the present invention is not limited to those listed above, and can be variously changed within the scope that can achieve the object of the present invention.
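- As one hedged example of such a lightweight algorithm, the snippet below applies magnitude-based pruning with PyTorch's built-in utilities. This is a generic illustration of pruning, not the compression pipeline of the present disclosure; the 30% sparsity figure is an arbitrary illustrative choice.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_conv_layers(model: nn.Module, amount: float = 0.3) -> nn.Module:
    """Zero out the smallest-magnitude weights of every Conv2d layer."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")   # make the pruned weights permanent
    return model

# Hypothetical server-side step before distributing the model to the edge device:
# lightweight_detector = prune_conv_layers(MultiHeadDetector())
```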
- the object tracking management unit 320 may perform a function of tracking the target object with reference to the detection data integratedly generated by the object recognition management unit 310 above.
- the object tracking management unit 320 may connect an existing tracklet to a target object detected in a new frame, or may create a new tracklet, while managing tracklets for each frame of the captured image. For example, the object tracking management unit 320 according to an embodiment of the present invention may decide whether to link an existing tracklet to the target object or to create a new tracklet for it, based on the degree of overlap (for example, Intersection over Union (IoU)) between the predicted bounding box for the target object and the actually input bounding box for each frame.
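- A minimal sketch of the IoU-based association decision described above is shown below; the IoU threshold and the greedy matching strategy are illustrative assumptions rather than the specific tracking algorithm of the present disclosure.

```python
IOU_THRESHOLD = 0.3  # assumed value for illustration

def iou(box_a, box_b) -> float:
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(predicted_tracklet_boxes, detected_boxes, threshold=IOU_THRESHOLD):
    """Link each detection to the best-overlapping predicted tracklet box, or mark it as new."""
    assignments = []  # list of (detection_index, tracklet_index or None)
    for d_idx, det in enumerate(detected_boxes):
        overlaps = [iou(det, trk) for trk in predicted_tracklet_boxes]
        best = max(range(len(overlaps)), key=overlaps.__getitem__, default=None)
        if best is not None and overlaps[best] >= threshold:
            assignments.append((d_idx, best))   # extend the existing tracklet
        else:
            assignments.append((d_idx, None))   # create a new tracklet
    return assignments
```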
- the detection data of the target object generated by the object recognition management unit 310 (i.e., detection data regarding the bounding box corresponding to the target object, the position of the target object's foot, the gender and age of the target object, etc.) may be provided to the tracklet corresponding to the target object.
- the object tracking algorithm according to an embodiment of the present invention is not limited to those listed above, and may be variously changed within the scope that can achieve the object of the present invention.
- the access determination management unit 330 may perform a function of determining whether the target object enters or exits the offline space by determining whether the target object passes through a predetermined detection line, with reference to the tracking information on the target object (i.e., tracklet information) generated by the object tracking management unit 320 above.
- the access determination management unit 330 may set a vector whose starting point is the foot position of the target object specified by the tracklet in the previous frame and whose end point is the foot position of the target object specified by the tracklet in the current frame, and if there is an intersection between the vector set in this way and a predetermined detection line set near the door, it can be determined that the target object has passed the detection line. Furthermore, the access determination management unit 330 according to an embodiment of the present invention may refer to information about the direction of the above vector and information about the entrance direction based on the above detection line to determine whether the target object has entered the offline space (that is, the store) or exited the offline space.
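- The snippet below is a rough geometric sketch of the detection-line test described above: the movement vector of the foot position between two frames is checked for an intersection with the detection line, and the crossing side decides entry versus exit. The sign convention for "entry" is an assumption chosen only for this example.

```python
def _cross(o, a, b) -> float:
    """Z-component of the cross product of vectors (o->a) and (o->b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2) -> bool:
    """True if segment p1->p2 (foot movement) strictly crosses segment q1->q2 (detection line)."""
    d1, d2 = _cross(q1, q2, p1), _cross(q1, q2, p2)
    d3, d4 = _cross(p1, p2, q1), _cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def entry_or_exit(prev_foot, curr_foot, line_start, line_end):
    """Return 'entry', 'exit', or None, assuming the inside of the space lies on the
    left side of the directed detection line (an illustrative convention)."""
    if not segments_intersect(prev_foot, curr_foot, line_start, line_end):
        return None
    return "entry" if _cross(line_start, line_end, curr_foot) > 0 else "exit"
```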
- the access determination algorithm according to an embodiment of the present invention is not limited to those listed above, and can be variously changed within the scope that can achieve the object of the present invention.
- the device 300 may integrate all of the detection data integratedly generated in the process of recognizing the target object, the data related to the tracklet generated in the process of tracking the target object, and the data on entry or exit generated in the process of determining the target object's entry and exit, and may transmit the integrated data to the server 200 or an external system.
- the communication unit 340 may perform a function of enabling data transmission/reception to and from the object recognition management unit 310, the object tracking management unit 320, and the access determination management unit 330.
- the control unit 350 may perform a function of controlling the flow of data among the object recognition management unit 310, the object tracking management unit 320, the access determination management unit 330, and the communication unit 340.
- the control unit 350 may control the data flow to/from the outside of the device 300 or the data flow between the components of the device 300, so that the object recognition management unit 310, the object tracking management unit 320, the access determination management unit 330, and the communication unit 340 each perform their own functions.
- the embodiments according to the present invention described above may be implemented in the form of program instructions that can be executed through various computer components and recorded in a computer-readable recording medium.
- the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
- the program instructions recorded on the computer-readable recording medium may be specially designed and configured for the present invention or may be known and used by those skilled in the computer software field.
- Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- Examples of program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
- a hardware device may be converted into one or more software modules to perform processing in accordance with the present invention, and vice versa.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Claims (11)
- A method for analyzing a visitor based on an image in an edge computing environment, the method comprising the steps of: extracting feature data from a captured image of an offline space; generating, using an artificial neural network-based detection model, detection data on the position and appearance of an object included in the captured image from the feature data; and integrating detection data on the position and appearance of a target object.
- The method of claim 1, wherein the detection data is generated based on a feature map.
- The method of claim 1, wherein the detection data on the position of the object includes detection data on at least one of an objectness score, a width, a height, and a center offset of a bounding box corresponding to the object, and detection data on the position of the object's foot, and the detection data on the appearance of the object includes detection data on at least one of the age and gender of the object.
- The method of claim 1, wherein the detection model includes a first detection model that generates some of the detection data on the position and appearance of the object and a second detection model that generates the remaining part of the detection data on the position and appearance of the object.
- The method of claim 1, wherein the detection model includes a detection model that generates detection data for one of a plurality of attributes regarding the position and appearance of the object based on one feature map.
- The method of claim 1, wherein the detection model includes a detection model that generates together detection data for two or more of a plurality of attributes regarding the position and appearance of the object based on one feature map.
- The method of claim 1, wherein, in the integrating step, the detection data on the position and appearance of the target object is integrated by assigning at least a part of the detection data on the position and appearance of the target object to the target object via at least one coordinate on the feature map on which the generated detection data is based.
- The method of claim 1, further comprising the step of tracking the target object in the captured image with reference to the detection data on the position of the target object.
- The method of claim 1, further comprising the step of determining whether the target object enters or exits by determining whether the target object passes through a predetermined detection line with reference to information on the tracking.
- A non-transitory computer-readable recording medium having recorded thereon a computer program for executing the method of claim 1.
- A device for analyzing a visitor based on an image in an edge computing environment, the device comprising: a feature extractor that extracts feature data from a captured image of an offline space; an information detector that generates, using an artificial neural network-based detection model, detection data on the position and appearance of an object included in the captured image from the feature data; and a data integrator that integrates detection data on the position and appearance of a target object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/270,408 US20240062408A1 (en) | 2020-12-31 | 2021-11-15 | Method, device, and non-transitory computer-readable recording medium for analyzing visitor on basis of image in edge computing environment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2020-0188854 | 2020-12-31 | ||
KR1020200188854A KR102610494B1 (ko) | 2020-12-31 | 2020-12-31 | Method, device, and non-transitory computer-readable recording medium for analyzing a visitor based on an image in an edge computing environment
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022145712A1 true WO2022145712A1 (ko) | 2022-07-07 |
Family
ID=82260482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2021/016654 WO2022145712A1 (ko) | 2021-11-15 | Method, device, and non-transitory computer-readable recording medium for analyzing a visitor based on an image in an edge computing environment |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240062408A1 (ko) |
KR (1) | KR102610494B1 (ko) |
WO (1) | WO2022145712A1 (ko) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101448392B1 (ko) * | 2013-06-21 | 2014-10-13 | 호서대학교 산학협력단 | 피플 카운팅 방법 |
JP2016053869A (ja) * | 2014-09-04 | 2016-04-14 | 富士ゼロックス株式会社 | 情報処理装置及び情報処理プログラム |
JP2016177755A (ja) * | 2015-03-23 | 2016-10-06 | 日本電気株式会社 | 注文端末装置、注文システム、客情報生成方法、及びプログラム |
KR20170006356A (ko) * | 2015-07-08 | 2017-01-18 | 주식회사 케이티 | 이차원 영상 기반 고객 분석 방법 및 장치 |
KR102138301B1 (ko) * | 2020-05-06 | 2020-07-27 | 유정환 | Pos 기반 고객 마케팅 시스템 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102340134B1 (ko) | 2015-02-10 | 2021-12-15 | 한화테크윈 주식회사 | 매장 방문 정보 제공 시스템 및 방법 |
KR101779096B1 (ko) * | 2016-01-06 | 2017-09-18 | (주)지와이네트웍스 | 지능형 영상분석 기술 기반 통합 매장관리시스템에서의 객체 추적방법 |
KR101839827B1 (ko) * | 2017-09-06 | 2018-03-19 | 한국비전기술주식회사 | 원거리 동적 객체에 대한 얼굴 특징정보(연령, 성별, 착용된 도구, 얼굴안면식별)의 인식 기법이 적용된 지능형 감시시스템 |
US11250243B2 (en) * | 2019-03-26 | 2022-02-15 | Nec Corporation | Person search system based on multiple deep learning models |
- 2020-12-31 KR KR1020200188854A patent/KR102610494B1/ko active IP Right Grant
- 2021-11-15 US US18/270,408 patent/US20240062408A1/en active Pending
- 2021-11-15 WO PCT/KR2021/016654 patent/WO2022145712A1/ko active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20240062408A1 (en) | 2024-02-22 |
KR20220096436A (ko) | 2022-07-07 |
KR102610494B1 (ko) | 2023-12-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21915510; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 18270408; Country of ref document: US |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.11.2023) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21915510; Country of ref document: EP; Kind code of ref document: A1 |