WO2022131733A1 - Method, system and non-transitory computer-readable recording medium for estimating information about an object based on images in a low power wide area network (LPWAN) environment - Google Patents


Info

Publication number
WO2022131733A1
Authority
WO
WIPO (PCT)
Prior art keywords
category information
captured image
detection result
present
image
Prior art date
Application number
PCT/KR2021/018927
Other languages
English (en)
Korean (ko)
Inventor
정종수
박수민
송보근
Original Assignee
주식회사 콕스랩
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 콕스랩
Publication of WO2022131733A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation

Definitions

  • the present invention relates to a method, a system, and a non-transitory computer-readable recording medium for estimating information about an object based on an image in an LPWAN (Low Power Wide Area Network) environment.
  • As an example of the related art, there is a technique for detecting a specific object in a captured image by analyzing captured images obtained from a plurality of network cameras with an artificial neural network model.
  • In order to detect an object from an image using an artificial neural network model, a large amount of computation must be processed.
  • Concepts such as edge computing and on-device AI, which have recently been in the spotlight, distribute the functions of the central server to reduce its load, and reduce the amount of data exchanged with the central server over the communication network, with the goal of increasing the response speed of AI models.
  • Reducing the communication load, that is, the amount of data exchanged with the central server, is a factor that directly affects the reduction of system construction cost, in addition to increasing the response speed of AI models.
  • However, when a processor with limited computing power is used instead of a high-spec GPU in order to reduce system construction cost, and a bandwidth-limited communication network such as an LPWAN (Low Power Wide Area Network) is used instead of a broadband network such as Ethernet, Wi-Fi, or LTE (that is, when the amount of data exchanged with the server is minimized), the performance of the artificial neural network model is likely to deteriorate compared to the case of using a high-end GPU and a broadband network. This is because, as described above, object detection based on an artificial neural network model requires a large amount of computation.
  • Accordingly, the present inventor(s) propose a technique that supports building an object detection system using an artificial neural network model at low cost, by employing a network camera having only limited computing power and an LPWAN having a limited bandwidth, while also keeping the performance of the artificial neural network model (i.e., its detection speed and accuracy) at a high level.
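The division of labor proposed above (coarse detection on the camera, fine classification on the server) can be illustrated with a minimal Python sketch. Everything below is hypothetical: the function names, the dummy 8x8 frame, and the category labels are assumptions for illustration, not taken from the patent.

```python
# Hypothetical end-to-end sketch of the proposed split: the camera detects at
# the upper-category level and ships only a crop; the server refines it to a
# lower category.

def edge_detect(frame):
    """Stand-in for the lightweight on-camera detector: returns upper
    category information plus a bounding box, never the full frame."""
    return {"upper_category": "vehicle", "bbox": (0, 0, 4, 4)}

def crop(frame, bbox):
    """Cut the frame along the bounding-box boundary (row/column slicing)."""
    x1, y1, x2, y2 = bbox
    return [row[x1:x2] for row in frame[y1:y2]]

def server_classify(partial_image, upper_category):
    """Stand-in for the server-side classifier: maps the partial image to a
    lower category consistent with the given upper category."""
    return "truck" if upper_category == "vehicle" else "unknown"

frame = [[0] * 8 for _ in range(8)]       # dummy 8x8 captured image
detection = edge_detect(frame)
partial = crop(frame, detection["bbox"])  # only this crop crosses the LPWAN
lower = server_classify(partial, detection["upper_category"])

print(len(partial), len(partial[0]))  # 4 4
print(lower)                          # truck
```

Only the detection result and the small crop need to traverse the bandwidth-limited link; the full frame never leaves the camera.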
  • An object of the present invention is to solve all of the problems of the prior art described above.
  • Another object of the present invention is to obtain a result of detecting an object in a captured image of the object, using an object detection model trained to estimate upper category information of an object to be detected in a captured image, together with a partial image of the captured image generated based on the detection result, and to estimate lower category information of the object based on the obtained detection result and the partial image, using an object classification model trained to estimate lower category information of the object from the partial image of the captured image, wherein the detection result includes the upper category information of the object.
  • Another object of the present invention is to detect an object in a captured image of the object, using an object detection model trained to estimate upper category information of an object to be detected in a captured image, and to transmit the detection result, together with a partial image of the captured image generated based on the detection result, to a server, wherein the detection result includes the upper category information of the object.
  • Another object of the present invention is to support the construction of an object detection system using an artificial neural network model at a low cost while maintaining high performance of the artificial neural network model.
  • a representative configuration of the present invention for achieving the above object is as follows.
  • According to one aspect of the present invention, there is provided a method comprising: obtaining a result of detecting an object in a captured image of the object, produced using an object detection model trained to estimate upper category information of an object to be detected in the captured image, together with a partial image of the captured image generated based on the detection result; and estimating lower category information of the object based on the obtained detection result and the partial image, using an object classification model trained to estimate lower category information of the object from the partial image, wherein the detection result includes the upper category information of the object.
  • According to another aspect of the present invention, there is provided a method comprising: detecting an object in a captured image of the object, using an object detection model trained to estimate upper category information of an object to be detected in the captured image; and transmitting the detection result, together with a partial image of the captured image generated based on the detection result, to a server, wherein the detection result includes the upper category information of the object.
  • According to another aspect of the present invention, there is provided a system comprising: a detection result acquisition unit for obtaining a result of detecting an object in a captured image together with a partial image of the captured image generated based on the detection result; and an object classification unit for estimating lower category information of the object based on the obtained detection result and the partial image, wherein the detection result includes the upper category information of the object.
  • According to another aspect of the present invention, there is provided a system comprising: an object detection unit for detecting an object in a captured image of the object, using an object detection model trained to estimate upper category information of an object to be detected in the captured image; and a detection result management unit for transmitting the detection result, together with a partial image of the captured image generated based on the detection result, to a server, wherein the detection result includes the upper category information of the object.
  • According to the present invention, it is possible to support the construction of an object detection system using an artificial neural network model at low cost while keeping the performance of the artificial neural network model high.
  • FIG. 1 is a diagram illustrating a schematic configuration of an entire system for estimating information about an object based on an image in an LPWAN environment according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating in detail an internal configuration of a server according to an embodiment of the present invention.
  • FIG. 3 is a diagram illustrating in detail an internal configuration of a network camera according to an embodiment of the present invention.
  • FIG. 4 is a diagram exemplarily illustrating a process of estimating information about an object according to an embodiment of the present invention.
  • the entire system may include an LPWAN 100 , a server 200 , and a network camera 300 .
  • First, the LPWAN 100 according to an embodiment of the present invention refers to a low-power wireless wide area network that has a very wide service range of 10 km or more and provides a communication speed of several hundred kilobits per second (kbps).
  • Such an LPWAN 100 includes LoRaWAN, SIGFOX, LTE-MTC, Narrowband Internet of Things (NB-IoT), and the like. In particular, although the communication speed of LoRaWAN is slower than that of existing short-range wireless technologies such as Wi-Fi, Bluetooth, and Zigbee, it enables long-distance communication of about 30 km in open areas and about 1 km in city centers.
  • Next, the server 200 according to an embodiment of the present invention may perform a function of obtaining a result of detecting an object in a captured image of the object, produced using an object detection model trained to estimate upper category information of an object to be detected in the captured image, together with a partial image of the captured image generated based on the detection result, and of estimating lower category information of the object based on the obtained detection result and the partial image, using an object classification model trained to estimate lower category information of the object from the partial image. Here, the detection result may include the upper category information of the object.
  • Next, the network camera 300 according to an embodiment of the present invention may detect an object in a captured image of the object, using an object detection model trained to estimate upper category information of an object to be detected in the captured image, and may transmit the detection result, together with a partial image of the captured image generated based on the detection result, to the server 200. Here, the detection result may include the upper category information of the object.
  • The network camera 300 according to an embodiment of the present invention is a digital device that includes an image capturing function and a function for communicating with the server 200 in an LPWAN environment; any digital device equipped with a memory means and a microprocessor providing computing capability may be employed as the network camera 300 according to the present invention.
  • The network camera 300 according to an embodiment of the present invention may refer to the network camera itself (e.g., a commercial security camera), but may also inclusively refer to a hardware device connected (or coupled) to the network camera by wire and/or wirelessly.
  • the server 200 and the network camera 300 may include an application (not shown) supporting a function of estimating information about an object based on an image in an LPWAN environment according to the present invention.
  • an application may be downloaded from an external application distribution server (not shown).
  • at least a part of the application may be replaced with a hardware device or a firmware device capable of performing substantially the same or equivalent function as the application, if necessary.
  • FIG. 2 is a diagram illustrating in detail the internal configuration of the server 200 according to an embodiment of the present invention.
  • The server 200 according to an embodiment of the present invention may include a detection result acquisition unit 210, an object classification unit 220, a model management unit 230, a communication unit 240, and a control unit 250.
  • At least some of the detection result acquisition unit 210, the object classification unit 220, the model management unit 230, the communication unit 240, and the control unit 250 may be program modules that communicate with an external system (not shown).
  • Such a program module may be included in the server 200 in the form of an operating system, an application program module, or other program modules, and may be physically stored in various known storage devices.
  • Such a program module may be stored in a remote storage device capable of communicating with the server 200 .
  • a program module includes, but is not limited to, routines, subroutines, programs, objects, components, data structures, etc. that perform specific tasks or execute specific abstract data types according to the present invention.
  • Although the server 200 has been described as above, this description is exemplary, and it will be apparent to those skilled in the art that at least some of the components or functions of the server 200 may be realized in, or included in, an external system (not shown) as needed.
  • First, the detection result acquisition unit 210 according to an embodiment of the present invention may perform a function of obtaining a result of detecting an object in a captured image, produced using an object detection model trained to estimate upper category information of an object to be detected in the captured image, together with a partial image of the captured image generated based on the detection result.
  • Specifically, the object detection unit 310 of the network camera 300 may detect the object in a captured image of the object using the object detection model, and may generate a partial image of the captured image based on the result of detecting the object.
  • the detection result acquisition unit 210 may acquire the detection result and the partial image in the LPWAN environment.
  • Here, the detection result may include the upper category information of the object to be detected in the captured image and identification information of the network camera 300 that performed the detection. That is, the detection result obtained by the detection result acquisition unit 210 according to an embodiment of the present invention may mean the result of the network camera 300 detecting the object to be detected in the captured image at the upper category level. Further, according to an embodiment of the present invention, the upper category information may be associated with two or more pieces of lower category information, selected by a user, of the object to be detected in the captured image.
  • For example, the detection result acquisition unit 210 may determine the upper category information of the object as 'vehicle', or may provide appropriate information to the user so that the user can select 'vehicle', 'large vehicle', and the like as the upper category information associated with the lower category information.
  • As another example, the detection result acquisition unit 210 may determine the lower category information of the object as 'adult' and 'child', or may provide appropriate information to the user so that the user can select 'adult', 'child', 'man', 'woman', and the like as the associated lower category information.
  • However, the upper category information and lower category information of an object, and the method of determining such information, according to an embodiment of the present invention are not limited to those described above, and may be variously changed within a scope capable of achieving the objects of the present invention.
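As a concrete illustration of the association between upper and lower categories described above, a simple mapping might look as follows. The table contents and function names are illustrative assumptions, not part of the patent.

```python
# Hypothetical category hierarchy: each user-selected lower category is
# associated with exactly one upper category.

CATEGORY_MAP = {
    "sedan": "vehicle",
    "truck": "vehicle",
    "adult": "person",
    "child": "person",
}

def upper_of(lower):
    """Return the upper category associated with a lower category."""
    return CATEGORY_MAP[lower]

def lowers_of(upper):
    """Return all lower categories associated with an upper category."""
    return sorted(l for l, u in CATEGORY_MAP.items() if u == upper)

print(upper_of("truck"))    # vehicle
print(lowers_of("person"))  # ['adult', 'child']
```

The camera's detector only needs to distinguish the keys on the right (upper categories); the server's classifier separates the keys on the left.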
  • Meanwhile, the partial image of the captured image obtained by the detection result acquisition unit 210 according to an embodiment of the present invention may mean an image of the region in which the object is detected in the captured image (e.g., an image cut along the boundary of the bounding box in which the object is detected). When a plurality of objects are detected in the captured image, the object detection unit 310 according to an embodiment of the present invention may generate a partial image for each of the detected objects, and the detection result acquisition unit 210 may acquire each of the generated partial images. Meanwhile, according to an embodiment of the present invention, the partial image may mean only the partial image itself, but may also include information (e.g., coordinates) about the position the partial image occupies in the captured image.
  • Since the detection result acquisition unit 210 according to an embodiment of the present invention acquires only a partial image of the captured image, rather than the entire captured image of the object captured by the network camera 300, the object detection system can be operated stably even in a bandwidth-limited communication environment such as an LPWAN.
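To see why sending only the partial image matters on a bandwidth-limited link, here is a hedged back-of-envelope calculation. The link rate, resolutions, and compression ratio below are assumed figures for illustration, not taken from the patent.

```python
# Assumed figures: a ~50 kbps LoRa-class uplink, a 1280x720 RGB frame and a
# 128x128 crop, both compressed at an assumed 20:1 ratio.

LINK_KBPS = 50
full_frame_bytes = 1280 * 720 * 3 // 20  # ~138 KB full frame
crop_bytes = 128 * 128 * 3 // 20         # ~2.4 KB partial image

def seconds_to_send(n_bytes, kbps=LINK_KBPS):
    """Transmission time for a payload at the given link rate."""
    return n_bytes * 8 / (kbps * 1000)

print(round(seconds_to_send(full_frame_bytes), 1))  # 22.1 s per full frame
print(round(seconds_to_send(crop_bytes), 2))        # 0.39 s per crop
```

Under these assumptions the crop crosses the link roughly 50x faster than the full frame, which is the point of transmitting only the detected region.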
  • Meanwhile, the object detection model used by the object detection unit 310 may be generated by the server 200, made lightweight, and then distributed to the network camera 300.
  • the model manager 230 may generate an object detection model that is learned to estimate upper category information of an object to be detected from a captured image.
  • Here, the training data about the object to be detected in the captured image, used in generating such an object detection model, may include the captured image of the object and, as labeling data about the captured image, information about the region in which the object is located in the image and the upper category information of the object. That is, the model management unit 230 according to an embodiment of the present invention may train the object detection model to estimate the upper category information of the object instead of its lower category information.
  • Accordingly, the number of types of objects to be detected is reduced, so an object can be detected with higher accuracy while processing a smaller amount of computation than in the case of estimating lower category information.
  • the model manager 230 may reduce the weight of the object detection model generated as described above and distribute it to the network camera 300 .
  • Specifically, the model management unit 230 according to an embodiment of the present invention may generate an object detection model trained to estimate upper category information of an object to be detected in a captured image, and may reduce the size of the generated model using an artificial neural network model lightweighting algorithm such as pruning, quantization, or knowledge distillation. The model management unit 230 may then distribute the lightweight model to the network camera 300 as the object detection model used by the object detection unit 310, so that it can be used smoothly even on the network camera 300, which has lower computing power than the server 200.
  • However, the lightweighting algorithm according to an embodiment of the present invention is not limited to those listed above, and may be variously changed within a scope that can achieve the objects of the present invention.
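The pruning and quantization steps named above can be sketched on a toy weight vector. Real lightweighting would be done with a deep learning framework's tooling; the functions below are simplified pure-Python illustrations only.

```python
# Toy illustration of two lightweighting steps: magnitude pruning followed by
# uniform 8-bit quantization of a small weight vector.

def prune(weights, threshold=0.05):
    """Magnitude pruning: zero out weights whose magnitude is below the
    threshold, so they can be skipped or stored sparsely."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantize(weights, bits=8):
    """Uniform quantization: map floats to signed integers of the given bit
    width, returning the integer codes and the scale for dequantization."""
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    return [round(w / scale) for w in weights], scale

w = [0.91, -0.02, 0.40, 0.01, -0.77]
pruned = prune(w)
q, scale = quantize(pruned)

print(pruned)  # [0.91, 0.0, 0.4, 0.0, -0.77]
print(q[0])    # 127 (the largest weight maps to the int8 maximum)
```

After quantization each weight needs one byte instead of four, and pruned zeros compress well, which is how the distributed model fits the camera's limited resources.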
  • Next, the object classification unit 220 according to an embodiment of the present invention may perform a function of estimating lower category information of the object from the partial image of the captured image obtained by the detection result acquisition unit 210, using an object classification model trained to estimate lower category information of the object from the partial image. Specifically, the object classification unit 220 may estimate the lower category information of the object based on the result of the object detection unit 310 detecting the object in the captured image (specifically, the upper category information of the object) and the partial image.
  • the model manager 230 may generate an object classification model that is learned to estimate sub-category information of an object from the partial image above.
  • Here, the training data about the object used in generating such an object classification model may include the image of the region in which the object is detected in the captured image of the object, that is, the partial image, and, as labeling data about the partial image, the lower category information of the object. That is, the model management unit 230 according to an embodiment of the present invention may train the object classification model to estimate two or more pieces of lower category information, selected by the user, of the object.
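The training-pair structure described above (a partial image paired with a user-selected lower category label) might be organized as follows; the file names and labels are made up for illustration and are not from the patent.

```python
# Hypothetical training set for the object classification model: each pair is
# (partial image, lower-category label). The detector already guarantees that
# every crop belongs to the upper category (here, 'vehicle').

training_pairs = [
    ("crop_0001.jpg", "sedan"),
    ("crop_0002.jpg", "truck"),
    ("crop_0003.jpg", "sedan"),
]

# The classifier only has to separate the user-chosen lower categories.
lower_categories = sorted({label for _, label in training_pairs})
print(lower_categories)  # ['sedan', 'truck']
```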
  • The function of classifying in detail an object to be detected in a captured image requires processing a large amount of computation. According to an embodiment of the present invention, as described above, this function is performed by the server 200, which has higher computing power than the network camera 300, so the computational burden of the network camera 300 can be reduced.
  • the communication unit 240 performs a function of enabling data transmission/reception to/from the detection result acquisition unit 210 , the object classification unit 220 , and the model management unit 230 .
  • control unit 250 functions to control the flow of data between the detection result acquisition unit 210 , the object classification unit 220 , the model management unit 230 , and the communication unit 240 .
  • the control unit 250 according to the present invention controls the data flow to/from the outside of the server 200 or the data flow between each component of the server 200, so that the detection result obtaining unit 210, the object classifying unit 220 , the model management unit 230 , and the communication unit 240 may be controlled to perform their own functions, respectively.
  • FIG. 3 is a diagram illustrating in detail an internal configuration of a network camera 300 according to an embodiment of the present invention.
  • a network camera 300 may be configured to include an object detection unit 310 , a detection result management unit 320 , a communication unit 330 , and a control unit 340 .
  • At least some of the object detection unit 310, the detection result management unit 320, the communication unit 330, and the control unit 340 may be program modules that communicate with an external system (not shown).
  • Such a program module may be included in the network camera 300 in the form of an operating system, an application program module, or other program module, and may be physically stored in various known storage devices.
  • such a program module may be stored in a remote storage device capable of communicating with the network camera 300 .
  • such a program module includes, but is not limited to, routines, subroutines, programs, objects, components, data structures, etc. that perform specific tasks or execute specific abstract data types according to the present invention.
  • Although the network camera 300 has been described as above, this description is exemplary, and it will be apparent to those skilled in the art that at least some of the components or functions of the network camera 300 may be realized in, or included in, an external system (not shown) as needed.
  • First, the object detection unit 310 according to an embodiment of the present invention may perform a function of detecting an object in a captured image of the object, using an object detection model trained to estimate upper category information of an object to be detected in the captured image.
  • Specifically, the object detection unit 310 may detect the object in a captured image of the object using the object detection model, and the detection result may include the upper category information of the object and identification information of the network camera 300 that performed the detection.
  • Meanwhile, the object detection model used by the object detection unit 310 according to an embodiment of the present invention to detect the object at the upper category level in the captured image may be generated based on an artificial-neural-network-based object recognition model such as R-CNN (Region-based Convolutional Neural Networks), YOLO (You Only Look Once), or SSD (Single Shot Multibox Detector).
  • the object recognition model based on the artificial neural network according to an embodiment of the present invention is not limited to the above-listed ones, and may be variously changed within the scope that can achieve the object of the present invention.
  • Further, according to an embodiment of the present invention, the upper category information of the object to be detected in the captured image may be associated with two or more pieces of lower category information, selected by the user, of the object. Since the upper category information and the lower category information have been described in detail above, a repeated description is omitted here.
  • Meanwhile, the object detection model used by the object detection unit 310 may be generated by the server 200, made lightweight, and then distributed to the network camera 300. Specifically, the model management unit 230 according to an embodiment of the present invention may generate an object detection model trained to estimate upper category information of an object to be detected in a captured image, make the generated model lightweight, and distribute it to the network camera 300. Since the generation and lightweighting of the object detection model have been described in detail above, a repeated description is omitted here.
  • Next, the detection result management unit 320 according to an embodiment of the present invention may perform a function of transmitting, to the server 200, the result of detecting the object in the captured image of the object as described above, together with a partial image of the captured image generated based on the detection result.
  • Specifically, the object detection unit 310 may generate an image of the region in which an object is detected at the upper category level in the captured image (e.g., an image cut along the boundary of the bounding box in which the object is detected), that is, a partial image. When a plurality of objects are detected in the captured image, the object detection unit 310 according to an embodiment of the present invention may generate a partial image for each of the detected objects.
  • the detection result management unit 320 may transmit the partial image generated as described above to the server.
  • Meanwhile, according to an embodiment of the present invention, the partial image may mean only the partial image itself, but may also include information (e.g., coordinates) about the position the partial image occupies in the captured image.
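A partial image accompanied by its position information might be packaged as in the following hypothetical message layout; every field name and value here is an assumption for illustration, not specified by the patent.

```python
# Hypothetical camera-to-server message: the partial image itself plus its
# bounding-box coordinates in the captured frame and the detection metadata.
import json

partial_image_msg = {
    "camera_id": "cam-017",          # identification info of the camera
    "upper_category": "vehicle",     # detection result at the upper level
    "bbox": {"x": 320, "y": 180, "w": 128, "h": 96},  # position in the frame
    "crop": "<JPEG bytes, base64>",  # the partial image itself (placeholder)
}

payload = json.dumps(partial_image_msg)
print(len(payload) < 256)  # True: the metadata alone stays small
```

Keeping the metadata compact matters because LPWAN payloads are small; the image bytes would dominate the message, which is why only the crop, not the frame, is sent.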
  • FIG. 4 is a diagram exemplarily illustrating a process of estimating information about an object according to an embodiment of the present invention.
  • Referring to FIG. 4, the network camera 300 according to an embodiment of the present invention may detect objects 411 and 412 in a captured image 410 of the objects, using an object detection model trained to estimate upper category information (e.g., vehicle) of an object to be detected in a captured image.
  • The object detection unit 310 according to an embodiment of the present invention may generate partial images 420 and 430 of the captured image 410 based on the result of detecting the corresponding objects.
  • the detection result management unit 320 may transmit the detection result and the partial images 420 and 430 above to the server 200 .
  • Next, the object classification unit 220 according to an embodiment of the present invention may estimate lower category information of the objects (420 and 450), using an object classification model trained to estimate lower category information (e.g., sedan or truck) of the object from the partial image.
  • the embodiments according to the present invention described above may be implemented in the form of program instructions that can be executed through various computer components and recorded in a computer-readable recording medium.
  • the computer-readable recording medium may include program instructions, data files, data structures, etc. alone or in combination.
  • the program instructions recorded on the computer-readable recording medium may be specially designed and configured for the present invention or may be known and used by those skilled in the computer software field.
  • Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes, optical recording media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language codes such as those generated by a compiler, but also high-level language codes that can be executed by a computer using an interpreter or the like.
  • a hardware device may be converted into one or more software modules to perform processing in accordance with the present invention, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

According to one aspect, the present invention relates to a method for estimating information about an object based on images in a low power wide area network (LPWAN) environment, the method comprising the steps of: obtaining, using an object detection model trained to estimate upper category information about an object to be detected from a captured image, a result of detecting the object from the captured image of the object, together with a partial image of the captured image generated based on the detection result; and estimating, using an object classification model trained to estimate lower category information about the object from the partial image of the captured image, the lower category information about the object based on the obtained detection result and the partial image, the detection result comprising the upper category information about the object.
PCT/KR2021/018927 2020-12-16 2021-12-14 Method, system and non-transitory computer-readable recording medium for estimating information about an object based on images in a low power wide area network (LPWAN) environment WO2022131733A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0176824 2020-12-16
KR1020200176824A KR102597045B1 (ko) 2020-12-16 2020-12-16 Method, system, and non-transitory computer-readable recording medium for estimating information about an object on the basis of images in a low-power wide-area network (LPWAN) environment

Publications (1)

Publication Number Publication Date
WO2022131733A1 true WO2022131733A1 (en)

Family

ID=82059321

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/018927 WO2022131733A1 (en) 2020-12-16 2021-12-14 Method, system, and non-transitory computer-readable recording medium for estimating information about an object on the basis of images in a low-power wide-area network (LPWAN) environment

Country Status (2)

Country Link
KR (1) KR102597045B1 (en)
WO (1) WO2022131733A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101732382B1 (ko) * 2016-01-01 2017-05-24 차보영 CCTV crime-prevention CPTED tower and CPTED environment system
KR101801846B1 (ko) * 2015-08-26 2017-11-27 옴니어스 주식회사 Product image search and system
KR20190052785A (ko) * 2017-11-09 2019-05-17 재단법인대구경북과학기술원 Object detection method, apparatus, and computer program
KR20190122606A (ko) * 2019-10-11 2019-10-30 엘지전자 주식회사 Apparatus and method for monitoring objects in a vehicle
KR20200046188A (ko) * 2018-10-19 2020-05-07 삼성전자주식회사 Electronic device for reconstructing an artificial intelligence model and control method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102029751B1 (ko) 2017-10-27 2019-10-08 (주)테크노니아 Sensor device capable of monitoring an external environment on the basis of sound or images, and environment monitoring system comprising the same


Also Published As

Publication number Publication date
KR20220086403A (ko) 2022-06-23
KR102597045B1 (ko) 2023-11-02

Similar Documents

Publication Publication Date Title
WO2019156283A1 Dynamic memory mapping for neural networks
WO2014051246A1 Method and apparatus for inferring a facial composite
WO2022055099A1 Anomaly detection method and device therefor
WO2021118041A1 Method for distributing labeling work according to its difficulty, and apparatus using same
WO2021107416A1 Freight transport information management system using image analysis and blockchain
KR101963404B1 Two-stage optimized deep learning method, computer-readable recording medium storing a program for executing same, and deep learning system
CN111710177A Smart traffic signal light networking cooperative optimization control system and control method
WO2021075772A1 Object detection method and device using multi-area detection
WO2021153861A1 Multiple object detection method and apparatus therefor
WO2023080455A1 Method and apparatus for processing foot information
WO2018012855A1 Hierarchical-graph-based path search method, and path search method in an Internet of Things environment using same
WO2020111571A1 Artificial-intelligence-based device and method for detecting serious crime
WO2022131733A1 Method, system, and non-transitory computer-readable recording medium for estimating information about an object on the basis of images in a low-power wide-area network (LPWAN) environment
WO2021112401A1 Robot and control method therefor
WO2024101466A1 Attribute-based missing-person tracking apparatus and method
CN113139650A Deep learning model tuning method and computing device
WO2022108127A1 Method and system for searching stored CCTV video on the basis of shooting-space information
WO2022045697A1 Big-data-based modular AI engine server and control method thereof
WO2020175729A1 Apparatus and method for detecting facial feature points using a Gaussian feature-point map and a regression scheme
WO2022139109A1 Method, system, and non-transitory computer-readable recording medium for monitoring an object
WO2022145712A1 Method, device, and non-transitory computer-readable recording medium for analyzing a visitor on the basis of images in an edge computing environment
WO2023095934A1 Method and system for lightweighting the head neural network of an object detector
WO2021075701A1 Interaction detection method and apparatus therefor
WO2023277219A1 Lightweight deep learning processing device and method for a vehicle to which an environment-change-adaptive feature generator is applied
WO2024106925A1 Augmented-reality-based communication system and method, and computing device for implementing same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21907037

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21907037

Country of ref document: EP

Kind code of ref document: A1