WO2020085653A1 - Method and system for tracking multiple pedestrians using a teacher-student random fern - Google Patents

Method and system for tracking multiple pedestrians using a teacher-student random fern

Info

Publication number
WO2020085653A1
Authority
WO
WIPO (PCT)
Prior art keywords
teacher
random
fern
pedestrians
student
Prior art date
Application number
PCT/KR2019/012101
Other languages
English (en)
Korean (ko)
Inventor
고병철
남재열
김상준
Original Assignee
계명대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 계명대학교 산학협력단
Publication of WO2020085653A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • The present invention relates to a method and system for tracking multiple pedestrians, and more particularly, to a method and system for tracking multiple pedestrians using a teacher-student random fern.
  • ITS: Intelligent Transportation System
  • ADAS: Advanced Driver Assistance System
  • To achieve an appropriate level of safety in an active intelligent transportation system, an advanced driver assistance system must track every moving pedestrian so that pedestrians at risk of entering the road can be identified in advance.
  • The Kalman filter is a recursive filter that estimates the dynamic state of a noisy system from measurements made over time.
  • The Kalman filter repeatedly performs state prediction and measurement update when the motion model and measurement model are linear, or when they follow a Gaussian distribution, but it cannot be used when neither of these two conditions holds.
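As an illustration of the predict/update recursion just described, below is a minimal sketch of a linear-Gaussian Kalman filter for a one-dimensional constant-velocity target. The state layout and the matrix names F, H, Q, R are standard textbook conventions, and the noise magnitudes and measurements are made up for the example; none of this comes from the patent.

```python
import numpy as np

# Constant-velocity model: state x = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # motion (state-transition) model
H = np.array([[1.0, 0.0]])               # measurement model: observe position only
Q = np.eye(2) * 1e-3                     # process-noise covariance (assumed)
R = np.array([[0.5]])                    # measurement-noise covariance (assumed)

x, P = np.zeros((2, 1)), np.eye(2)       # initial state estimate and covariance
for z in [1.1, 2.0, 2.9, 4.2]:           # noisy position measurements (made up)
    # Prediction step
    x = F @ x
    P = F @ P @ F.T + Q
    # Measurement-update step
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())   # estimated [position, velocity] after the last update
```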
  • CNN: Convolutional Neural Network
  • Korean Patent No. 10-1588648 (title of the invention: Pedestrian Detection and Tracking Method for Intelligent Video Surveillance)
  • The present invention is proposed to solve the problems of the previously proposed methods described above, and aims to provide a method and system for tracking multiple pedestrians using a teacher-student random fern, in which pedestrian feature values are extracted using tiny YOLO, a type of deep network, and a Random Fern is trained using the extracted feature values, enabling real-time learning that minimizes mistracking caused by changes in pedestrian shape and size.
  • Another object of the present invention is to provide a method and system for tracking multiple pedestrians using Teacher-Student Random Ferns, in which the number of ferns is reduced to enable real-time tracking, so that multiple pedestrians can be tracked quickly and accurately in real time.
  • Step (3): extracting feature values by inputting an image including the plurality of pedestrians detected in step (2) into a deep network.
  • Step (2-2): detecting the pedestrians classified in step (2-1) as the plurality of pedestrians.
  • Preferably, the deep network in step (3) may be a convolutional neural network.
  • More preferably, the convolutional neural network may be tiny YOLO.
  • Even more preferably, the tiny YOLO may be composed of nine convolution layers, six max-pooling layers, and one fully connected layer.
  • Preferably, in step (3), feature values may be extracted for each of the plurality of pedestrians detected in step (2).
  • Step (4-2) may include learning Student Random Ferns using the Teacher Random Ferns learned in step (4-1).
  • Preferably, the number of ferns in the Student Random Ferns may be smaller than that of the Teacher Random Ferns.
  • Preferably, in step (5), the plurality of pedestrians may be tracked with a reduced number of ferns by using the learned Teacher-Student Random Ferns.
  • a camera unit for capturing images including a plurality of pedestrians with a camera installed in a moving vehicle;
  • a detection unit for detecting a plurality of pedestrians from the image captured by the camera unit;
  • an extraction unit for extracting feature values by inputting an image including the plurality of pedestrians detected by the detection unit into a deep network;
  • a learning unit for learning Teacher-Student Random Ferns using the feature values extracted by the extraction unit; and
  • a tracking unit for tracking the plurality of pedestrians using the Teacher-Student Random Ferns learned by the learning unit.
  • Preferably, the detection unit may include a classification module that distinguishes pedestrians from non-pedestrians in the images captured by the camera unit, and a detection module that detects the pedestrians classified by the classification module as the plurality of pedestrians.
  • Preferably, the deep network may be a convolutional neural network.
  • More preferably, the convolutional neural network may be tiny YOLO.
  • Even more preferably, the tiny YOLO may be composed of nine convolution layers, six max-pooling layers, and one fully connected layer.
  • Preferably, the extraction unit may extract feature values for each of the pedestrians detected by the detection unit.
  • Preferably, the learning unit may include a first learning module for learning Teacher Random Ferns using the feature values extracted by the extraction unit, and a second learning module for learning Student Random Ferns using the Teacher Random Ferns learned by the first learning module.
  • More preferably, the number of ferns in the Student Random Ferns may be smaller than that of the Teacher Random Ferns.
  • Preferably, the tracking unit may track the plurality of pedestrians with a reduced number of ferns by using the learned Teacher-Student Random Ferns.
  • According to the present invention, pedestrian feature values are extracted using tiny YOLO, a type of deep network, and Random Ferns are trained using the extracted feature values, enabling real-time learning that minimizes mistracking caused by changes in pedestrian shape and size.
  • FIG. 1 is a flowchart illustrating a method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating a multi-layer perceptron (MLP) network, a type of deep network.
  • MLP: Multi-Layer Perceptron
  • FIG. 3 is a diagram showing the detailed flow of step S200 in the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 4 is a diagram illustrating step S210 of the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating step S300 of the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 6 is a diagram showing the detailed flow of step S400 in the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating the overall process of learning Teacher Random Ferns in the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 8 is a diagram showing an algorithm for learning Student Random Ferns in the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 10 is a diagram showing the configuration of a system for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 11 is a diagram showing the detailed configuration of the detection unit in the system for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • FIG. 12 is a diagram showing the detailed configuration of the learning unit in the system for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • Step S210: distinguishing pedestrians from non-pedestrians in the image captured in step S100.
  • Step S220: detecting the pedestrians classified in step S210 as a plurality of pedestrians.
  • Step S300: extracting feature values by inputting an image including the plurality of pedestrians detected in step S200 into a deep network.
  • Step S420: learning Student Random Ferns using the Teacher Random Ferns learned in step S410.
  • Each step of the method for tracking multiple pedestrians using a teacher-student random fern may be performed by a computer device, and the subject performing each step may be omitted in the description below.
  • A method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention may be implemented to include: capturing images including a plurality of pedestrians with a camera installed in a moving vehicle (S100); detecting a plurality of pedestrians from the image captured in step S100 (S200); extracting feature values by inputting an image including the plurality of pedestrians detected in step S200 into a deep network (S300); learning Teacher-Student Random Ferns using the feature values extracted in step S300 (S400); and tracking the plurality of pedestrians using the Teacher-Student Random Ferns learned in step S400 (S500), as sketched below.
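For orientation only, the following minimal Python skeleton shows the S100-S500 order of operations. Every class and method name here (PedestrianPipeline, detect, extract, and so on) is hypothetical; the patent specifies the steps, not this interface.

```python
from dataclasses import dataclass

@dataclass
class PedestrianPipeline:
    detector: object   # S200: pedestrian / non-pedestrian classifier
    deep_net: object   # S300: tiny-YOLO-style feature extractor
    tracker: object    # S400-S500: teacher-student random ferns

    def process(self, frame):
        # frame: one image from the vehicle-mounted camera (S100)
        boxes = self.detector.detect(frame)                        # S200
        feats = [self.deep_net.extract(frame, b) for b in boxes]   # S300
        self.tracker.learn_teacher_student(feats)                  # S400
        return self.tracker.track(feats)                           # S500
```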
  • ANN: Artificial Neural Network
  • An artificial neural network refers to a network of artificial neurons (nodes) connected by synaptic links, which acquires problem-solving ability by changing the strength of those links through learning. In a narrow sense the term sometimes refers to a multi-layer perceptron trained with error backpropagation, but this is a misuse; the artificial neural network is not limited thereto.
  • A deep network, or deep neural network, is an artificial neural network composed of several hidden layers between the input layer and the output layer.
  • Like an ordinary artificial neural network, a deep network can model complex non-linear relationships.
  • In a deep network, each object may be represented by a hierarchical composition of basic image elements, with additional layers progressively aggregating the features of lower layers. This property allows a deep network to model complex data with fewer units than a comparably performing shallow artificial neural network.
  • FIG. 2 is a diagram illustrating a multi-layer perceptron (MLP) network among deep networks.
  • An MLP network is a neural network in which one or more intermediate layers exist between the input layer and the output layer; these intermediate layers are called hidden layers. The network is connected from the input layer through the hidden layers to the output layer, and there are no direct connections from the output layer back to the input layer.
  • The MLP network has a structure similar to that of the single-layer perceptron, but it improves the network's capability through the input/output characteristics of the intermediate layers and of each unit, overcoming the various shortcomings of the single-layer perceptron. As layers are added, the decision regions formed by the perceptron become more sophisticated: with a single layer, the pattern space is divided into two half-spaces; with two layers, a convex open or concave closed region can be formed; and with three layers, a region of any shape can in theory be formed.
  • A Convolutional Neural Network (CNN) is a type of MLP network designed to require minimal preprocessing.
  • A convolutional neural network is a neural network composed of one or several convolution layers, pooling layers, and fully connected layers, and its structure is well suited to learning two-dimensional data. Because it can be trained through the backpropagation algorithm, it is widely used in application fields such as object classification and object detection in images.
  • The convolution layer serves to extract features from the input data, and may consist of a filter that extracts the features and an activation function that converts the values extracted by the filter into nonlinear values, as in the toy example below.
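To ground the filter-plus-activation description, here is a tiny NumPy example of one convolution layer's work on a single channel; the 3x3 filter values and the ReLU activation are arbitrary choices for the illustration.

```python
import numpy as np

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input
kernel = np.array([[1.0, 0.0, -1.0]] * 3)          # toy 3x3 edge-like filter

# Valid convolution (really cross-correlation, as in most CNN libraries)
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = (image[i:i+3, j:j+3] * kernel).sum()

activated = np.maximum(out, 0.0)   # ReLU turns filter responses into nonlinear values
print(activated)
```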
  • Convolutional neural networks can be trained through gradient descent and the backpropagation algorithm.
  • Gradient descent is a first-order optimization algorithm: it computes the gradient (slope) of a function and repeatedly moves in the downhill direction until an extremum is reached.
  • The backpropagation algorithm is a statistical technique used for multi-layer perceptron learning; it adjusts the individual weights so that the desired value is output for a given input.
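As a one-variable illustration of the gradient-descent rule just described (the function, learning rate, and iteration count are arbitrary choices for the example):

```python
# Minimize f(w) = (w - 3)^2 by repeatedly stepping against the slope.
w, lr = 0.0, 0.1            # initial weight and learning rate (illustrative)
for _ in range(100):
    grad = 2.0 * (w - 3.0)  # analytic gradient f'(w)
    w -= lr * grad          # move downhill, against the gradient
print(round(w, 4))          # approaches the minimizer w = 3
```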
  • Random Ferns, a method proposed by Ozuysal in 2007, is a modification of Bayesian classification. Random Ferns overcomes the limitations of the naive Bayes approach by considering the correlation between feature functions, and it allows simple and fast computation by implementing each feature function as the difference between two pixels. Random Ferns classifies better than Random Trees, matches the object recognition rate of SIFT, and computes faster than SIFT.
  • Random Ferns can be defined through the following process.
  • Equation 1 can be rewritten as Equation 2 by using Bayes' rule.
  • Assuming P(C) and P(f_1, f_2, ..., f_k) in Equation 2 to be fixed probability values, the multi-class decision c_z may be defined as in Equation 3 below.
  • Each feature extraction function can be calculated as in Equation 4 below.
  • Equation 4 can be modified as shown in Equation 5 by using random ferns.
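Equations 1-5 were published as images and are missing from this text. For reference, the standard Random Ferns formulation from Ozuysal et al. (2007), which the sentences above paraphrase, is reconstructed below; the notation is an assumption and may differ from the patent's exact equations.

```latex
% Eq. 1: maximum a posteriori class given binary features f_1, ..., f_N
\hat{c} = \arg\max_{c_z} P(C = c_z \mid f_1, f_2, \ldots, f_N)

% Eq. 2: Bayes' rule applied to Eq. 1
P(C = c_z \mid f_1, \ldots, f_N)
    = \frac{P(f_1, \ldots, f_N \mid C = c_z)\, P(C = c_z)}{P(f_1, \ldots, f_N)}

% Eq. 3: with P(C) and P(f_1, ..., f_N) treated as constants
\hat{c} = \arg\max_{c_z} P(f_1, \ldots, f_N \mid C = c_z)

% Eq. 4: each binary feature compares the image intensity at two pixel positions
f_j = \begin{cases} 1, & I(d_{j,1}) < I(d_{j,2}) \\ 0, & \text{otherwise} \end{cases}

% Eq. 5: the N features are grouped into L ferns F_k of S features each,
% treated as independent across (but not within) ferns
P(f_1, \ldots, f_N \mid C = c_z) = \prod_{k=1}^{L} P(F_k \mid C = c_z)
```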
  • In step S100, an image including a plurality of pedestrians may be captured by a camera installed in a moving vehicle. Since all moving pedestrians must be tracked in order to identify in advance those at risk of entering the road, images containing multiple pedestrians can be captured continuously.
  • In step S200, a plurality of pedestrians may be detected from the image captured in step S100.
  • FIG. 3 is a diagram showing the detailed flow of step S200 in the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • Step S200 of the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention may include distinguishing pedestrians from non-pedestrians in the image captured in step S100 (S210), and detecting the pedestrians classified in step S210 as a plurality of pedestrians (S220).
  • FIG. 4 is a diagram illustrating step S210 of the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • In step S210, pedestrians and non-pedestrians may be distinguished in the image captured in step S100. Here, the pedestrian may be a person, and the non-pedestrian may be a utility pole, a tree, a building, or the like.
  • In step S220, the pedestrians classified in step S210 may be detected as a plurality of pedestrians. Since the method for tracking multiple pedestrians using a teacher-student random fern needs to detect the plurality of pedestrians and input them into a deep network to extract the feature values of each pedestrian, the pedestrians classified in step S210 may be detected in step S220 as a plurality of pedestrians and input to the deep network of step S300, described below, to extract the feature values of each pedestrian.
  • FIG. 5 is a diagram illustrating step S300 of the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • In step S300 of the method for tracking multiple pedestrians using a teacher-student random fern, an image including the plurality of pedestrians detected in step S200 is input into a deep network, and the feature values can be extracted.
  • More specifically, in step S300 of the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention, tiny YOLO, a kind of convolutional neural network, may be used as the deep network to extract feature values for each pedestrian from the image including the plurality of pedestrians detected in step S200.
  • Here, the tiny YOLO may consist of nine convolution layers, six max-pooling layers, and one fully connected layer, and the pedestrian feature values can be extracted through the fully connected layer, which is the last layer of tiny YOLO; a sketch of such a backbone follows.
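As a concrete illustration of a backbone with those layer counts, here is a minimal PyTorch sketch. The kernel sizes, channel widths, 448x448 input, and output feature dimension are assumptions made for the example; the patent states only the numbers of layers.

```python
import torch
import torch.nn as nn

class TinyYoloFeatures(nn.Module):
    """Tiny-YOLO-style backbone: 9 convolution layers, 6 max-pooling layers,
    and 1 fully connected layer, matching the counts given in the text."""
    def __init__(self, feature_dim=1470):
        super().__init__()
        layers, chans = [], [3, 16, 32, 64, 128, 256, 512]
        for c_in, c_out in zip(chans, chans[1:]):          # 6 convs + 6 max-pools
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.LeakyReLU(0.1),
                       nn.MaxPool2d(2, 2)]
        for c_in, c_out in [(512, 1024), (1024, 1024), (1024, 1024)]:
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1),   # 3 more convs, 9 total
                       nn.LeakyReLU(0.1)]
        self.backbone = nn.Sequential(*layers)
        self.fc = nn.Linear(1024 * 7 * 7, feature_dim)     # the single FC layer

    def forward(self, x):
        # The FC output serves as the per-pedestrian feature vector.
        return self.fc(self.backbone(x).flatten(1))

features = TinyYoloFeatures()(torch.randn(1, 3, 448, 448))  # shape (1, 1470)
```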
  • In step S400, Teacher-Student Random Ferns may be learned using the feature values extracted in step S300.
  • FIG. 6 is a diagram showing the detailed flow of step S400 in the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • Step S400 of the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention may include learning Teacher Random Ferns using the feature values extracted in step S300 (S410), and learning Student Random Ferns using the Teacher Random Ferns learned in step S410 (S420).
  • Teacher-Student Random Ferns are a tracker composed of Random Ferns.
  • Teacher Random Ferns are built from a large amount of training data and therefore have high tracking performance, but their tracking speed can be slow, making real-time pedestrian tracking difficult.
  • The method for tracking multiple pedestrians using a teacher-student random fern therefore uses Student Random Ferns, which preserve the tracking performance of the Teacher Random Ferns while reducing the number of ferns, so that pedestrians can be tracked faster and more accurately than before.
  • The Teacher Random Ferns may be learned by detecting the plurality of pedestrians in step S200, inputting an image including the plurality of pedestrians into a deep network, and using the feature values extracted from it.
  • The Teacher Random Ferns may have a plurality of ferns; for example, ferns numbered from 1 to L (where L is a natural number), i.e., L ferns in total.
  • In step S410, Teacher Random Ferns can be learned using the feature values extracted in step S300. More specifically, the Teacher Random Ferns can be learned using the pedestrian feature values extracted in step S300 with tiny YOLO, one of the convolutional neural networks.
  • In step S420, Student Random Ferns can be learned using the Teacher Random Ferns learned in step S410. More specifically, using the Teacher Random Ferns learned in step S410, the learning can be divided into the case where a pedestrian appears for the first time and the case where the pedestrian has appeared before.
  • FIG. 8 is a diagram illustrating an algorithm for learning Student Random Ferns in the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • Student Random Ferns can be learned when a pedestrian first appears, and the above algorithm can be repeated as many times as the number of pedestrians detected in step S200 to learn the Student Random Ferns.
  • In the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention, the number of ferns in the Student Random Ferns can be smaller than in the Teacher Random Ferns, as the sketch below illustrates.
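To make the teacher-student scheme concrete, here is a minimal NumPy sketch in which a large teacher fern set is learned from labelled feature vectors and a smaller student fern set is then trained on the teacher's predictions. The feature dimension, fern counts, pair-comparison tests, and pseudo-labelling step are illustrative assumptions, not the patent's FIG. 7 / FIG. 8 algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def fern_codes(feats, pairs):
    """One integer code per fern: each fern applies S binary pair tests
    (Eq. 4 style comparisons) and packs the resulting bits."""
    bits = feats[:, pairs[..., 0]] > feats[:, pairs[..., 1]]    # (n, L, S)
    return (bits * (2 ** np.arange(pairs.shape[1]))).sum(-1)    # (n, L)

def learn_ferns(feats, labels, pairs, n_classes, prior=1.0):
    """Class-conditional leaf histograms P(F_k | C) with a Dirichlet prior."""
    L, S, _ = pairs.shape
    hist = np.full((L, 2 ** S, n_classes), prior)
    codes = fern_codes(feats, pairs)
    for k in range(L):
        np.add.at(hist[k], (codes[:, k], labels), 1.0)
    return hist / hist.sum(axis=1, keepdims=True)

def log_posterior(feats, pairs, hist):
    """Semi-naive Bayes (Eq. 5 style): sum log leaf probabilities over ferns."""
    codes = fern_codes(feats, pairs)
    logp = np.zeros((feats.shape[0], hist.shape[2]))
    for k in range(hist.shape[0]):
        logp += np.log(hist[k, codes[:, k]])
    return logp

n, d, n_classes = 200, 64, 5                     # assumed sizes
X = rng.normal(size=(n, d))                      # stand-in for deep-network features
y = rng.integers(0, n_classes, n)                # pedestrian identities
teacher_pairs = rng.integers(0, d, (30, 8, 2))   # teacher: many ferns, accurate
student_pairs = rng.integers(0, d, (8, 8, 2))    # student: fewer ferns, faster
teacher = learn_ferns(X, y, teacher_pairs, n_classes)
pseudo = log_posterior(X, teacher_pairs, teacher).argmax(axis=1)  # teacher supervises
student = learn_ferns(X, pseudo, student_pairs, n_classes)        # distilled tracker
```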
  • In step S500, the plurality of pedestrians may be tracked using the Teacher-Student Random Ferns learned in step S400. More specifically, in step S500 of the method for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention, the plurality of pedestrians can be tracked with a reduced number of ferns by using the Teacher-Student Random Ferns learned in step S400.
  • FIG. 10 is a diagram showing the configuration of a system 10 for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • A system 10 for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention may be configured to include a camera unit 100, a detection unit 200, an extraction unit 300, a learning unit 400, and a tracking unit 500.
  • More specifically, the system 10 for tracking multiple pedestrians using a teacher-student random fern may be configured to include: a camera unit 100 for capturing images including a plurality of pedestrians with a camera installed in a moving vehicle; a detection unit 200 for detecting a plurality of pedestrians from the image captured by the camera unit 100; an extraction unit 300 for extracting feature values by inputting an image including the plurality of pedestrians detected by the detection unit 200 into a deep network; a learning unit 400 for learning Teacher-Student Random Ferns using the feature values extracted by the extraction unit 300; and a tracking unit 500 for tracking the plurality of pedestrians using the Teacher-Student Random Ferns learned by the learning unit 400.
  • The detection unit 200 may be configured to include a classification module 210 for distinguishing pedestrians from non-pedestrians in the image captured by the camera unit 100, and a detection module 220 for detecting the pedestrians classified by the classification module 210 as the plurality of pedestrians.
  • FIG. 12 is a diagram showing the detailed configuration of the learning unit 400 in the system 10 for tracking multiple pedestrians using a teacher-student random fern according to an embodiment of the present invention.
  • The learning unit 400 may be configured to include a first learning module 410 for learning Teacher Random Ferns using the feature values extracted by the extraction unit 300, and a second learning module 420 for learning Student Random Ferns using the Teacher Random Ferns learned by the first learning module 410.
  • Since the system 10 for tracking multiple pedestrians using a teacher-student random fern has been sufficiently described in connection with the method for tracking multiple pedestrians using a teacher-student random fern, its detailed description shall be omitted.
  • As described above, according to the present invention, pedestrian feature values are extracted using tiny YOLO, a type of deep network, and Random Ferns are trained using the extracted feature values, so that real-time learning is possible and mistracking due to changes in pedestrian shape and size is minimized.
  • In addition, according to the present invention, the number of ferns is reduced to enable real-time tracking, so that multiple pedestrians can be tracked quickly and accurately in real time using the Teacher-Student Random Ferns.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method and system for tracking multiple pedestrians using a teacher-student random fern, in which feature values of pedestrians are extracted by means of tiny YOLO, a type of deep network, and a random fern is trained using the extracted feature values, so that real-time learning is possible and mistracking due to changes in the shapes and sizes of pedestrians can be minimized.
PCT/KR2019/012101 2018-10-26 2019-09-18 Method and system for tracking multiple pedestrians using a teacher-student random fern WO2020085653A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0129172 2018-10-26
KR1020180129172A KR102164950B1 (ko) 2018-10-26 2018-10-26 Method and system for tracking multiple pedestrians using a teacher-student random fern

Publications (1)

Publication Number Publication Date
WO2020085653A1 (fr) 2020-04-30

Family

ID=70330343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/012101 WO2020085653A1 (fr) 2018-10-26 2019-09-18 Procédé et système de suivi multi-piéton utilisant un fern aléatoire enseignant-élève

Country Status (2)

Country Link
KR (1) KR102164950B1 (fr)
WO (1) WO2020085653A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113158897A (zh) * 2021-04-21 2021-07-23 新疆大学 Pedestrian detection system based on an embedded YOLOv3 algorithm

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160132731A (ko) * 2015-05-11 2016-11-21 Apparatus and method for tracking pedestrians in thermal images using online random fern learning
KR20170028591A (ko) * 2015-09-04 2017-03-14 Apparatus and method for object recognition using a convolutional neural network
KR101771146B1 (ko) * 2017-03-22 2017-08-24 Method and apparatus for detecting pedestrians and vehicles based on a convolutional neural network using a stereo camera
JP2018060511A (ja) * 2016-10-06 2018-04-12 Simulation system, simulation program, and simulation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SON, SE JI: "Let's Practice YOLO: Real-Time Object Detection", 19 October 2016 (2016-10-19), pages 1-7 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668487A (zh) * 2020-12-29 2021-04-16 杭州晨安科技股份有限公司 Teacher tracking method based on the fusion of body overlap degree and human-body similarity
CN112668487B (zh) * 2020-12-29 2022-05-27 杭州晨安科技股份有限公司 Teacher tracking method based on the fusion of body overlap degree and human-body similarity
CN113392754A (zh) * 2021-06-11 2021-09-14 成都掌中全景信息技术有限公司 Method for reducing the pedestrian false-detection rate based on the YOLOv5 pedestrian detection algorithm

Also Published As

Publication number Publication date
KR20200052429A (ko) 2020-05-15
KR102164950B1 (ko) 2020-10-13

Similar Documents

Publication Publication Date Title
WO2020085653A1 (fr) Method and system for tracking multiple pedestrians using a teacher-student random fern
WO2020040391A1 (fr) Deep layer network-based combined system for pedestrian recognition and attribute extraction
WO2020105948A1 (fr) Image processing apparatus and control method therefor
WO2019031714A1 (fr) Method and apparatus for object recognition
WO2020085694A1 (fr) Image capture device and control method therefor
WO2019164251A1 (fr) Method for performing learning of a deep neural network and apparatus therefor
WO2022114731A1 (fr) Deep learning-based abnormal behavior detection system and detection method for detecting and recognizing abnormal behavior
WO2020138745A1 (fr) Image processing method, apparatus, electronic device and computer-readable storage medium
WO2019182269A1 (fr) Electronic device, image processing method of the electronic device, and computer-readable medium
WO2020184748A1 (fr) Artificial intelligence device and method for controlling an automatic stop system on the basis of traffic information
WO2020130747A1 (fr) Image processing apparatus and method for style transformation
WO2021006404A1 (fr) Artificial intelligence server
WO2022045425A1 (fr) Apparatus and method for detecting delivery means on the basis of inverse reinforcement learning
WO2019074316A1 (fr) Convolutional artificial neural network-based recognition system in which registration, search, and playback of images and videos are divided between and performed by a mobile device and a server
WO2022139111A1 (fr) Method and system for marine object recognition on the basis of hyperspectral data
WO2020241930A1 (fr) Location estimation method using multiple sensors and robot implementing same
WO2020262746A1 (fr) Artificial intelligence-based apparatus for recommending a laundry course, and control method therefor
WO2020241934A1 (fr) Position estimation method using multi-sensor synchronization and robot implementing same
WO2020184747A1 (fr) Artificial intelligence device and method for controlling an automatic stop system
WO2018117538A1 (fr) Lane information estimation method and electronic device
WO2020184746A1 (fr) Artificial intelligence apparatus for controlling an automatic stop system on the basis of driving information, and method therefor
WO2021206221A1 (fr) Artificial intelligence apparatus using a plurality of output layers, and method therefor
WO2021006482A1 (fr) Image generation apparatus and method
WO2020171605A1 (fr) Driving information providing method, and vehicle map providing server and method therefor
WO2018186625A1 (fr) Electronic device, warning message providing method therefor, and non-transitory computer-readable recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19876462

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19876462

Country of ref document: EP

Kind code of ref document: A1