WO2018190362A1 - Method and device for detecting pedestrian around vehicle - Google Patents

Method and device for detecting pedestrian around vehicle

Info

Publication number
WO2018190362A1
Authority
WO
WIPO (PCT)
Prior art keywords
rider
frame
vehicle
pedestrians
riders
Prior art date
Application number
PCT/JP2018/015194
Other languages
French (fr)
Japanese (ja)
Inventor
依若 戴
楊 張
孝一 照井
健 志磨
Original Assignee
日立オートモティブシステムズ株式会社 (Hitachi Automotive Systems, Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Automotive Systems, Ltd. (日立オートモティブシステムズ株式会社)
Priority to JP2019512544A priority Critical patent/JP6756908B2/en
Publication of WO2018190362A1 publication Critical patent/WO2018190362A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Definitions

  • the present invention relates to a method and apparatus for detecting pedestrians around a vehicle.
  • Patent Document 1 discloses a rider detection algorithm in which, using an on-board camera as a sensor, the human and the bicycle are first detected separately, and whether the person is a rider is determined from the spatial positional relationship between the two.
  • however, since Patent Document 1 must detect the circular wheel of a bicycle, it cannot be applied when the bicycle wheel is not visible, and it cannot determine whether a pedestrian is a rider.
  • the operating environment and conditions of in-vehicle sensors are very severe, and the collision avoidance time of a vehicle traveling at high speed is extremely short; therefore, an in-vehicle collision avoidance system is required to be highly real-time and to have a high processing speed.
  • in actual traffic situations there are many light vehicles and motor vehicles, as well as many pedestrians and stationary objects such as barricades, so a plurality of riders may need to be detected. However, algorithms that are computationally complex and expensive often cannot satisfy real-time requirements. Therefore, with the prior art it is difficult to detect a plurality of riders in real time.
  • the present invention therefore provides a method and apparatus for detecting pedestrians around a vehicle that can quickly distinguish a plurality of riders from a plurality of pedestrians and can detect a plurality of riders in real time.
  • an apparatus for detecting pedestrians around a vehicle includes: an acquisition unit that acquires image information around the vehicle from an imaging device of the vehicle and detects the current vehicle speed of the vehicle; an initial candidate frame parameter calculation unit that calculates, based on the image information, initial candidate frame parameters at each time point for each of a plurality of pedestrians around the vehicle, the parameters including an orientation that is the angle of the pedestrian with respect to the imaging device; an adjustment unit that adjusts the initial candidate frame parameters based on the orientation to acquire adjusted candidate frame parameters at each time point; and a classification unit that classifies the plurality of pedestrians into one or more riders and one or more pedestrians based on the adjusted candidate frame parameters.
  • the initial candidate frame parameters further include the frame centroid coordinates, the frame width w, and the frame height h
  • the adjusted candidate frame parameters include the frame centroid coordinates, the adjusted frame width w′, the frame height h, and the orientation θ
  • the classification unit sets the adjusted candidate frame parameter at each time point of each of the one or more riders as the rider detection frame parameter.
  • the classification unit uses the initial candidate frame parameters of each of the one or more pedestrians as pedestrian detection frame parameters at the respective time points of the one or more pedestrians
  • the risk coefficient setting unit sets a risk coefficient for each of the one or more pedestrians based on the pedestrian detection frame parameters, the state information of each pedestrian, and the orientation change frequency.
  • the risk coefficient setting unit acquires the relative speed and relative movement direction of each rider with respect to the vehicle based on the frame centroid coordinates of the rider detection frame parameters at the previous time point and at the current time point.
  • the risk coefficient setting unit determines whether each rider is a new rider; if the rider is a new rider, the risk coefficient is set to 1, and if the rider is not a new rider, the risk coefficient is set based on the frame centroid coordinates and orientation of the rider detection frame parameters at the current time point, the relative speed, the relative movement direction, the state information, and the orientation change frequency.
  • the state information is a normal state or an abnormal state
  • when the risk coefficient setting unit determines from the frame height of the rider detection frame parameters that the rider is a child, or determines from the image information that the distance between the rider and another rider is less than a predetermined safety distance or that the rider is interfered with by an external object, the state information is determined to be an abnormal state.
  • a method for detecting pedestrians around a vehicle includes a first step of acquiring image information around the vehicle from an imaging device of the vehicle and detecting the current vehicle speed of the vehicle, with the subsequent steps performed based on the image information.
  • a risk coefficient is set for each of the one or more riders based on the rider detection frame parameters, the state information of each rider, and the orientation change frequency.
  • FIG. 1 is a structural diagram of an apparatus for detecting pedestrians around a vehicle according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of a method for detecting pedestrians around a vehicle according to an embodiment of the present invention. FIG. 3(a) is a schematic diagram of the initial candidate frame at time t1 of one pedestrian among a plurality of pedestrians according to an embodiment of the present invention. FIG. 3(b) is a schematic diagram of the adjusted candidate frame of the pedestrian shown in FIG. 3(a). FIG. 4(a) is a schematic diagram of the initial candidate frame at time t1 of another pedestrian among the plurality of pedestrians. FIG. 4(b) is a schematic diagram of the adjusted candidate frame of the other pedestrian shown in FIG. 4(a). FIG. 5 is a schematic diagram showing the definition of the orientation.
  • FIG. 1 is a structural diagram of an apparatus 10 for detecting pedestrians around a vehicle according to an embodiment of the present invention.
  • the device 10 for detecting pedestrians around a vehicle includes an acquisition unit 11, an initial candidate frame parameter calculation unit 12, an adjustment unit 13, a classification unit 14, and a risk coefficient setting unit 15.
  • FIG. 2 is a flowchart of a method for detecting pedestrians around a vehicle according to an embodiment of the present invention.
  • the acquisition unit 11 acquires image information around the vehicle from the vehicle imaging device and detects the current vehicle speed of the vehicle.
  • the imaging device (not shown) is one or more camera systems attached to appropriate positions on the vehicle (the upper end of the windshield, the rear end of the vehicle tail, or both sides of the vehicle body), which collect and store image information from around the vehicle, such as the front, rear, and both sides.
  • a camera system includes an optical system and a camera.
  • the optical system may have a zoom function, an autofocus function, and the like, and a color CCD (charge coupled device) video camera may be used as the camera.
  • the initial candidate frame parameter calculation unit 12 calculates, based on the image information, initial candidate frame parameters at each time point for each of the plurality of pedestrians around the vehicle, including the orientation θ, which is the angle of the pedestrian (that is, of the pedestrian's torso) with respect to the imaging device.
  • FIG. 5 is a schematic diagram showing the definition of the direction ⁇ .
  • when the pedestrian faces right with respect to the imaging device, θ is 0; when the pedestrian faces the imaging device (front-facing), θ is π/2; when the pedestrian faces left, θ is π or -π; and when the pedestrian faces away from the imaging device, θ is -π/2.
  • other orientations are output according to the actual angle and normalized to [-π, π].
  • orientations falling within a certain angular region may be grouped into one class of the same angle, as shown in FIG. 5.
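The orientation definition and class division above can be sketched as follows. The mapping from angles to classes I-VIII is an inference from the class assignments mentioned later in the text (class IV for directly leftward, class VI for directly rearward, class VIII for directly rightward), not a formula given in the patent.

```python
import math

def normalize_angle(theta):
    """Wrap an arbitrary angle to the interval [-pi, pi]."""
    return math.atan2(math.sin(theta), math.cos(theta))

def orientation_class(theta):
    """Bin an orientation into one of eight 45-degree classes.

    Inferred layout: theta = 0 (facing right) -> class VIII (8),
    pi/2 (facing the camera) -> class II, +/-pi (facing left) -> class IV,
    -pi/2 (facing away) -> class VI, matching the class numbers the
    text assigns to the rearward and sideways orientations.
    """
    k = round(normalize_angle(theta) / (math.pi / 4)) % 8
    return 8 if k == 0 else k
```

Under this layout the "rider cannot see the vehicle" classes of condition 4 are simply classes 4 through 8.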
  • the initial candidate frame parameters further include frame centroid coordinates, frame width w, and frame height h.
  • FIG. 3A is a schematic diagram of the initial candidate frame F at the time point t1 of one pedestrian among a plurality of pedestrians according to an embodiment of the present invention.
  • the parameters include frame centroid coordinates (x t1 , y t1 , z t1 ), frame width w t1 , frame height h t1 , and orientation ⁇ t1 (not shown).
  • the frame centroid coordinates are expressed in the world coordinate system of the vehicle.
  • the coordinate origin is a position directly below the middle of the vehicle head.
  • an initial candidate frame including the frame centroid coordinates, the frame width, and the frame height is calculated by a feature extraction method (dense features: DPM (deformable part model), ACF (aggregate channel features), HOG (histogram of oriented gradients), dense edges, etc.; sparse features: size, shape, sparse edges, body parts, gait, texture, grayscale/edge symmetry, etc.), contour template matching, or the like.
  • the calculation may also be performed by a deep learning algorithm of machine learning.
  • the orientation of the pedestrian is analyzed by a deep learning algorithm or a normal machine learning algorithm.
  • different models based on a dedicated orientation detection network or on different histograms of oriented gradients can be used to divide the input pedestrian initial candidate frames into different orientations.
  • this type of method is called a cascade model.
  • alternatively, the frame centroid coordinates, the frame width, the frame height, and the orientation can be calculated together by the same deep learning network or a conventional machine learning model, that is, by a single-model algorithm.
  • in that case, the pedestrian orientation is trained and detected as part of the regression loss function in the fully connected layers at the end of the deep learning network.
  • the initial candidate frame parameter calculation unit 12 can identify all pedestrians from the image information by the above calculation.
  • here, the pedestrians (in the broad sense) include riders and/or pedestrians.
  • the adjustment unit 13 adjusts the initial candidate frame parameter based on the orientation, and acquires the adjusted candidate frame parameter at each time point.
  • FIG. 3(b) is a schematic diagram of the adjusted candidate frame of the pedestrian shown in FIG. 3(a).
  • the adjustment unit 13 adjusts the initial candidate frame parameters of the initial candidate frame F at time t1 of the pedestrian to acquire the adjusted candidate frame F′ at time t1.
  • the candidate frame parameters of the adjusted candidate frame F′ include the frame centroid coordinates (xt1, yt1, zt1), the adjusted frame width w′t1, the frame height ht1, and the orientation θt1 (not shown).
  • the adjustment unit 13 changes the frame width w t1 of the initial candidate frame F based on the frame height h t1 and the orientation ⁇ t1 to obtain the adjusted frame width w ′ t1 of the candidate frame F ′. Keep other parameters constant. In this way, the candidate frame parameters of the adjusted candidate frame at the time t1 of each pedestrian among the plurality of pedestrians can be acquired.
  • the size of the initial candidate frame is not changed, that is, the adjusted candidate frame parameter and the initial candidate frame parameter are the same.
  • the frame height of the initial candidate frame is kept constant, and the frame width is adjusted to be equal to the frame height.
  • the frame height of the initial candidate frame is held constant and the frame width is adjusted to half the frame height.
  • the change (adjustment) is based on the premise that the frame center-of-gravity coordinates are held constant. As shown in FIG. 3B, once the frame width changes, the coordinates of the four vertices of the adjusted candidate frame also change correspondingly.
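The width-adjustment variants above can be sketched as follows. The thresholds deciding when an orientation counts as side-on versus front/rear-facing are illustrative assumptions; the text only states that the width may be kept unchanged, set equal to the frame height, or set to half the frame height, with the centroid and height held constant.

```python
import math

def adjust_candidate_frame(x, y, z, w, h, theta):
    """Adjust an initial candidate frame (centroid, width, height, orientation).

    A hypothetical rule combining the variants described in the text:
    - side-on orientations (near 0 or +/-pi): widen the frame to the
      frame height, since a rider seen from the side appears wide;
    - front/rear orientations (near +/-pi/2): narrow the frame to half
      the frame height;
    - otherwise leave the frame unchanged.
    The centroid (x, y, z) and height h are always held constant.
    """
    # |sin(theta)| measures how far theta is from a side-on direction
    side_on = min(abs(math.sin(theta)), 1.0)
    if side_on < math.sin(math.pi / 8):        # within 22.5 deg of side-on
        w_adj = h
    elif side_on > math.sin(3 * math.pi / 8):  # within 22.5 deg of front/rear
        w_adj = h / 2
    else:
        w_adj = w
    return (x, y, z, w_adj, h, theta)
```

Once the width changes, the four vertex coordinates of the adjusted frame follow from the unchanged centroid, as the text notes.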
  • step S24 the classification unit 14 classifies the plurality of pedestrians into one or a plurality of riders and one or a plurality of pedestrians based on the adjusted candidate frame parameters, and each of the one or a plurality of riders. Obtain rider detection frame parameters at each time point.
  • the classification unit 14 calculates the candidate frame parameters after adjustment of each pedestrian by, for example, a deep learning classification algorithm, and classifies based on the calculation result. Since the specific calculation and classification method is the same as the conventional method, a redundant description will not be given.
  • FIG. 4(a) is a schematic diagram of the initial candidate frame G at time t1 of another pedestrian among the plurality of pedestrians according to the embodiment of the present invention, and FIG. 4(b) is a schematic diagram of the adjusted candidate frame G′ of the other pedestrian shown in FIG. 4(a).
  • the classification unit 14 classifies, for example, the pedestrian in FIG. 3(b) as a rider (hereinafter, "rider A") and the other pedestrian in FIG. 4(b) as a pedestrian (hereinafter, the other pedestrian in FIGS. 4(a) and 4(b) is referred to as "pedestrian B").
  • the classification unit 14 uses the adjusted candidate frame parameter at each time point of each of one or more riders as the rider detection frame parameter.
  • the classification unit 14 uses the candidate frame parameters of the adjusted candidate frame F′ at time t1 shown in FIG. 3(b) as the rider detection frame parameters of rider A; that is, the rider detection frame parameters include the frame centroid coordinates (xt1, yt1, zt1), the frame width w′t1, the frame height ht1, and the orientation θt1 (not shown).
  • the classification unit 14 sets the initial candidate frame parameter of each of one or more pedestrians as the pedestrian detection frame parameter at each time point of one or more pedestrians.
  • the classification unit 14 uses the initial candidate frame parameter of the initial candidate frame G at time t1 shown in FIG. 4A as the pedestrian detection frame parameter of the pedestrian.
  • for a pedestrian, the pedestrian detection frame parameters are the same as the initial candidate frame parameters; that is, the frame width of the pedestrian detection frame is smaller than that of the adjusted candidate frame, thereby narrowing the detection range for pedestrians.
  • the risk coefficient setting unit 15 sets a risk coefficient for each of the one or more riders based on the rider detection frame parameters, the state information of each rider, and the orientation change frequency.
  • the risk coefficient setting unit 15 acquires the relative speed and relative movement direction of each rider with respect to the vehicle based on the frame centroid coordinates of the rider detection frame parameters at the previous time point and at the current time point.
  • the previous time is t1, for example, and the current time is t2, for example.
  • the frame center-of-gravity coordinates of the rider detection frame parameter at t1 are, for example, (x t1 , y t1 , z t1 )
  • the frame centroid coordinates of the rider detection frame parameters at t2 are, for example, (xt2, yt2, zt2)
  • the relative speed v and the relative movement direction r of the rider A with respect to the vehicle are acquired from the amount of change in (x t2 , y t2 , z t2 ) and (x t1 , y t1 , z t1 ).
  • the relative speed and relative movement direction of each of the other riders can be obtained by a similar method.
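A minimal sketch of deriving the relative speed and relative movement direction from the centroid change between two time points. The finite-difference form and the ground-plane convention for the direction are assumptions, since the patent does not give the exact computation.

```python
import math

def relative_motion(c_prev, c_curr, dt):
    """Relative speed v and movement direction r of a rider with respect
    to the vehicle, from frame centroid coordinates (x, y, z) in the
    vehicle world coordinate system at two consecutive time points.

    v is the magnitude of the centroid displacement per unit time;
    r is the direction of that motion in the x-y ground plane (radians).
    """
    dx = c_curr[0] - c_prev[0]
    dy = c_curr[1] - c_prev[1]
    dz = c_curr[2] - c_prev[2]
    v = math.sqrt(dx * dx + dy * dy + dz * dz) / dt
    r = math.atan2(dy, dx)
    return v, r
```

The same computation applied to each tracked rider's centroid pair yields the per-rider relative speeds and directions mentioned in the text.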
  • the risk coefficient setting unit 15 determines whether each rider is a new rider; if the rider is a new rider, the risk coefficient is set to 1, and if not, the risk coefficient is set based on the frame centroid coordinates and orientation of the rider detection frame parameters at the current time point, the relative speed, the relative movement direction, the state information, and the orientation change frequency.
  • the risk coefficient setting unit 15 first determines whether the rider is a new rider; specifically, it determines whether rider detection frame parameters exist for rider A at the previous time point t1, and if they do not exist, rider A is a new rider, that is, a rider that newly appeared at the current time t2, so the risk coefficient W is set to 1 and the next rider is evaluated.
  • when it is determined that rider A is not a new rider, the risk coefficient W is initialized to 0, and the risk coefficient is set for rider A based on the following conditions 1 to 5, using the frame centroid coordinates and orientation of the rider detection frame parameters at the current time t2, the relative speed, the relative movement direction, the state information, and the orientation change frequency.
  • Condition 1 to Condition 5 will be described in detail.
  • Condition 1: it is determined whether the distance obtained from the frame centroid coordinates of the rider detection frame parameters at the current time t2 is less than the alarm safety distance dw, which is calculated from equation (1) below.
  • v is the relative speed of the rider A
  • t f is the braking time required for the vehicle
  • t d is the response time of the vehicle driver
  • μ is the coefficient of friction of the road on which the vehicle is located
  • g is the acceleration of gravity.
  • vh is the current vehicle speed detected by the acquisition unit 11 in step S21.
  • the value of the friction coefficient ⁇ of the road can be determined based on the photographed image information.
  • the coordinate origin is a position directly below the middle of the vehicle head, and in this case, the distance between the frame center of gravity coordinates and the coordinate origin is the distance between the rider and the middle position of the vehicle head.
  • since the frame centroid coordinates of the rider detection frame parameters of rider A at the current time t2 are (xt2, yt2, zt2), the distance D between rider A and the middle position of the vehicle head is obtained from them.
  • it is then determined whether the distance D is smaller than the alarm safety distance dw.
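Condition 1 can be sketched as follows. Equation (1) itself is not reproduced in this text, so the alarm safety distance below uses a standard stopping-distance form (reaction distance plus braking distance) as a stand-in; treat it as an assumption, not the patent's actual equation.

```python
import math

G = 9.8  # gravitational acceleration [m/s^2]

def alarm_safety_distance(v, t_f, t_d, mu, g=G):
    """Alarm safety distance d_w (assumed form, standing in for Eq. (1)):
    distance covered during the braking time t_f and driver response
    time t_d, plus the braking distance v^2 / (2 * mu * g)."""
    return v * (t_f + t_d) + v * v / (2.0 * mu * g)

def distance_to_vehicle(centroid):
    """Distance D between a rider's frame centroid and the coordinate
    origin (directly below the middle of the vehicle head)."""
    x, y, z = centroid
    return math.sqrt(x * x + y * y + z * z)

def condition1_triggered(centroid, v, t_f, t_d, mu):
    """Condition 1: the rider is closer than the alarm safety distance."""
    return distance_to_vehicle(centroid) < alarm_safety_distance(v, t_f, t_d, mu)
```

The same comparison structure applies to condition 3's rider-to-rider check, with the rider's braking time t′f, response time t′d, and relative speed k substituted into the distance formula.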
  • Condition 2: it is determined from the relative movement direction of rider A whether rider A is approaching the vehicle; here, for example, this is determined based on the calculated relative movement direction r.
  • Condition 3: it is determined whether the state information of rider A indicates an abnormal state.
  • the state information is either a normal state or an abnormal state; when the risk coefficient setting unit 15 determines from the frame height of the rider detection frame parameters that the rider is a child, or determines from the image information that the distance between the rider and another rider is less than a predetermined safety distance or that the rider is interfered with by an external object, the state information is determined to be an abnormal state.
  • since the frame height of the rider detection frame parameters of rider A at the current time t2 is ht2, whether rider A is a child is determined from ht2, and if rider A is a child, the state information is determined to be an abnormal state.
  • the distance between rider A and another rider, for example rider C, is the distance DAC between the frame centroid coordinates of rider A and the frame centroid coordinates of rider C.
  • the predetermined safety distance d s is obtained from the following equation (3), for example.
  • k is the relative speed between rider A and rider C
  • t′f is the braking time required for the rider
  • t d ′ is the rider's response time
  • μ is the coefficient of friction of the road on which rider A is located (obtained, for example, from equation (2) above)
  • g is the gravitational acceleration.
  • k can be acquired from the amount of change between the frame center of gravity coordinates of the rider A and the frame center of gravity coordinates of the rider C.
  • the distance DAC is compared with the predetermined safety distance ds to determine whether it is less than ds; if DAC is smaller than ds, the state is determined to be abnormal, and if it is equal to or greater than ds, the state is determined to be normal.
  • it is also determined whether rider A is interfered with by an external object, for example, whether an unknown object appears within a predetermined angular range starting from the orientation θ of rider A; if one appears, the state is determined to be abnormal, and if not, normal.
  • Condition 4: it is determined whether rider A can see the vehicle, based on the orientation θ of rider A at the current time t2.
  • if the orientation θ of the rider is directly rearward, left-rearward, right-rearward, directly leftward, or directly rightward, that is, if θ belongs to class VI, V, VII, IV, or VIII shown in FIG. 5, it is determined that the rider cannot see the vehicle; otherwise, it is determined that the vehicle is visible to the rider.
  • Condition 5: it is determined whether the orientation change frequency is greater than a predetermined threshold.
  • first, the orientation change frequency of rider A is calculated.
  • the predetermined threshold is obtained, for example, from statistical data collected from a large number of riders; it is then determined whether the orientation change frequency is greater than the threshold.
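Condition 5 can be sketched as below. The patent does not define how the orientation change frequency is computed, so counting per-second orientation changes larger than a minimum angle over a window of samples is an assumption.

```python
import math

def orientation_change_frequency(thetas, dt, min_change=math.pi / 8):
    """Orientation change frequency over a window of orientation samples.

    Assumed metric: count changes larger than `min_change` between
    consecutive samples spaced dt seconds apart, divided by the window
    duration (changes per second).
    """
    changes = 0
    for a, b in zip(thetas, thetas[1:]):
        # smallest signed angular difference between consecutive samples
        diff = math.atan2(math.sin(b - a), math.cos(b - a))
        if abs(diff) > min_change:
            changes += 1
    duration = dt * (len(thetas) - 1)
    return changes / duration if duration > 0 else 0.0

def condition5_triggered(thetas, dt, threshold):
    """Condition 5: the orientation change frequency exceeds the threshold."""
    return orientation_change_frequency(thetas, dt) > threshold
```

A rider whose orientation flips frequently (e.g. weaving or looking around) would produce a high frequency and trigger the condition.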
  • based on condition 4 or 5, when it is determined that the rider cannot see the vehicle or that the orientation change frequency is greater than the predetermined threshold, 1 is further added to the risk coefficient determined based on condition 3.
  • each rider's risk factor at each point in time can be obtained, each rider can be tracked according to the risk factor, and an alarm can be issued to the vehicle if necessary.
  • the risk coefficient setting unit 15 may further set a risk coefficient for each of the one or more pedestrians based on the pedestrian detection frame parameters, the state information of each pedestrian, and the orientation change frequency.
  • for example, a risk coefficient can be set for pedestrian B by the above-described risk coefficient setting method for riders.
  • according to the present embodiment, a plurality of riders and a plurality of pedestrians among the pedestrians can be quickly distinguished, and a plurality of riders can be detected in real time.
  • the present invention is not limited to the above-described embodiments, and includes various modifications.
  • the above-described embodiments have been described in detail for easy understanding of the present invention, and are not necessarily limited to those having all the configurations described.

Abstract

Provided are a method and a device for detecting pedestrians around a vehicle, the method and the device being capable of quickly discriminating, among pedestrians, a plurality of riders from a plurality of pedestrians, and detecting a plurality of riders in real time. A device for detecting pedestrians around a vehicle comprises: an acquisition unit for acquiring image information around the vehicle from an imaging device in the vehicle to detect a current vehicle speed of the vehicle; an initial candidate frame parameter calculation unit for calculating, on the basis of the image information, initial candidate frame parameters of pedestrians around the vehicle, the parameters each including an orientation that is the angle of the pedestrian with respect to the imaging device; an adjustment unit for adjusting the initial candidate frame parameters on the basis of the orientations to acquire adjusted candidate frame parameters; a classification unit for classifying a plurality of pedestrians into riders and pedestrians on the basis of the adjusted candidate frame parameters to acquire rider detection frame parameters of the riders; and a danger coefficient setting unit for setting a danger coefficient for each rider on the basis of the rider detection frame parameter, state information on each rider, and change frequency of the orientation.

Description

Method and apparatus for detecting pedestrians around a vehicle
The present invention relates to a method and apparatus for detecting pedestrians around a vehicle.
The development of the automobile has brought great convenience to people, but the traffic accidents it causes have long been a serious problem. In addition to accidents caused by drivers' own rule violations and bad habits, pedestrian traffic violations are also very serious on most roads in China. In particular, it is not uncommon for riders (of light vehicles such as motorcycles, mopeds, electric bicycles, bicycles, and tricycles) riding throughout cities to commit traffic violations such as occupying automobile lanes, ignoring traffic signal regulations, crossing roads at will, or riding against traffic. In China, Article 58 of the Road Traffic Safety Law stipulates that electric wheelchairs for the physically disabled and electric bicycles must not exceed 15 km/h when traveling on light-vehicle lanes; in practice, however, the actual speed of an electric bicycle is generally around 40-50 km/h, far exceeding the legal maximum. In 2016, China had approximately 390 million light vehicles, of which approximately 220 million were electric bicycles. With such large numbers, the hidden danger that light vehicles pose to road traffic cannot be ignored.
At the same time, with the rapid development of sensor and control technology, active safety technologies that prevent accidents are attracting increasing attention. Standardization of collision avoidance related to automobile safety is also progressing worldwide; from 2018, the European New Car Assessment Programme (Euro NCAP) is expected to include the ability to avoid collisions with bicycles in its vehicle evaluations. Continental Corporation has used radar and high-precision global positioning system sensors to track the synchronized movement of cars and bicycles, monitoring bicycles and braking automatically. Volvo, a Swedish sporting goods company, and Ericsson have jointly developed a two-way communication system between drivers and riders that can help prevent collisions, and Jaguar Land Rover is also developing bicycle collision warning technology.
US Patent No. 9,087,263 B2
In rider collision warning technology, sensor-based rider detection and tracking algorithms play an important role. The patent "Vision Based Pedestrian and Cyclist Detection Method" (US Patent No. 9,087,263 B2, published July 21, 2015, filed by 台湾中山科学研究院 (National Chung-Shan Institute of Science and Technology, Taiwan)) (Patent Document 1) discloses a rider detection algorithm that, using an on-board camera as a sensor, first detects the human and the bicycle separately and then determines whether the person is a rider from the spatial positional relationship between the two. However, since Patent Document 1 must detect the circular wheel of a bicycle, it cannot be applied when the bicycle wheel is not visible, and it cannot determine whether a pedestrian is a rider.
Furthermore, the operating environment and conditions of in-vehicle sensors are very severe, and the collision avoidance time of a vehicle traveling at high speed is extremely short, so an in-vehicle collision avoidance system must be highly real-time and have a high processing speed. In actual traffic situations there are many light vehicles and motor vehicles, as well as many pedestrians and stationary objects such as barricades, so cases in which a plurality of riders must be detected can occur. However, when detecting a plurality of riders, algorithms that are computationally complex and expensive often cannot satisfy real-time requirements. Therefore, with the prior art it is difficult to detect a plurality of riders in real time.
 The present invention therefore provides a method and a device for detecting pedestrians around a vehicle that can quickly distinguish multiple riders from multiple pedestrians and detect multiple riders in real time.
 To solve the above problem, a device for detecting pedestrians around a vehicle according to the present invention comprises: an acquisition unit that acquires image information around the vehicle from an imaging device of the vehicle and detects the current speed of the vehicle; an initial candidate frame parameter calculation unit that calculates, from the image information, initial candidate frame parameters for each of a plurality of pedestrians around the vehicle at each point in time, including an orientation, which is the angle of the pedestrian with respect to the imaging device; an adjustment unit that adjusts the initial candidate frame parameters based on the orientation to obtain adjusted candidate frame parameters at each point in time; a classification unit that classifies the plurality of pedestrians into one or more riders and one or more pedestrians based on the adjusted candidate frame parameters and obtains rider detection frame parameters for each of the one or more riders at each point in time; and a risk factor setting unit that sets a risk factor for each of the one or more riders based on the rider detection frame parameters, the state information of each rider, and the change frequency of the orientation.
 The initial candidate frame parameters further include frame centroid coordinates, a frame width w, and a frame height h, and the adjusted candidate frame parameters include the frame centroid coordinates, an adjusted frame width w', the frame height h, and the orientation α, where

 w' = w    if α = π/2 or −π/2 (facing front or rear)
 w' = h    if α = 0, π, or −π (facing left or right)
 w' = h/2  otherwise (diagonal orientations).

 The classification unit takes the adjusted candidate frame parameters of each of the one or more riders at each point in time as the rider detection frame parameters.
 In this way, riders can be distinguished from pedestrians using the adjusted candidate frame parameters.
 The classification unit takes the initial candidate frame parameters of each of the one or more pedestrians as the pedestrian detection frame parameters of that pedestrian at each point in time, and the risk factor setting unit sets a risk factor for each of the one or more pedestrians based on the pedestrian detection frame parameters, the state information of each pedestrian, and the change frequency of the orientation.
 For each of the one or more riders, the risk factor setting unit obtains the relative speed and relative movement direction of the rider with respect to the vehicle from the frame centroid coordinates of the rider detection frame parameters at the previous point in time and at the current point in time.
 The risk factor setting unit determines, for each rider, whether the rider is a new rider. If the rider is new, the risk factor is set to 1; if not, the risk factor is set based on the frame centroid coordinates and orientation of the current rider detection frame parameters, the relative speed, the relative movement direction, the state information, and the change frequency of the orientation.
 The state information is either a normal state or an abnormal state. The risk factor setting unit determines the state information to be abnormal when it judges from the frame height of the rider detection frame parameters that the rider is a child, when it judges from the image information that the distance between the rider and another rider is smaller than a predetermined safe distance, or when it judges from the image information that the rider is being interfered with by an external object.
 A method for detecting pedestrians around a vehicle according to the present invention includes: a first step of acquiring image information around the vehicle from an imaging device of the vehicle and detecting the current speed of the vehicle; a second step of calculating, from the image information, initial candidate frame parameters for each of a plurality of pedestrians around the vehicle at each point in time, including an orientation, which is the angle of the pedestrian with respect to the imaging device; a third step of adjusting the initial candidate frame parameters based on the orientation to obtain adjusted candidate frame parameters at each point in time; a fourth step of classifying the plurality of pedestrians into one or more riders and one or more pedestrians based on the adjusted candidate frame parameters and obtaining rider detection frame parameters for each of the one or more riders at each point in time; and a fifth step of setting a risk factor for each of the one or more riders based on the rider detection frame parameters, the state information of each rider, and the change frequency of the orientation.
 According to the present invention, it is possible to provide a method and a device for detecting pedestrians around a vehicle that can quickly distinguish multiple riders from multiple pedestrians and detect multiple riders in real time.

 Problems, configurations, and effects other than those described above will become clear from the following description of the embodiments.
FIG. 1 is a structural diagram of a device for detecting pedestrians around a vehicle according to an embodiment of the present invention. FIG. 2 is a flowchart of a method for detecting pedestrians around a vehicle according to an embodiment of the present invention. FIG. 3(a) is a schematic diagram of the initial candidate frame at time t1 of one pedestrian among a plurality of pedestrians according to an embodiment of the present invention. FIG. 3(b) is a schematic diagram of the adjusted candidate frame of the pedestrian shown in FIG. 3(a). FIG. 4(a) is a schematic diagram of the initial candidate frame at time t1 of another pedestrian among the plurality of pedestrians according to an embodiment of the present invention. FIG. 4(b) is a schematic diagram of the adjusted candidate frame of the other pedestrian shown in FIG. 4(a). FIG. 5 is a schematic diagram showing the definition of the orientation.
 Embodiments of the present invention will now be described with reference to the drawings.
 FIG. 1 is a structural diagram of a device 10 for detecting pedestrians around a vehicle according to an embodiment of the present invention. As shown in FIG. 1, the device 10 comprises an acquisition unit 11, an initial candidate frame parameter calculation unit 12, an adjustment unit 13, a classification unit 14, and a risk factor setting unit 15.
 FIG. 2 is a flowchart of a method for detecting pedestrians around a vehicle according to an embodiment of the present invention. As shown in FIG. 2, in step S21 the acquisition unit 11 acquires image information around the vehicle from the imaging device of the vehicle and detects the current vehicle speed. The imaging device (not shown) is one or more camera systems mounted at suitable positions on the vehicle (for example, the top of the windshield, the rear end, or both sides of the body), and collects and stores image information from the front, rear, and both sides of the vehicle. A camera system typically consists of an optical system and a camera; the optical system may have zoom and autofocus functions, and a color CCD (charge-coupled device) video camera may be used as the camera.
 In step S22, the initial candidate frame parameter calculation unit 12 calculates, from the image information, initial candidate frame parameters for each of the plurality of pedestrians around the vehicle at each point in time, including the orientation α, which is the angle of the pedestrian's torso with respect to the imaging device.
 FIG. 5 is a schematic diagram showing the definition of the orientation α. When the pedestrian faces directly to the right with respect to the imaging device, α is 0; when the pedestrian faces directly forward, α is π/2; when the pedestrian faces directly to the left, α is π or −π; and when the pedestrian faces directly backward, α is −π/2. Other orientations are output according to their actual values, normalized to [−π, π]. Alternatively, as in FIG. 5, orientations falling within a given sector may be grouped into one class with a single representative angle. As shown in FIG. 5, a pedestrian facing front-right with respect to the imaging device is classified into class I with α = π/4; facing directly forward, class II with α = π/2; facing front-left, class III with α = 3π/4; facing directly left, class IV with α = π or −π; facing rear-left, class V with α = −3π/4; facing directly backward, class VI with α = −π/2; facing rear-right, class VII with α = −π/4; and facing directly right, class VIII with α = 0.
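The class assignment above can be sketched as a nearest-representative-angle lookup. A minimal Python sketch follows; the function name and the rule of snapping to the nearest representative angle are assumptions, since FIG. 5 gives only the representative angle of each class, not exact sector boundaries.

```python
import math

# Representative angles of the eight orientation classes of FIG. 5.
CLASS_ANGLES = {
    "I":    math.pi / 4,       # front-right
    "II":   math.pi / 2,       # directly forward
    "III":  3 * math.pi / 4,   # front-left
    "IV":   math.pi,           # directly left (also -pi)
    "V":    -3 * math.pi / 4,  # rear-left
    "VI":   -math.pi / 2,      # directly backward
    "VII":  -math.pi / 4,      # rear-right
    "VIII": 0.0,               # directly right
}

def classify_orientation(alpha: float) -> str:
    """Snap a normalized orientation alpha in [-pi, pi] to the nearest class."""
    def angular_distance(a: float, b: float) -> float:
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)  # wrap so that pi and -pi coincide
    return min(CLASS_ANGLES, key=lambda c: angular_distance(alpha, CLASS_ANGLES[c]))
```

The wrap-around in `angular_distance` makes α = π and α = −π both fall into class IV, matching the definition above.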
 The initial candidate frame parameters further include the frame centroid coordinates, the frame width w, and the frame height h. FIG. 3(a) is a schematic diagram of the initial candidate frame F at time t1 of one pedestrian among the plurality of pedestrians according to an embodiment of the present invention; the initial candidate frame parameters of frame F include the frame centroid coordinates (x_t1, y_t1, z_t1), the frame width w_t1, the frame height h_t1, and the orientation α_t1 (not shown). The frame centroid coordinates are expressed in the world coordinate system of the vehicle; for example, the coordinate origin is the point directly below the center of the vehicle's front end.
 Here, for example, feature extraction methods (dense features: DPM deformable part models, ACF aggregate channel features, HOG histograms of oriented gradients, dense edges, etc.; sparse features: size, shape, sparse edges, body parts, gait, texture, grayscale/edge symmetry, etc.) or contour template matching may be used. First, an initial candidate frame including the frame centroid coordinates, frame width, and frame height is calculated; the feature extraction method may be, for example, a deep learning algorithm, a type of machine learning. Next, the pedestrian's orientation is analyzed with a deep learning algorithm or a conventional machine learning algorithm. For example, a dedicated orientation detection network, or separate models based on different histograms of oriented gradients, can be used to sort the input initial candidate frames of pedestrians into different orientations. This type of method is called a cascade model.
 Alternatively, the frame centroid coordinates, frame width, frame height, and orientation can all be calculated at once by a single deep learning network or conventional machine learning model, that is, by a single-model algorithm. In the case of a deep learning network, the pedestrian orientation is trained and detected as part of the regression loss function in the fully connected layers at the end of the network.
 As can be seen from the above, in step S22 of FIG. 2 the initial candidate frame parameter calculation unit 12 can identify all pedestrians in the image information through the above calculation. In this embodiment, the term pedestrian covers riders and/or ordinary pedestrians.
 In step S23 of FIG. 2, the adjustment unit 13 adjusts the initial candidate frame parameters based on the orientation to obtain the adjusted candidate frame parameters at each point in time.
 FIG. 3(b) is a schematic diagram of the adjusted candidate frame of the pedestrian shown in FIG. 3(a). As shown in FIG. 3(b), the adjustment unit 13 adjusts the initial candidate frame parameters of the initial candidate frame F at time t1 to obtain the adjusted candidate frame F' at time t1. For example, the candidate frame parameters of the adjusted candidate frame F' include the frame centroid coordinates (x_t1, y_t1, z_t1), the frame width w'_t1, the frame height h_t1, and the orientation α_t1 (not shown), where

 w'_t1 = w_t1      if α_t1 = π/2 or −π/2 (facing front or rear)
 w'_t1 = h_t1      if α_t1 = 0, π, or −π (facing left or right)
 w'_t1 = h_t1 / 2  otherwise (diagonal orientations).

 That is, the adjustment unit 13 changes the frame width w_t1 of the initial candidate frame F based on the frame height h_t1 and the orientation α_t1 to obtain the frame width w'_t1 of the adjusted candidate frame F', while keeping the other parameters constant.

 In this way, the candidate frame parameters of the adjusted candidate frame at time t1 can be obtained for each of the plurality of pedestrians.
 When the pedestrian faces forward or backward with respect to the imaging device, the size of the initial candidate frame is not changed; that is, the adjusted candidate frame parameters are the same as the initial candidate frame parameters. When the pedestrian faces left or right, the frame height of the initial candidate frame is kept constant and the frame width is adjusted to equal the frame height. When the pedestrian faces rear-left, rear-right, front-left, or front-right, the frame height is kept constant and the frame width is adjusted to half the frame height. This adjustment assumes that the frame centroid coordinates are held constant. Also, as shown in FIG. 3(b), once the frame width changes, the coordinates of the four vertices of the adjusted candidate frame change correspondingly.
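The width-adjustment rules above can be sketched as follows; the numeric tolerance `eps` used to match the discrete orientations is an assumption not stated in the text.

```python
import math

def adjust_frame_width(w: float, h: float, alpha: float) -> float:
    """Adjust the candidate-frame width from the pedestrian orientation alpha.

    Front/rear facing: width unchanged; directly left/right: width set equal
    to the frame height; diagonal orientations: width set to half the height.
    The centroid and the frame height are kept constant.
    """
    eps = 1e-6  # assumed tolerance for matching the discrete orientations
    if abs(abs(alpha) - math.pi / 2) < eps:                   # front or rear
        return w
    if abs(alpha) < eps or abs(abs(alpha) - math.pi) < eps:   # left or right
        return h
    return h / 2                                              # diagonal
```

For a side-on rider with w = 0.6 m and h = 1.8 m, the adjusted width becomes 1.8 m, widening the frame to cover the bicycle.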
 In step S24, the classification unit 14 classifies the plurality of pedestrians into one or more riders and one or more pedestrians based on the adjusted candidate frame parameters, and obtains rider detection frame parameters for each of the one or more riders at each point in time. Here, the classification unit 14 evaluates the adjusted candidate frame parameters of each pedestrian with, for example, a deep learning classification algorithm and classifies the pedestrian based on the result. Since the specific calculation and classification method is conventional, a duplicated explanation is omitted.
 FIG. 4(a) is a schematic diagram of the initial candidate frame G at time t1 of another pedestrian among the plurality of pedestrians according to the embodiment of the present invention, and FIG. 4(b) is a schematic diagram of the adjusted candidate frame G' of the pedestrian shown in FIG. 4(a). In step S24, the classification unit 14 classifies, for example, the pedestrian in FIG. 3(b) as a rider (hereinafter "rider A") and the other pedestrian in FIG. 4(b) as a pedestrian (hereinafter, the other pedestrian in FIGS. 4(a) and 4(b) is referred to as "pedestrian B").
 The classification unit 14 takes the adjusted candidate frame parameters of each of the one or more riders at each point in time as the rider detection frame parameters. In this example, the classification unit 14 takes the candidate frame parameters of the adjusted candidate frame F' at time t1 shown in FIG. 3(b) as the rider detection frame parameters of rider A; that is, the rider detection frame parameters include the frame centroid coordinates (x_t1, y_t1, z_t1), the frame width w'_t1, the frame height h_t1, and the orientation α_t1 (not shown).
 Furthermore, the classification unit 14 takes the initial candidate frame parameters of each of the one or more pedestrians as the pedestrian detection frame parameters of that pedestrian at each point in time. In this example, the classification unit 14 takes the initial candidate frame parameters of the initial candidate frame G at time t1 shown in FIG. 4(a) as the pedestrian detection frame parameters of pedestrian B.
 Here, as shown in FIGS. 4(a) and 4(b), for pedestrian B the pedestrian detection frame parameters are the same as the initial candidate frame parameters; that is, the frame width of the pedestrian detection frame is smaller than that of the adjusted candidate frame, which narrows the detection range for pedestrians.
 Next, in step S25, the risk factor setting unit 15 sets a risk factor for each of the one or more riders based on the rider detection frame parameters, the state information of each rider, and the change frequency of the orientation.

 Here, for each of the one or more riders, the risk factor setting unit 15 obtains the relative speed and relative movement direction of the rider with respect to the vehicle from the frame centroid coordinates of the rider detection frame parameters at the previous point in time and at the current point in time.
 For example, taking the rider in FIG. 3(b), let the previous time be t1 and the current time be t2. For rider A, the frame centroid coordinates of the rider detection frame parameters are, for example, (x_t1, y_t1, z_t1) at t1 and (x_t2, y_t2, z_t2) at t2, and the relative speed v and relative movement direction r of rider A with respect to the vehicle are obtained from the change from (x_t1, y_t1, z_t1) to (x_t2, y_t2, z_t2). The relative speed and relative movement direction of each of the other riders can be obtained in the same way.
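The derivation of v and r from two centroid observations can be sketched as below. The axis convention (x lateral, z longitudinal in the vehicle world frame) and the use of `atan2` for the heading are assumptions; the text only states that v and r are obtained from the change in the centroid coordinates.

```python
import math

def relative_motion(c1, c2, dt):
    """Relative speed v and heading r of a rider with respect to the vehicle,
    from the detection-frame centroids c1 = (x, y, z) at time t1 and c2 at t2,
    separated by dt seconds.

    Assumes x is lateral and z longitudinal in the vehicle world frame whose
    origin is below the center of the vehicle front end (an assumption).
    """
    dx, dy, dz = (c2[i] - c1[i] for i in range(3))
    v = math.sqrt(dx * dx + dy * dy + dz * dz) / dt  # relative speed
    r = math.atan2(dx, dz)                           # heading in the x-z plane
    return v, r
```

A centroid moving from (0, 0, 10) to (0, 0, 7) in one second gives v = 3 m/s with r = π, i.e. motion straight toward the vehicle under the assumed convention.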
 The risk factor setting unit 15 determines, for each rider, whether the rider is a new rider. If the rider is new, the risk factor is set to 1; if not, the risk factor is set based on the frame centroid coordinates and orientation of the current rider detection frame parameters, the relative speed, the relative movement direction, the state information, and the change frequency of the orientation.
 For example, taking rider A in FIG. 3(b), the risk factor setting unit 15 first determines whether rider A is a new rider. Specifically, it determines whether rider detection frame parameters exist for rider A at the previous time t1. If they do not exist, rider A is judged to be a new rider, that is, a rider that newly appeared at the current time t2; the risk factor W is set to 1, and the next rider is processed. If they do exist, rider A is judged not to be a new rider, the risk factor W is initialized to 0, and the risk factor is then set based on the frame centroid coordinates and orientation of the rider detection frame parameters at the current time t2, the relative speed, the relative movement direction, the state information, and the change frequency of the orientation.
 In this example, when rider A is judged not to be a new rider, the risk factor W is initialized to 0 and a risk factor is set for rider A based on conditions 1 to 5 below. Conditions 1 to 5 are now described in detail.
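The new-rider branch described above can be sketched as follows; representing the per-rider history as a dict keyed by a rider id is an assumption about bookkeeping, not part of the disclosure.

```python
def init_risk_factor(rider_id, prev_frames):
    """Risk-factor initialization per rider.

    A rider with no detection-frame parameters at the previous time step is
    treated as newly appeared and gets W = 1; otherwise W starts at 0 and is
    subsequently raised by evaluating conditions 1-5. `prev_frames` is a
    hypothetical dict mapping rider ids to their previous frame parameters.
    """
    if rider_id not in prev_frames:
        return 1  # new rider: fixed risk factor
    return 0      # known rider: initialize, then apply conditions 1-5
```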
 Condition 1: determine whether the distance given by the frame centroid coordinates of the rider detection frame parameters at the current time t2 is smaller than the warning safe distance d_w calculated from equation (1).

 [Equation (1): the warning safe distance d_w, defined in terms of the variables below.]

 In equation (1), v is the relative speed of rider A, t_f is the braking time required by the vehicle, t_d is the response time of the vehicle driver, μ is the friction coefficient of the road on which the vehicle is located, g is the gravitational acceleration, and ν_h is the current vehicle speed detected by the acquisition unit 11 in step S21.
 Here, the value of the friction coefficient μ of the road can be determined from the captured image information, for example as shown in equation (2).

 [Equation (2): the road friction coefficient μ, determined from the captured image information.]
 As described above, the coordinate origin is the point directly below the center of the vehicle's front end; in this case, the distance between the frame centroid coordinates and the coordinate origin is the distance between the rider and the center of the vehicle's front end. The frame centroid coordinates of rider A's rider detection frame parameters at the current time t2 are (x_t2, y_t2, z_t2), from which the distance D between rider A and the center of the front end is obtained. The distance D is compared with the warning safe distance d_w calculated above to determine whether D is smaller than d_w.
 Condition 2: determine from the relative movement direction of rider A whether rider A is approaching the vehicle. Here, for example, whether rider A is approaching the vehicle is judged from the relative movement direction r calculated above.
 Condition 3: whether the state information of rider A indicates an abnormal state.

 The state information is either a normal state or an abnormal state. The risk factor setting unit 15 determines the state information to be abnormal when it judges from the frame height of the rider detection frame parameters that the rider is a child, when it judges from the image information that the distance between the rider and another rider is smaller than a predetermined safe distance, or when it judges from the image information that the rider is being interfered with by an external object.
 For example, if the frame height of rider A's rider detection frame parameters at the current time t2 is h_t2, it is judged from h_t2 whether rider A is a child, and if so, the state is judged to be abnormal.
 For example, at the current time t2, the distance between rider A and another rider, for example rider C, is the distance D_AC between the frame centroid coordinates of rider A and those of rider C. The predetermined safe distance d_s is obtained, for example, from equation (3).

 [Equation (3): the predetermined safe distance d_s, defined in terms of the variables below.]
 In equation (3), k is the relative speed between rider A and rider C, t'_f is the time required for the rider to brake, t'_d is the rider's response time, μ is the friction coefficient of the road on which rider A is located (obtained, for example, from equation (2)), and g is the gravitational acceleration. Here, k can be obtained, for example, from the change between the frame centroid coordinates of rider A and those of rider C.
 Next, the distance D_AC is compared with the predetermined safe distance d_s to determine whether D_AC is smaller than d_s. If it is smaller than d_s, the state is determined to be abnormal; if it is equal to or greater than d_s, the state is determined to be normal.
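As a rough illustration of this comparison (not the patent's implementation: equation (3) is rendered only as an image above, so the formula below, reaction distance plus braking distance, is an assumed standard form using the quantities k, t'_f, t'_d, μ, and g defined in the text):

```python
import math

def safe_distance(k, t_brake, t_react, mu, g=9.81):
    """Hypothetical form of equation (3): reaction distance plus braking
    distance at relative speed k (m/s). Assumed, not taken from the patent."""
    return k * (t_react + t_brake) + k ** 2 / (2 * mu * g)

def is_abnormal_distance(centroid_a, centroid_c, k, t_brake, t_react, mu):
    """Compare the centroid distance D_AC of riders A and C with the
    predetermined safe distance d_s; True means abnormal state."""
    d_ac = math.hypot(centroid_a[0] - centroid_c[0],
                      centroid_a[1] - centroid_c[1])
    return d_ac < safe_distance(k, t_brake, t_react, mu)

# Example: riders 5 m apart, closing at 3 m/s on dry asphalt (mu = 0.7)
print(is_abnormal_distance((0.0, 0.0), (3.0, 4.0), 3.0, 1.0, 0.5, 0.7))
```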
 Furthermore, it is determined from the image information whether rider A is subject to interference from an external object, for example whether an unknown object appears within a predetermined angular range starting from rider A's orientation α. If such an object appears, the state is determined to be abnormal; otherwise, it is determined to be normal.
 If all three of the above determination results are no, the rider is determined to be in a normal state.
 Condition 4: Based on rider A's orientation α at the current time t2, it is determined whether rider A can see the vehicle.
 Here, for example, if the rider's orientation α is directly rearward, rear-left, rear-right, directly leftward, or directly rightward, that is, if α belongs to any of class VI, class V, class VII, class IV, or class VIII shown in Fig. 5, it is determined that the rider cannot see the vehicle; otherwise, it is determined that the rider can see the vehicle.
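A minimal sketch of this orientation-class lookup follows. The class labels match the text above, but the numeric angle boundaries and the mapping of angles to classes are assumptions, since the actual classes are defined by Fig. 5 of the patent:

```python
# Classes IV-VIII are those the text lists as "cannot see the vehicle".
CANNOT_SEE_CLASSES = {"IV", "V", "VI", "VII", "VIII"}

def orientation_class(alpha_deg):
    """Map an orientation angle (0 deg = facing the camera) to one of
    eight 45-degree classes I..VIII. Hypothetical mapping, not Fig. 5."""
    classes = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII"]
    return classes[int(((alpha_deg % 360) + 22.5) // 45) % 8]

def rider_can_see_vehicle(alpha_deg):
    return orientation_class(alpha_deg) not in CANNOT_SEE_CLASSES

print(rider_can_see_vehicle(0))    # facing the camera
print(rider_can_see_vehicle(180))  # directly rearward
```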
 Condition 5: Whether the frequency of orientation change is greater than a predetermined threshold.
 Here, for example, in the case of rider A, the 10 orientations α at the 10 consecutive time points preceding the current time t2 are extracted, and the number of changes among these orientations, that is, the orientation change frequency, is calculated. The predetermined threshold is obtained, for example, from statistical data on a large number of riders. It is then determined whether the orientation change frequency is greater than the predetermined threshold.
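This count over a sliding window of 10 time points can be sketched as follows; counting class-to-class transitions is one plausible reading of "number of changes", and the threshold value is illustrative (the patent derives it from rider statistics):

```python
def orientation_change_frequency(orientations):
    """Count how many times the orientation class changes between
    consecutive time points in the window."""
    return sum(1 for a, b in zip(orientations, orientations[1:]) if a != b)

# 10 orientation classes at the 10 time points before t2 (made-up data)
history = ["VI", "VI", "V", "VI", "VI", "VII", "VII", "VI", "VI", "VI"]
THRESHOLD = 3  # illustrative; not a value from the patent

print(orientation_change_frequency(history) > THRESHOLD)
```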
 Taking rider A as an example, the setting of rider A's risk coefficient based on conditions 1 to 5 is described in detail below.
 First, if the determination result of condition 1 or condition 2 is yes, that is, if the distance given by rider A's frame centroid coordinates at the current time t2 is smaller than the warning safe distance, or if rider A is approaching the vehicle, 1 is added to the initialized risk coefficient of 0, that is, W = 0 + 1 = 1. If the determination results of both condition 1 and condition 2 are no, the risk coefficient remains at its initialized value, that is, W = 0. In this example, the determination result of condition 1 is yes, so W = 1.
 Next, if the determination result of condition 3 is yes, that is, if the current state is abnormal, 1 is added to the risk coefficient determined based on conditions 1 and 2; if no, that risk coefficient is maintained. In this example, the determination result of condition 3 is yes, so W = 1 + 1 = 2.
 Finally, if the determination result of condition 4 or condition 5 is yes, that is, if it is determined that the rider cannot see the vehicle or that the orientation change frequency is greater than the predetermined threshold, 1 is further added to the risk coefficient determined based on condition 3; if the determination results of both condition 4 and condition 5 are no, that risk coefficient is retained. In this example, the determination results of conditions 4 and 5 are both no, so the risk coefficient determined based on condition 3 is retained, that is, W = 2.
 In this way, each rider's risk coefficient at each time point is obtained; each rider can then be tracked according to the risk coefficient, and an alarm can be issued to the vehicle when necessary.
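The accumulation over conditions 1 to 5 described above can be sketched directly from the worked example (the condition checks themselves are taken as given booleans here; only the additive scoring is shown):

```python
def risk_coefficient(cond1, cond2, cond3, cond4, cond5):
    """Accumulate the risk coefficient W as in the worked example:
    +1 if condition 1 or 2 holds, +1 if condition 3 holds,
    +1 if condition 4 or 5 holds."""
    w = 0
    if cond1 or cond2:
        w += 1
    if cond3:
        w += 1
    if cond4 or cond5:
        w += 1
    return w

# Worked example from the text: condition 1 yes, condition 3 yes,
# conditions 4 and 5 no -> W = 2
print(risk_coefficient(True, False, True, False, False))
```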
 Furthermore, the risk coefficient setting unit 15 also sets a risk coefficient for each of the one or more pedestrians based on the pedestrian detection frame parameters, the state information of each pedestrian, and the change frequency of the orientation. For example, a risk coefficient can be set for pedestrian B by the risk coefficient setting method described above for riders.
 As described above, according to the present embodiment, riders and pedestrians among the detected pedestrians can be quickly distinguished, and multiple riders can be detected in real time.
 Although the present invention has been described with reference to embodiments, it is obvious that those skilled in the art can make various substitutions, changes, and modifications to the above. Such substitutions, changes, and modifications therefore belong to the present invention insofar as they do not depart from the spirit and scope of the appended claims.
 The present invention is not limited to the embodiments described above and includes various modifications. For example, the above embodiments have been described in detail for ease of understanding, and the invention is not necessarily limited to configurations having all of the described components.
10: device for detecting pedestrians around a vehicle; 11: acquisition unit; 12: initial candidate frame parameter calculation unit; 13: adjustment unit; 14: classification unit; 15: risk coefficient setting unit

Claims (12)

  1.  A device for detecting pedestrians around a vehicle, comprising:
     an acquisition unit that acquires image information of the vehicle's surroundings from an imaging device of the vehicle and detects the current speed of the vehicle;
     an initial candidate frame parameter calculation unit that, based on the image information, calculates, for each of a plurality of pedestrians around the vehicle at each time point, initial candidate frame parameters including an orientation, which is the angle of the pedestrian with respect to the imaging device;
     an adjustment unit that adjusts the initial candidate frame parameters based on the orientation to obtain adjusted candidate frame parameters at each time point;
     a classification unit that, based on the adjusted candidate frame parameters, classifies the plurality of pedestrians into one or more riders and one or more pedestrians, and obtains rider detection frame parameters for each of the one or more riders at each time point; and
     a risk coefficient setting unit that sets a risk coefficient for each of the one or more riders based on the rider detection frame parameters, the state information of each of the one or more riders, and the change frequency of the orientation.
  2.  The device for detecting pedestrians around a vehicle according to claim 1, wherein
     the initial candidate frame parameters further include frame centroid coordinates, a frame width w, and a frame height h,
     the adjusted candidate frame parameters include the frame centroid coordinates, an adjusted frame width w', the frame height h, and the orientation α, where
    Figure JPOXMLDOC01-appb-M000001
     and
     the classification unit uses, as the rider detection frame parameters, the adjusted candidate frame parameters of each of the one or more riders at each time point.
  3.  The device for detecting pedestrians around a vehicle according to claim 1, wherein
     the classification unit uses the initial candidate frame parameters of each of the one or more pedestrians as pedestrian detection frame parameters for each of the one or more pedestrians at each time point, and
     the risk coefficient setting unit sets a risk coefficient for each of the one or more pedestrians based on the pedestrian detection frame parameters, the state information of each of the one or more pedestrians, and the change frequency of the orientation.
  4.  The device for detecting pedestrians around a vehicle according to claim 2, wherein, for each of the one or more riders, the risk coefficient setting unit obtains the relative speed and relative movement direction of the rider with respect to the vehicle based on the frame centroid coordinates of the rider detection frame parameters at a previous time point and the frame centroid coordinates of the rider detection frame parameters at the current time point.
  5.  The device for detecting pedestrians around a vehicle according to claim 4, wherein the risk coefficient setting unit determines, for each rider, whether the rider is a new rider; if the rider is a new rider, the risk coefficient setting unit sets the risk coefficient to 1, and if not, it sets the risk coefficient based on the frame centroid coordinates and the orientation of the rider detection frame parameters at the current time point, the relative speed, the relative movement direction, the state information, and the change frequency of the orientation.
  6.  The device for detecting pedestrians around a vehicle according to claim 5, wherein
     the state information indicates either a normal state or an abnormal state, and
     the risk coefficient setting unit determines the state information to be an abnormal state when it determines, based on the frame height of the rider detection frame parameters, that the rider is a child, when it determines from the image information that the distance between the rider and another rider is smaller than a predetermined safe distance, or when it determines from the image information that the rider is subject to interference from an external object.
  7.  A method for detecting pedestrians around a vehicle, comprising:
     a first step of acquiring image information of the vehicle's surroundings from an imaging device of the vehicle and detecting the current speed of the vehicle;
     a second step of calculating, based on the image information, for each of a plurality of pedestrians around the vehicle at each time point, initial candidate frame parameters including an orientation, which is the angle of the pedestrian with respect to the imaging device;
     a third step of adjusting the initial candidate frame parameters based on the orientation to obtain adjusted candidate frame parameters at each time point;
     a fourth step of classifying, based on the adjusted candidate frame parameters, the plurality of pedestrians into one or more riders and one or more pedestrians, and obtaining rider detection frame parameters for each of the one or more riders at each time point; and
     a fifth step of setting a risk coefficient for each of the one or more riders based on the rider detection frame parameters, the state information of each of the one or more riders, and the change frequency of the orientation.
  8.  The method for detecting pedestrians around a vehicle according to claim 7, wherein
     the initial candidate frame parameters further include frame centroid coordinates, a frame width w, and a frame height h,
     in the third step, the adjusted candidate frame parameters include the frame centroid coordinates, an adjusted frame width w', the frame height h, and the orientation α, where
    Figure JPOXMLDOC01-appb-M000002
     and
     in the fourth step, the adjusted candidate frame parameters of each of the one or more riders at each time point are used as the rider detection frame parameters.
  9.  The method for detecting pedestrians around a vehicle according to claim 7, wherein
     in the fourth step, the initial candidate frame parameters of each of the one or more pedestrians are used as pedestrian detection frame parameters for each of the one or more pedestrians at each time point, and
     in the fifth step, a risk coefficient is set for each of the one or more pedestrians based on the pedestrian detection frame parameters, the state information of each of the one or more pedestrians, and the change frequency of the orientation.
  10.  The method for detecting pedestrians around a vehicle according to claim 8, wherein, for each of the one or more riders, the relative speed and relative movement direction of the rider with respect to the vehicle are obtained based on the frame centroid coordinates of the rider detection frame parameters at a previous time point and the frame centroid coordinates of the rider detection frame parameters at the current time point.
  11.  The method for detecting pedestrians around a vehicle according to claim 10, wherein, in the fifth step, it is determined for each rider whether the rider is a new rider; if the rider is a new rider, the risk coefficient is set to 1, and if not, the risk coefficient is set based on the frame centroid coordinates and the orientation of the rider detection frame parameters at the current time point, the relative speed, the relative movement direction, the state information, and the change frequency of the orientation.
  12.  The method for detecting pedestrians around a vehicle according to claim 11, wherein
     the state information indicates either a normal state or an abnormal state, and
     the state information is determined to be an abnormal state when it is determined, based on the frame height of the rider detection frame parameters, that the rider is a child, when it is determined from the image information that the distance between the rider and another rider is smaller than a predetermined safe distance, or when it is determined from the image information that the rider is subject to interference from an external object.
PCT/JP2018/015194 2017-04-12 2018-04-11 Method and device for detecting pedestrian around vehicle WO2018190362A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019512544A JP6756908B2 (en) 2017-04-12 2018-04-11 Methods and devices for detecting pedestrians around the vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710234918.6 2017-04-12
CN201710234918.6A CN108694363A (en) 2017-04-12 2017-04-12 The method and apparatus that the pedestrian of vehicle periphery is detected

Publications (1)

Publication Number Publication Date
WO2018190362A1 true WO2018190362A1 (en) 2018-10-18

Family

ID=63792644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/015194 WO2018190362A1 (en) 2017-04-12 2018-04-11 Method and device for detecting pedestrian around vehicle

Country Status (3)

Country Link
JP (1) JP6756908B2 (en)
CN (1) CN108694363A (en)
WO (1) WO2018190362A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111856493A (en) * 2019-04-25 2020-10-30 北醒(北京)光子科技有限公司 Camera triggering device and method based on laser radar

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN109409309A (en) * 2018-11-05 2019-03-01 电子科技大学 A kind of intelligent alarm system and method based on human testing
AU2018286579A1 (en) 2018-11-09 2020-05-28 Beijing Didi Infinity Technology And Development Co., Ltd. System and method for detecting in-vehicle conflicts
CN111429754A (en) * 2020-03-13 2020-07-17 南京航空航天大学 Vehicle collision avoidance track risk assessment method under pedestrian crossing working condition
CN115527074B (en) * 2022-11-29 2023-03-07 深圳依时货拉拉科技有限公司 Vehicle detection frame generation method and device and computer equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2009053925A (en) * 2007-08-27 2009-03-12 Toyota Motor Corp Behavior prediction device
WO2017056382A1 (en) * 2015-09-29 2017-04-06 ソニー株式会社 Information processing device, information processing method, and program
WO2017158983A1 (en) * 2016-03-18 2017-09-21 株式会社Jvcケンウッド Object recognition device, object recognition method, and object recognition program
JP2017194432A (en) * 2016-04-22 2017-10-26 株式会社デンソー Object detection device and object detection method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN105216792A (en) * 2014-06-12 2016-01-06 株式会社日立制作所 Obstacle target in surrounding environment is carried out to the method and apparatus of recognition and tracking
EP3136288A1 (en) * 2015-08-28 2017-03-01 Autoliv Development AB Vision system and method for a motor vehicle



Also Published As

Publication number Publication date
JPWO2018190362A1 (en) 2020-01-16
CN108694363A (en) 2018-10-23
JP6756908B2 (en) 2020-09-16

Similar Documents

Publication Publication Date Title
WO2018190362A1 (en) Method and device for detecting pedestrian around vehicle
US11390276B2 (en) Control device, control method, and non-transitory storage medium
US20220073068A1 (en) Rider assistance system and method
CN106485233B (en) Method and device for detecting travelable area and electronic equipment
KR101891460B1 (en) Method and apparatus for detecting and assessing road reflections
CN111033510A (en) Method and device for operating a driver assistance system, driver assistance system and motor vehicle
Smaldone et al. The cyber-physical bike: A step towards safer green transportation
GB2538572A (en) Safety system for a vehicle to detect and warn of a potential collision
CN113044059A (en) Safety system for a vehicle
LU101647B1 (en) Road pedestrian classification method and top-view pedestrian risk quantitative method in two-dimensional world coordinate system
US9870513B2 (en) Method and device for detecting objects from depth-resolved image data
WO2009101660A1 (en) Vehicle periphery monitoring device, vehicle, and vehicle periphery monitoring program
JP2023532045A (en) Appearance- and Movement-Based Models for Determining Micromobility User Risk
Rajendar et al. Prediction of stopping distance for autonomous emergency braking using stereo camera pedestrian detection
CN111081045A (en) Attitude trajectory prediction method and electronic equipment
CN105679090B (en) A kind of night driver driving householder method based on smart mobile phone
Michalke et al. Towards a closer fusion of active and passive safety: Optical flow-based detection of vehicle side collisions
CN109308442B (en) Vehicle exterior environment recognition device
WO2018212090A1 (en) Control device and control method
KR102345798B1 (en) Intersection signal violation recognition and image storage device
JP7454685B2 (en) Detection of debris in vehicle travel paths
Rammohan et al. Automotive Collision Avoidance System: A Review
KR20150092505A (en) A method for tracking a vehicle, a method for warning a distance between vehicles and a device for warning a distance between vehicles
Khandelwal et al. Automatic braking system for two wheeler with object detection and depth perception
JP2018088237A (en) Information processing device, imaging device, apparatus control system, movable body, information processing method, and information processing program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18784623

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019512544

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18784623

Country of ref document: EP

Kind code of ref document: A1