CN109059863B - Method for mapping track point vector of head-up pedestrian to two-dimensional world coordinate system - Google Patents
Method for mapping track point vector of head-up pedestrian to two-dimensional world coordinate system
- Publication number
- CN109059863B (application CN201810697513.0A)
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- world coordinate
- coordinate system
- dimensional world
- mapping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/36—Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information
Abstract
A method for mapping head-up pedestrian track point vectors to a two-dimensional world coordinate system belongs to the field of driving risk analysis and aims to solve the problem of converting an eye-level (head-up) viewing angle into an overhead (top-down) viewing angle. The method comprises: S1, calculating pedestrian track points of all pedestrian targets from pedestrian images shot by a vehicle-mounted camera, the number of pedestrian targets being N, and obtaining and updating the head-up pedestrian track point vectors of all pedestrian targets in real time; S2, mapping all head-up pedestrian track point vectors to the two-dimensional world coordinate system to obtain the corresponding N top-down pedestrian track point vectors in the two-dimensional world coordinate system. The effect is that a driver can conveniently observe the motion trend of each pedestrian target from a more accurate viewing angle.
Description
Technical Field
The invention belongs to the field of driving risk analysis, and relates to a method for mapping a track point vector of a head-up pedestrian to a two-dimensional world coordinate system.
Background
Road traffic in many areas of China has long suffered from the danger of mixed pedestrian and vehicle traffic. Pedestrians, as a vulnerable group in road traffic, account for a large proportion of accident fatalities year after year and deserve reasonable protection through vehicle obstacle avoidance; the importance of improving the ability of automobiles to safely avoid pedestrians is therefore self-evident.
Pedestrian risk analysis methods based on vehicle onboard systems mainly use sensors to perceive the vehicle's environment, combine this with the motion state of pedestrian targets, judge how dangerous each pedestrian target is, and adjust driving decisions accordingly, achieving early protection of dangerous pedestrian targets. Pedestrian risk analysis based on vehicle-mounted images is currently the mainstream research direction, and many researchers analyze pedestrian movement trends by recognizing pedestrian target postures in order to classify dangerous pedestrians. Joko Hariyono et al. used an optical flow method to segment pedestrian contours and a pedestrian posture-ratio method to identify horizontal pedestrian movement trends, judging pedestrians moving toward the vehicle's driving area to be dangerous. In addition, Keller and Gavrila et al. used Gaussian dynamic process models and hierarchical trajectory probability matching to identify the standing or horizontal motion states of pedestrian targets in images.
Most pedestrian risk analysis methods based on vehicle-mounted images analyze pedestrian risk directly from the image viewing angle, but due to the imaging distortion of vehicle-mounted images, researchers can only recognize pedestrian motion postures rather than grasp the exact motion states of pedestrians. Existing pedestrian risk analysis methods can therefore give only a qualitative binary judgment of whether a pedestrian is dangerous, so they are mainly used to provide real-time early warning to the driver and cannot provide fine-grained data support for vehicle decision making.
To realize accurate driver assistance and improve the autonomous cruise performance of intelligent vehicles, Chinese patent application CN107240167A discloses a pedestrian monitoring system with an event data recorder and provides a quantitative pedestrian risk analysis method. The system uses sensing equipment comprising a somatosensory controller, an infrared sensor and a sound-level meter to obtain pedestrian information in the vehicle's driving environment, calculates a pedestrian collision coefficient, and achieves pedestrian danger early warning by matching a pedestrian depth image stream with a pedestrian target model. Although a quantitative risk analysis result is given, the risk quantification factors come from pedestrian postures and actually judge a pedestrian's intent to deliberately collide with the vehicle, so the quantified coefficients lack kinematic objectivity and are insufficient to reflect the real degree of pedestrian motion risk.
Chinese patent application CN104239741A, an automobile driving safety assistance method based on a driving risk field, analyzes the kinetic energy field, potential energy field and behavior field of the vehicle environment from the three comprehensive angles of driver, vehicle and road, constructs by fusion a risk field model of the driving risk posed by obstacles, and quantifies the driving risk of the vehicle with respect to road obstacles so as to evaluate its different degrees. By introducing potential field theory, that invention gives the driving risk field a reasonable kinematic basis, so that the risk quantification result can be used objectively and effectively for driving decisions.
Disclosure of Invention
To solve the problem of converting an eye-level (head-up) viewing angle into an overhead (top-down) viewing angle, the invention provides a method for mapping head-up pedestrian track point vectors to a two-dimensional world coordinate system. The technical scheme is as follows:
a method for mapping a head-up pedestrian track point vector to a two-dimensional world coordinate system comprises the following steps:
s1, calculating pedestrian track points of all pedestrian targets from pedestrian images shot by a vehicle-mounted camera, the number of pedestrian targets being N, and obtaining and updating the head-up pedestrian track point vectors of all pedestrian targets in real time;
and S2, mapping all head-up pedestrian track point vectors to the two-dimensional world coordinate system to obtain the N corresponding top-down pedestrian track point vectors in the two-dimensional world coordinate system.
Further, step S2 maps each head-up pedestrian track point vector to a top-down pedestrian track point vector through the following specific steps:
firstly, calculating mapping factors rFactor and cFactor:
wherein u and v are input values representing the inverse perspective mapping point in the image, M and N are constants representing the width and height of the image, AlphaU is the horizontal aperture angle, and AlphaV is the vertical aperture angle;
secondly, calculating two-dimensional world coordinate initial mapping points (x ', y'):
wherein Cx, Cy and Cz are fixed values representing the coordinates of the camera in the world coordinate system, set to Cx = Cy = 0 and Cz = H, where H is the height above the ground; theta is the pitch angle between the camera and the ground;
thirdly, correcting the initial mapping point to obtain a mapping coordinate point (x, y) of a two-dimensional world coordinate system:
where γ is a fixed value representing the camera yaw (deflection) angle.
Further, the horizontal aperture angle AlphaU and the vertical aperture angle AlphaV are calculated as follows:
wherein the focal length is f, the length of the photosensitive element is dx, and the width of the photosensitive element is dy.
Beneficial effects: the pedestrian risk analysis uses a two-dimensional world coordinate system with an intuitive viewing angle, so that a driver can conveniently observe the motion trend of each pedestrian target from a more accurate viewing angle; the static risk distribution of the area in front of the vehicle is described by the top-down two-dimensional world coordinate system vehicle-front risk matrix, whose distribution is related to the urban speed limit and unaffected by the road surface environment and the vehicle's driving speed, reducing the complexity of practical application.
Drawings
FIG. 1 image coordinate system;
FIG. 2 a world coordinate system;
FIG. 3 is a top view of a two-dimensional world coordinate system;
FIG. 4 parameter diagram 1;
FIG. 5 parameter diagram 2;
FIG. 6 is a head-up trajectory plot;
FIG. 7 is a two-dimensional world coordinate system pedestrian trajectory matrix diagram from above;
FIG. 8 is a top view of a two-dimensional world coordinate system pre-vehicle risk matrix diagram;
FIG. 9 is a graph of a method of calculating a risk factor for an adjacent pedestrian;
fig. 10 is a graph of the calculation result of the risk coefficient of an adjacent pedestrian according to embodiment 1;
fig. 11 is a graph of the calculation result of the risk coefficient of an adjacent pedestrian according to embodiment 2;
fig. 12 is a graph of the adjacent pedestrian risk factor calculation results of embodiment 3;
fig. 13 is a schematic diagram of the invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments:
as shown in fig. 13, the invention discloses a pedestrian risk quantification method in an overhead view based on a two-dimensional world coordinate system, which can be implemented by using software, and can solve the quantification risk degree of a pedestrian target in front of a vehicle under the overhead view condition by transforming the video of a vehicle-mounted camera.
The method mainly comprises the following implementation steps:
Step 2: mapping all head-up pedestrian track points to a two-dimensional world coordinate system and based on an origin O of the two-dimensional world coordinate systemWObtaining a pedestrian track matrix corresponding to N two-dimensional world coordinate systems by taking the horizontal distance of +/-10 m and the vertical distance of 0-20 m as the analysis range of the pedestrian motion
Step 4: for each pedestrian target i ∈ [1, N], calculate the adjacent pedestrian risk coefficient R using the quantification formula (formula (9)).
The above method is described in detail below. It addresses the problem that pedestrian target risk is difficult to determine accurately when the image view is adopted directly; its principle is shown in fig. 13. The method mainly maps pedestrian motion track points to a top-down two-dimensional world coordinate system and calculates the vehicle-front risk weights in that coordinate system. Further, a pedestrian track matrix and a vehicle-front risk matrix are generated through quantitative mapping: each pedestrian target has an independent pedestrian track matrix while sharing the same vehicle-front risk matrix, quantitative risk calculation is achieved, and normalized adjacent pedestrian risk coefficients of the different pedestrian targets are obtained. The adjacent pedestrian risk coefficient is the output of this top-down pedestrian risk quantification method based on a two-dimensional world coordinate system and can support the driving decision modules of driver assistance and autonomous vehicles.
The technical scheme of the invention involves the definitions of the image coordinate system, the world coordinate system and the camera parameters; the specific schematic diagrams can be seen in fig. 1, fig. 2, fig. 3 and fig. 4.
Image coordinate system definition (see fig. 1): the image coordinate system is defined by taking the upper left corner of the image as the origin O, horizontal right as the u axis and vertical downward as the v axis.
World coordinate system definition (see fig. 2): taking the projection of the optical center of the vehicle-mounted camera onto the ground as the origin OW, the driving direction of the vehicle as the positive YW direction, the direction coplanar with the vehicle's driving plane, perpendicular to YW and pointing to the right as the positive XW direction, and the direction pointing up toward the camera as the positive ZW direction defines the world coordinate system.
Two-dimensional world coordinate system definition (see fig. 3): the world coordinate system with the ZW axis (height axis) ignored is defined as the two-dimensional world coordinate system.
The invention requires the vehicle-mounted camera to be mounted at the roof of the vehicle facing the driving direction, as shown in fig. 2. The camera performs dynamic shooting, so its intrinsic parameters and assembly parameters are relatively fixed: the intrinsic parameters comprise the focal length f, the photosensitive element length dx, the photosensitive element width dy, the image length M and the image width N; the assembly parameters comprise the height above ground H, the yaw angle γ, the pitch angle θ, the horizontal aperture angle AlphaU and the vertical aperture angle AlphaV.
Suitable intrinsic parameter values for the invention are: focal length f of 16 mm-23 mm; no special requirement on the photosensitive element size; image length M conventionally 1920 pixels and not smaller than 1080 pixels; image width N conventionally 1080 pixels and not smaller than 640 pixels. Suitable assembly parameter values are: height H in the range 1.2 m-1.6 m; ideal yaw angle of 0 degrees with an acceptable assembly error of ±1 degree; ideal pitch angle of 0 degrees with an acceptable assembly error of ±3 degrees. The horizontal aperture angle AlphaU and the vertical aperture angle AlphaV are calculated as follows:
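The aperture-angle formula images are not reproduced in this text. As a sketch, the common pinhole half-angle relation can be used, assuming AlphaU and AlphaV are the half-angles subtended by the sensor at the focal length; this relation is an assumption, not the patent's verbatim formula:

```python
import math

def aperture_angles(f_mm, dx_mm, dy_mm):
    """Horizontal and vertical aperture angles of a pinhole camera.

    Assumes the common half-angle relation AlphaU = atan(dx / (2 f)) and
    AlphaV = atan(dy / (2 f)); the patent's exact formula images are not
    reproduced in this text.
    """
    alpha_u = math.atan(dx_mm / (2.0 * f_mm))  # horizontal aperture angle (rad)
    alpha_v = math.atan(dy_mm / (2.0 * f_mm))  # vertical aperture angle (rad)
    return alpha_u, alpha_v

# Example with a 16 mm lens and illustrative sensor dimensions (assumed values)
au, av = aperture_angles(16.0, 7.2, 5.4)
```

Because only f, dx and dy appear as inputs in the claim, this pinhole relation is the most natural reading, but the exact form (half-angle vs. full-angle) cannot be confirmed from this text.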
First, the pedestrian track points in the input image are converted to the world coordinate system through inverse perspective mapping, and the two-dimensional world coordinate system pedestrian track matrix MP is constructed.
Let pt(ut, vt) be the pedestrian track point in the t-th frame image of the input video, where ut and vt represent the column and row coordinates in the image, and let pt'(xt, yt) be the mapping coordinate of that track point in the two-dimensional world coordinate system, where xt and yt represent the horizontal and vertical coordinates in the two-dimensional world coordinate system. Accordingly, the sequence of head-up track points forms the head-up pedestrian track point vector of the input video, and its mapped sequence forms the corresponding top-down pedestrian track point vector in the two-dimensional world coordinate system.
The mapping transformation from the head-up pedestrian track point vector to the top-down pedestrian track point vector proceeds in the following steps:
First, calculate the mapping factors rFactor and cFactor (see formula (2)), where u and v are input values representing the inverse perspective mapping point in the image, and M and N are constants representing the width and height of the image;
Second, calculate the two-dimensional world coordinate initial mapping point (x', y') (see formula (3)), where Cx, Cy and Cz are constants representing the coordinates of the camera in the world coordinate system, usually set to Cx = Cy = 0 and Cz = H; theta is the camera-to-ground pitch angle.
Third, correct the initial mapping point to obtain the mapping coordinate point (x, y) of the two-dimensional world coordinate system (see formula (4)), where γ is a fixed value representing the camera yaw (deflection) angle.
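The three steps can be sketched in code. Since the formula images (2)-(4) are not reproduced in this text, the rFactor/cFactor definitions and the trigonometric form below follow the classical inverse-perspective-mapping literature and are assumptions rather than the patent's verbatim equations:

```python
import math

def ipm_point(u, v, M, N, alpha_u, alpha_v, H, theta, gamma):
    """Map an image point (u, v) to a top-down world coordinate (x, y).

    Sketch of the three mapping steps; the rFactor/cFactor forms are
    assumed from the classical inverse-perspective-mapping literature.
    Requires theta - rFactor > 0, i.e. the pixel lies below the horizon.
    """
    # Step 1: mapping factors -- angular offsets of the pixel from the optical axis
    rFactor = (1.0 - 2.0 * v / (N - 1)) * alpha_v  # row factor (vertical angle)
    cFactor = (2.0 * u / (M - 1) - 1.0) * alpha_u  # column factor (horizontal angle)
    # Step 2: initial mapping point, with Cx = Cy = 0 and Cz = H
    y0 = H / math.tan(theta - rFactor)  # distance ahead of the camera
    x0 = y0 * math.tan(cFactor)         # lateral offset
    # Step 3: correct the initial point by the camera yaw angle gamma
    x = x0 * math.cos(gamma) - y0 * math.sin(gamma)
    y = x0 * math.sin(gamma) + y0 * math.cos(gamma)
    return x, y

# Image centre of a 1920 x 1080 frame; camera 1.4 m high, 30° pitch, no yaw
cx, cy = ipm_point(959.5, 539.5, 1920, 1080,
                   math.radians(25), math.radians(15),
                   1.4, math.radians(30), 0.0)
```

Under this form, pixels lower in the image map to points closer to the vehicle, consistent with a forward-facing pitched-down camera.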
Fourth, generate the pedestrian track matrix MP using the matrix mapping function shown in formula (5).
(n,m)=fwm(x,y) (5)
In formula (5), (x, y) represents a coordinate point of the two-dimensional world coordinate system, and (n, m) represents the row and column position of the element in the operation matrix. The pedestrian track matrix MP is constructed to represent, by a matrix method, the pedestrian track point information within a defined distance in front of the vehicle in the two-dimensional world coordinate system; in view of the inverse perspective mapping effect, the mapping range from the two-dimensional world coordinate system to the operation matrix is set to ±10 m horizontally and 0-20 m vertically from OW. From this, the two-dimensional world coordinate system pedestrian trajectory matrix MP shown in FIG. 6 can be constructed.
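A minimal sketch of a matrix mapping function fwm consistent with formula (5) and the stated ±10 m × 0-20 m range; the 0.1 m cell size and the row orientation (row 0 at the far edge) are assumptions, since the patent text here fixes only the range:

```python
def f_wm(x, y, cell=0.1):
    """Matrix mapping function fwm of formula (5): world point -> (n, m).

    Maps a top-down world coordinate (x, y), x in [-10, 10] m and
    y in [0, 20] m, to the row/column position of the operation matrix.
    Row 0 is taken at the far edge (y = 20 m); the 0.1 m cell size is
    an assumption -- the patent text here fixes only the range.
    """
    if not (-10.0 <= x <= 10.0 and 0.0 <= y <= 20.0):
        return None  # outside the pedestrian-motion analysis range
    # round() avoids floating-point truncation at cell boundaries
    n = int(round((20.0 - y) / cell))  # row: distance from the far edge
    m = int(round((x + 10.0) / cell))  # column: offset from the left edge
    return n, m

# The origin OW (x = 0, y = 0) maps to the bottom-centre cell of a 201 x 201 grid
cell_ow = f_wm(0.0, 0.0)
```

Both the pedestrian track matrix MP and the vehicle-front risk matrix MV can then share this one function, as the description requires.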
Then, the two-dimensional world coordinate system vehicle-front risk matrix MV corresponding to the two-dimensional world coordinate system pedestrian track matrix is constructed. The risk equipotential lines of the two-dimensional world coordinate system consist of six second-order curves with respect to YW that satisfy:
y = γ(x) = α1·x² + α2·x + α3    (6)
In formula (6), α1, α2 and α3 form the second-order polynomial coefficient vector and satisfy:
the function for calculating the risk weight of the vehicle front influenced by the distance between the vehicle front is given as shown in the formula (8), and the function for calculating the risk weight of the vehicle front is itself prototype to be a Gaussian distribution function. Wherein, C1And C2For normalizing the parameters, the values are set to C10.05 and C247.7; μ and σ are function expectations and variances, the physical meaning of which is a risk distribution parameter affected by vehicle braking capability, with values set to μ -0 and σ -8. W in formula (8)rTo normalize the intensity of risk, a certain area wrThe closer the value is to 1, the more dangerous the area is, whereas the more toward 0, the safer it is.
Through formula (6), the coordinates of the two-dimensional world coordinate system are assigned to YW risk equipotential lines, and the corresponding risk weights are calculated with formula (8). The vehicle-front risk matrix is mainly used together with the pedestrian track matrix to quantify the pedestrian risk coefficient, so the same matrix mapping function is chosen to construct it. Accordingly, from the driving risk weight at each coordinate of the two-dimensional world coordinate system, formula (5) can further generate the vehicle-front risk matrix MV, as shown in fig. 7. The vehicle-front risk matrix MV is generated with the matrix mapping function:
(n,m)=fwm(x,y,wr)
where (x, y, wr) represents a coordinate point of the two-dimensional world coordinate system together with its corresponding risk intensity, and (n, m) represents the row and column position of the element in the operation matrix.
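The image of formula (8) is not reproduced in this text. The sketch below therefore uses only the bare Gaussian prototype with the stated μ = 0 and σ = 8; how the normalization parameters C1 = 0.05 and C2 = 47.7 enter the exact formula cannot be recovered from this text, so they are deliberately omitted:

```python
import math

def risk_weight(y, mu=0.0, sigma=8.0):
    """Vehicle-front risk weight wr along the driving direction.

    Formula (8)'s prototype is a Gaussian distribution function with
    mu = 0 and sigma = 8; wr is near 1 close to the vehicle (dangerous)
    and decays toward 0 with distance. The placement of the normalization
    parameters C1 = 0.05 and C2 = 47.7 is not recoverable from this text,
    so this sketch uses the bare normalized prototype.
    """
    return math.exp(-((y - mu) ** 2) / (2.0 * sigma ** 2))
```

This reproduces the qualitative behaviour the description states (wr in [0, 1], highest immediately in front of the vehicle), but not necessarily the exact scaling of the patented formula.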
Finally, the two-dimensional world coordinate system pedestrian track matrix MP and the two-dimensional world coordinate system vehicle-front risk matrix MV are combined to calculate the adjacent pedestrian risk coefficient R.
Suppose N different pedestrian targets exist in the continuous images; every pedestrian target i ∈ [1, N] has a unique head-up pedestrian track point vector corresponding to it. Through the second step, the corresponding top-down pedestrian track point vector can be obtained, and each independently corresponds to its own top-down two-dimensional world coordinate system pedestrian track matrix.
As shown in FIG. 8, the top-down two-dimensional world coordinate system vehicle-front risk matrix is replicated as an identical copy for each target and combined with that target's top-down two-dimensional world coordinate system pedestrian track matrix to quantify the adjacent pedestrian risk coefficient Ri by the following formula:
Formula (9) is the adjacent pedestrian risk coefficient quantification formula of the invention, where ki is the number of track points of the pedestrian target and the output result Ri is the adjacent pedestrian risk coefficient of pedestrian target i. The closer Ri is to 1, the more dangerous the pedestrian target; conversely, the closer to 0, the safer. The physical meaning of formula (9) is that the pedestrian track matrix is used to screen the vehicle-front risk matrix, yielding the vehicle-front risk degree at the positions of the pedestrian track points.
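The screening-and-averaging operation that formula (9) describes — the mean vehicle-front risk weight over the ki track-point cells of pedestrian target i — can be sketched directly; the nested-list matrix representation and the toy values are illustrative assumptions:

```python
def adjacent_pedestrian_risk(track_cells, risk_matrix):
    """Adjacent pedestrian risk coefficient Ri, per formula (9) as described.

    track_cells: (n, m) row/column positions of the ki track points of
                 pedestrian target i in its trajectory matrix MP.
    risk_matrix: the shared vehicle-front risk matrix MV.
    The pedestrian track matrix screens the vehicle-front risk matrix:
    Ri is the mean risk weight over the target's track-point cells.
    """
    k_i = len(track_cells)
    if k_i == 0:
        return 0.0
    return sum(risk_matrix[n][m] for n, m in track_cells) / k_i

# Toy 3 x 3 risk matrix (values assumed for illustration)
MV = [[0.1, 0.2, 0.1],
      [0.4, 0.6, 0.4],
      [0.8, 1.0, 0.8]]
Ri = adjacent_pedestrian_risk([(2, 1), (1, 1)], MV)  # mean of 1.0 and 0.6
```

Because wr lies in [0, 1], this mean is automatically a normalized coefficient, matching the stated 0-to-1 risk scale.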
The invention, a method for quantifying pedestrian target risk from vehicle-mounted video images, quantifies pedestrian target risk during driving into a normalized risk index, providing an important data basis for vehicle decisions in the pedestrian obstacle-avoidance functions of advanced driver assistance and intelligent vehicle autonomous cruising. The beneficial effects of the algorithm are: (1) the pedestrian risk analysis uses a two-dimensional world coordinate system with an intuitive viewing angle, so that a driver can conveniently observe the motion trend of each pedestrian target from a more accurate viewing angle; (2) the static risk distribution of the area in front of the vehicle is described by the top-down two-dimensional world coordinate system vehicle-front risk matrix, whose distribution is related to the urban speed limit and unaffected by the road surface environment and the vehicle's driving speed, reducing the complexity of practical application; (3) the motions of different pedestrian targets and of the vehicle are considered independently in the two-dimensional world coordinate system, pedestrian motions do not interfere with one another, and specific pedestrian targets can be given corresponding attention according to the needs of the driver or the autonomous driving system; (4) the normalized adjacent pedestrian risk coefficient obtained by quantification reflects the different risk degrees of pedestrian targets on a scale from 0 to 1 and can be used for dangerous pedestrian classification and for determining vehicle avoidance priorities.
Example 1:
In this embodiment, the adjacent pedestrian risk coefficients of 2 pedestrian targets are quantified for a measured road-scene vehicle-mounted video with a pixel size of 1920 × 1080. The calculation results can be seen in fig. 10 (a), (b), (c) and (d); reasonable pedestrian risk quantification results are given for the two pedestrian targets crossing the area in front of the vehicle in the image.
Example 2:
This embodiment provides the adjacent pedestrian risk coefficient calculation results for 2 pedestrian targets in a measured road-scene vehicle-mounted video of size 1920 × 1080, shown in fig. 11 (a), (b), (c) and (d). The invention thus provides an accurate pedestrian risk quantification result for pedestrian targets that move independently of the vehicle and in the opposite direction.
Example 3:
This embodiment quantifies 2 pedestrian targets in the continuous images of a measured road-scene vehicle-mounted video with a pixel size of 1920 × 1080; the adjacent pedestrian risk coefficient calculation results are shown in fig. 12 (a), (b), (c) and (d). The invention thus provides an accurate pedestrian risk quantification result for pedestrians crossing the area in front of the vehicle in the video images.
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any person skilled in the art may substitute or change the technical solution and the inventive concept of the present invention within the technical scope disclosed herein.
Claims (2)
1. A method for mapping a head-up pedestrian track point vector to a two-dimensional world coordinate system is characterized by comprising the following steps:
s1, calculating pedestrian track points of all pedestrian targets from pedestrian images shot by a vehicle-mounted camera, the number of pedestrian targets being N, and obtaining and updating the head-up pedestrian track point vectors of all pedestrian targets in real time:
let pt(ut, vt) be the pedestrian track point of the t-th frame image of the input video, where ut and vt represent the column and row coordinates in the image,
pt'(xt, yt) be the mapping coordinate of the pedestrian track point of the t-th frame image in the two-dimensional world coordinate system, where xt and yt represent the horizontal and vertical coordinates in the two-dimensional world coordinate system,
the head-up pedestrian track points of the input video forming the head-up pedestrian track point vector, whose mapping is the top-down pedestrian track point vector in the two-dimensional world coordinate system;
ki being the number of track points of pedestrian target i;
s2, mapping all head-up pedestrian track point vectors to the two-dimensional world coordinate system to obtain the N corresponding top-down pedestrian track point vectors:
Firstly, calculating mapping factors rFactor and cFactor:
wherein u and v are input values representing the inverse perspective mapping point in the image, M and N are constants representing the width and height of the image, AlphaU is the horizontal aperture angle, and AlphaV is the vertical aperture angle;
secondly, calculating two-dimensional world coordinate initial mapping points (x ', y'):
wherein Cx, Cy and Cz are fixed values representing the coordinates of the camera in the world coordinate system, set to Cx = Cy = 0 and Cz = H, where H is the height above the ground; theta is the pitch angle between the camera and the ground;
thirdly, correcting the initial mapping point to obtain a mapping coordinate point (x, y) of a two-dimensional world coordinate system:
where γ is a fixed value representing the camera yaw (deflection) angle.
2. The method of mapping a head-up pedestrian trajectory point vector to a two-dimensional world coordinate system of claim 1, wherein the horizontal aperture angle AlphaU and the vertical aperture angle AlphaV are calculated by:
wherein the focal length is f, the length of the photosensitive element is dx, and the width of the photosensitive element is dy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810697513.0A CN109059863B (en) | 2018-06-29 | 2018-06-29 | Method for mapping track point vector of head-up pedestrian to two-dimensional world coordinate system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109059863A CN109059863A (en) | 2018-12-21 |
CN109059863B true CN109059863B (en) | 2020-09-22 |
Family
ID=64818475
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810697513.0A Active CN109059863B (en) | 2018-06-29 | 2018-06-29 | Method for mapping track point vector of head-up pedestrian to two-dimensional world coordinate system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109059863B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113033441A (en) * | 2021-03-31 | 2021-06-25 | 广州敏视数码科技有限公司 | Pedestrian collision early warning method based on wide-angle imaging |
CN113450597B (en) * | 2021-06-09 | 2022-11-29 | 浙江兆晟科技股份有限公司 | Ship auxiliary navigation method and system based on deep learning |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102930287A (en) * | 2012-09-26 | 2013-02-13 | 上海理工大学 | Overlook-based detection and counting system and method for pedestrians |
CN103879404A (en) * | 2012-12-19 | 2014-06-25 | 财团法人车辆研究测试中心 | Moving-object-traceable anti-collision warning method and device thereof |
CN104239741A (en) * | 2014-09-28 | 2014-12-24 | 清华大学 | Travelling risk field-based automobile driving safety assistance method |
CN104573646A (en) * | 2014-12-29 | 2015-04-29 | 长安大学 | Detection method and system, based on laser radar and binocular camera, for pedestrian in front of vehicle |
KR20160017269A (en) * | 2014-08-01 | 2016-02-16 | 현대자동차주식회사 | Device and method for detecting pedestrians |
DE102015216352A1 (en) * | 2015-08-27 | 2017-03-02 | Bayerische Motoren Werke Aktiengesellschaft | Method for detecting a possible collision of a vehicle with a pedestrian on the basis of high-resolution recordings |
Non-Patent Citations (2)
Title |
---|
Concept, principles and modelling of the driving risk field based on driver-vehicle-road interaction; Wang Jianqiang et al.; China Journal of Highway and Transport; 2016-01-15; Vol. 29, No. 1; pp. 105-113 *
Detection algorithm for pedestrians crossing the road at abnormal speed; Xu Yehao et al.; Journal of Dalian Minzu University; 2008-05-15; Vol. 20, No. 3; pp. 218-221 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||