CN102306284B - Digital reconstruction method of traffic accident scene based on monitoring videos - Google Patents

Digital reconstruction method of traffic accident scene based on monitoring videos

Info

Publication number
CN102306284B
CN102306284B, CN201110231117A, CN 201110231117
Authority
CN
China
Prior art keywords
space coordinate
vehicle
control point
DLT
observation point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201110231117
Other languages
Chinese (zh)
Other versions
CN102306284A (en)
Inventor
苗新强
金先龙
韩学源
景旭斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuhai Network Technology Shanghai Co ltd
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN 201110231117 priority Critical patent/CN102306284B/en
Publication of CN102306284A publication Critical patent/CN102306284A/en
Application granted granted Critical
Publication of CN102306284B publication Critical patent/CN102306284B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a digital reconstruction method of a traffic accident scene based on monitoring videos, comprising the following steps: step one, continuously extracting the frames related to the movement positions of the detected vehicle from a monitoring video sequence; step two, calibrating the control points of each frame image in sequence and calculating the corresponding values of the DLT (direct linear transformation) coefficients; step three, selecting an observation point, acquiring its image space coordinates at different moments, and calculating the object space coordinates of the observation point at the corresponding moments by combining the values of the DLT coefficients; and step four, carrying out the digital scene reconstruction of the movement state of the vehicle to obtain the trajectory of the vehicle movement and to solve the displacement curve, speed curve and acceleration curve of the vehicle movement. The method can digitally reconstruct the vehicle movement state at different road sections, thereby solving the problem of monitoring the vehicle movement state over a large range, improving measurement accuracy and reducing cost.

Description

Digital reconstruction method of a traffic accident scene based on monitoring video
Technical field
The present invention relates to a digital reconstruction method in the field of traffic safety, and more specifically to a digital reconstruction method of a traffic accident scene based on monitoring video.
Background art
The travel speed of motor vehicles is highly correlated with traffic safety: as vehicle speed increases, the probability of traffic accidents and casualties rises rapidly. Countries around the world are therefore actively taking measures to prohibit speeding on relevant road sections in order to protect people's lives and property. Against this background, the digital reconstruction of the vehicle motion state at the scene has become important evidence for analyzing the cause of a traffic accident, and a key factor in assigning responsibility for it.
A search of the prior art shows the following. Wen Liu et al. published "Automated Vehicle Extraction and Speed Determination From QuickBird Satellite Images" in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2011:75-82. The article detects moving vehicles automatically from paired panchromatic and multispectral QuickBird satellite images and measures the motion state of a vehicle from the delay between the panchromatic and the multispectral image. In practice, however, the method is severely limited: the coverage and positioning accuracy of the QuickBird satellite are limited and its revisit cycle is long, so it can hardly meet the requirement of accurately reconstructing the vehicle motion state digitally in different regions. The search also found that Kostia Robert published "Bringing Richer Information with Reliability to Automated Traffic Monitoring from the Fusion of Multiple Cameras, Inductive Loops and Road Maps" at the 2010 Seventh IEEE International Conference on Advanced Video and Signal Based Surveillance, 2010:9-12. That article first fuses multiple cameras and inductive loops into a satellite map plane, then determines the position of the detected vehicle in the satellite map coordinate system from the camera calibration, and finally tracks the vehicle along the lane with a linear Kalman filter, so as to monitor the vehicle motion state in real time with high reliability. However, the method is complicated to operate, and it is difficult to locate the positions of multiple cameras and inductive loops accurately in the satellite map, so accurate digital reconstruction of the vehicle motion state over a large range is also hard to achieve.
Summary of the invention
In view of the deficiencies of the prior art, the present invention proposes a digital reconstruction method of a traffic accident scene based on monitoring video. It achieves the digital scene reconstruction of the vehicle motion state on different road sections, solves the problem of monitoring the vehicle motion state over a large range, and at the same time improves measurement accuracy and reduces cost.
The present invention is achieved by the following technical solutions:
A digital reconstruction method of a traffic accident scene based on monitoring video comprises the following four steps:
Step one: according to the initial moment at which the camera detects the vehicle and the termination moment at which the vehicle travels beyond the camera's monitoring field of view, the video sequence within this time period is decomposed into continuous frame images by the video processing functions in the OpenCV HighGUI (high-level graphical user interface) module, and the frame rate of the video sequence and the license plate number of the travelling vehicle are recorded;
Step two: a fixed marker on the road surface whose position and shape remain unchanged throughout the monitoring process is chosen as the fixed reference object; control points are calibrated on this reference object in each frame image in turn; there are at least four control points, no three of which are collinear; then, according to the previously selected coordinate systems, the image space coordinates of the control points and their corresponding object space coordinates are determined, and the values of the corresponding DLT (direct linear transformation) coefficients are solved;
Step three: the contact point between one of the vehicle's wheels and the ground is chosen as the observation point; its image space coordinates at different moments are obtained in turn from the continuous frame images through a Visual C++ application, and the object space coordinates of the observation point at the corresponding moments are resolved by combining the values of the two-dimensional DLT coefficients obtained above;
Step four: the trajectory of the vehicle movement is obtained by polynomial fitting from the object space coordinates of the observation point at the different moments obtained above; the displacement curve of the vehicle movement is then obtained by the arc length (curvilinear) integral along this trajectory, and the speed curve and acceleration curve of the vehicle movement are determined from the first derivative and second derivative of the displacement curve respectively, thereby carrying out the digital scene reconstruction of the vehicle motion state.
In the first step, a frame image refers to the picture content of one frame extracted from the video sequence and saved as an image in JPG or BMP format. Each frame image corresponds to the motion position of the vehicle at a different moment, and the time interval between two adjacent frame images is determined by the frame rate of the video sequence.
The extraction range of the relevant frames of the video sequence runs from the initial moment at which the camera detects the vehicle to the termination moment at which the vehicle travels beyond the camera's monitoring field of view; this range can be set with the function cvSetCaptureProperty (set video attribute) before the video sequence images are extracted. To determine the time interval between two adjacent frames, the frame rate of the video sequence can be obtained with the function cvGetCaptureProperty (get video attribute).
The camera is fixed at a certain height (above 3 meters) so that it can monitor the vehicles travelling on the road from above. Its exterior orientation elements remain unchanged throughout the monitoring process, so the motion of vehicles within the same field of view can be monitored continuously. Within the camera's monitoring field of view, fixed reference objects on the road surface, such as a pedestrian crossing, a manhole cover or another fixed area with obvious features, can be photographed so that they can be calibrated in the subsequent operations.
The monitoring field of view of the camera refers to the whole scene range that the camera can photograph while its exterior orientation elements remain unchanged. The exterior orientation elements of the camera comprise three linear elements (Xs, Ys, Zs) and three angular elements (φ, ω, κ), which together determine the position and orientation of the camera in the object space coordinate system.
The control points lie on the fixed reference object. Their image space coordinates are determined in an image space coordinate system whose origin is the top-left vertex of the image, and their object space coordinates are determined in a two-dimensional Cartesian rectangular coordinate system established at the scene. The image space coordinates of each control point and its corresponding object space coordinates are stored in an object of a CPoint2 class (extended point class) derived from the base class CPoint (point class) of the MFC (Microsoft Foundation Classes) library; in addition to inheriting the attributes and behaviour of CPoint, CPoint2 also encapsulates the image space coordinates and corresponding object space coordinates of a control point, establishing the correct mapping between them. To ensure that all two-dimensional DLT coefficients can be solved, the image space coordinates and corresponding object space coordinates of at least four control points must be determined, and no three of these control points may be collinear.
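As an illustration of the data structure just described, the following is a minimal sketch of what such a CPoint2 class might look like; the member names objX and objY and the constructor signature are assumptions for illustration, not taken from the patent.

// Sketch of a CPoint2-style class: it derives from the MFC CPoint (which already
// stores the image space coordinates x, y in pixels) and additionally encapsulates
// the object space coordinates of the same control point.
#include <afxwin.h>   // MFC: brings in CPoint

class CPoint2 : public CPoint
{
public:
    double objX;   // object space X coordinate (metres, scene coordinate system)
    double objY;   // object space Y coordinate (metres, scene coordinate system)

    CPoint2() : CPoint(0, 0), objX(0.0), objY(0.0) {}

    // Build the mapping between one image point and its object space position.
    CPoint2(int imgX, int imgY, double worldX, double worldY)
        : CPoint(imgX, imgY), objX(worldX), objY(worldY) {}
};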
The DLT coefficients are resolved in two cases. When there are only four control points, a system of linear equations can first be set up from the image space coordinates of the control points and their corresponding object space coordinates according to the two-dimensional DLT formula, and this system is then solved with an API (application programming interface) function in the OpenCV (Open Computer Vision) core module to obtain approximate values of the DLT coefficients. When there are more than four control points, to improve precision and reliability, the image space coordinates (x, y) of the control points can be treated as observed values; the corresponding random error corrections (vx, vy) and the nonlinear lens distortions (Δx, Δy) are added to form the error equations, and the values of the DLT coefficients are then obtained by iteratively solving the corresponding normal equations by the least squares method.
In the third step, the observation point is taken at the contact point between a wheel and the ground; it appears in every frame image, so its image space coordinates can be determined directly in the image through the Visual C++ application. To ensure that the observation point has sufficient spacing between two adjacent moments, its image space coordinates are acquired once every n frame images. The object space coordinates of the observation point at the different moments are solved in two cases. When there are only four control points, a system of linear equations is set up according to the two-dimensional DLT formula from the image space coordinates at the corresponding moment and the values of the DLT coefficients already found, and this system is solved with the API function in the OpenCV core module. When there are more than four control points, the image space coordinates of the observation point are first corrected for distortion, and the system of equations is then set up according to the two-dimensional DLT formula from the corrected image space coordinates at the corresponding moment and the values of the DLT coefficients already found. Here n is an integer greater than zero.
The vehicle motion state comprises the trajectory, displacement, speed and acceleration of the vehicle at any moment.
Compared with the prior art, the present invention has the following advantages:
(1) Measurements can be repeated and verified, providing a legal basis for later secondary evidence collection
Because the monitoring video completely preserves the travel of the vehicle on the relevant road section, the digital scene reconstruction of the vehicle motion state at that time can be carried out from the video according to the present invention, yielding the trajectory, displacement, speed and acceleration curves of the travelling vehicle, and the measurement results can be verified repeatedly.
(2) Wide monitoring range
With the establishment of the intelligent transportation network, monitoring cameras are installed at every intersection and on busy road sections, which greatly improves the real-time acquisition of traffic information and the capability of large-range monitoring. On any road section where a monitoring camera is installed, the present invention can be used to carry out the digital scene reconstruction of the vehicle motion state, thereby realizing large-range monitoring of the vehicle motion state.
(3) Convenient installation and debugging, low maintenance cost
The present invention requires no additional equipment; the digital scene reconstruction of the vehicle motion state can be achieved simply by extracting the relevant information about the travelling vehicle from the monitoring video on the basis of the existing monitoring cameras. For road sections where no monitoring facilities have been installed yet, the installation and debugging of a monitoring camera are also very convenient and the maintenance cost is low.
(4) High measurement accuracy of the vehicle motion state
Because the present invention uses error equations to effectively correct image point observation errors and nonlinear lens distortion, and accurately realizes the digital scene reconstruction of the vehicle motion state through the two-dimensional DLT and polynomial fitting, a relatively high measurement accuracy can be obtained.
Description of drawings
Fig. 1 is the implementation flow chart of the method of the invention;
Fig. 2 is a schematic diagram of the control point calibration of the invention;
Fig. 3 is the flow chart for solving the two-dimensional DLT coefficients when there are more than four control points.
Embodiment
An embodiment of the present invention is described in detail below with reference to the accompanying drawings. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and a concrete operating process, but the protection scope of the present invention is not limited to the following embodiment.
As shown in Fig. 1, the overall flow of the digital reconstruction of a traffic accident scene based on monitoring video mainly comprises the following steps: extracting the video frame images, calibrating the control points, solving the two-dimensional DLT coefficients, acquiring the image space coordinates of the observation point, solving the object space coordinates of the observation point, and the digital scene reconstruction of the vehicle motion state. These steps are described in turn below, in the order in which the system solves them.
The camera is fixed at a certain height (above 3 meters) so that it can monitor the vehicles travelling on the road from above. Its exterior orientation elements remain unchanged throughout the monitoring process, so the motion of vehicles within the same field of view can be monitored continuously. Within the camera's monitoring field of view, fixed reference objects on the road surface, such as a pedestrian crossing, a manhole cover or another fixed area with obvious features, can be photographed so that they can be calibrated in the subsequent operations.
The monitoring field of view of the camera refers to the whole scene range that the camera can photograph while its exterior orientation elements remain unchanged. The exterior orientation elements of the camera comprise three linear elements (Xs, Ys, Zs) and three angular elements (φ, ω, κ), which together determine the position and orientation of the camera in the object space coordinate system.
In the first step, the frames related to the motion positions of the detected vehicle are extracted continuously from the monitoring video sequence and saved as images in JPG or BMP format.
First, a CvCapture (video capture structure) pointer is constructed and pointed at the monitoring video file with the video processing function cvCaptureFromFile (capture from file) in the OpenCV HighGUI module, which also initializes it and allocates the video stream. Then the function cvQueryFrame (query frame) is used to grab and return the frame images of the current monitoring video sequence one after another, each of which is saved as a JPG or BMP image; this operation is repeated until all frames related to the motion positions of the detected vehicle have been extracted.
The extraction range of the relevant frames of the video sequence runs from the initial moment at which the camera detects the vehicle to the termination moment at which the vehicle travels beyond the camera's monitoring field of view; this range can be set with the function cvSetCaptureProperty (set video attribute) before the video sequence images are extracted. To determine the time interval between two adjacent frames, the frame rate of the video sequence can be obtained with the function cvGetCaptureProperty (get video attribute). The license plate number of the travelling vehicle can be read directly from the monitoring video images.
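The following is a minimal sketch of this frame extraction step using the OpenCV 1.x/2.x C API functions named above (cvCaptureFromFile, cvQueryFrame, cvSetCaptureProperty, cvGetCaptureProperty, cvSaveImage). The video file name and the frame window are illustrative assumptions.

// Decompose the monitored time window of the video into continuous frame images.
#include <opencv/highgui.h>
#include <cstdio>

int main()
{
    CvCapture* capture = cvCaptureFromFile("monitor_video.avi");   // assumed file name
    if (!capture) return -1;

    // Frame rate gives the time interval between two adjacent frame images.
    double fps = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
    double dt  = 1.0 / fps;
    printf("frame rate = %.2f fps, dt = %.4f s\n", fps, dt);

    // Jump to the moment the vehicle first enters the monitored field of view.
    int startFrame = 120, endFrame = 300;                 // assumed extraction range
    cvSetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES, startFrame);

    char name[64];
    for (int i = startFrame; i <= endFrame; ++i)
    {
        IplImage* frame = cvQueryFrame(capture);          // grab the next frame
        if (!frame) break;
        sprintf(name, "frame_%04d.jpg", i);               // save as JPG (or .bmp)
        cvSaveImage(name, frame);
    }

    cvReleaseCapture(&capture);
    return 0;
}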
In the second step, the control points are calibrated in each frame image in turn, and the values of the corresponding DLT coefficients are calculated by least squares from the image space coordinates of the control points and their corresponding object space coordinates according to the two-dimensional DLT formula.
As shown in Fig. 2, the pedestrian crossing in the image is selected as the fixed reference object in this embodiment and the control points are calibrated on it. The origin of the image space coordinate system of the control points is placed at the top-left vertex of the image, with the x axis positive to the right along the horizontal direction and the y axis positive vertically downwards. The origin of the object space coordinate system of the control points is placed at one corner point of the pedestrian crossing, with the X axis positive to the right along the short side of the crossing and the Y axis positive downwards along the long side of the crossing. As shown in the figure, corner points of the crossing are chosen as control points and calibrated, and their image space coordinates and corresponding object space coordinates are stored in objects created from the CPoint2 class derived from the MFC base class CPoint.
After the control points on the fixed reference object in the image have been calibrated, the values of the two-dimensional DLT coefficients can be solved by the method appropriate to the number of control points. When there are only four control points, the system of linear equations (1) can first be set up from the image space coordinates of the control points and their corresponding object space coordinates according to the two-dimensional DLT formula, and the approximate values of the DLT coefficients are then obtained by solving this system with the API function cvSolve (solve linear system) in the OpenCV core module.
x_n = (l1·X_n + l2·Y_n + l3) / (l7·X_n + l8·Y_n + 1)
y_n = (l4·X_n + l5·Y_n + l6) / (l7·X_n + l8·Y_n + 1)        (1)

where (x_n, y_n) are the abscissa and ordinate of the n-th control point in image space, (X_n, Y_n) are the abscissa and ordinate of the n-th control point in object space, and l1, ..., l8 are the two-dimensional DLT coefficients.
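For the four-control-point case, the sketch below shows how equation (1) can be rearranged into an 8x8 linear system in l1, ..., l8 and solved with cvSolve as described. The control point coordinates are illustrative assumptions.

// Exact solution of the eight 2D DLT coefficients from four calibrated control points.
#include <opencv/cv.h>
#include <cstdio>

int main()
{
    // image space (x, y) in pixels and object space (X, Y) in metres, four points (assumed)
    double img[4][2] = { {152, 431}, {518, 447}, {543, 291}, {204, 280} };
    double obj[4][2] = { {0.0, 0.0}, {6.0, 0.0}, {6.0, 4.0}, {0.0, 4.0} };

    CvMat* A = cvCreateMat(8, 8, CV_64FC1);
    CvMat* b = cvCreateMat(8, 1, CV_64FC1);
    CvMat* L = cvCreateMat(8, 1, CV_64FC1);
    cvZero(A);

    for (int i = 0; i < 4; ++i)
    {
        double x = img[i][0], y = img[i][1];
        double X = obj[i][0], Y = obj[i][1];

        // x-equation: l1*X + l2*Y + l3 - x*X*l7 - x*Y*l8 = x
        cvmSet(A, 2*i, 0, X); cvmSet(A, 2*i, 1, Y); cvmSet(A, 2*i, 2, 1.0);
        cvmSet(A, 2*i, 6, -x*X); cvmSet(A, 2*i, 7, -x*Y);
        cvmSet(b, 2*i, 0, x);

        // y-equation: l4*X + l5*Y + l6 - y*X*l7 - y*Y*l8 = y
        cvmSet(A, 2*i+1, 3, X); cvmSet(A, 2*i+1, 4, Y); cvmSet(A, 2*i+1, 5, 1.0);
        cvmSet(A, 2*i+1, 6, -y*X); cvmSet(A, 2*i+1, 7, -y*Y);
        cvmSet(b, 2*i+1, 0, y);
    }

    cvSolve(A, b, L, CV_LU);                  // l1..l8
    for (int k = 0; k < 8; ++k) printf("l%d = %f\n", k + 1, cvmGet(L, k, 0));

    cvReleaseMat(&A); cvReleaseMat(&b); cvReleaseMat(&L);
    return 0;
}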
When there are more than four control points, to improve precision and reliability, the image space coordinates (x, y) of the control points can be treated as observed values; the corresponding random error corrections (vx, vy) and the nonlinear lens distortions (Δx, Δy) are added to form the error equations (2), and the values of the DLT coefficients are then obtained by iteratively solving the corresponding normal equation (4) by the least squares method.
x + vx + Δx = (l1·X + l2·Y + l3) / (l7·X + l8·Y + 1)
y + vy + Δy = (l4·X + l5·Y + l6) / (l7·X + l8·Y + 1)        (2)

with the lens distortions

Δx = (x - x0)·r²·k1 + p1·[r² + 2(x - x0)²] + 2·p2·(x - x0)·(y - y0)
Δy = (y - y0)·r²·k1 + p2·[r² + 2(y - y0)²] + 2·p1·(x - x0)·(y - y0)

where (x, y) are the image space coordinates; (x0, y0) are the coordinates of the principal point in the image space coordinate system; k1 is the symmetric (radial) lens distortion coefficient to be determined; p1 and p2 are the asymmetric (decentering) lens distortion coefficients to be determined; and r is the radial distance, whose value is

r = sqrt((x - x0)² + (y - y0)²)

If only k1 is retained, the corresponding error equations are expressed in matrix form as

V = M·L - W        (3)

where V is the vector of corrections (vx, vy) of all control points, L = (l1, ..., l8, k1)^T is the vector of unknowns, M is the coefficient matrix of the linearized error equations, and W is the constant vector formed from the observations.
According to the least squares indirect adjustment principle, the normal equation corresponding to this error equation is:

M^T·M·L = M^T·W        (4)
Because the error equations are nonlinear, the whole solution process must use an iterative method; its detailed solution flow is shown in Fig. 3.
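As a rough illustration of this iterative least squares solution (Fig. 3), the sketch below estimates l1, ..., l8 together with a single symmetric distortion coefficient k1. It freezes the denominator l7·X + l8·Y + 1 at the value from the previous iteration so that each pass reduces to a linear least squares problem, which cvSolve with CV_SVD solves in a way equivalent to the normal equation M^T·M·L = M^T·W. The control point values, the choice of the image centre as principal point and the fixed iteration count are assumptions, not the patent's exact error equations.

// Iterative least squares refinement of the 2D DLT coefficients plus k1 (n > 4 points).
#include <opencv/cv.h>
#include <cmath>
#include <vector>
#include <cstdio>

struct Ctrl { double x, y, X, Y; };                    // image (px) and object (m) coords

int main()
{
    std::vector<Ctrl> pts = { {152,431,0,0}, {518,447,6,0}, {543,291,6,4},
                              {204,280,0,4}, {350,360,3,2} };        // assumed, n > 4
    double x0 = 384, y0 = 288;                         // assumed principal point (image centre)

    int n = (int)pts.size();
    CvMat* M = cvCreateMat(2 * n, 9, CV_64FC1);
    CvMat* W = cvCreateMat(2 * n, 1, CV_64FC1);
    CvMat* L = cvCreateMat(9, 1, CV_64FC1);
    cvZero(L);                                         // start with zero coefficients / distortion

    for (int it = 0; it < 10; ++it)                    // fixed number of iterations (assumed)
    {
        cvZero(M);
        for (int i = 0; i < n; ++i)
        {
            const Ctrl& p = pts[i];
            double A  = cvmGet(L,6,0)*p.X + cvmGet(L,7,0)*p.Y + 1.0;   // frozen denominator
            double r2 = (p.x - x0)*(p.x - x0) + (p.y - y0)*(p.y - y0);

            // x:  X*l1 + Y*l2 + l3 - x*X*l7 - x*Y*l8 - A*(x - x0)*r2*k1 = x
            cvmSet(M, 2*i, 0, p.X); cvmSet(M, 2*i, 1, p.Y); cvmSet(M, 2*i, 2, 1.0);
            cvmSet(M, 2*i, 6, -p.x*p.X); cvmSet(M, 2*i, 7, -p.x*p.Y);
            cvmSet(M, 2*i, 8, -A*(p.x - x0)*r2);
            cvmSet(W, 2*i, 0, p.x);

            // y:  X*l4 + Y*l5 + l6 - y*X*l7 - y*Y*l8 - A*(y - y0)*r2*k1 = y
            cvmSet(M, 2*i+1, 3, p.X); cvmSet(M, 2*i+1, 4, p.Y); cvmSet(M, 2*i+1, 5, 1.0);
            cvmSet(M, 2*i+1, 6, -p.y*p.X); cvmSet(M, 2*i+1, 7, -p.y*p.Y);
            cvmSet(M, 2*i+1, 8, -A*(p.y - y0)*r2);
            cvmSet(W, 2*i+1, 0, p.y);
        }
        // Least squares solution; equivalent to solving the normal equation M^T M L = M^T W.
        cvSolve(M, W, L, CV_SVD);
    }

    for (int k = 0; k < 8; ++k) printf("l%d = %g\n", k + 1, cvmGet(L, k, 0));
    printf("k1 = %g\n", cvmGet(L, 8, 0));

    cvReleaseMat(&M); cvReleaseMat(&W); cvReleaseMat(&L);
    return 0;
}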
In the third step, the contact point between one of the vehicle's wheels and the ground is chosen as the observation point; its image space coordinates at different moments are obtained in turn from the continuous frame images through the Visual C++ application, and the object space coordinates of the observation point at the corresponding moments are resolved by combining the values of the two-dimensional DLT coefficients obtained above.
As shown in Fig. 2, the contact point between the left front wheel of the car and the ground is chosen as the observation point. To acquire its image space coordinates, the Visual C++ application first loads the extracted frame images one by one into the client area of the application window, and the point is then selected directly in the image with the mouse, keyboard or a touch screen. To ensure that the observation point has sufficient spacing between two adjacent moments, its image space coordinates are acquired once every n frame images (n is an integer greater than zero).
The object space coordinates of the observation point at the different moments are then obtained by setting up the system of linear equations (5) according to the two-dimensional DLT formula from the image space coordinates at the corresponding moment and the values of the DLT coefficients already found, and solving this system with the API function cvSolve in the OpenCV core module. When there are more than four control points, the image space coordinates of the observation point must first be corrected for distortion, and the corrected values x + Δx and y + Δy are substituted for x and y in equation (5).

(l1 - l7·x)·X + (l2 - l8·x)·Y = x - l3
(l4 - l7·y)·X + (l5 - l8·y)·Y = y - l6        (5)

where (x, y) are the abscissa and ordinate of the observation point in image space, (X, Y) are the abscissa and ordinate of the observation point in object space, and l1, ..., l8 are the two-dimensional DLT coefficients.
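The 2x2 system (5) can be solved directly; the sketch below again uses cvSolve. The DLT coefficient values and the image point are illustrative assumptions (in the more-than-four-control-point case the distortion-corrected coordinates would be passed in instead).

// Recover the object space position (X, Y) of the observation point from its image coordinates.
#include <opencv/cv.h>
#include <cstdio>

int main()
{
    double l[8] = { 55.2, -3.1, 148.0, 4.7, -36.5, 433.0, 0.012, -0.004 };  // assumed l1..l8
    double x = 377.0, y = 352.0;                     // observation point, image space (px)

    double a[4] = { l[0] - l[6]*x, l[1] - l[7]*x,    // (l1 - l7*x)X + (l2 - l8*x)Y = x - l3
                    l[3] - l[6]*y, l[4] - l[7]*y };  // (l4 - l7*y)X + (l5 - l8*y)Y = y - l6
    double b[2] = { x - l[2], y - l[5] };
    double XY[2];

    CvMat A   = cvMat(2, 2, CV_64FC1, a);
    CvMat B   = cvMat(2, 1, CV_64FC1, b);
    CvMat XYm = cvMat(2, 1, CV_64FC1, XY);
    cvSolve(&A, &B, &XYm, CV_LU);

    printf("object space: X = %.3f m, Y = %.3f m\n", XY[0], XY[1]);
    return 0;
}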
In the fourth step, the digital scene reconstruction of the vehicle motion state is carried out. First the trajectory of the vehicle movement is obtained by polynomial fitting from the object space coordinates of the observation point at the different moments; the displacement curve of the vehicle movement is then obtained by the arc length (curvilinear) integral along this trajectory, and the speed curve and acceleration curve of the vehicle movement are determined from the first derivative and second derivative of the displacement curve respectively.
The polynomial fitted to the vehicle trajectory is:

Y = a0 + a1·X + a2·X² + ... + an·X^n        (6)

where n is the degree of the polynomial, a0, a1, ..., an are the polynomial coefficients, and (X, Y) are the abscissa and ordinate of the observation point in object space.
The displacement of the vehicle movement is:

s = ∫ sqrt(1 + (dY/dX)²) dX        (7)

The speed of the vehicle movement is:

v = ds/dt        (8)

The acceleration of the vehicle movement is:

a = dv/dt = d²s/dt²        (9)
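The sketch below illustrates step four numerically: a least squares polynomial fit of the trajectory (6), the arc length integral (7) evaluated with the trapezoid rule, and the speed (8) and acceleration (9) approximated by finite differences of the displacement samples. The observation point data, the frame spacing, the polynomial degree and the use of finite differences instead of analytic derivatives are assumptions.

// Trajectory fit, displacement, speed and acceleration of the observation point.
#include <opencv/cv.h>
#include <cmath>
#include <vector>
#include <cstdio>

int main()
{
    // object space positions of the observation point at equally spaced instants (assumed, metres)
    std::vector<double> X = { 0.0, 2.1, 4.3, 6.4, 8.6, 10.9 };
    std::vector<double> Y = { 0.0, 0.1, 0.3, 0.6, 1.0, 1.5 };
    double dt = 5.0 / 25.0;          // every n = 5 frames at 25 fps (assumed)
    int m = (int)X.size(), deg = 2;  // quadratic trajectory polynomial (assumed)

    // Least squares polynomial fit: Vandermonde system solved with cvSolve(CV_SVD).
    CvMat* V = cvCreateMat(m, deg + 1, CV_64FC1);
    CvMat* b = cvCreateMat(m, 1, CV_64FC1);
    CvMat* a = cvCreateMat(deg + 1, 1, CV_64FC1);
    for (int i = 0; i < m; ++i) {
        double p = 1.0;
        for (int j = 0; j <= deg; ++j) { cvmSet(V, i, j, p); p *= X[i]; }
        cvmSet(b, i, 0, Y[i]);
    }
    cvSolve(V, b, a, CV_SVD);

    // dY/dX of the fitted polynomial.
    auto slope = [&](double x) {
        double s = 0.0, p = 1.0;
        for (int j = 1; j <= deg; ++j) { s += j * cvmGet(a, j, 0) * p; p *= x; }
        return s;
    };

    // Displacement: arc length s_i = integral of sqrt(1 + (dY/dX)^2) dX (trapezoid rule).
    std::vector<double> s(m, 0.0);
    for (int i = 1; i < m; ++i) {
        double f0 = std::sqrt(1.0 + slope(X[i-1]) * slope(X[i-1]));
        double f1 = std::sqrt(1.0 + slope(X[i])   * slope(X[i]));
        s[i] = s[i-1] + 0.5 * (f0 + f1) * (X[i] - X[i-1]);
    }

    // Speed and acceleration as first and second time derivatives of the displacement.
    for (int i = 1; i < m - 1; ++i) {
        double v   = (s[i+1] - s[i-1]) / (2.0 * dt);
        double acc = (s[i+1] - 2.0 * s[i] + s[i-1]) / (dt * dt);
        printf("t = %.2f s: s = %.2f m, v = %.2f m/s, a = %.2f m/s^2\n", i * dt, s[i], v, acc);
    }

    cvReleaseMat(&V); cvReleaseMat(&b); cvReleaseMat(&a);
    return 0;
}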
This embodiment was developed on the Visual C++ platform and interacts with the user through a friendly Windows GUI (Windows graphical user interface). On the basis of the existing traffic monitoring cameras, the digital scene reconstruction of the vehicle motion state is achieved by extracting the relevant information from the monitoring video. The method not only has a wide monitoring range, convenient installation and debugging, and low cost; the measurements can also be repeated and verified, providing a legal basis for later secondary evidence collection.

Claims (4)

1. A digital reconstruction method of a traffic accident scene based on monitoring video, characterized in that it comprises the following four steps:
the first step: according to the initial moment at which the camera detects the vehicle and the termination moment at which the vehicle travels beyond the camera's monitoring field of view, the video sequence within this time period is decomposed into continuous frame images by the video processing functions in the OpenCV HighGUI (high-level graphical user interface) module, and the frame rate of the video sequence and the license plate number of the travelling vehicle are recorded;
the second step: a fixed marker on the road surface whose position and shape remain unchanged throughout the monitoring process is chosen as the fixed reference object; control points are calibrated on this reference object in each frame image in turn; there are at least four control points, no three of which are collinear; then, according to the previously selected coordinate systems, the image space coordinates of the control points and their corresponding object space coordinates are determined, and the values of the corresponding DLT (direct linear transformation) coefficients are solved;
the third step: the contact point between one of the vehicle's wheels and the ground is chosen as the observation point; its image space coordinates at different moments are obtained in turn from the continuous frame images through a Visual C++ application, and the object space coordinates of the observation point at the corresponding moments are resolved by combining the values of the two-dimensional DLT coefficients obtained above;
the fourth step: the trajectory of the vehicle movement is obtained by polynomial fitting from the object space coordinates of the observation point at the different moments obtained above; the displacement curve of the vehicle movement is then obtained by the arc length (curvilinear) integral along this trajectory, and the speed curve and acceleration curve of the vehicle movement are determined from the first derivative and second derivative of the displacement curve respectively, thereby carrying out the digital scene reconstruction of the vehicle motion state;
in the second step, the image space coordinates of the control points are determined in an image space coordinate system whose origin is the top-left vertex of the image, and their object space coordinates are determined in the two-dimensional Cartesian rectangular coordinate system established at the scene; the image space coordinates of each control point and its corresponding object space coordinates are stored in an object constructed from a CPoint2 class (extended point class) derived from the base class CPoint (point class) of MFC (Microsoft Foundation Classes);
in the second step, the DLT coefficients are resolved in two cases: when there are only four control points, a system of linear equations is first set up from the image space coordinates of the control points and their corresponding object space coordinates according to the two-dimensional DLT formula, and this system is then solved with an API (application programming interface) function in the OpenCV (Open Computer Vision) core module to obtain approximate values of the DLT coefficients; when there are more than four control points, to improve precision and reliability, the image space coordinates (x, y) of the control points are treated as observed values, the corresponding random error corrections (vx, vy) and the nonlinear lens distortions (Δx, Δy) are added to form error equations, and the values of the DLT coefficients are then obtained by iteratively solving the corresponding normal equations by the least squares method;
in the third step, to ensure that the observation point has sufficient spacing between two adjacent moments, the image space coordinates of the observation point are acquired once every n frame images; the object space coordinates of the observation point at the different moments are solved in two cases: when there are only four control points, a system of linear equations is set up according to the two-dimensional DLT formula from the image space coordinates at the corresponding moment and the values of the DLT coefficients already found, and this system is solved with the API function in the OpenCV core module; when there are more than four control points, the image space coordinates of the observation point are first corrected for distortion, and the system of equations is then set up according to the two-dimensional DLT formula from the corrected image space coordinates at the corresponding moment and the values of the DLT coefficients already found; said n is an integer greater than zero.
2. The digital reconstruction method of a traffic accident scene based on monitoring video according to claim 1, characterized in that, in the first step, a frame image refers to the picture content of one frame extracted from the video sequence and saved as an image in JPG or BMP format; each frame image corresponds to the motion position of the vehicle at a different moment, and the time interval between two adjacent frame images is determined by the frame rate of the video sequence.
3. The digital reconstruction method of a traffic accident scene based on monitoring video according to claim 1, characterized in that the extraction range of the relevant frames of the video sequence runs from the initial moment at which the camera detects the vehicle to the termination moment at which the vehicle travels beyond the camera's monitoring field of view; this range can be set with the function cvSetCaptureProperty (set video attribute) before the video sequence images are extracted, and to determine the time interval between two adjacent frames, the frame rate of the video sequence can be obtained with the function cvGetCaptureProperty (get video attribute).
4. The digital reconstruction method of a traffic accident scene based on monitoring video according to claim 1, characterized in that, in the second step, the fixed reference object can be a pedestrian crossing, a manhole cover, or another fixed area with obvious features.
CN 201110231117 2011-08-12 2011-08-12 Digital reconstruction method of traffic accident scene based on monitoring videos Active CN102306284B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110231117 CN102306284B (en) 2011-08-12 2011-08-12 Digital reconstruction method of traffic accident scene based on monitoring videos

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110231117 CN102306284B (en) 2011-08-12 2011-08-12 Digital reconstruction method of traffic accident scene based on monitoring videos

Publications (2)

Publication Number Publication Date
CN102306284A CN102306284A (en) 2012-01-04
CN102306284B true CN102306284B (en) 2013-07-17

Family

ID=45380144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110231117 Active CN102306284B (en) 2011-08-12 2011-08-12 Digital reconstruction method of traffic accident scene based on monitoring videos

Country Status (1)

Country Link
CN (1) CN102306284B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121941A (en) * 2016-11-30 2018-06-05 上海联合道路交通安全科学研究中心 A kind of object speed calculation method based on monitoring device
CN107608344B (en) * 2017-08-21 2020-02-14 上海蔚来汽车有限公司 Vehicle motion control method and device based on trajectory planning and related equipment
CN107818685A (en) * 2017-10-25 2018-03-20 司法部司法鉴定科学技术研究所 A kind of method that state of motion of vehicle is obtained based on Vehicular video
CN108447256B (en) * 2018-03-22 2023-09-26 连云港杰瑞电子有限公司 Arterial road vehicle track reconstruction method based on data fusion of electric police and fixed point detector
CN112308786B (en) * 2019-08-01 2023-04-07 司法鉴定科学研究院 Method for resolving target vehicle motion in vehicle-mounted video based on photogrammetry
CN112396557B (en) * 2019-08-01 2023-06-06 司法鉴定科学研究院 Method for resolving vehicle motion in monitoring video based on close-range photogrammetry
CN111951295B (en) * 2020-07-07 2024-02-27 中国人民解放军93114部队 Method and device for determining flight trajectory with high precision based on polynomial fitting and electronic equipment
CN113704374B (en) * 2021-08-25 2022-05-03 河北省科学院应用数学研究所 Spacecraft trajectory fitting method, device and terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101173856A (en) * 2007-08-30 2008-05-07 上海交通大学 Vehicle collision accident reappearance method based on phototopography and exterior profile deformation of car body
CN101604448A (en) * 2009-03-16 2009-12-16 北京中星微电子有限公司 A kind of speed-measuring method of moving target and system
CN102147971A (en) * 2011-01-14 2011-08-10 赵秀江 Traffic information acquisition system based on video image processing technology

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101173856A (en) * 2007-08-30 2008-05-07 上海交通大学 Vehicle collision accident reappearance method based on phototopography and exterior profile deformation of car body
CN101604448A (en) * 2009-03-16 2009-12-16 北京中星微电子有限公司 A kind of speed-measuring method of moving target and system
CN102147971A (en) * 2011-01-14 2011-08-10 赵秀江 Traffic information acquisition system based on video image processing technology

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Ren Xiaoying. Research on Vehicle Speed Detection Technology Based on Image Sequence Analysis. China Master's Theses Full-text Database, 2011.
Research on Vehicle Speed Detection Technology Based on Image Sequence Analysis; Ren Xiaoying; China Master's Theses Full-text Database; 20110530; pp. 1-12, 26-28, 60-64 *
Traffic Accident Information Acquisition and Process Reproduction Based on Digital Photogrammetry; Yang Bo et al.; Automotive Engineering; 20100625; Vol. 32, No. 6; pp. 530-534, 546 *
Yang Bo et al. Traffic Accident Information Acquisition and Process Reproduction Based on Digital Photogrammetry. Automotive Engineering, 2010, Vol. 32, No. 6.

Also Published As

Publication number Publication date
CN102306284A (en) 2012-01-04

Similar Documents

Publication Publication Date Title
CN102306284B (en) Digital reconstruction method of traffic accident scene based on monitoring videos
CN111551958B (en) Mining area unmanned high-precision map manufacturing method
CN104567708B (en) Full section of tunnel high speed dynamical health detection means and method based on active panoramic vision
CN104575003B (en) A kind of vehicle speed detection method based on traffic surveillance videos
CN103325255B (en) The method of region transportation situation detection is carried out based on photogrammetric technology
CN102564431B (en) Multi-sensor-fusion-based unstructured environment understanding method
EP3775777A1 (en) Systems and methods for vehicle navigation
WO2020112827A2 (en) Lane mapping and navigation
WO2020163311A1 (en) Systems and methods for vehicle navigation
US20210341303A1 (en) Clustering event information for vehicle navigation
WO2018015811A1 (en) Crowdsourcing and distributing a sparse map, and lane measurements for autonomous vehicle navigation
CN107194957B (en) The method that laser radar point cloud data is merged with information of vehicles in intelligent driving
CN112380312B (en) Laser map updating method based on grid detection, terminal and computer equipment
WO2020174279A2 (en) Systems and methods for vehicle navigation
CN108364466A (en) A kind of statistical method of traffic flow based on unmanned plane traffic video
CN104280036A (en) Traffic information detection and positioning method, device and electronic equipment
US20230195122A1 (en) Systems and methods for map-based real-world modeling
Józsa et al. Towards 4D virtual city reconstruction from Lidar point cloud sequences
WO2021198775A1 (en) Control loop for navigating a vehicle
Hu et al. A novel approach to extracting street lamps from vehicle-borne laser data
Alamry et al. Using single and multiple unmanned aerial vehicles for microscopic driver behaviour data collection at freeway interchange ramps
CN110415299B (en) Vehicle position estimation method based on set guideboard under motion constraint
Cao et al. Mobile traffic surveillance system for dynamic roadway and vehicle traffic data integration
Shastry et al. Airborne video registration for visualization and parameter estimation of traffic flows
Busch et al. High definition mapping using lidar traced trajectories

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20170106

Address after: No. 18, Science and Technology Road, Science and Education Industrial Park, Huai'an, Jiangsu 223001

Patentee after: North Jiangsu Institute of Shanghai Jiaotong University

Address before: No. 800, Dongchuan Road, Minhang District, Shanghai 200240

Patentee before: SHANGHAI JIAO TONG University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220823

Address after: 201109 floor 1, building 5, No. 951, Jianchuan Road, Minhang District, Shanghai (centralized registration place)

Patentee after: Yuhai network technology (Shanghai) Co.,Ltd.

Address before: No. 18, Science and Technology Road, Science and Education Industrial Park, Huai'an, Jiangsu 223001

Patentee before: North Jiangsu Institute of Shanghai Jiaotong University