CN1804927A - Omnibearing visual sensor based road monitoring apparatus - Google Patents

Omnibearing visual sensor based road monitoring apparatus

Info

Publication number
CN1804927A
Authority
CN
China
Prior art keywords
vehicle
formula
value
color
pixel
Prior art date
Legal status
Granted
Application number
CN 200510062304
Other languages
Chinese (zh)
Other versions
CN100419813C (en)
Inventor
汤一平
叶永杰
金顺敬
顾校凯
高飞
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CNB2005100623041A
Publication of CN1804927A
Application granted
Publication of CN100419813C
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a road monitoring apparatus based on an omnidirectional vision sensor, comprising a microprocessor and a monitoring sensor for observing the road condition, the monitoring sensor being an omnidirectional vision sensor. The vision sensor monitors moving vehicles and transmits the captured sequence of omnidirectional images to a computer, which measures vehicle speed and counts traffic volume through image preprocessing, color space conversion, vehicle detection, vehicle speed detection, background updating and related steps.

Description

Road monitoring apparatus based on omnibearing vision sensor
(1) Technical field
The present invention relates to a road monitoring apparatus based on an omnidirectional vision sensor.
(2) Background technology
Speeding is a key cause of traffic accidents. Traditionally, speeding is detected by law-enforcement officers on the highway using speed guns. The drawbacks of this approach are that it ties up a large amount of police manpower, officers tire easily, it cannot operate around the clock, and it cannot provide strong evidence. A speeding-violation enforcement system based on license plate recognition can detect speeding vehicles automatically and intercept them at road exits or checkpoints. Such a system can work around the clock and can promptly provide the time, place, photograph and speed of the speeding vehicle as evidence.
At present there are three main means of detecting speeding vehicles: 1. portable speed measurement; 2. fixed-point speed measurement; 3. average-speed measurement over the distance and time between two sites.
1. Portable speed measurement. A speed gun is placed in a police car and aimed at the vehicle under test while the police car patrols the road. Normal travel speeds for the various vehicle classes are preset in the system according to the regulations on highway travel speeds (the speed-limit range can be relaxed slightly to allow for measurement error), and the capture system automatically photographs or records video when a speeding vehicle is measured. The main advantages of this approach are: 1) speeding vehicles can be fined on the spot when they pay the toll at the expressway exit, which improves the enforcement efficiency of the public security department; 2) because photographic and numerical evidence is available, the fairness and impartiality of enforcement are improved. There are also drawbacks: 1) the relative velocity between the police car and the measured vehicle must be handled, which is technically cumbersome; 2) drivers' training and awareness vary, so some drivers slow down when they see a patrol car ahead but speed again on sections without patrols, and reckless driving cannot be prevented fundamentally; 3) round-the-clock, whole-road-section measurement is impossible; 4) patrol-based measurement increases the manpower and material cost of the public security department.
2. Fixed-point speed measurement. A camera is installed in the median strip or at the roadside of the monitored section, and two groups of induction coils are buried in the road surface a known distance apart. The system times the vehicle as it passes over the two coil groups and calculates the travel speed as speed = distance / time, the distance between the coil groups being known. When the detected speed exceeds the preset travel speed (the speed settings for the various vehicle classes are the same as for the portable method), the camera captures the vehicle automatically, and the information on the speeding vehicle is transmitted to a terminal for processing and printing. Advantages: 1) because the system is permanently installed on the road, the manpower and material cost of patrolling is saved; 2) captured photographs likewise serve as evidence, improving the fairness of enforcement; 3) round-the-clock measurement is possible. The main drawback is that when an induction coil fails, the road must be closed and the pavement excavated for maintenance, which endangers maintenance personnel; because overloading is currently widespread, the coils must be maintained roughly every year, which not only increases maintenance cost but also puts further pressure on already busy road traffic.
3. Average-speed measurement between two sites. This method is linked with the existing toll collection system and uses the captured pictures and the entry/exit information of each vehicle to determine whether it has exceeded the speed limit, achieving information sharing and comprehensive use of resources. The method is as follows: a speed-measurement system is set up, and the entry station name, entry time, exit station name, exit time, vehicle picture (license plate), vehicle type and other information of each vehicle in the data processing server are passed to it. Because the highway is fully enclosed, the distance between any two gateways is known; using speed = distance / time (the time being the difference between the vehicle's exit time and entry time on the highway), the system automatically calculates the average speed of each vehicle while on the highway. If this speed exceeds the cruising speed preset for that vehicle class, the vehicle has been speeding on the highway. For example, a car enters the highway at station 1 and exits at station 2; the preset minimum travel time from station 1 to station 2 is 2 hours, but the car took only 1.5 hours, so by speed = distance / time it was speeding. The system files all speeding vehicles to a designated terminal; data of vehicles that did not speed are kept temporarily in the server and removed automatically after a certain period, the retention time (for example one week or longer) being set according to management needs and server capacity.
In recent years Michalopoulos P.G. and others abroad have developed a new detection method, video detection, which has been applied successfully to traffic-flow detection and traffic control. It is a detection technique based on video images, combining digital video with pattern recognition. Compared with traditional detectors, video detectors have notable advantages:
(1) Complete detection means: most traffic-flow data can be detected, including traffic volume, vehicle speed and occupancy, and traffic accidents can be detected automatically.
(2) Large-area detection, which facilitates traffic management and control.
(3) Installation does not require contact with the road itself, and maintenance is easy.
Shortcomings also exist: (1) the installation accuracy required of the video camera is high; (2) the process of detecting and tracking moving objects is complex and the algorithms are complicated; (3) video processing is slow, so real-time performance is poor.
(3) Summary of the invention
To overcome the high installation-accuracy requirement, complicated algorithms and poor real-time performance of existing road-monitoring video devices, the invention provides a road monitoring apparatus based on an omnidirectional vision sensor whose installation position is free, whose algorithms are simple, and which works in real time.
The technical solution adopted by the present invention to solve its technical problem is:
A road monitoring apparatus based on an omnidirectional vision sensor, the apparatus comprising a microprocessor and a monitoring sensor for monitoring the road condition. The microprocessor comprises: an image data reading module for reading the video image information transmitted from the omnidirectional vision sensor; an image data file storage module for saving the read video image information to a storage unit as files; an on-site real-time playback module for playing back the read video images in real time; and a network transmission module for transmitting the read video image information to the network through a communication module. The monitoring sensor is an omnidirectional vision sensor comprising a convex catadioptric mirror for reflecting objects in the monitored field, a dark cone for preventing light refraction and light saturation, a transparent cylinder, and a camera. The convex catadioptric mirror is located at the top of the transparent cylinder and faces downward, the dark cone is fixed at the centre of the convex part of the catadioptric mirror, the camera faces the convex mirror surface upward, and the camera is located at the virtual focus of the convex mirror surface;
The microprocessor also comprises:
a sensor calibration module, used to establish a correspondence between the spatial road image and the acquired video image, the real-scene coordinates and the image coordinates being linear in the horizontal direction; a virtual detection line setting module, used to set up several virtual detection lines which, through the correspondence between the spatial road-plane image and the acquired video image, correspond to detection lines on the real road;
an image stretching processing module, used to expand the read circular video image into a rectangular panoramic image;
a color space conversion module, used to convert the traffic image from RGB color space to YUV space;
a vehicle judging module, used to determine from the grey-level changes counted in the detection line region whether a target vehicle enters or leaves the detection line;
a vehicle judging module based on the YUV model, used to compare the color values in successive frames by means of the characteristics of the YUV color space so as to identify vehicle information:
The virtual detection line is divided into N uniform segments, each segment length being related to the geometric scale of the smallest vehicle. Let the minimum vehicle length be Car_wmin, the number of segments S_num, and the detection line total length M; the minimum detection primitive size can be expressed by formulas (23) and (24):
BL = Car_wmin / S_num   (23)
N = M / BL   (24)
For each of the N uniformly divided sub-segments, a judgment is made with formula (25):

PS = Σ_{i=0}^{N} DS(i)   (25)

where DS(i) is 1 when the frame-difference accumulation mean SD of sub-segment i exceeds T2 and 0 otherwise; T2 is the threshold for the variation of the difference between the vehicle signal and the background signal components; SD is the frame-difference accumulation mean, computed by formula (26):
SD = (1/n) Σ_{i=0}^{n-1} |I_c(i) − I_B(i)|²   (26)
where I_c(i) is the color value of the i-th pixel of the current frame and I_B(i) is the color value of the corresponding i-th pixel of the background frame;
If PS > Car_wmin/2, a vehicle is judged to be present; otherwise no vehicle is present.
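A minimal Python sketch of the segment-wise presence test of formulas (23)-(26) follows; the function and parameter names are illustrative, and reading DS(i) as a per-segment indicator thresholded by T2 is an assumption of this example.

```python
import numpy as np

def vehicle_on_detection_line(current, background, car_wmin, s_num, t2):
    """Segment-wise presence test sketched from formulas (23)-(26).

    `current` and `background` are 1-D arrays of color values sampled along
    the virtual detection line; names and types are illustrative assumptions.
    """
    m = len(current)
    bl = car_wmin / s_num            # minimum detection primitive size, formula (23)
    n_seg = int(m // bl)             # number of sub-segments, formula (24)
    ps = 0
    for i in range(n_seg):
        lo, hi = int(i * bl), int((i + 1) * bl)
        seg_c = np.asarray(current[lo:hi], dtype=float)
        seg_b = np.asarray(background[lo:hi], dtype=float)
        sd = np.mean(np.abs(seg_c - seg_b) ** 2)   # frame-difference accumulation mean, formula (26)
        ps += 1 if sd > t2 else 0                  # DS(i) summed into PS, formula (25)
    return ps > car_wmin / 2                       # presence decision as stated in the text
```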
a vehicle speed detection module, used to calculate the vehicle speed by least squares from the series of values obtained from successive frames in vehicle detection, and to convert it into the vehicle speed on the real road according to the calibration relation:
Assuming the vehicle travels at constant speed within the monitoring range, the relation between the vehicle's travel distance and time is expressed by formula (28):
l_i = α + β·t_i + ε_i   (28)
where the detected value l_i of the monitored vehicle's motion track is the distance the vehicle moved between frame i and frame (i+1), t_i is the time the vehicle took to move between frame i and frame (i+1), and i is a natural number;
The least squares method is used to compute the estimates (α̂, β̂) of the unknown parameters (α, β) in formula (28), such that formula (29) attains its minimum:
Σ_{i=1}^{n} (l_i − α̂ − β̂·t_i)² = min_{α,β} Σ_{i=1}^{n} (l_i − α − β·t_i)²   (29)
The estimate β̂, solved by setting the partial derivatives to zero, is the vehicle speed estimate, expressed by formula (30):
β̂ = Σ_{i=1}^{n} (t_i − t̄)(l_i − l̄) / Σ_{i=1}^{n} (t_i − t̄)²   (30)
where t̄ and l̄ are the means of the times and distances, respectively.
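A minimal numerical sketch of the estimator in formula (30), assuming (as in the standard least-squares slope) that the denominator is the sum of squared deviations of t:

```python
import numpy as np

def estimate_speed(t, l):
    """Least-squares slope of distance versus time, formula (30).

    t[i] and l[i] are the inter-frame time and the calibrated distance the
    vehicle moved between frame i and frame i+1.
    """
    t = np.asarray(t, dtype=float)
    l = np.asarray(l, dtype=float)
    t_bar, l_bar = t.mean(), l.mean()
    # slope of the fitted line l = alpha + beta * t
    beta_hat = np.sum((t - t_bar) * (l - l_bar)) / np.sum((t - t_bar) ** 2)
    return beta_hat  # speed estimate in calibrated distance units per time unit
```

Multiplying the returned estimate by the calibration relation described above converts it into the vehicle speed on the real road.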
Further, the vehicle judging module works as follows: the grey levels Gb of the road surface pixels are uniformly distributed between grey-white and dark, while the grey levels Gv of the pixels of a vehicle travelling on the road differ from the grey levels Gb of the background road surface. A grey threshold TH is preset with Gv < TH < Gb, a critical value NTH is set for the number of pixels in the detection line region whose grey level lies between Gv and TH, and the number of such pixels in the detection line region is counted:
if the counted number is less than the critical value NTH, no vehicle is judged to be passing the detection line region;
if the counted number increases to the critical value NTH, a vehicle is judged to have entered the detection line region;
if the counted number decreases to the critical value NTH, the vehicle is judged to have left the detection line region.
Further, the microprocessor also comprises a background refresh module, used to update the background according to the vehicle detection result and the selected background refresh strategy, so as to adapt to the dynamic changes of the road scene. In the Y, U, V color model the initial background is determined first; each component of the initial background is calculated with formula (31):
I_0 = (1/N) Σ_{k=1}^{N} I_k(x)_color,   color = Y, U, V   (31)
where N is an empirical value and I_k(x)_color (color = Y, U, V) denotes the Y, U, V color component values at position x in the k-th frame;
The condition for judging whether the pixel at x is a foreground pixel is:
[foreground-pixel decision condition; shown only as an image in the original]
where Y_T, U_T, V_T are the thresholds for the Y, U, V components respectively;
when the above condition is satisfied, the pixel at x is a foreground pixel; otherwise it is a background pixel;
The background refresh calculation is formula (32):
[background refresh formula (32); shown only as an image in the original]
where B_t(x) denotes the Y, U, V component values of the background at time t, I_{t+Δt}(x) denotes the Y, U, V component values at time t+Δt, and α is a coefficient less than 1.
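The sketch below shows the initial background of formula (31) and a selective running-average refresh in the spirit of formula (32); since (32) appears only as an image in the source, the exponential update rule, the α value and the mask layout are assumptions of this example.

```python
import numpy as np

def initial_background(frames):
    """Per-channel mean of the first N frames, formula (31); `frames` is a
    list of HxWx3 YUV arrays."""
    return np.mean(np.stack(frames).astype(float), axis=0)

def refresh_background(background, frame, foreground_mask, alpha=0.05):
    """Selective exponential update assumed for formula (32).

    Pixels flagged as foreground (vehicles) keep the old background value;
    background pixels are blended toward the current frame with alpha < 1.
    `foreground_mask` is an HxW boolean array.
    """
    updated = (1.0 - alpha) * background + alpha * frame.astype(float)
    return np.where(foreground_mask[..., None], background, updated)
```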
Further, the microprocessor also comprises a speed tracking module, which uses inter-frame pixel difference statistics based on the YUV model as the pattern feature index:
The criterion for judging whether a vehicle passes over each pixel is given by formula (26):
where Y′ = 255·Y / Y_B
where Y_B is the mean luminance component value of the background, Y, U, V are the pixel color values, and Y_T, U_T, V_T are thresholds;
When formula (26) judges that a vehicle is passing, tracking of the speed-detection feature points begins; then, from the auxiliary detection end of the speed detection line toward the main detection line, the pixel values are compared one by one, the comparison criterion being given by formula (27):
|f_t(x) − g(x)| > T   (27)
where x_0 ≤ x ≤ x_0 + M, M is the comparison window, x is a point on the speed detection line, f_t(x) is the current color value of the pixels in this region, g(x) is the background color value of the pixels in this region, and T denotes a threshold;
If (27) holds, a block of memory is opened to record all pixel colors of this region. When the next frame arrives, the above tracking calculation is carried out again, and the region color values found in this frame are compared with those last recorded in memory; if formula (27) is satisfied, the speed tracking is considered successful. Tracking continues until the vehicle is determined to have left, and the vehicle track is recorded.
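A small sketch of the window comparison of formula (27); treating the window as changed when any pixel difference exceeds T is one plausible reading and an assumption of this example.

```python
import numpy as np

def region_changed(f_t, g, threshold):
    """Formula (27): compare current and background colors inside the
    comparison window on the speed detection line.

    f_t and g are arrays of the current and background color values of the
    window pixels; returns True when the window is judged to have changed.
    """
    diff = np.abs(np.asarray(f_t, dtype=float) - np.asarray(g, dtype=float))
    return bool(np.any(diff > threshold))  # "any pixel exceeds T" is an assumption
```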
The image stretching processing module works as follows: according to the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular panorama, a mapping between (x**, y**) and (x*, y*) is established, and relation (21) is set up through the mapping matrix M:
P**(x**, y**) ← M × P*(x*, y*)   (21)
where P*(x*, y*) denotes the pixels on the imaging plane (the circular omnidirectional image), P**(x**, y**) the corresponding points on the unwrapped panoramic image, and M the mapping matrix.
The color space conversion module converts from RGB color space to YUV space using formula (22):
Y=0.301*R+0.586*G+0.113*B
U=-0.301*R-0.586*G+0.887*B (22)
V=0.699*R-0.586*G-0.113*B
where Y represents the luminance of the YUV color model, U and V are the two chrominance components of the YUV color model, representing color differences, and R, G and B represent the red, green and blue components of the RGB color space.
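For illustration, a sketch applying the conversion of formula (22) to an RGB image array (channel order and array layout are assumptions):

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an (..., 3) array with channels in R, G, B order to YUV using
    the coefficients of formula (22)."""
    m = np.array([[ 0.301,  0.586,  0.113],   # Y row
                  [-0.301, -0.586,  0.887],   # U row
                  [ 0.699, -0.586, -0.113]])  # V row
    return rgb.astype(float) @ m.T
```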
The working principle of the invention is as follows. The optical part of the ODVS camera device consists mainly of a catadioptric mirror facing vertically downward and a camera facing upward. Concretely, the imaging unit composed of a collecting lens and a CCD is fixed at the bottom of a cylinder of transparent resin or glass; a catadioptric mirror of large downward curvature is fixed at the top of the cylinder; between the catadioptric mirror and the collecting lens there is a cone whose diameter gradually decreases, fixed at the middle of the catadioptric mirror. The purpose of the cone is to prevent the light saturation caused by excess light entering the interior of the cylinder. Fig. 2 is a schematic diagram of the optical system of the omnidirectional imaging device of the invention.
A catadioptric omnidirectional imaging system can be analyzed with the pinhole imaging model, but obtaining a perspective panorama requires inverse projection of the acquired real-scene image, so the amount of computation is large; in particular, when monitoring vehicles travelling at high speed, the real-time requirement must be satisfied. In general, the omnidirectional vision device used for speed measurement and traffic-flow monitoring is used to grasp the catadioptric view of the whole road; automatic evidence capture and license plate recognition for violating vehicles must be completed by another camera device, which is responsible for acquiring the relevant partial image of the violation.
A linear relation between the horizontal coordinate of an object point in the scene and the coordinate of the corresponding image point guarantees that the horizontal scene is undistorted. The omnidirectional vision device for speed measurement and traffic-flow monitoring is installed about 3 meters above the road surface to monitor the vehicles on the road in the horizontal direction, so the catadioptric mirror of the omnidirectional vision device must be designed to be distortion-free in the horizontal direction.
In the design, a CCD (CMOS) device and an imaging lens are first selected to form the camera; the overall dimensions of the system are estimated on the basis of the calibration of the camera's internal parameters, and the mirror surface shape parameters are then determined according to the field of view in the height direction.
As shown in Fig. 1, the projection centre C of the camera is at height h above the horizontal scene of the road, and the vertex of the mirror is above the projection centre at distance z0 from it. In the present invention a coordinate system is set up with the camera projection centre as origin, and the mirror profile is represented by the function z(x). A pixel q at distance ρ from the image centre receives the light from a point O of the horizontal scene (at distance d from the Z axis) reflected at point M of the mirror onto the image plane. An undistorted horizontal scene requires that the horizontal coordinate of a scene object point and the coordinate of the corresponding image point be linear:
d(ρ)=αρ (1)
where ρ in formula (1) is the distance of the image point from the image centre, and α is the magnification of the imaging system.
Let the angle between the mirror normal at M and the Z axis be γ, the angle between the incident ray and the Z axis be φ, and the angle between the reflected ray and the Z axis be θ. Then:
tan φ = (d(x) − x) / (z(x) − h)   (2)
tan γ = dz(x)/dx   (3)
tan(2γ) = 2·(dz(x)/dx) / (1 − (dz(x)/dx)²)   (4)
tan θ = x / z(x) = ρ / f   (5)
By the law of reflection, 2γ = φ − θ, so
tan(2γ) = tan(φ − θ) = (tan φ − tan θ) / (1 + tan φ · tan θ)   (6)
From formulas (2), (4), (5) and (6), the differential equation (7) is obtained:
(dz(x)/dx)² + 2k·(dz(x)/dx) − 1 = 0   (7)
where k = (z(x)·[z(x) − h] + x·[d(x) − x]) / (z(x)·[d(x) − x] + x·[z(x) − h])   (8)
From formula (7), the differential equation (9) is obtained:
dz(x)/dx + k − √(k² + 1) = 0   (9)
From formulas (1) and (5), formula (10) is obtained:
d(x) = a·f·x / z(x)   (10)
From formulas (8), (9), (10) and the starting condition, the differential equation can be solved to obtain a numerical solution for the mirror surface shape. The main overall dimensions of the system are the distance H0 of the mirror from the camera and the mirror aperture D0. When designing the catadioptric panoramic system, a suitable camera is selected according to the application requirements and R_min is calibrated; the focal length f of the lens determines the distance H0 of the mirror from the camera, and the mirror aperture D0 is calculated from formula (1).
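The following sketch integrates the profile equation (9) numerically with a simple forward Euler scheme, using d(x) from formula (10) and k from formula (8) as printed; the starting condition z(0) = z0, the coordinate sign conventions and the step count are illustrative assumptions, not the patent's own procedure.

```python
import numpy as np

def mirror_profile(z0, h, af, x_max, steps=2000):
    """Numerical solution of the mirror profile ODE (9), for illustration only."""
    xs = np.linspace(1e-6, x_max, steps)
    dx = xs[1] - xs[0]
    z = z0                                   # assumed starting condition z(0) = z0
    profile = []
    for x in xs:
        profile.append((x, z))
        d = af * x / z                                                 # formula (10)
        k = (z * (z - h) + x * (d - x)) / (z * (d - x) + x * (z - h))  # formula (8)
        dzdx = np.sqrt(k * k + 1.0) - k                                # formula (9)
        z += dzdx * dx                                                 # forward Euler step
    return profile
```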
Determination of the system parameters:
The system parameter af is determined according to the required field of view in the height direction. Formula (11) is obtained from formulas (1), (2) and (5); the simplification z(x) ≈ z0 is made here, the main consideration being that the height variation of the mirror surface is small compared with the positions of the mirror and the camera:
tan φ = (af − z0)·ρ / (f·(z0 − h))   (11)
At the largest circle in the image plane centred on the image centre, ρ = R_min, so ω_max = R_min / f;
the corresponding field of view is φ_max, and formula (12) then follows:
af = (z0 − h)·tan φ_max / ω_max + z0   (12)
The imaging simulation is carried out in the direction opposite to the actual light. Assume a light source at the camera projection centre and select pixel points at equal intervals in the image plane; the rays through these pixels intersect the horizontal plane after being reflected by the mirror. If the intersection points are equally spaced, the mirror has the property of an undistorted horizontal scene. The imaging simulation can, on the one hand, evaluate the imaging behaviour of the mirror and, on the other hand, calculate the mirror aperture and thickness exactly.
Two key issues involved in implementing the invention, calibration and recognition, are further explained:
(1) How to calibrate the correspondence between pixel distances in the imaging plane of the omnidirectional vision sensor and actual three-dimensional distances. Because the imaging plane of the omnidirectional vision camera is two-dimensional and measured in pixels, when a vehicle is observed to travel a certain distance on the imaging plane, only its pixel distance is known after calibration; the actual distance travelled by the vehicle is unknown. Only by finding the correspondence between the two can the actual displacement of the vehicle be calculated from the distance it moves in the image.
(2) The recognition algorithm for vehicles in the field of view of the omnidirectional vision camera: when a vehicle passes the virtual detection line, how the system should recognise and record the moment the vehicle passes.
Calibrating distances in the field of view of the omnidirectional vision camera involves imaging geometry; projecting the three-dimensional scene of the objective world onto the two-dimensional image plane of the camera requires a camera model. Only by determining the physical parameters and orientation parameters of the camera can the metric of the image plane be fixed, and hence the actual distance travelled by the vehicle be calculated.
Image transformation involves conversions between different coordinate systems. The imaging system of the camera involves the following four coordinate systems: (1) the real-world coordinate system XYZ; (2) the coordinate system x̂ŷẑ centred on the camera; (3) the image coordinate system x*y*o* formed in the camera; (4) the computer image coordinate system MN used for the digital image inside the computer, whose unit is the pixel.
From the transformation relations between the above coordinate systems, the required omnidirectional vision camera imaging model can be obtained, converting the two-dimensional image into its correspondence with the three-dimensional scene. In the present invention an approximate perspective imaging analysis of the catadioptric omnidirectional imaging system is adopted to convert the two-dimensional image formed in the image-plane coordinates of the camera into its correspondence with the three-dimensional scene. Fig. 3 shows the general perspective imaging model, where d is the object height, ρ the image height, t the object distance and F the image distance (equivalent focal length). Formula (13) is obtained:
d = (t / F)·ρ   (13)
In the design of the above distortion-free horizontal-scene catadioptric omnidirectional imaging system, the horizontal coordinate of a scene object point and the coordinate of the corresponding image point are required to be linear, as expressed by formula (1). Comparing formulas (13) and (1), it can be seen that the imaging of the horizontal scene by the distortion-free catadioptric omnidirectional imaging system is perspective imaging. Therefore, as far as horizontal-scene imaging is concerned, the catadioptric omnidirectional imaging system with an undistorted horizontal scene can be regarded as a perspective camera, α being the magnification of the imaging system. Let the projection centre of this virtual perspective camera be point C (see Fig. 3) and its equivalent focal length be F. Comparing formulas (13) and (1) gives formula (14):
α = t / F;   t = h   (14)
From formulas (12) and (14), formula (15) is obtained:
F = f·h·ω_max / ((z0 − h)·tan φ_max + z0·ω_max)   (15)
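As a reading aid for formula (15), a small helper computing the equivalent focal length F (φ_max in radians, ω_max = R_min/f as defined above; the function name is illustrative):

```python
import math

def equivalent_focal_length(f, h, z0, omega_max, phi_max):
    """Equivalent focal length of the virtual perspective camera, formula (15)."""
    return f * h * omega_max / ((z0 - h) * math.tan(phi_max) + z0 * omega_max)
```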
A system imaging simulation is carried out according to the above omnidirectional vision camera imaging model. After the family of rays emitted from the camera projection centre through equally spaced pixels in the pixel plane is reflected by the mirror, the intersection points on the horizontal road surface 3 m from the projection centre are essentially equally spaced, as shown in Fig. 4. Therefore, following the above design idea, this patent reduces the relation between the coordinates of the horizontal road surface and the coordinates of the corresponding omnidirectional image points to a linear relation; that is, through the design of the mirror surface, the conversion from the real-world coordinate system XYZ to the image coordinate system can be treated as a linear proportion with magnification α. Next is the conversion from the image coordinate system to the coordinate system used by the digital image inside the computer. The image coordinate unit used in the computer is the number of discrete pixels in memory, so the coordinates on the actual image plane must also be rounded before they can be mapped to the computer imaging plane; the conversion is given by formula (16):
M = (O_m − x*) / S_x;   N = (O_n − y*) / S_y   (16)
where O_m and O_n are respectively the row and column of the pixel onto which the origin of the image plane is mapped in the computer image plane, and S_x and S_y are the scale factors in the x and y directions. S_x and S_y are determined by placing a calibration board at distance Z between the camera and the mirror surface and calibrating the camera to obtain their values, in pixels; O_m and O_n are determined, in pixels, according to the resolution of the selected camera.
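A sketch of the conversion of formula (16) from image-plane coordinates to computer pixel coordinates; how the rounding is applied is an assumption here:

```python
def to_pixel_coords(x_star, y_star, o_m, o_n, s_x, s_y):
    """Map image-plane coordinates (x*, y*) to integer pixel indices (M, N)
    following formula (16) as reconstructed above."""
    m = int(round((o_m - x_star) / s_x))
    n = int(round((o_n - y_star) / s_y))
    return m, n
```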
Further, the principle of 360° omnidirectional imaging is described with reference to Fig. 6. A point A(x1, y1, z1) in space (shown as a five-pointed star in the figure) is specularly reflected by the catadioptric mirror 1 onto lens 4, giving a projected point P1(x*1, y*1); the light passing through lens 4 becomes parallel light and is projected onto the CCD imaging unit 5. The microprocessor 6 reads in this annular image through the video interface and uses software to unwrap it into an omnidirectional image, which is displayed on the display unit 7 or published to a web page by a video server.
Further, for the unwrapping method, this patent adopts a fast approximate unwrapping algorithm, which minimises time consumption and the requirements on various parameters while retaining as much useful information as possible. In the subsequent steps of the algorithm, the β component, i.e. the azimuth angle information, is what is needed most, while some deformation in the vertical direction has almost no influence on the result. The fast approximate unwrapping algorithm is shown in Fig. 6. In Fig. 6, panel B) is the circular omnidirectional image, in which the inner radius is r, the outer radius is R, and the region between them is the effective region of the image. It is unwrapped into the rectangular panorama C) on the right according to three rules:
(1) the X* axis is the reference position, and unwrapping proceeds counterclockwise;
(2) the intersection point O of the X* axis and the inner circle of radius r in the left figure corresponds to the origin O**(0, 0) at the lower left corner of the right figure;
(3) the width of the unwrapped right figure equals the circumference of the circle shown as the dashed line in the left figure; the dashed circle is concentric with the outer circle and has radius r1 = (r + R)/2.
Let the centre O* of the circular image in Fig. 6B) have coordinates (x*0, y*0), the origin at the lower left corner of the unwrapped rectangle be O**(0, 0), and any point P** = (x**, y**) of the rectangle C) correspond to the coordinates (x*, y*) in the circular image. What is needed is the correspondence between (x*, y*) and (x**, y**). From the geometric relations the following formulas are obtained:
β = tan⁻¹(y*/x*)   (17)
r1=(r+R)/2 (18)
The radius of the dashed circle is set to r1 = (r + R)/2 so that the deformation of the unwrapped image appears uniform.
x* = y* / tan(2x** / (R + r))   (19)
y* = (y** + r)·cos β   (20)
Formulas (19) and (20) give the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular panorama. This method is in essence an image interpolation process. After unwrapping, the image above the dashed line has been compressed horizontally, the image below the dashed line has been stretched horizontally, and the points on the dashed line itself remain unchanged.
To satisfy the requirement of real-time calculation, the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular panorama is likewise used to build a mapping between (x**, y**) and (x*, y*). Because of this one-to-one relation, the image can be transformed into an undeformed panoramic image by the mapping-matrix method. Relation (21) is established through the mapping matrix M:
P**(x**, y**) ← M × P*(x*, y*)   (21)
According to formula (21), every pixel P*(x*, y*) on the imaging plane has a corresponding point P**(x**, y**) on the panoramic image; once the mapping matrix M is built, real-time image processing is simplified to a table-lookup operation on the distorted omnidirectional image obtained on the imaging plane, generating an undeformed omnidirectional image that is shown on display 7, kept in storage unit 8, or published through a Web service to the management system of the road supervision department or to provide road traffic-flow information services.
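The sketch below precomputes a lookup table that plays the role of the mapping matrix M, using a standard polar unwrapping consistent with the intent of formulas (17)-(21); the axis convention, the panorama size and the nearest-neighbour lookup are assumptions of this example.

```python
import numpy as np

def build_unwrap_maps(cx, cy, r, R):
    """Precompute panorama -> circular-image source coordinates.

    (cx, cy) is the column/row of the circle centre; r and R are the inner
    and outer radii of the effective annulus.
    """
    r1 = (r + R) / 2.0                       # radius of the dashed circle, formula (18)
    width = int(round(2.0 * np.pi * r1))     # panorama width = circumference of that circle
    height = int(round(R - r))               # radial extent of the annulus
    xx, yy = np.meshgrid(np.arange(width), np.arange(height))
    beta = 2.0 * xx / (R + r)                # azimuth of panorama column x**
    rho = yy + r                             # radial distance, as in formula (20)
    map_x = cx + rho * np.cos(beta)          # source column x* on the circular image
    map_y = cy + rho * np.sin(beta)          # source row y* on the circular image
    return map_x, map_y

def unwrap(circular_img, map_x, map_y):
    """Apply the precomputed mapping with a nearest-neighbour lookup."""
    xi = np.clip(np.round(map_x).astype(int), 0, circular_img.shape[1] - 1)
    yi = np.clip(np.round(map_y).astype(int), 0, circular_img.shape[0] - 1)
    return circular_img[yi, xi]
```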
The OmniDirectional Vision Sensor (ODVS) provides a new solution for obtaining panoramic images of a scene in real time. The characteristics of the ODVS are a wide field of view (360 degrees): the information of the hemispherical field of view can be compressed into one image, and the information content of that image is large; when acquiring a scene image, the placement of the ODVS in the scene is freer; the ODVS does not need to be aimed at a target when monitoring the environment; the algorithms for detecting and tracking moving objects within the monitoring range are simpler; and real-time images of the scene can be obtained. The ODVS camera consists mainly of a CCD camera and a mirror facing the camera. The mirror reflects the image of one full horizontal circle onto the CCD camera, so the environmental information over 360° in the horizontal direction can be obtained in a single image. This omnidirectional vision camera has outstanding advantages and, under real-time panoramic processing requirements, is an especially fast and reliable means of visual information acquisition. On the other hand, this image acquisition mode also means that the obtained omnidirectional image inevitably exhibits a certain amount of compression and deformation, which affects its observation accuracy for distant objects.
The ODVS camera can capture everything in the hemispherical field of view in all directions and compress it into one image with a large information content; the placement of the ODVS in the scene is free, no aiming is needed during monitoring, the detection and tracking algorithms are simpler, and real-time scene images can be obtained. At the same time, because omnidirectional vision is a typical form of machine vision that a person cannot possess, and because the principle of camera image acquisition differs from that of human observation, the omnidirectional image differs greatly from what the human eye sees; even after cylindrical unwrapping, its deformation still exists. The question, therefore, is how to provide the intelligent traffic management field with a fast and reliable means of collecting traffic-flow visual information through omnidirectional optical imaging, computer image processing and network communication technology, and, from the real-time omnidirectional images obtained by the ODVS camera, to judge by computation whether a moving vehicle is violating regulations while also obtaining real-time traffic-flow information.
The beneficial effects of the present invention are mainly: 1) the installation position of the omnidirectional vision sensor is free, and monitoring the environment requires no aiming; 2) the algorithms are simple; 3) it operates in real time and can collect visual information quickly and reliably.
(4) Description of drawings
Fig. 1 is the optical schematic diagram of the omnidirectional vision system.
Fig. 2 is the structural schematic diagram of the road monitoring apparatus.
Fig. 3 is a schematic diagram of the equivalence between the perspective projection imaging model of the omnidirectional vision device and the general perspective imaging model.
Fig. 4 is a simulation diagram of the distortion-free imaging of the omnidirectional vision device in the horizontal direction.
Fig. 5 is a schematic diagram of the road-monitoring virtual detection trigger area and the lane division of the omnidirectional vision device in the horizontal direction.
Fig. 6 is a schematic diagram of the transformation of a circle on the mirror surface, through the omnidirectional image, into the panoramic cylindrical image displayed by the computer.
Fig. 7 is the flow chart for calculating vehicle speed and traffic volume in the omnidirectional vision device.
(5) Embodiment
The present invention is further described below with reference to the drawings.
With reference to Figs. 1 to 7, a road monitoring apparatus based on an omnidirectional vision sensor comprises a microprocessor 6 and a monitoring sensor 13 for monitoring the road condition. The microprocessor 6 comprises: an image data reading module 16 for reading the video image information transmitted from the omnidirectional vision sensor; an image data file storage module 18 for saving the read video image information to a storage unit as files; an on-site real-time playback module 20 for playing back the read video images in real time; and a network transmission module 22 for transmitting the read video image information to the network through a communication module. The monitoring sensor 13 is an omnidirectional vision sensor comprising a convex catadioptric mirror 1 for reflecting objects in the monitored field, a dark cone 2 for preventing light refraction and light saturation, a transparent cylinder 3, and the camera 4 of the camera head 5. The convex catadioptric mirror 1 is located at the top of the transparent cylinder 3 and faces downward, the dark cone 2 is fixed at the centre of the convex part of the catadioptric mirror 1, the camera 4 faces the convex mirror surface upward, and the camera 4 is located at the virtual focus of the convex mirror surface 1;
The microprocessor 6 also comprises: a sensor calibration module 17, used to establish a correspondence between the spatial road image and the acquired video image, the real-scene coordinates and the image coordinates being linear in the horizontal direction; a virtual detection line setting module 23, used to set up several virtual detection lines which, through the correspondence between the spatial road-plane image and the acquired video image, correspond to detection lines on the real road; an image stretching processing module 19, used to expand the read circular video image into a rectangular panoramic image; a color space conversion module 25, used to convert the traffic image from RGB color space to YUV space; a vehicle judging module 29, used to determine from the grey-level changes counted in the detection line region whether a target vehicle enters or leaves the detection line; and a vehicle judging module based on the YUV model, used to compare the color values in successive frames by means of the characteristics of the YUV color space so as to identify vehicle information:
The virtual detection line is divided into N uniform segments, each segment length being related to the geometric scale of the smallest vehicle. Let the minimum vehicle length be Car_wmin, the number of segments S_num, and the detection line total length M; the minimum detection primitive size can be expressed by formulas (23) and (24):
BL = Car_wmin / S_num   (23)
N = M / BL   (24)
For each of the N uniformly divided sub-segments, a judgment is made with formula (25):

PS = Σ_{i=0}^{N} DS(i)   (25)

where DS(i) is 1 when the frame-difference accumulation mean SD of sub-segment i exceeds T2 and 0 otherwise; T2 is the threshold for the variation of the difference between the vehicle signal and the background signal components; SD is the frame-difference accumulation mean, computed by formula (26):
SD = (1/n) Σ_{i=0}^{n-1} |I_c(i) − I_B(i)|²   (26)
where I_c(i) is the color value of the i-th pixel of the current frame and I_B(i) is the color value of the corresponding i-th pixel of the background frame;
If PS > Car_wmin/2, a vehicle is judged to be present; otherwise no vehicle is present.
The vehicle speed detection module 32 is used to calculate the vehicle speed by least squares from the series of values obtained from successive frames in vehicle detection, and to convert it into the vehicle speed on the real road according to the calibration relation:
Assuming the vehicle travels at constant speed within the monitoring range, the relation between the vehicle's travel distance and time is expressed by formula (28):
l_i = α + β·t_i + ε_i   (28)
where the detected value l_i of the monitored vehicle's motion track is the distance the vehicle moved between frame i and frame (i+1), t_i is the time the vehicle took to move between frame i and frame (i+1), and i is a natural number;
The least squares method is used to compute the estimates (α̂, β̂) of the unknown parameters (α, β) in formula (28), such that formula (29) attains its minimum:
Σ_{i=1}^{n} (l_i − α̂ − β̂·t_i)² = min_{α,β} Σ_{i=1}^{n} (l_i − α − β·t_i)²   (29)
The estimate β̂, solved by setting the partial derivatives to zero, is the vehicle speed estimate, expressed by formula (30):
β̂ = Σ_{i=1}^{n} (t_i − t̄)(l_i − l̄) / Σ_{i=1}^{n} (t_i − t̄)²   (30)
where t̄ and l̄ are the means of the times and distances, respectively.
In the present embodiment, the omnidirectional vision device is installed about 3 meters above the road surface to monitor the vehicles on the road in the horizontal direction; therefore, when designing the catadioptric mirror of the omnidirectional vision device, distortion-free imaging in the horizontal direction must be guaranteed. As shown in Fig. 1, the projection centre C of the camera is at height h above the horizontal scene of the road, and the vertex of the mirror is above the projection centre at distance z0 from it. A coordinate system is set up with the camera projection centre as origin, and the mirror profile is represented by the function z(x). A pixel q at distance ρ from the image centre receives the light from a point O of the horizontal scene (at distance d from the Z axis) reflected at point M of the mirror onto the image plane. An undistorted horizontal scene requires that the horizontal coordinate of a scene object point and the coordinate of the corresponding image point be linear:
d(ρ)=αρ (1)
where ρ in formula (1) is the distance of the image point from the image centre, and α is the magnification of the imaging system.
With reference to Fig. 1 and Fig. 2, the omnidirectional vision assembly of the present invention is composed of the catadioptric mirror 1, the dark cone 2, the transparent cylindrical housing 3 and the base 9. The catadioptric mirror 1 is located at the upper end of the cylinder 3, with the convex surface of the mirror extending downward into the cylinder; the dark cone 2 is fixed at the centre of the convex surface of the catadioptric mirror 1; the rotation axes of the catadioptric mirror 1, the dark cone 2, the cylinder 3 and the base 9 lie on the same central axis; the digital camera head 5 is located below the cylinder 3; the base 9 has an annular groove matching the wall thickness of the cylinder 3; the base 9 is provided with a hole the size of the lens 4 of the digital camera 5; and the bottom of the base 9 houses the embedded hardware and software system 6.
With reference to Fig. 1 and Fig. 7, during omnidirectional imaging the digital camera 13 of the present invention is connected to the microprocessor 15 of the vehicle monitoring device through the USB interface 14. The microprocessor 15 reads in the image data through the image data reading module 16 and performs image preprocessing. At initialization, the initial environment image without vehicles is acquired and stored in the image data storage module 18 for later image recognition and processing. At the same time, in order to recognise vehicle motion, the spatial coordinates must be calibrated to obtain the basic parameters of the omnidirectional imaging system before image recognition and processing can be carried out. In the present invention this is handled in the sensor calibration and virtual detection line setting module 17, which is a module through which the user and the system engage in dialogue: through a dialog box the user can set the parameters of the omnidirectional imaging system and the virtual detection lines according to the actual situation.
Calibrating distances in the field of view of the omnidirectional vision camera involves imaging geometry: the three-dimensional scene of the objective world is projected onto the two-dimensional image plane of the camera, and the image transformation involves conversions between different coordinate systems. The imaging system of the camera involves the following four coordinate systems: (1) the real-world coordinate system XYZ; (2) the coordinate system x̂ŷẑ centred on the camera; (3) the image coordinate system x*y*o* formed in the camera; (4) the computer image coordinate system MN used for the digital image inside the computer, whose unit is the pixel.
From the transformation relations between the above coordinate systems, the required omnidirectional vision camera imaging model can be obtained, converting the two-dimensional image into its correspondence with the three-dimensional scene. In the present invention an approximate perspective imaging analysis of the catadioptric omnidirectional imaging system is adopted to convert the two-dimensional image formed in the image-plane coordinates of the camera into its correspondence with the three-dimensional scene. Fig. 3 shows the general perspective imaging model, where d is the object height, ρ the image height, t the object distance and F the image distance (equivalent focal length). Formula (13) is obtained:
d = (t / F)·ρ   (13)
In the design of the above distortion-free horizontal-scene catadioptric omnidirectional imaging system, the horizontal coordinate of a scene object point and the coordinate of the corresponding image point are required to be linear, as expressed by formula (1). Comparing formulas (13) and (1), it can be seen that the imaging of the horizontal scene by the distortion-free catadioptric omnidirectional imaging system is perspective imaging. Therefore, as far as horizontal-scene imaging is concerned, the catadioptric omnidirectional imaging system with an undistorted horizontal scene can be regarded as a perspective camera, α being the magnification of the imaging system. Let the projection centre of this virtual perspective camera be point C (see Fig. 3) and its equivalent focal length be F. Comparing formulas (13) and (1) gives formula (14):
α = t / F;   t = h   (14)
From formulas (12) and (14), formula (15) is obtained:
F = f·h·ω_max / ((z0 − h)·tan φ_max + z0·ω_max)   (15)
A system imaging simulation is carried out according to the above omnidirectional vision camera imaging model. After the family of rays emitted from the camera projection centre through equally spaced pixels in the pixel plane is reflected by the mirror, the intersection points on the horizontal road surface 3 m from the projection centre are essentially equally spaced, as shown in Fig. 4. Therefore, following the above design idea, this patent reduces the relation between the coordinates of the horizontal road surface and the coordinates of the corresponding omnidirectional image points to a linear relation; that is, through the design of the mirror surface, the conversion from the real-world coordinate system XYZ to the image coordinate system can be treated as a linear proportion with magnification α. Next is the conversion from the image coordinate system to the coordinate system used by the digital image inside the computer. The image coordinate unit used in the computer is the number of discrete pixels in memory, so the coordinates on the actual image plane must also be rounded before they can be mapped to the computer imaging plane; the conversion is given by formula (16):
M = (O_m − x*) / S_x;   N = (O_n − y*) / S_y   (16)
where O_m and O_n are respectively the row and column of the pixel onto which the origin of the image plane is mapped in the computer image plane, and S_x and S_y are the scale factors in the x and y directions. S_x and S_y are determined by placing a calibration board at distance Z between the camera and the mirror surface and calibrating the camera to obtain their values, in pixels; O_m and O_n are determined, in pixels, according to the resolution of the selected camera.
Further, for the vehicle speed measurement and traffic-volume statistics, the vehicles in the camera's field of view must first be recognised. When a vehicle passes through the field of view, the preset virtual detection trigger is used to start the system recording the count and the moment the vehicle passes, and the speed of the vehicle is then calculated. The virtual detection trigger can be realised by setting several lines in the computer's memory, and this is carried out in the sensor calibration and virtual detection line setting module 17. Fig. 5 shows the situation in which the omnidirectional vision device is installed 3 m above the central greenbelt of the road, monitoring the vehicles passing on the 6 lanes on both sides. Because the horizontal direction is designed to be deformation-free in the present invention, the analysis can be carried out in terms of the real road conditions. The three dashed lines in Fig. 5 are the virtual detection lines.
Further, whether image unwrapping is carried out can be controlled according to the user's needs. The unwrapping calculation is carried out in the image stretching processing module 19, whose role is to expand the circular omnidirectional image into the corresponding rectangular cylindrical panorama; the unwrapped image has the advantages of easy calculation and small deformation. According to the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping between (x**, y**) and (x*, y*) is established. Because of this one-to-one relation, the image can be transformed into an undeformed panoramic image by the mapping-matrix method. Relation (21) is established through the mapping matrix M:
P**(x**, y**) ← M × P*(x*, y*)   (21)
According to formula (21), every pixel P*(x*, y*) on the imaging plane has a corresponding point P**(x**, y**) on the panoramic image; once the mapping matrix M is built, real-time image processing is simplified. A table-lookup operation is performed on each distorted omnidirectional image obtained on the imaging plane to generate an undeformed omnidirectional image; the generated undeformed image is sent to the real-time playback module 20 and shown on display 21, and if the user needs to know the real-time situation on site, the on-site omnidirectional image can be obtained through the network transmission module 22.
The vehicle detection and speed detection processing mainly consists of the virtual detection line setting module 23, the color space conversion module 25, the vehicle judging module 29 based on the grey-level histogram, the vehicle judging module 28 based on the YUV model, the background refresh processing module 30, the shadow detection processing module 31, the vehicle speed detection module 32 and the speeding-vehicle handling module 34;
The virtual detection trigger action detection module mainly makes use of the fact that the pixel grey levels Gb of the road surface are uniformly distributed between grey-white and dark, whereas, for reasons such as illumination and the materials of the vehicle body, the pixel grey levels Gv of a vehicle travelling on the road differ from the pixel grey levels Gb of the background road surface. When a vehicle enters the virtual detection line region, the pixel grey-level distribution changes and its range expands; at that moment the virtual detection trigger indicates that a vehicle has entered the virtual detection line region;
The color space conversion module mainly completes the conversion of the traffic image from RGB color space to YUV space, preparing for vehicle detection and background extraction. The YUV color model is a commonly used color model whose essential characteristic is that the luminance signal is separated from the chrominance signals: Y represents luminance, and U and V are the two chrominance components representing color differences (generally the relative values of blue and red). Because the human eye is more sensitive to changes in luminance than to changes in color, the bandwidth allocated to the Y component in the YUV model is greater than or equal to that allocated to the chrominance components. The linear relation between YUV and the RGB model is given by formula (22):
Y=0.301*R+0.586*G+0.113*B
U=-0.301*R-0.586*G+0.887*B (22)
V=0.699*R-0.586*G-0.113*B
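A minimal sketch of this conversion, using the coefficients of formula (22) exactly as given in the text (they differ slightly from the usual ITU-R BT.601 values), might look as follows:

```java
// Per-pixel RGB -> YUV conversion with the coefficients of formula (22).
class YuvConverter {
    // rgb is a packed 0xRRGGBB value; the result is {Y, U, V}.
    static double[] rgbToYuv(int rgb) {
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        double y =  0.301 * r + 0.586 * g + 0.113 * b;
        double u = -0.301 * r - 0.586 * g + 0.887 * b;
        double v =  0.699 * r - 0.586 * g - 0.113 * b;
        return new double[] { y, u, v };
    }
}
```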
The described vehicle judge module based on the grey level histogram determines, from the statistics of the grey-level changes in the detection line region, that a target vehicle enters or leaves the detection line, and achieves the measurement purpose by calculating its passing time; the grey value distribution of the described pixel grey histogram changes obviously between the cases with and without a passing vehicle. The grey level histogram of an image is a statistical graph showing the occurrence frequency of the grey values of all pixels of the image; in the present invention a two-dimensional coordinate system is used to represent the grey level histogram, the abscissa representing the grey level of the image and the ordinate representing the number of pixels at a certain grey level. By comparing the grey-level change rule, the following method for detecting the vehicle speed is obtained: the specific practice is to first set a grey threshold TH between Gb and Gv; whether a moving target passes the detection line is detected from the change of the number of pixels N whose grey value lies between Gv and TH. When no vehicle passes, the pixel grey distribution in the detection line region is concentrated, the number of pixels between Gv and TH is small, and the N value is very small; when a vehicle passes the detection line, most of the grey values of the moving target are distributed between Gv and TH, the pixel distribution range in the detection line region expands, the number of pixels between Gv and TH increases, and the N value increases to a predefined value NTH (50% of the number of pixels in the virtual detection line), from which it is judged that a vehicle is passing the detection line. Similarly, when the N value falls back below NTH, it can be judged that the vehicle has passed the detection line;
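A minimal sketch of this grey-level trigger is given below (Java; it assumes, as in claim 2, that Gv < TH, and takes NTH as 50% of the detection-line pixels):

```java
// Count the detection-line pixels whose grey value lies between Gv and TH and compare
// the count N with the preset value NTH (here 50% of the line's pixels).
class GrayTrigger {
    static boolean vehicleOnLine(int[] lineGray, int gv, int th) {
        int n = 0;
        for (int g : lineGray) {
            if (g >= gv && g <= th) n++;          // pixel falls in the "vehicle" grey band
        }
        int nth = lineGray.length / 2;            // NTH = 50% of the detection-line pixels
        return n >= nth;                          // N rising to NTH: a vehicle is on the line
    }
}
```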
The described vehicle judge module based on the YUV model recognizes vehicles by color characteristics. Because a vehicle is a target of considerable size, only when changes occur continuously over more than a certain number of pixel points on the detection line can it be judged that a vehicle is on the line. In order to relate this to the physical size of a vehicle, the concept of a minimum vehicle length is introduced in the present invention; the minimum vehicle length refers to the pixel length on the detection line representing the shortest vehicle. To improve the reliability of the system, the following judgment method can be adopted: if a contiguous pixel group of at least 1/N of the minimum vehicle length is detected, it is determined that a vehicle is on the line (where N should be neither too large nor too small; here N = 8 is taken). To achieve the above object, 1/N of the minimum vehicle length is used as the segmentation width, and the detection line is divided into several sections of equal length. As soon as a moving object is detected on at least one section of the detection line, it is considered that a vehicle exists.
Further, the detection line is divided into N uniform sections, the length of each section being related to the geometric scale of the minimum vehicle; for example, the minimum vehicle length is Car_Wmin, the number of segments is S_num, and the total length of the detection line is M, with M = d1 + d2 in Fig. 5. The minimum detection primitive size can then be expressed by formulas (23) and (24);
BL = Car_Wmin / S_num    (23)
N = M / BL    (24)
Each of the N evenly divided sub-sections is judged with formula (25):
DS(i) = 1, if SD > T2; DS(i) = 0, otherwise;  PS = Σ_{i=1..N} DS(i)    (25)
where T2 is the threshold distinguishing the variation of the vehicle signal from that of the background signal components, and SD is the frame-difference accumulation mean of the sub-section, SD = (1/n)·Σ_{i=0..n-1} |I_c(i) − I_B(i)|², I_c(i) being the color value of the i-th pixel of the current frame and I_B(i) the color value of the corresponding pixel of the background frame.
Further, in order to avoid miscounting, whether a vehicle is present is determined from the sum of the moving pixels on the detection line, i.e. whether it exceeds half of the minimum vehicle width: if PS > Car_Wmin/2, it is judged that a vehicle is present; otherwise no vehicle is present.
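A minimal sketch of this segment-wise presence test is given below (Java). It follows the reconstruction of formula (25) and the SD statistic given above, and, so that PS can be compared with Car_Wmin/2 in pixel units, each section judged as moving contributes its BL pixels to PS; this unit convention is an assumption.

```java
// Segment-wise presence test: split the detection line into N sections of BL pixels,
// compute the frame-difference accumulation mean SD of each section, mark the section
// as moving when SD > T2, and sum the moving pixels into PS.
class SegmentPresence {
    static boolean vehiclePresent(double[] current, double[] background,
                                  int n, double t2, double carWmin) {
        int bl = current.length / n;              // minimum detection primitive BL
        int ps = 0;                               // PS, counted here in pixels
        for (int i = 0; i < n; i++) {
            double sd = 0;
            for (int j = i * bl; j < (i + 1) * bl; j++) {
                double d = current[j] - background[j];
                sd += d * d;                      // accumulate squared frame differences
            }
            sd /= bl;                             // frame-difference accumulation mean SD
            if (sd > t2) ps += bl;                // DS(i) = 1: count this section's pixels
        }
        return ps > carWmin / 2;                  // more than half a minimum vehicle width
    }
}
```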
The present invention proposes the following inter-frame pixel difference statistic based on the YUV model as the pattern feature index: the criterion for judging whether a vehicle passes over each pixel is given by formula (26), where Y′ = 255·Y/Y_B, Y_B is the mean luminance component value of the background, Y, U, V are the pixel color values, and Y_T, U_T, V_T are threshold values.
The intersection of the above-mentioned vehicle speed detection line and the vehicle edge is taken as the feature point for speed detection; by tracking this feature point, its motion track can be obtained, and the travel speed of the vehicle can be calculated from the motion track. When formula (26) has judged that a vehicle is passing, tracking of the speed detection feature point begins: along the speed detection line, from the auxiliary detection end towards the main detection line, each pixel value is compared one by one, the comparison criterion being given by formula (27);
|f_t(x) − g(x)| > T    (27)
In formula (27), x_0 ≤ x ≤ x_0 + M, M being the comparison window, x is a point on the speed detection line, f_t(x) is the current color value of each pixel in this region, and g(x) is the background color value of each pixel in this region; if their difference exceeds T, a block of memory is opened and the colors of all pixels of this region are recorded.
When the next frame arrives, the above comparison is performed once again, and the color values of the region recorded in memory last time are compared with the color values found in this frame; if formula (27) is satisfied, the speed tracking is considered successful, and tracking continues until the vehicle is determined to have left. When the speed detection line detects the vehicle edge for the first time, the current position of the vehicle edge is registered; when the vehicle appears again in a later detection, the position predicted for the vehicle can be compared with the actually detected position, and if the deviation exceeds a certain threshold the tracking is considered to have failed and is cancelled; if the tracking succeeds, the recorded track of this vehicle is used to calculate the vehicle speed by the least square method.
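A minimal sketch of the per-frame edge search along the speed detection line is given below (Java; the scan direction and return convention are illustrative assumptions). The positions it returns, recorded frame by frame, form the track handed to the least-squares speed estimate further below.

```java
// Scan the speed detection line from the auxiliary end towards the main detection line
// and return the first position whose colour differs from the background by more than T
// (the comparison of formula (27)); -1 means no vehicle edge was found on the line.
class EdgeTracker {
    static int findVehicleEdge(double[] lineColor, double[] lineBackground, double t) {
        for (int x = 0; x < lineColor.length; x++) {
            if (Math.abs(lineColor[x] - lineBackground[x]) > t) return x;
        }
        return -1;
    }
}
```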
The described background refresh module mainly updates the background according to the result of the vehicle detection and the selected background refresh strategy, to adapt to the dynamic change of the road scene. As described above, the Y, U, V color model is adopted in the present invention, so the background refresh is also carried out in the Y, U, V color model. First the initial background must be determined, and each component value of the initial background is obtained with formula (31);
I_0 = (1/N)·Σ_{k=1..N} I_k(x)_color,  color = Y, U, V    (31)
In formula (31), N is an empirical value, and I_k(x)_color (color = Y, U, V) represents the Y, U, V color component values at point x in the k-th frame image. The background is then refreshed with formula (32);
B_{t+Δt}(x) = (1 − α)·B_t(x) + α·I_{t+Δt}(x) for background pixels; B_{t+Δt}(x) = B_t(x) for foreground pixels    (32)
In formula (32), B_t(x) represents the Y, U, V component values of the background at time t, I_{t+Δt}(x) represents the Y, U, V component values of the current image at time t+Δt, and α is a coefficient less than 1; according to experiments it is relatively good to take this value around 0.1. In formula (32) it is also necessary to judge whether a pixel is a foreground pixel, and the following criterion is adopted in the present invention:
When |I_{t+Δt}(x)_Y − B_t(x)_Y| > Y_T, or |I_{t+Δt}(x)_U − B_t(x)_U| > U_T, or |I_{t+Δt}(x)_V − B_t(x)_V| > V_T is satisfied, the pixel at x is a foreground pixel; otherwise it is a background pixel.
Here Y_T, U_T, V_T are the thresholds for the Y, U, V components respectively.
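A minimal sketch of this selective background refresh for one colour channel is given below (Java; the per-pixel foreground test is the assumed threshold comparison described above, and the blending form of formula (32) follows the reconstruction given there):

```java
// Background model for one channel (Y, U or V): the initial background is the mean of
// the first N frames (formula (31)); afterwards only pixels judged as background are
// blended in with coefficient alpha (about 0.1), while foreground pixels are left
// unchanged (formula (32)).
class BackgroundModel {
    private final double[] bg;
    private final double alpha;

    BackgroundModel(double[][] firstNFrames, double alpha) {
        this.alpha = alpha;
        int len = firstNFrames[0].length;
        bg = new double[len];
        for (double[] frame : firstNFrames) {
            for (int i = 0; i < len; i++) bg[i] += frame[i] / firstNFrames.length;
        }
    }

    void refresh(double[] frame, double threshold) {
        for (int i = 0; i < bg.length; i++) {
            boolean foreground = Math.abs(frame[i] - bg[i]) > threshold;  // assumed test
            if (!foreground) bg[i] = (1 - alpha) * bg[i] + alpha * frame[i];
        }
    }

    double[] background() { return bg; }
}
```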
The described shadow evaluation algorithm: when judging in the YUV color space, a traffic background with shadow, compared with a shadow-free background, shows a relatively large change in the Y component, while the U and V components characterizing the color information change very little; therefore the shadow detection can concentrate on the Y component. The specific practice is first to determine a standard illumination intensity value of the road background as the standard reference value. Afterwards, every time the background is refreshed, the average brightness value of the road is calculated and compared with the standard reference value to determine the relative illumination intensity of the present road surface; finally, the contrast between the current road background and the shadow is calculated, the brightness range of the shadow is determined from this contrast value, and the shadow judgment is carried out according to this relative value and a predetermined conversion scale (an empirical value). When the Y component is large (i.e. on sunny days), the auxiliary judgment method of the computer system time can be adopted, and the rule of sunlight in the given season can be used to further confirm whether a region is the shadow produced by a vehicle travelling on the adjacent lane, thereby improving the reliability of the vehicle judgment.
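By way of illustration only, a rough sketch of such a Y-component shadow test is given below (Java); the decision rule and the parameters shadowContrast and uvTolerance are assumptions, since the text describes the method only qualitatively.

```java
// A pixel darker than the background by no more than the expected shadow contrast,
// with almost unchanged U and V components, is treated as shadow rather than vehicle.
class ShadowJudge {
    static boolean isShadow(double y, double u, double v,
                            double bgY, double bgU, double bgV,
                            double shadowContrast, double uvTolerance) {
        double ratio = y / bgY;                               // brightness drop vs. background
        boolean darkEnough = ratio < 1.0 && ratio > 1.0 - shadowContrast;
        boolean colourUnchanged = Math.abs(u - bgU) < uvTolerance
                               && Math.abs(v - bgV) < uvTolerance;
        return darkEnough && colourUnchanged;
    }
}
```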
The described vehicle speed detection module uses the least square method to calculate the vehicle speed: the vehicle track record obtained above is fitted with a linear equation, and the slope then represents the actual vehicle speed. The premise is that the vehicle is assumed to travel at a constant speed within the monitoring range, so the relation between the move distance and the time of the vehicle can be expressed with formula (28),
l_i = α + β·t_i + ε_i    (28)
In the formula, the monitored vehicle motion track detected value l_i represents the distance moved by the vehicle between the i-th frame image and the (i+1)-th frame image, t_i represents the time taken by the vehicle displacement between the i-th frame image and the (i+1)-th frame image, ε_i is the fitting error, and i = 1, 2, 3, ..., n. In the present invention the least square method is adopted to find the estimated values (α̂, β̂) of the unknown parameters (α, β) in formula (28), making the value of formula (29) minimum;
Σ_{i=1..n} (l_i − α̂ − β̂·t_i)² = min over (α, β) of Σ_{i=1..n} (l_i − α − β·t_i)²    (29)
The estimated value β̂ is then solved by the partial differentiation method; the value of β̂ is exactly the vehicle speed estimate to be obtained by the present invention, expressed with formula (30);
β̂ = Σ_{i=1..n} (t_i − t̄)(l_i − l̄) / Σ_{i=1..n} (t_i − t̄)²    (30)
In formula (30), t̄ and l̄ are the averages of the times and the distances respectively. In general, the longer the detection distance, the closer the estimated speed value β̂ is to the actual vehicle speed. The obtained speed estimate β̂ is converted into the vehicle speed on the real road according to the calibration relation.
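A minimal sketch of the least-squares fit of formulas (28) to (30) is given below (Java): it fits l = α + β·t to the recorded (time, distance) track and returns the slope β̂, which is then converted to the road speed through the calibration relation.

```java
// Least-squares slope of the vehicle track: beta-hat of formula (30).
class SpeedEstimator {
    // t: times of the track points; l: corresponding move distances (same length).
    static double slope(double[] t, double[] l) {
        int n = t.length;
        double tMean = 0, lMean = 0;
        for (int i = 0; i < n; i++) {
            tMean += t[i] / n;
            lMean += l[i] / n;
        }
        double num = 0, den = 0;
        for (int i = 0; i < n; i++) {
            num += (t[i] - tMean) * (l[i] - lMean);
            den += (t[i] - tMean) * (t[i] - tMean);   // squared denominator of formula (30)
        }
        return num / den;                              // beta-hat, the image-plane speed
    }
}
```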
The described start overspeed-of-vehicle processing module works as follows: when the above vehicle speed detection module has captured a violating vehicle, an event thread is started and the spatial position of the violating vehicle is sent through the output interface to another camera; this camera takes a snapshot of the rear part of the violating vehicle so that its license plate can be recognized, and at the same time the track and time information of the violating vehicle captured by the omnibearing camera device are saved, so that they can be sent to the traffic police administrative authority through the network.
Embodiment 2
The road monitoring apparatus described in the present embodiment takes into account the environmental conditions of outdoor application, such as exposure to sun and rain; flying dust is easily adsorbed on the outer cover and affects the incoming light. Therefore, for the outdoor road monitoring apparatus of the present invention, a rainproof sun hood is added on top and fixed to the outer cover with screws; the outer cover is injection-molded from PMMA material and the base is injection-molded from aluminium alloy. At the same time, for ease of cleaning and maintenance, the outdoor road monitoring apparatus must be convenient to mount and dismount; the whole road monitoring apparatus is fixed on a cantilever, and the cantilever can be fixed on an electric pole along the road or on the wall of a high building.
Further, the described microprocessor 6 adopts an embedded processor; in the present invention a combined Embedded Linux + embedded Java software platform is adopted. In the experiments a development board based on the Samsung ARM9 processor S3C2410X was used, on which the free embedded ARM-Linux operating system released by MIZI was integrated; in the present invention Wonka (an embedded JVM) was ported onto the embedded Linux, and Wonka itself provides driver support for the serial port, input devices and so on. Java or C can be selected as the software development language of the omnibearing vision device with speed measurement and traffic flow monitoring functions; running Java programs on embedded Linux requires the support of an embedded Java virtual machine (Embedded JVM), and in the present invention the free Java virtual machine ported by ourselves is used.
The effect produced by the above embodiments 1 and 2 is that, through the omnibearing computer vision sensor, the scope of road vehicle monitoring becomes broader, and a brand-new approach, means and device for road monitoring and for gathering visual traffic flow information are provided, with low maintenance cost, easy maintenance and more reliable judgment.

Claims (6)

1. A road monitoring apparatus based on an omnibearing vision sensor, the road monitoring apparatus comprising a microprocessor and a monitoring sensor used to monitor the road situation, the microprocessor comprising:
an image data read module, used to read the video image information transmitted from the omnibearing vision sensor;
an image data file storage module, used to save the read video image information in a storage unit in the form of files;
an on-site real-time play module, used to play the read video images in real time;
a network transmission module, used to transfer the read video image information to the network through a communication module;
characterized in that: the monitoring sensor is an omnibearing vision sensor, the omnibearing vision sensor comprising an outward-convex catadioptric mirror for reflecting objects in the monitored field, a dark cone for preventing light refraction and light saturation, a transparent cylinder, and a camera; the outward-convex catadioptric mirror is located at the top of the transparent cylinder with the catadioptric mirror facing downward, the dark cone is fixed at the center of the convex part of the catadioptric mirror, the camera faces upward towards the outward-convex mirror surface, and the camera is located at the virtual focus position of the outward-convex mirror surface;
Described microprocessor also comprises:
a transducer calibration module, used to establish a correspondence between the spatial road image and the video image obtained, such that the real scene coordinates and the image coordinates are linear in the horizontal direction;
a virtual detection line setting module, used to set up several virtual detection lines which, through the correspondence between the spatial road plane image and the video image obtained, correspond to detection lines on the real road;
an image stretching processing module, used to expand the circular video image that has been read into a cylindrical panorama;
a color space conversion module, used to transform the traffic image from the RGB color space into the YUV space;
a vehicle judge module, used to determine from the statistics of the grey-level changes in the detection line region that a target vehicle enters or leaves the detection line;
a vehicle judge module based on the YUV model, used to identify vehicle information by comparing the color values in successive related frames using the characteristics of the YUV color space:
the virtual detection line is divided into N uniform sections, the length of each section being related to the geometric scale of the minimum vehicle; let the minimum vehicle length be Car_Wmin, the number of segments be S_num, and the total length of the detection line be M; the minimum detection primitive size can be expressed by formulas (23) and (24);
BL = Car_Wmin / S_num    (23)
N = M / BL    (24)
each of the N evenly divided sub-sections is judged with formula (25):
DS(i) = 1, if SD > T2; DS(i) = 0, otherwise;  PS = Σ_{i=1..N} DS(i)    (25)
where T2 is the threshold distinguishing the variation of the vehicle signal from that of the background signal components, and SD is the frame-difference accumulation mean computed with formula (26):
SD = (1/n)·Σ_{i=0..n-1} |I_c(i) − I_B(i)|²    (26)
where I_c(i) is the color value of the i-th pixel of the current frame and I_B(i) is the color value of the corresponding i-th pixel of the background frame;
if PS > Car_Wmin/2, it is judged that a vehicle is present; otherwise it is judged that no vehicle is present;
a vehicle speed detection module, used to calculate the vehicle speed by the least square method from the series of values obtained in successive related frames of the vehicle detection, and to convert it into the vehicle speed on the real road according to the calibration relation:
assuming that the vehicle travels at a constant speed within the monitoring range, the relation between the move distance and the time of the vehicle is expressed with formula (28),
l_i = α + β·t_i + ε_i    (28)
in the above formula, the monitored vehicle motion track detected value l_i represents the distance moved by the vehicle between the i-th frame image and the (i+1)-th frame image, t_i represents the time taken by the vehicle displacement between the i-th frame image and the (i+1)-th frame image, ε_i is the fitting error, and i is a natural number;
the least square method is adopted to calculate the estimated values (α̂, β̂) of the unknown parameters (α, β) in formula (28), making the value of formula (29) minimum:
Σ_{i=1..n} (l_i − α̂ − β̂·t_i)² = min over (α, β) of Σ_{i=1..n} (l_i − α − β·t_i)²    (29)
the estimated value β̂ is solved by the partial differentiation method; β̂ is the vehicle speed estimate, expressed as formula (30);
β̂ = Σ_{i=1..n} (t_i − t̄)(l_i − l̄) / Σ_{i=1..n} (t_i − t̄)²    (30)
in the above formula, t̄ and l̄ are the averages of the times and the distances respectively.
2. The road monitoring apparatus based on an omnibearing vision sensor as claimed in claim 1, characterized in that: in the vehicle judge module, the pixel grey values Gb of the road pavement, ranging between greyish white and dark grey, are taken as evenly distributed, the pixel grey value Gv of a vehicle travelling on the road differs from the pixel grey value Gb of the background road surface, a grey threshold TH is preset with Gv < TH < Gb, a critical value NTH is set for the number of pixels in the detection line region whose grey value lies between Gv and TH, and the number of pixels in the detection line region with grey values between Gv and TH is counted:
if the counted number is less than the critical value NTH, it is judged that no vehicle is passing the detection line region;
if the counted number increases to the critical value NTH, it is judged that a vehicle enters the detection line region;
if the counted number decreases to the critical value NTH, it is judged that the vehicle leaves the detection line region.
3. The road monitoring apparatus based on an omnibearing vision sensor as claimed in claim 1, characterized in that: the microprocessor further comprises a background refresh module, used to update the background according to the result of the vehicle detection and the selected background refresh strategy, so as to adapt to the dynamic change of the road scene; in the Y, U, V color model the initial background is determined first, and each component value of the initial background is calculated with formula (31):
I_0 = (1/N)·Σ_{k=1..N} I_k(x)_color,  color = Y, U, V    (31)
in the above formula, N is an empirical value, and I_k(x)_color (color = Y, U, V) represents the Y, U, V color component values at point x in the k-th frame image;
the condition for judging a foreground pixel is:
|I(x)_Y − B(x)_Y| > Y_T, or |I(x)_U − B(x)_U| > U_T, or |I(x)_V − B(x)_V| > V_T,
where Y_T, U_T, V_T are the thresholds for the Y, U, V components;
when the above condition is satisfied, the pixel at x is a foreground pixel, otherwise it is a background pixel;
the calculation formula for the background refresh is (32):
B_{t+Δt}(x) = (1 − α)·B_t(x) + α·I_{t+Δt}(x) for background pixels; B_{t+Δt}(x) = B_t(x) for foreground pixels    (32)
in the above formula, B_t(x) represents the Y, U, V component values of the background at time t, I_{t+Δt}(x) represents the Y, U, V component values of the current image at time t+Δt, and α is a coefficient less than 1.
4. The road monitoring apparatus based on an omnibearing vision sensor as claimed in claim 1, characterized in that: the microprocessor further comprises a speed tracking module, which uses the inter-frame pixel difference statistic based on the YUV model as the pattern feature index:
the criterion for judging whether a vehicle passes over each pixel is given by formula (26),
where Y′ = 255·Y/Y_B, Y_B is the mean luminance component value of the background, Y, U, V are the pixel color values, and Y_T, U_T, V_T are threshold values;
when formula (26) has judged that a vehicle is passing, tracking of the speed detection feature point begins: along the speed detection line, from the auxiliary detection end towards the main detection line, each pixel value is compared one by one, the comparison criterion being given by formula (27);
|f_t(x) − g(x)| > T    (27)
in the above formula, x_0 ≤ x ≤ x_0 + M, M being the comparison window, x is a point on the speed detection line, f_t(x) is the current color value of each pixel in this region, g(x) is the background color value of each pixel in this region, and T represents the threshold value;
if formula (27) holds, a block of memory is opened and the colors of all pixels of this region are recorded; when the next frame arrives, the above comparison is performed once again, the color values of the region recorded in memory last time are compared with the color values found in this frame, and if formula (27) is satisfied the speed tracking is considered successful; tracking continues until the vehicle is determined to have left, and the vehicle track is recorded.
5. The road monitoring apparatus based on an omnibearing vision sensor as claimed in any one of claims 1-4, characterized in that: the image stretching processing module is used to establish, from the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular panorama, the mapping matrix between (x*, y*) and (x**, y**), the relation being established through the mapping matrix M as formula (21):
P**(x**, y**) ← M × P*(x*, y*)    (21)
in the above formula, P*(x*, y*) is the pixel value at each point of the omnidirectional image on the imaging plane, P**(x**, y**) is the pixel value at the corresponding point of the expanded panorama, and M is the mapping matrix.
6. The road monitoring apparatus based on an omnibearing vision sensor as claimed in any one of claims 1-4, characterized in that: in the color space conversion module, the relational expression for transforming from the RGB color space into the YUV space is formula (22):
Y=0.301*R+0.586*G+0.113*B
U=-0.301*R-0.586*G+0.887*B (22)
V=0.699*R-0.586*G-0.113*B
In the above formula, Y represents the brightness of the YUV color model, U and V are the two chrominance components of the YUV color model, representing color differences; R represents the red component of the RGB color space, G represents the green component of the RGB color space, and B represents the blue component of the RGB color space.
CNB2005100623041A 2005-12-28 2005-12-28 Omnibearing visual sensor based road monitoring apparatus Expired - Fee Related CN100419813C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100623041A CN100419813C (en) 2005-12-28 2005-12-28 Omnibearing visual sensor based road monitoring apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005100623041A CN100419813C (en) 2005-12-28 2005-12-28 Omnibearing visual sensor based road monitoring apparatus

Publications (2)

Publication Number Publication Date
CN1804927A true CN1804927A (en) 2006-07-19
CN100419813C CN100419813C (en) 2008-09-17

Family

ID=36866929

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100623041A Expired - Fee Related CN100419813C (en) 2005-12-28 2005-12-28 Omnibearing visual sensor based road monitoring apparatus

Country Status (1)

Country Link
CN (1) CN100419813C (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3754535A1 (en) * 2019-06-17 2020-12-23 Kapsch TrafficCom AG Apparatus for recording license plates of vehicles

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02190978A (en) * 1989-01-19 1990-07-26 Mazda Motor Corp Visual sense recognizing device for vehicle
JPH07100427B2 (en) * 1989-09-26 1995-11-01 トヨタ自動車株式会社 Vehicle alarm system
JPH04304600A (en) * 1991-04-02 1992-10-27 Mazda Motor Corp Travelling stage judging device for moving vehicle
JP3112744B2 (en) * 1991-09-12 2000-11-27 タムラ化研株式会社 Surface protective agent for printed wiring boards
CN2378248Y (en) * 1999-05-06 2000-05-17 中国科学院沈阳自动化研究所 Omnibearing vision sensor carried on vehicle
JP2001294109A (en) * 2000-04-14 2001-10-23 Daihatsu Motor Co Ltd Rear monitoring device for vehicle and its controlling method
CN1136738C (en) * 2002-01-31 2004-01-28 北京理工大学 Miniaturized real-time stereoscopic visual display
CN2705807Y (en) * 2004-04-26 2005-06-22 上海鸣俱妥国际贸易有限公司 Omnibearing vision sensor

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100437660C (en) * 2006-08-25 2008-11-26 浙江工业大学 Device for monitoring vehicle breaking regulation based on all-position visual sensor
CN1987357B (en) * 2006-12-26 2010-05-19 浙江工业大学 Intelligent parking auxiliary device based on omnibearing computer sight
CN101321269B (en) * 2007-06-05 2011-09-14 同济大学 Passenger flow volume detection method based on computer vision
CN100538723C (en) * 2007-10-26 2009-09-09 浙江工业大学 The inner river ship automatic identification system that multiple vision sensor information merges
CN101179710B (en) * 2007-11-30 2010-12-08 浙江工业大学 Intelligent video monitoring apparatus of railway crossing
CN100576908C (en) * 2007-11-30 2009-12-30 汤一平 Stereo intelligent camera apparatus based on omnibearing vision sensor
CN101365113B (en) * 2008-09-18 2011-12-21 浙江工业大学 Portable examination room omni-directional monitoring apparatus
CN101604448B (en) * 2009-03-16 2015-01-21 北京中星微电子有限公司 Method and system for measuring speed of moving targets
CN101604448A (en) * 2009-03-16 2009-12-16 北京中星微电子有限公司 A kind of speed-measuring method of moving target and system
CN101909145B (en) * 2009-06-05 2012-03-28 鸿富锦精密工业(深圳)有限公司 Image noise filtering system and method
CN101729872B (en) * 2009-12-11 2011-03-23 南京城际在线信息技术有限公司 Video monitoring image based method for automatically distinguishing traffic states of roads
CN101710448B (en) * 2009-12-29 2011-08-10 浙江工业大学 Road traffic state detecting device based on omnibearing computer vision
CN101859436A (en) * 2010-06-09 2010-10-13 王巍 Large-amplitude regular movement background intelligent analysis and control system
CN101859436B (en) * 2010-06-09 2011-12-14 王巍 Large-amplitude regular movement background intelligent analysis and control system
CN103348391A (en) * 2010-12-23 2013-10-09 业纳遥控设备有限公司 Method for safely identifying vehicle captured by radiation sensor in photograph
CN102142197B (en) * 2011-03-31 2013-11-20 汤一平 Intelligent traffic signal lamp control device based on comprehensive computer vision
CN102142197A (en) * 2011-03-31 2011-08-03 汤一平 Intelligent traffic signal lamp control device based on comprehensive computer vision
CN103139532B (en) * 2011-11-22 2016-04-20 株式会社电装 vehicle periphery monitor
CN103139532A (en) * 2011-11-22 2013-06-05 株式会社电装 Vehicle periphery monitor
CN102622895A (en) * 2012-03-23 2012-08-01 长安大学 Video-based vehicle speed detecting method
CN102622895B (en) * 2012-03-23 2014-04-30 长安大学 Video-based vehicle speed detecting method
CN102903157A (en) * 2012-05-25 2013-01-30 中国计量学院 Method for giving attention to highway automatic charging by using express highway vehicle overspeed image pick-up monitoring system
CN103325258A (en) * 2013-06-24 2013-09-25 武汉烽火众智数字技术有限责任公司 Red light running detecting device and method based on video processing
CN103795983A (en) * 2014-01-28 2014-05-14 彭世藩 All-directional mobile monitoring system
CN104601950B (en) * 2014-12-31 2017-10-17 北京邮电大学 A kind of video frequency monitoring method
CN104601950A (en) * 2014-12-31 2015-05-06 北京邮电大学 Video monitoring method
CN106878669B (en) * 2015-12-10 2019-12-03 财团法人工业技术研究院 Image identification method
CN106878669A (en) * 2015-12-10 2017-06-20 财团法人工业技术研究院 Image identification method
CN105976627A (en) * 2016-07-21 2016-09-28 陈国栋 Signal lamp and frontshot type snapshot integrated device
CN105976627B (en) * 2016-07-21 2018-10-16 陈国栋 Signal lamp and preceding bat formula capture integrated device
CN106448202A (en) * 2016-10-31 2017-02-22 长安大学 Video based curve early warning system and early warning method
CN108074370A (en) * 2016-11-11 2018-05-25 国网湖北省电力公司咸宁供电公司 The early warning system and method that a kind of anti-external force of electric power transmission line based on machine vision is destroyed
CN106652465A (en) * 2016-11-15 2017-05-10 成都通甲优博科技有限责任公司 Method and system for identifying abnormal driving behavior on road
CN106846828A (en) * 2016-11-30 2017-06-13 东南大学 A kind of opposite pedestrian stream crossing facilities canalization method of lower high density of signal control
CN106846828B (en) * 2016-11-30 2019-06-04 东南大学 A kind of opposite pedestrian stream crossing facilities canalization method of the lower high density of signal control
CN113053136A (en) * 2019-12-26 2021-06-29 上海晋沙智能科技有限公司 Road intelligent security monitored control system
CN115331424A (en) * 2022-06-13 2022-11-11 扬州远铭光电有限公司 Embedded vision automobile flow detection system
CN117853975A (en) * 2023-12-29 2024-04-09 广东智视云控科技有限公司 Multi-lane vehicle speed detection line generation method, system and storage medium

Also Published As

Publication number Publication date
CN100419813C (en) 2008-09-17

Similar Documents

Publication Publication Date Title
CN1804927A (en) Omnibearing visual sensor based road monitoring apparatus
CN1912950A (en) Device for monitoring vehicle breaking regulation based on all-position visual sensor
CN101059909A (en) All-round computer vision-based electronic parking guidance system
CN1080911C (en) Object observing method and device
CN104568983B (en) Pipeline Inner Defect Testing device and method based on active panoramic vision
US8390696B2 (en) Apparatus for detecting direction of image pickup device and moving body comprising same
CN103286081B (en) Monocular multi-perspective machine vision-based online automatic sorting device for steel ball surface defect
CN1812569A (en) Intelligent safety protector based on omnibearing vision sensor
CN101064065A (en) Parking inducing system based on computer visual sense
CN101051223A (en) Air conditioner energy saving controller based on omnibearing computer vision
JP7292979B2 (en) Image processing device and image processing method
CN1858551A (en) Engineering car anti-theft alarm system based on omnibearing computer vision
US20100245611A1 (en) Camera system and image adjusting method for the same
CN109949231B (en) Method and device for collecting and processing city management information
CN1607452A (en) Camera unit and apparatus for monitoring vehicle periphery
CN105067639A (en) Device and method for automatically detecting lens defects through modulation by optical grating
CN105424723A (en) Detecting method for defects of display screen module
CN1878297A (en) Omnibearing vision device
CN109239099A (en) Road surface breakage real-time detecting system and its detection method under multi-machine collaborative environment
CN1643543A (en) Method for linking edges in stereo images into chains
CN1812570A (en) Vehicle antitheft device based on omnibearing computer vision
CN110287893A (en) A kind of vehicle blind zone reminding method, system, readable storage medium storing program for executing and automobile
CN111800588A (en) Optical unmanned aerial vehicle monitoring system based on three-dimensional light field technology
CN101067880A (en) Motor vehicle day and night running observing recording device
JP2007127595A (en) Foreign substance detection instrument

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080917

Termination date: 20101228