CN101059909A - All-round computer vision-based electronic parking guidance system - Google Patents


Info

Publication number: CN101059909A
Authority: CN (China)
Prior art keywords: parking, image, module, space, information
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN 200610050471
Other languages: Chinese (zh)
Other versions: CN100449579C (en)
Inventors: 汤一平, 叶永杰, 金顺敬
Current assignee: Zhejiang University of Technology ZJUT (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Zhejiang University of Technology ZJUT
Application filed by Zhejiang University of Technology ZJUT
Priority application: CNB2006100504719A (granted as CN100449579C)
Events: publication of CN101059909A; application granted; publication of CN100449579C
Current legal status: Expired - Fee Related; anticipated expiration

Landscapes

  • Traffic Control Systems (AREA)

Abstract

An electronic parking guidance system based on omnidirectional computer vision comprises a microprocessor, an omnidirectional vision sensor for detecting the occupancy of parking spaces in a parking lot, and a communication module for communicating with the outside. The vision sensor is mounted above the parking lot; the microprocessor detects the state of each parking space and provides dynamic in-lot and out-of-lot guidance for parking. The parking-space detection method of the invention has a wide detection range, non-invasive installation, low maintenance cost, rich detection parameters, intuitive visual output, reliable and accurate detection, easy statistics, simple operation, good expandability, and the like.

Description

Electronic parking guidance system based on omnibearing computer vision
(I) Technical field
The invention belongs to the application of an omnidirectional computer vision sensor technology, an image recognition technology, a database technology and a network communication technology in a parking guidance system, and particularly relates to an electronic parking guidance system.
(II) Background of the invention
Urban road traffic consists of dynamic traffic and static traffic. Static traffic refers to the parked state of vehicles in different areas and parking places, whether to complete a travel purpose or simply for storage. Like dynamic traffic, static traffic is an integral part of urban traffic: dynamic traffic starts from static traffic and ends in it. The two promote and restrict each other and must develop in coordination to form the urban traffic system together.
In current urban road traffic management, attention is usually paid only to dispersing and controlling dynamic traffic, while the planning, construction and management of static traffic, that is, vehicle parking, and the guidance of parking are neglected. This makes long-standing problems such as urban congestion and frequent accidents even more prominent, and the many countermeasures taken have had little effect. One of the main symptoms is that urban parking lots and garages are poorly built and poorly managed and lack an advanced parking guidance system, so drivers cruise blindly while searching for parking spaces, and illegal parking, road occupation and similar phenomena occur.
Survey data at home and abroad show that drivers who do not know where spaces are available search for them at random, adding an extra burden to road traffic; about 12-15% of the vehicles in urban road traffic flow are vehicles seeking a parking position. In addition, it has been reported abroad that the gasoline consumed in Paris while searching for parking spaces in urban areas accounts for 40% of the city's total automotive gasoline consumption, which increases vehicle exhaust emissions and greatly aggravates the resulting environmental pollution.
At present, the number of parking lots in China cannot keep pace with the growth in car ownership, while drivers searching blindly for parking lots increase illegal roadside parking, greatly reduce road capacity and easily cause congestion and accidents; this contradiction grows more obvious by the day.
The parking lot and garage are the most important facilities of urban static traffic, and shortcomings in their construction and management directly affect the normal operation of dynamic traffic, while congested dynamic traffic in turn hampers the management of static traffic. The resulting vicious circle of traffic difficulty means that the safety, smoothness and order of urban road traffic cannot be guaranteed, seriously hindering urban and economic development.
At present, China faces both a large gap between urban motor vehicle ownership and the supply of parking facilities and a low utilization rate of existing parking lots. On the one hand, parking spaces sit idle and resources are wasted; on the other hand, drivers of many out-of-town vehicles and some local vehicles spend a long time finding a lot with a free space because they do not know the occupancy situation. This increases the urban road load, seriously affects dynamic road traffic, and greatly increases exhaust pollution. Experience abroad shows that only an effective parking guidance information system can improve the situation of cars blindly searching for parking lots, thereby reducing traffic accidents and air pollution. Research on the key theory and implementation technology of advanced parking guidance and information systems is an important topic of intelligent transportation, one of the leading-edge subjects studied intensively internationally, and a key problem to be solved urgently in China's urban traffic.
The approach to solve the parking problem can be mainly started from the following three aspects:
(1) reasonably planning and developing static traffic infrastructure;
(2) advanced management means are adopted to manage and control parking;
(3) and implementing the intelligent transportation system.
The static traffic infrastructure mainly comprises a social parking lot, a public parking lot at a peripheral entrance and exit of an urban area, a detector, a variable information display device, network equipment and the like, wherein the detector is required to be equipped for building the parking lot and scientifically managing parking.
The advanced parking guidance information system provides different dynamic parking guidance information for a driver by using a variable electronic signboard unified by regions, wherein the dynamic parking guidance information comprises information such as a parking lot position, a parking garage position, a roadside parking position, a parking position preselected by the driver, an optimal driving route and the like. By using the information, the driver is guided to avoid the congestion and quickly find out a more ideal parking space.
Therefore, an advanced parking guidance information system must consist of four parts: parking information collection, information processing, information transmission and information distribution. Their functions are as follows: 1) the collection system gathers relevant information on each parking lot in the target area through remote monitoring and sensing devices, mainly the usage of parking spaces; 2) the information processing system processes the collected lot usage and surrounding road information into a form suitable for drivers, such as whether a lot is full (the number of remaining spaces) and whether the access roads are congested; it is also responsible for storing parking lot information and analysing patterns of lot usage, laying a foundation for future services such as parking demand forecasting and space reservation; 3) the information transmission system ensures a smooth path from the collection system through the processing system to the distribution system, commonly over optical transmission networks, telephone exchange networks or optical access networks; 4) the information distribution system publishes the processed information externally at several levels in appropriate forms. In general, the usage status of each parking lot is provided to drivers visually or audibly at any time by a control centre through variable information display panels, and may also be distributed via the internet, mobile phones, car navigation devices and the like.
The most basic and commonly used distribution form at present is the guidance information board placed at the roadside.
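The four-part information flow described above (collection, processing, transmission, distribution) can be sketched minimally as follows; all class and function names here are illustrative assumptions for the sketch, not from the patent.

```python
# A minimal sketch of the guidance information flow: collected lot
# statuses are processed into the text a roadside variable-message
# sign might display.  Names and the snapshot format are assumptions.

from dataclasses import dataclass

@dataclass
class LotStatus:
    lot_id: str
    capacity: int
    occupied: int

    @property
    def remaining(self) -> int:
        return self.capacity - self.occupied

def render_sign(statuses):
    """Format lot statuses the way a roadside guidance board might."""
    lines = []
    for s in statuses:
        state = "FULL" if s.remaining == 0 else f"{s.remaining} free"
        lines.append(f"{s.lot_id}: {state}")
    return "\n".join(lines)

collected = [LotStatus("P1", 120, 120), LotStatus("P2", 80, 35)]
print(render_sign(collected))
# P1: FULL
# P2: 45 free
```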
In the prior art, parking-space usage was detected by burying a ground induction coil in front of each space and sensing electromagnetically whether the space is occupied. This detects usage fairly well, but when a coil fails, the road must be closed and the surface dug up for repair, increasing maintenance workload and cost. For a large-capacity lot, burying a coil for every space at once is a large investment, and more spaces bring greater complexity in detection and communication and greater computational pressure.
(III) Disclosure of the invention
In order to overcome the defects of high cost, high maintenance cost and poor reliability of conventional parking guidance systems, the invention provides an electronic parking guidance system based on omnidirectional computer vision with low cost, low maintenance cost and good reliability.
The technical scheme adopted by the invention for solving the technical problems is as follows:
an electronic parking guidance system based on omnidirectional computer vision comprises a microprocessor, an omnidirectional vision sensor and a communication module, wherein the omnidirectional vision sensor is used for monitoring the condition of parked vehicles in a parking lot, and the communication module is used for communicating with the outside;
the omnidirectional vision sensor comprises an outward-convex reflecting mirror for reflecting objects in the monitored field, a transparent cylinder and a camera; the convex mirror faces downward and is supported by the transparent cylinder; a black cone is fixed at the centre of the convex portion of the catadioptric mirror; the camera, which captures the image formed on the convex mirror, is located inside the transparent cylinder at the virtual focus of the convex mirror;
the microprocessor comprises:
the image data reading module is used for reading the image information of the parking places in the parking lot, which is transmitted from the visual sensor;
the image data file storage module is used for storing the read image information in a storage unit in a file mode;
the virtual parking stall frame setting module is used for setting virtual parking stall frames which correspond to actual parking stalls one by one according to the read image information of the whole parking lot and storing a reference image;
the sensor calibration module is used for calibrating parameters of the omnibearing visual sensor and establishing a linear corresponding relation between a spatial object image and an obtained video image;
the virtual parking-space frame detection module is used for performing, for each virtual parking-space frame, a difference operation between the current live video frame and the reference image; the image subtraction is expressed as formula (17):
fd(X, t0, ti) = f(X, ti) - f(X, t0)    (17)
in the above formula, fd(X, t0, ti) is the result of subtracting the reference image from the real-time captured image; f(X, ti) is the real-time image; f(X, t0) is the reference image;
if fd(X, t0, ti) is not less than the threshold value, a suspected-vehicle event is judged;
if fd(X, t0, ti) is less than the threshold value, no suspected-vehicle event is judged;
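The difference operation of formula (17) can be sketched as follows for one virtual parking-space frame. Images are plain lists of grayscale rows, and the threshold value is an illustrative assumption, not a value from the patent.

```python
# Sketch of formula (17): subtract the reference image from the live
# image inside one virtual parking-space frame and compare the mean
# absolute difference with a threshold (value assumed here).

def detect_suspect_vehicle(live, reference, threshold=30):
    """Return True if fd(X, t0, ti) reaches the threshold on average."""
    total, count = 0, 0
    for live_row, ref_row in zip(live, reference):
        for live_px, ref_px in zip(live_row, ref_row):
            total += abs(live_px - ref_px)   # f(X, ti) - f(X, t0)
            count += 1
    return total / count >= threshold

# An empty space barely differs from its reference; a parked car
# changes many pixels at once.
reference = [[10] * 4 for _ in range(4)]
empty     = [[12] * 4 for _ in range(4)]
occupied  = [[200] * 4 for _ in range(4)]
print(detect_suspect_vehicle(empty, reference))     # False
print(detect_suspect_vehicle(occupied, reference))  # True
```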
the connected region calculation module is used for marking the current image after a suspected-vehicle event is judged: a cell whose pixel value is 0 indicates no suspected vehicle in that cell, and a cell whose pixel value is 1 indicates a suspected vehicle; for each pixel of the current image, it is checked whether the pixel equals some adjacent pixel around it; equal values are judged connected, and all mutually connected pixels form one connected region;
and the vehicle judgment module is used for calculating the area Si of each connected region obtained and comparing it with a preset threshold SThreshold:
if the area Si of the connected region is larger than the threshold SThreshold, it is judged that a vehicle occupies the parking space;
if the area Si of the connected region is less than the threshold SThreshold, it is judged that there is no vehicle in the parking space;
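The connected-region and vehicle-judgment modules can be sketched with a breadth-first flood fill over the binary mask. The choice of 4-connectivity and the area threshold value are assumptions for illustration.

```python
# Sketch of the connected-region module: label 4-connected regions of
# 1-pixels in a binary mask, then judge a space occupied if any region
# area Si exceeds a threshold S_threshold (value assumed here).

from collections import deque

def connected_region_areas(mask):
    """Return the area of each 4-connected region of 1-pixels."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                area, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return areas

def space_occupied(mask, s_threshold=6):
    """Judge a vehicle present if any region area Si exceeds the threshold."""
    return any(area > s_threshold for area in connected_region_areas(mask))

mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 1],
        [1, 0, 0, 0]]            # one 7-pixel blob plus an isolated pixel
print(connected_region_areas(mask))  # [7, 1]
print(space_occupied(mask))          # True
```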
and the parking space information publishing module is used for obtaining the parking space occupancy information of the lot according to the vehicle judgment for each virtual parking-space frame, and publishing it through the communication module.
Further, the microprocessor further comprises:
and the parking space information updating module is used for comparing the current monitored parking space information with the last counted parking space information, and updating and publishing the parking space occupation information if the parking space occupation information is changed.
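The updating module's behaviour, publishing only when the occupancy changed since the last count, can be sketched as follows; the snapshot format (space id mapped to an occupied flag) and the function names are assumptions.

```python
# Sketch of the parking-space information updating module: compare the
# current occupancy snapshot with the previous one and publish only
# when something changed.

def update_and_publish(previous, current, publish):
    """Call publish(current) only if occupancy changed; return new state."""
    if current != previous:
        publish(current)
    return current

published = []
state = {"A1": False, "A2": True}
# Unchanged snapshot: nothing is published.
state = update_and_publish(state, {"A1": False, "A2": True}, published.append)
# A1 becomes occupied: the new snapshot is published.
state = update_and_publish(state, {"A1": True, "A2": True}, published.append)
print(len(published))  # 1
```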
Still further, the microprocessor further comprises:
and the parking space reservation processing module, by which a manager presets parking space reservation conditions according to the reserved-space situation; the reservation information is input into the parking space information publishing module and combined into the overall judgment of space occupancy.
Further, the microprocessor further comprises:
and the color space conversion processing module is used for converting the image acquired by the image data reading module from the RGB color space to the HSI color space, and the conversion calculation formula is (18):
I = (R + G + B) / 3
H = (1/360) · [90 - arctan(F/√3) + {0, G > B; 180, G < B}]
S = 1 - min(R, G, B) / I    (18)
where F = (2R - G - B) / (G - B)
in the above formula, H is the hue of the HSI color space, S is the saturation of the HSI color space, I is the luminance of the HSI color space, and R is the red of the RGB color space; g is the green color of the RGB color space; b is blue in RGB color space;
the input end of the color space conversion processing module is connected with the virtual car frame detection module.
Or, the microprocessor further comprises:
and the color space conversion processing module is used for converting the image acquired by the image data reading module from the RGB color space to a (Cr, Cb) space color model, and the conversion calculation formula is (19):
Y=0.2990*R+0.5870*G+0.1140*B
Cr=0.5000*R-0.4187*G-0.0813*B+128
Cb=-0.1687*R-0.3313*G+0.5000*B+128 (19)
in the above formula, Y represents the luminance of the (Cr, Cb) spatial color model, and Cr, Cb are two color components of the (Cr, Cb) spatial color model, representing color difference; r represents red of the RGB color space; g represents the green color of the RGB color space; b represents the blue color of the RGB color space.
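Formula (19) is the standard JPEG-style RGB to YCbCr conversion and can be sketched as follows; the choice to return unrounded floats (no clamping) is an assumption for the sketch.

```python
# Sketch of formula (19): RGB -> (Y, Cr, Cb) with JPEG-style
# coefficients.  Rounding and clamping policy is assumed.

def rgb_to_ycrcb(r, g, b):
    y  =  0.2990 * r + 0.5870 * g + 0.1140 * b
    cr =  0.5000 * r - 0.4187 * g - 0.0813 * b + 128
    cb = -0.1687 * r - 0.3313 * g + 0.5000 * b + 128
    return y, cr, cb

# Gray pixels keep Cr = Cb = 128, i.e. zero colour difference.
print(rgb_to_ycrcb(128, 128, 128))
```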
The working principle of the invention is as follows: image processing and computer vision are continuously developing technologies. In principle, computer vision observes for four purposes: preprocessing, extraction of the lowest-level features, recognition of mid-level features, and interpretation of high-level scenes from images. Generally, computer vision comprises primary features, image processing and image understanding. The image is an extension of human vision; through machine vision, the usage of parking spaces in a lot can be grasped accurately and immediately. The rapidity of image detection rests on the fact that visual information propagates as light; the richness and intuitiveness of image information are such that no other current detection technology can provide them.
The recently developed omnidirectional vision sensor, ODVS (Omni-Directional Vision Sensor), provides a new solution for acquiring a panoramic image of a scene in real time. ODVS is characterized by a wide field of view (360 degrees), compressing the information of a hemispherical field into a single image of large information content; when acquiring a scene image, the ODVS can be placed more freely in the scene; the ODVS need not be aimed at a target while monitoring the environment; the algorithms for detecting and tracking moving objects within the monitored range are simpler; and a real-time image of the scene can be obtained. ODVS-based omnidirectional vision systems have therefore developed rapidly in recent years and are becoming an important field of computer vision research; since 2000 the IEEE has held the annual IEEE Workshop on Omni-directional Vision. Because parking-space detection must cover all the spaces in a lot, an omnidirectional vision sensor can observe every space at any time: as long as one is installed at the middle of the top of the parking lot, the usage of the spaces can easily be grasped. At present there are no papers or patents applying omnidirectional vision sensors to the technical field of electronic parking guidance systems.
Therefore, the invention adopts an omnidirectional vision sensor (ODVS) and digital image processing, combined with the layout of parking spaces and some characteristics of parked vehicles, to detect whether each space is occupied, thereby providing in-lot and out-of-lot guidance information for parking. With space detection as the key function, safety in the lot can also be monitored, giving the parking lot a pair of intelligent eyes.
The optical part of the ODVS camera device mainly comprises a vertically downward catadioptric mirror and an upward-facing camera. An imaging unit composed of a condenser lens and a CCD is fixed at the lower part of a cylinder made of transparent resin or glass; a downward, large-curvature catadioptric mirror is fixed at the upper part of the cylinder; between the mirror and the condenser lens, a cone of gradually decreasing diameter is fixed at the middle of the mirror to prevent light saturation inside the cylinder caused by excessive incident light. Fig. 2 is a schematic diagram of the optical system of the omnidirectional vision sensor according to the invention.
The catadioptric panoramic imaging system can perform imaging analysis by using a pinhole imaging model, but the requirement of real-time performance must be met for the acquired real-scene image to be subjected to back projection to obtain a perspective panoramic image.
The horizontal coordinate of an object point in the parking lot scene must be in a linear relation with the coordinate of the corresponding image point, so that the horizontal scene is free of distortion. Since the omnidirectional vision device of the electronic parking guidance system is installed at the top of the parking lot and monitors the space situation in the horizontal direction across the whole lot, the catadioptric mirror surface must be designed so that the image does not deform in the horizontal direction.
In the design, a CCD or CMOS (complementary metal-oxide-semiconductor) device and an imaging lens are selected to form the camera; the overall dimensions of the system are estimated on the basis of calibrating the camera's internal parameters, and the shape parameters of the mirror are then determined according to the field of view in the height direction.
As shown in fig. 1, the projection centre C of the camera is at a distance h above the road-level scene, and the vertex of the mirror is at a distance z0 above the projection centre. In the invention, a coordinate system is established with the projection centre of the camera as the origin, and the surface shape of the mirror is expressed by a function z(x). A pixel q in the image plane, at distance ρ from the image centre point, receives a ray from horizontal-scene point O (at distance d from the Z axis) reflected at mirror point M. A distortion-free horizontal scene requires a linear relation between the horizontal coordinate of the scene object point and the coordinate of the corresponding image point:
d(ρ)=αρ (1)
in the formula (1), ρ is a distance from a surface-shaped center point of the reflecting mirror, and α is a magnification of the imaging system.
An included angle between the normal of the reflector at the point M and the Z axis is gamma, an included angle between the incident light and the Z axis is phi, and an included angle between the reflected light and the Z axis is theta. Then
tgφ = (d(x) - x) / (z(x) - h)    (2)
tgγ = dz(x)/dx    (3)
tg(2γ) = 2·(dz(x)/dx) / (1 - (dz(x)/dx)²)    (4)
tgθ = ρ/f = x/z(x)    (5)
By the law of reflection,
2γ = φ - θ
tg(2γ) = tg(φ - θ) = (tgφ - tgθ) / (1 + tgφ·tgθ)    (6)
From equations (2), (4), (5) and (6), the quadratic equation (7) in dz(x)/dx is obtained:
(dz(x)/dx)² + 2k·(dz(x)/dx) - 1 = 0    (7)
in the formula:
k = {z(x)·[z(x) - h] + x·[d(x) - x]} / {z(x)·[d(x) - x] - x·[z(x) - h]}    (8)
From equation (7), the differential equation (9) is obtained:
dz(x)/dx + k - √(k² + 1) = 0    (9)
Formula (10) is obtained from formulae (1) and (5):
d(x) = αf·x / z(x)    (10)
From equations (8), (9), (10) and the initial conditions, a numerical solution of the mirror surface shape is obtained by integrating the differential equation. The overall system dimensions are mainly the distance H0 between the mirror and the camera and the mirror aperture D0. When designing the catadioptric panoramic system, a suitable camera is selected according to the application requirements, Rmin is calibrated, H0 is determined from the lens focal length f, and the aperture D0 is calculated from formula (1).
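The numerical solution can be sketched with a simple Euler integration of equation (9), with k from (8) and d(x) from (10). All parameter values below (α, f, z0, h) are arbitrary illustrative assumptions, not the patent's design values; the camera projection centre is the coordinate origin and the scene plane is taken at z = h below it.

```python
# Sketch: Euler-integrate the mirror profile z(x) outward from the
# vertex z(0) = z0, using dz/dx = -k + sqrt(k^2 + 1) (formula (9)),
# k from formula (8), and d(x) = alpha*f*x/z(x) (formula (10)).
# Parameter values are illustrative assumptions only.

import math

def mirror_profile(alpha=600.0, f=4.0, z0=40.0, h=-2000.0,
                   x_max=20.0, steps=2000):
    """Return [(x, z(x))] samples of the mirror surface shape."""
    dx = x_max / steps
    x, z = 0.0, z0
    profile = [(x, z)]
    for _ in range(steps):
        x_eval = x + 1e-9            # avoid 0/0 exactly at the vertex
        d = alpha * f * x_eval / z   # formula (10)
        num = z * (z - h) + x_eval * (d - x_eval)
        den = z * (d - x_eval) - x_eval * (z - h)
        k = num / den                # formula (8)
        z += (-k + math.sqrt(k * k + 1.0)) * dx   # formula (9)
        x += dx
        profile.append((x, z))
    return profile

profile = mirror_profile()
print(profile[0], profile[-1])  # vertex and rim of the computed surface
```

The slope -k + sqrt(k² + 1) is always positive and small when k is large, so the profile rises slowly away from the vertex, as expected for a gently curved panoramic mirror.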
Determination of system parameters:
the system parameters af are determined according to the required height-wise field of view of the application. From formulae (1), (2) and (5) formula (11) is obtained, for some simplification, by dividing z (x) by z0Mainly considering that the height change of the mirror surface is smaller than the position change of the mirror surface and the camera;
tgφ = (αf - z0)·(ρ/f) / (z0 - h)    (11)
At the maximum circumference of the image plane, centred on the image centre point, ρ = Rmin and φ = φmax; writing ωmax = Rmin/f, formula (12) is obtained:
αf = (z0 - h)·tgφmax / ωmax + z0    (12)
the imaging simulation was performed in the opposite direction to the actual light. The light source is arranged at the projection center of the camera, pixel points are selected at equal intervals in the image plane, light rays passing through the pixel points are reflected by the reflector and then intersect with the horizontal plane, and if the intersection points are at equal intervals, the reflector has the property of no distortion of a horizontal scene. The imaging simulation can evaluate the imaging property of the reflector on one hand and accurately calculate the caliber and the thickness of the reflector on the other hand.
The imaging transformation involves transformations between different coordinate systems. The camera imaging system involves the following four coordinate systems: (1) the real-world coordinate system XYZ; (2) the coordinate system x̂ŷẑ established with the camera as centre; (3) the image plane coordinate system x*y*o* formed in the camera; (4) the computer image coordinate system MN used by the digital image inside the computer, in pixel units.
According to the conversion relations between these coordinate systems, the required imaging model of the omnidirectional camera can be obtained, converting the correspondence from the two-dimensional image to the three-dimensional scene. The invention adopts an approximate perspective imaging analysis of the catadioptric omnidirectional imaging system to convert the two-dimensional image in image plane coordinates into its correspondence with the three-dimensional scene. The general perspective imaging model is shown in figure 3, where d is the object height, ρ the image height, t the object distance and F the image distance (equivalent focal length), giving formula (13):
d = (t/F)·ρ    (13)
When the catadioptric omnidirectional imaging system with the horizontal scene free of deformation is designed, the horizontal coordinate of a scene object point and the coordinate of a corresponding image point are required to be in a linear relation, as expressed in formula (1); comparing the equations (13) and (1), it can be seen that the horizontal scene imaging by the catadioptric omnidirectional imaging system without deformation is perspective imaging. Therefore, for horizontal scene imaging, the catadioptric omnidirectional imaging system without horizontal scene deformation can be regarded as a perspective camera, and α is the magnification of the imaging system. Let the center of projection of the virtual perspective camera be point C (see fig. 3), and its equivalent focal length be F. Comparing the formulas (13) and (1) gives the formula (14);
α = t/F;  t = h    (14)
From formulae (12) and (14), formula (15) is obtained:
F = f·h·ωmax / [(z0 - h)·tgφmax + z0·ωmax]    (15)
System imaging simulation according to the above omnidirectional camera imaging model shows that rays emitted from the camera projection centre through equally spaced pixel points in the image plane, after reflection by the mirror, intersect the horizontal plane of the parking lot 5 m from the projection centre at essentially equal intervals, as shown in fig. 4. Under the above design principle, the relation between the horizontal-plane coordinates of the parking lot and the coordinates of the corresponding omnidirectional image points therefore simplifies to a linear one: through the design of the mirror surface, the conversion from the real-world coordinate system XYZ to the image plane coordinate system is linear, with proportionality given by the magnification α. Next comes the conversion from the image plane coordinate system to the coordinate system used by the digital image inside the computer. Since image coordinates in the computer are counted in discrete pixels in memory, the coordinates of the actual image plane are mapped to the computer image plane by a rounding conversion, given by formula (16):
M = O_m − x·S_x;  N = O_n − y·S_y    (16)
in the formula: om and On are respectively the number of rows and columns of point pixels mapped On the computer image plane by the origin of the image plane; sx, Sy are scale factors in the x and y directions, respectively. The Sx and Sy are determined by placing a calibration plate at a distance Z between a camera and a reflector surface, and calibrating a camera to obtain numerical values of Sx and Sy, wherein the unit is (pixel); om and On. Is determined in terms of the selected camera resolution pixels in units of (pixel).
A vehicle parked in the parking lot is a static object in a relatively static state, while vehicles entering and exiting the parking lot and people moving in it are moving objects. A background subtraction image processing algorithm can be used to extract both kinds of foreground object, but the strategies for establishing and updating their background models differ: the former basically requires that the image background not be updated, so far as possible, to avoid absorbing a parked vehicle into the background; the latter requires that the image background be constantly updated so that a foreground point set is obtained by background subtraction. The present invention focuses mainly on the former, and therefore adopts a strategy of updating the image background as little as possible.
The background subtraction algorithm, also called the difference method, is an image processing method commonly used to detect image changes and moving objects. According to the correspondence between three-dimensional space and image pixels, a stable reference image is first required and stored in the computer's memory; the reference image is dynamically updated by a background-adaptation method. The real-time captured image is then subtracted from the reference image, and the brightness of regions where the subtraction result has changed is enhanced. The image subtraction is computed by formula (17):
f_d(X, t_0, t_i) = f(X, t_i) − f(X, t_0)    (17)
where f_d(X, t_0, t_i) is the result of subtracting the reference image from the real-time captured image, f(X, t_i) is the real-time captured image, and f(X, t_0) is the basic reference image.
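A minimal sketch of the difference method of formula (17), assuming grayscale images stored as NumPy arrays; the function name and the thresholding step are illustrative additions, not the patent's:

```python
import numpy as np

def background_subtract(frame, reference, threshold):
    """Formula (17): f_d(X, t0, ti) = f(X, ti) - f(X, t0).
    Computes the absolute difference between the real-time frame and
    the stored reference image, and returns a boolean mask of the
    pixels whose change exceeds the given threshold."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold
```

Casting to a signed type before subtracting avoids the wrap-around that unsigned 8-bit subtraction would produce.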
Considering that the ground in the parking lot is uniformly close to gray and clearly differs in color from a parked vehicle, the image subtraction can be computed with a color model. The color of each pixel of a color image is generally synthesized as a weighted sum of red, green, and blue tristimulus values; other color bases, such as the hue, saturation, intensity (HSI) basis, can be obtained from the red, green, blue (RGB) values by linear or nonlinear transformation. To capture the difference between the color feature values of the parked-vehicle area and the background area of the parking lot image under different color spaces and different illumination, this patent adopts the HSI color-space model.
The HSI color space is based on human perception of color; the HSI model describes color by three basic features: 1. hue H, measured by position on a standard color wheel from 0 to 360 degrees. In common use, hue is identified by a color name, such as red, orange, or green; 2. saturation S, the intensity or purity of the color. Saturation represents the proportion of the pure hue in the color, measured as a percentage from 0% (gray) to 100% (fully saturated). On a standard color wheel, saturation increases from the center to the edge; 3. brightness I, the relative lightness or darkness of the color, typically measured as a percentage from 0% (black) to 100% (white).
Hue and saturation are collectively referred to as chroma and indicate the kind and depth of a color. Since human vision is far more sensitive to brightness than to chroma, the HSI color space, which matches human visual characteristics better than the RGB color space, is often adopted for convenient color processing and recognition. Many algorithms in image processing and computer vision can be used conveniently in the HSI color space, where the components can be processed separately and independently of one another; the workload of image analysis and processing is therefore greatly reduced. Note two important properties of the HSI model: first, the I component is independent of color and is mainly influenced by the intensity of the light source; second, the H and S components are closely linked to the way people perceive color. Although the intensity of the light in the parking lot varies, the H and S components of the ground color of the parking lot and of the color of a parked vehicle are independent of it and do not change with light intensity. Therefore, by performing the image subtraction between the real-time captured image and the reference image on the H and S components in the HSI color space, the information on whether each parking space is occupied can be obtained very easily. The conversion between the HSI color space and the RGB color space is expressed by formula (18):
I = (R + G + B)/3
H = (1/360)·[90 − arctan(F/√3) + {0, G > B; 180, G < B}]
S = 1 − min(R, G, B)/I    (18)
where F = (2R − G − B)/(G − B).
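Formula (18) can be transcribed directly; the sketch below is illustrative, assumes 8-bit RGB inputs, treats the achromatic case G = B (where F is undefined) as hue 0, and evaluates the arctangent in degrees:

```python
import math

def rgb_to_hsi(R, G, B):
    """Convert an RGB triple to (H, S, I) per formula (18).
    H is normalized to [0, 1); S is set to 0 for pure black to
    avoid division by zero."""
    I = (R + G + B) / 3.0
    S = 1.0 - min(R, G, B) / I if I > 0 else 0.0
    if G == B:
        H = 0.0  # F = (2R-G-B)/(G-B) is undefined when G == B
    else:
        F = (2 * R - G - B) / (G - B)
        H = (90.0 - math.degrees(math.atan(F / math.sqrt(3.0)))
             + (0.0 if G > B else 180.0)) / 360.0
    return H, S, I
```

As a sanity check, pure green (0, 255, 0) gives F = −1, arctan(−1/√3) = −30°, hence H = 120/360, and pure blue (0, 0, 255) gives H = 240/360 — matching the standard color wheel.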
Connectivity between pixels is an important concept for delimiting regions. Whether a parking space in the parking lot is occupied can be determined by checking whether a connected region exists within the virtual space-frame region. The specific method is as follows: in a two-dimensional image, consider the m (m ≤ 8) pixels adjacent to a target pixel; if the gray level of the target pixel equals the gray level of some point A among these m pixels, the target pixel is said to have connectivity with point A. The usual connectivity types are 4-connectivity and 8-connectivity: 4-connectivity uses the four edge-adjacent neighbors of the target pixel, while 8-connectivity uses all adjacent pixels of the target pixel in the two-dimensional plane. Taking all mutually connected pixels as one region constitutes a connected region.
Connected region calculation addresses the binary image produced during image processing, in which background and target have gray values 0 and 1, respectively. A cell whose pixel value is 0 indicates that no object exists there; a value of 1 indicates that an object exists. Since the glass of some vehicles is relatively close in color to the ground, fragmented regions can be merged using a connected component labeling method. The labeling algorithm finds all connected components in the image and assigns the same marker to all points of the same component. FIG. 5 is a schematic diagram of the connectivity marking. The connected component algorithm is as follows:
1) Scan the image from left to right, top to bottom.
2) If the pixel value is 1, then:
if the upper or the left neighbor has a marker, copy that marker;
if both have the same marker, copy it;
if they have different markers, copy the upper point's marker and enter both markers into the equivalence table as equivalent markers;
otherwise, assign a new marker to this pixel and enter it into the equivalence table.
3) If more pixels remain to be considered, return to step 2).
4) Find the lowest marker in each equivalence set of the equivalence table.
5) Scan the image again and replace each marker with the lowest marker of its equivalence set.
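The five steps above can be sketched as a classic two-pass labeling routine; this is a minimal illustration using 4-connectivity and a union-find-style equivalence table (the names are ours, not the patent's):

```python
def label_connected(binary):
    """Two-pass 4-connectivity labeling following steps 1)-5):
    scan left-to-right, top-to-bottom; copy the marker of the upper
    or left neighbor; record conflicting markers in an equivalence
    table; then relabel each pixel with its lowest equivalent marker."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}                      # equivalence table: marker -> representative

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] != 1:
                continue
            up = labels[r - 1][c] if r > 0 else 0
            left = labels[r][c - 1] if c > 0 else 0
            if up and left:
                labels[r][c] = up    # copy the upper point's marker
                ru, rl = find(up), find(left)
                if ru != rl:         # enter both markers as equivalents
                    parent[max(ru, rl)] = min(ru, rl)
            elif up or left:
                labels[r][c] = up or left
            else:
                labels[r][c] = next_label   # assign a new marker
                parent[next_label] = next_label
                next_label += 1
    # second scan: replace each marker with the lowest equivalent one
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels
```

The per-space area Si can then be obtained by counting the pixels of each label inside a virtual space frame and comparing the count with the threshold.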
Then the area Si of the connected region within each virtual parking space frame is computed statistically; when Si exceeds the preset threshold, a vehicle is considered to be in the parking space, and when Si is below the threshold, the region is attributed to noise or litter (paper, plastic film, etc.).
The invention has the following beneficial effects: 1) the detection range is wide: parked vehicles in all directions within 200 meters can be detected; 2) installation and maintenance cause no interference: the video detector is usually installed at the top of the middle of the parking lot, so installation and maintenance do not disturb parking lot business and no road surface need be excavated or damaged; 3) maintenance is convenient and low-cost: when a traditional induction-coil detector fails, the road surface must be excavated for repair, whereas faulty video detection equipment can simply be removed or repaired, reducing maintenance cost; 4) the detection parameters are rich: besides parking space occupancy, various potential safety hazards in the parking lot, such as vehicle theft and fire, can be detected after new algorithms are added, which an ordinary induction-coil detector cannot match; 5) visibility: the omnibearing real-time image can be transmitted to the parking lot manager to realize a monitoring function; 6) detection reliability and accuracy are high: no misoperation or false detection occurs as with a traditional induction-coil detector; 7) the statistics are convenient and the algorithm is simple to implement, making the system especially suitable for managing large parking lots; 8) the system is advanced, extensible, and sustainable: video parking space detection is a key technology of intelligent transportation systems; it can stand alone as a system, or be linked over a network with advanced traveler information systems and other dynamic intelligent-traffic modules to realize more functions.
(IV) description of the drawings
FIG. 1 is a schematic diagram of imaging from three-dimensional space to an omnidirectional visual plane;
FIG. 2 is a schematic diagram of the hardware components of the omnidirectional vision sensor;
FIG. 3 is a schematic view of a perspective projection imaging model equivalent to a general perspective imaging model of an omnidirectional vision apparatus;
FIG. 4 is a schematic diagram of an omni-directional vision apparatus for simulating image deformation in the horizontal direction;
FIG. 5 is a functional block diagram of the electronic parking guidance system;
FIG. 6 is a functional block diagram of an electronic parking guidance system based on omnidirectional computer vision;
fig. 7 is a flowchart of a process of the electronic parking guidance system with an omnidirectional vision sensor.
Fig. 8 is an illustration of an electronic parking guidance system based on omnidirectional computer vision.
(V) detailed description of the preferred embodiments
The invention is further described below with reference to the accompanying drawings.
Example 1
Referring to fig. 1 to 8, an electronic parking guidance system based on omnidirectional computer vision comprises a microprocessor 6, an omnidirectional vision sensor 13 for monitoring the condition of a parked vehicle in a parking lot, and a communication module 27 for communicating with the outside, wherein the omnidirectional vision sensor 13 is installed at the upper part of the parking lot to be monitored, and the omnidirectional vision sensor 13 is connected with the microprocessor 6 through a USB interface;
the omnibearing vision sensor 13 comprises an outer convex reflecting mirror surface 1 used for reflecting objects in the monitoring field, a transparent cylinder 3 and a camera 5, wherein the outer convex reflecting mirror surface 1 faces downwards, the transparent cylinder 3 supports the outer convex reflecting mirror surface 1, a black cone 2 is fixed at the center of an outer convex part of the reflecting mirror surface 1, the camera 5 used for shooting an imaging body on the outer convex reflecting mirror surface is positioned inside the transparent cylinder, and the camera 5 is positioned on a virtual focus of the outer convex reflecting mirror surface 1;
the microprocessor comprises:
the image data reading module 15 is used for reading the image information of the parking places in the parking lot, which is transmitted from the visual sensor;
an image data file storage module 16, which is used for storing the read image information in a storage unit in a file mode;
the virtual parking stall frame setting module 17 is used for setting the read image information of the whole parking lot into virtual parking stall frames which correspond to the actual parking stalls one by one according to the parking stall distribution condition and storing a reference image;
the sensor calibration module 18 is used for calibrating parameters of the omnibearing visual sensor and establishing a linear corresponding relation between a spatial object image and an obtained video image;
a network transmission module 19, configured to output the image information to the outside through a network;
a color space conversion processing module 20, configured to convert the image acquired by the image data reading module from an RGB color space to an HSI color space, where the conversion is calculated as (18):
I = (R + G + B)/3
H = (1/360)·[90 − arctan(F/√3) + {0, G > B; 180, G < B}]
S = 1 − min(R, G, B)/I    (18)
where F = (2R − G − B)/(G − B);
in the above formula, H is the hue of the HSI color space, S is the saturation of the HSI color space, I is the luminance of the HSI color space, and R is the red of the RGB color space; g is the green color of the RGB color space; b is blue in RGB color space;
the input end of the color space conversion processing module is connected with the virtual car frame detection module;
the virtual car frame detection module 21 is configured to perform difference operation on the obtained current frame live video image and the reference image for each virtual car frame, where a calculation formula of image subtraction is expressed as formula (17):
f_d(X, t_0, t_i) = f(X, t_i) − f(X, t_0)    (17)
in the above formula, f_d(X, t_0, t_i) is the result of subtracting the reference image from the real-time captured image; f(X, t_i) is the real-time captured image; f(X, t_0) is the reference image;
if f_d(X, t_0, t_i) is greater than or equal to the threshold, a suspected-vehicle event is judged to exist;
if f_d(X, t_0, t_i) is less than the threshold, no suspected-vehicle event is judged to exist;
the connected region calculation module 22 is configured to mark the current image after a suspected-vehicle event is judged to exist: a cell whose pixel gray value is 0 indicates that no suspected vehicle is in the cell, and a cell whose pixel gray value is 1 indicates that a suspected vehicle is in the cell; whether each pixel in the current image equals some adjacent pixel around it is computed, connectivity is determined where the gray values are equal, and all pixels having connectivity are taken as one connected region;
the vehicle judgment module 23 is configured to calculate an area Si of the connected region according to the obtained connected region, and compare the area of the connected region with a preset threshold:
if the area Si of the connected region is larger than the threshold S_threshold, it is judged that a vehicle is in the parking space;
if the area Si of the connected region is smaller than the threshold S_threshold, it is judged that no vehicle is in the parking space;
the parking space information publishing module 25 is used for obtaining parking space occupation information in the parking lot according to the judgment of the vehicles in each virtual parking space frame and publishing the parking space occupation information through the communication module;
the parking space information updating module 24 is used for comparing the current monitored parking space information with the last counted parking space information, and updating and issuing the parking space occupation information if the parking space occupation information is changed;
and the parking space reservation processing module 26 is used for setting a parking space reservation condition in advance according to the reserved parking space condition by a manager, inputting the reservation information into the parking space information publishing module, and comprehensively judging the parking space occupation information.
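The chain formed by modules 21–23 (difference against the reference, region extraction per virtual space frame, area judgment) can be sketched roughly as follows. This is a simplified illustration under stated assumptions — H/S channel images as NumPy arrays and per-stall boolean masks standing in for the virtual space frames; the connected-region step is reduced here to a changed-pixel count per frame:

```python
import numpy as np

def detect_occupancy(frame_hs, reference_hs, stall_masks,
                     diff_thresh, area_thresh):
    """Per-stall occupancy check mirroring modules 21-23: difference
    the H/S channels of the live image against the reference, then
    count changed pixels inside each virtual stall frame and compare
    that area Si with the threshold."""
    changed = np.abs(frame_hs.astype(np.float32)
                     - reference_hs.astype(np.float32)).sum(axis=-1) > diff_thresh
    occupied = {}
    for stall_id, mask in stall_masks.items():
        area = int(np.count_nonzero(changed & mask))  # area Si
        occupied[stall_id] = area > area_thresh
    return occupied
```

A full implementation would insert connected component labeling between the difference mask and the area count, so that scattered noise pixels do not accumulate into a false vehicle.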
Referring to fig. 2 in conjunction with fig. 1, the device with the omnidirectional vision function according to the present invention has the following structure: it comprises a catadioptric mirror 1, a black cone 2, a transparent outer cover cylinder 3 and a base 9, wherein the catadioptric mirror 1 is positioned at the upper end of the cylinder 3, with its convex surface extending downward into the cylinder; the black cone 2 is fixed at the center of the convex surface of the catadioptric mirror 1; the rotation axes of the catadioptric mirror 1, the black cone 2, the cylinder 3 and the base 9 lie on the same central axis; the digital CCD camera 5 is positioned below the cylinder 3; the base 9 is provided with a circular groove of the same wall thickness as the cylinder 3; the base 9 has a hole of the same size as the lens 4 of the digital camera device 5, and below the base 9 are a microprocessor 6, a memory 8 and a display 7.
A vehicle parked in the parking lot is a static object in a relatively static state, while vehicles entering and exiting the parking lot and people moving in it are moving objects. A background subtraction image processing algorithm can be used to extract both kinds of foreground object, but the strategies for establishing and updating their background models differ: the former basically requires that the image background not be updated, to avoid absorbing a parked vehicle into the background; the latter requires that the image background be constantly updated so that a foreground point set is obtained by background subtraction. The background subtraction algorithm is performed in the virtual car frame detection module 21 of fig. 7, which obtains the difference map.
The background subtraction algorithm, also called the difference method, is an image processing method commonly used to detect image changes and moving objects. According to the correspondence between three-dimensional space and image pixels, a stable reference image is first required and stored in the computer's memory; the reference image is dynamically updated by a background-adaptation method. The real-time captured image is subtracted from the reference image, and the brightness of regions where the subtraction result has changed is enhanced. The image subtraction is computed by formula (17):
f_d(X, t_0, t_i) = f(X, t_i) − f(X, t_0)    (17)
where f_d(X, t_0, t_i) is the result of subtracting the reference image from the real-time captured image, f(X, t_i) is the real-time captured image, and f(X, t_0) is the basic reference image. The reference image is stored in the image data file and virtual parking space storage module 17 shown in fig. 7; it is an image acquired when no vehicle is parked in the parking lot.
Considering that the ground in the parking lot is all close to gray, and is clearly different from the color of the parked vehicle, the image subtraction calculation can be performed by using the color model. In the present invention, the HSI color space model is used, and the conversion between the HSI color space and the RGB color space is performed in the color space conversion processing module 20 of fig. 7, because it is considered that the intensity of the light in the parking lot varies, and the H and S components of the ground color of the parking lot and the color of the parked vehicle are independent from each other, and do not vary due to the intensity of the light. Therefore, the information of whether each parking space in the parking lot is occupied can be obtained very easily by performing image subtraction calculation of the real-time captured image and the reference image by using the H and S components of the colors in the HSI color space.
After the difference image is obtained, image preprocessing is required to obtain information on whether a vehicle is parked in each parking space. This preprocessing is performed mainly in the connected region calculation module 22 in fig. 7, chiefly by checking whether a connected region exists within the virtual parking space frame region. The specific method is as follows: in a two-dimensional image, consider the m (m ≤ 8) pixels adjacent to a target pixel; if the gray level of the target pixel equals the gray level of some point A among these m pixels, the target pixel is said to have connectivity with point A. The usual connectivity types are 4-connectivity and 8-connectivity: 4-connectivity uses the four edge-adjacent neighbors of the target pixel, while 8-connectivity uses all adjacent pixels of the target pixel in the two-dimensional plane. Taking all mutually connected pixels as one region constitutes a connected region.
Connected region calculation addresses the binary image produced during image processing, in which background and target have gray values 0 and 1, respectively. A cell whose pixel value is 0 indicates that no object exists there; a value of 1 indicates that an object exists. Since the glass of some vehicles is relatively close in color to the ground, fragmented regions can be merged using a connected component labeling method. Then the area Si of the connected region within each virtual parking space frame is computed statistically; when Si exceeds the preset threshold, a vehicle is considered to be present in the parking space, and when Si is below the threshold, the region is attributed to video noise or litter (paper, plastic film, etc.) in the parking lot.
The virtual parking space frames are created when the omnibearing vision sensor is calibrated, before the system is put into operation. The specific method is as follows: image information of the parking spaces in the parking lot is obtained with the omnibearing computer vision sensor and displayed on a display; virtual parking space frames are then set in the computer according to the displayed situation of each parking space in the parking lot and stored in a file (the image data file and virtual parking space storage module 17 in fig. 7); the virtual parking space frames must correspond one-to-one with the parking spaces of the actual parking lot and be numbered. In the parking lot scene shown in fig. 8 there are 50 actual parking spaces, and the image data file and virtual parking space storage module 18 in fig. 7 correspondingly stores information such as the sizes of the 50 virtual parking space frames and the numbers of the respective parking spaces.
The user of the parking lot can reserve the parking space in the parking lot through various communication means, such as selecting a desired parking position and a parking time period on a scene graph representing the parking lot through the internet. The processing for reserving a space in the parking lot is performed in the space reservation processing module 26 in fig. 7.
Once a change in parking space occupancy or a parking space reservation is detected, the occupancy of the parking lot is recalculated in the parking space information updating module 24 in fig. 7 to prepare data for the parking space information publishing module 25. The publishing module 25 performs two main tasks. One is guidance inside the parking lot: the available parking spaces are displayed on a large screen installed at the entrance of the parking lot so that users can quickly find a space to park in. The other is publishing the number of available spaces for guidance of vehicles outside the parking lot: the usage status of the parking lot, combined with the geographical position information stored in the parking lot data storage information 28, can be provided to drivers visually or audibly by broadcast at any time, or published in time via the Internet, mobile phones, car navigation devices and the like to guidance information boards installed at the roadside; as shown in fig. 6, the information displayed on a guidance information board is clear and easy to understand, for example "% of parking spaces empty". These processes are completed in the network communication module 27 in fig. 7.
Example 2
Referring to fig. 1 to 8, the color space conversion processing module of the present embodiment is configured to convert the image acquired by the image data reading module from the RGB color space to the (Cr, Cb) space color model, where the conversion is calculated by the following formula (19):
Y = 0.2990*R + 0.5870*G + 0.1140*B
Cr = 0.5000*R − 0.4187*G − 0.0813*B + 128
Cb = −0.1687*R − 0.3313*G + 0.5000*B + 128    (19)
in the above formula, Y represents the luminance of the (Cr, Cb) spatial color model, and Cr, Cb are two color components of the (Cr, Cb) spatial color model, representing color difference; r represents red of the RGB color space; g represents the green color of the RGB color space; b represents the blue color of the RGB color space.
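Formula (19) is a direct linear transform and can be transcribed as follows; this is an illustrative sketch (the function name is ours), using the BT.601-style coefficients of the formula:

```python
def rgb_to_ycbcr(R, G, B):
    """Convert an 8-bit RGB triple to (Y, Cr, Cb) per formula (19):
    Y is luminance; Cr and Cb are the two color-difference components,
    offset by 128 so that gray maps to the center of the range."""
    Y  =  0.2990 * R + 0.5870 * G + 0.1140 * B
    Cr =  0.5000 * R - 0.4187 * G - 0.0813 * B + 128
    Cb = -0.1687 * R - 0.3313 * G + 0.5000 * B + 128
    return Y, Cr, Cb
```

Both white (255, 255, 255) and black (0, 0, 0) map to Cr = Cb = 128, confirming that the chroma components are insensitive to achromatic brightness, which is exactly why they suit the light-varying parking lot scene.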
The rest of the structure and the operation are the same as those of embodiment 1.

Claims (5)

1. An electronic parking guidance system based on omnidirectional computer vision is characterized in that: the electronic parking guidance system comprises a microprocessor, an omnibearing visual sensor and a communication module, wherein the omnibearing visual sensor is used for monitoring the condition of a parked vehicle in a parking lot, and the communication module is used for communicating with the outside;
the omnibearing vision sensor comprises an outward convex reflecting mirror surface, a transparent cylinder and a camera which are used for reflecting objects in the monitoring field, wherein the outward convex reflecting mirror surface faces downwards, the transparent cylinder supports the outward convex reflecting mirror surface, a black cone is fixed at the center of an outward convex part of the catadioptric mirror surface, the camera which is used for shooting an imaging body on the outward convex reflecting mirror surface is positioned inside the transparent cylinder, and the camera is positioned on a virtual focus of the outward convex reflecting mirror surface;
the microprocessor comprises:
the image data reading module is used for reading the image information of the parking places in the parking lot, which is transmitted from the visual sensor;
the image data file storage module is used for storing the read image information in a storage unit in a file mode;
the virtual parking stall frame setting module is used for setting virtual parking stall frames which correspond to actual parking stalls one by one according to the read image information of the whole parking lot and storing a reference image;
the sensor calibration module is used for calibrating parameters of the omnibearing visual sensor and establishing a linear corresponding relation between a spatial object image and an obtained video image;
the virtual car frame detection module is used for carrying out difference value operation on the obtained current frame live video image and the reference image for each virtual car frame, and the calculation formula of image subtraction is expressed as formula (17):
f_d(X, t_0, t_i) = f(X, t_i) − f(X, t_0)    (17)
in the above formula, f_d(X, t_0, t_i) is the result of subtracting the reference image from the real-time captured image; f(X, t_i) is the real-time captured image; f(X, t_0) is the reference image;
if f_d(X, t_0, t_i) is greater than or equal to the threshold, a suspected-vehicle event is judged to exist;
if f_d(X, t_0, t_i) is less than the threshold, no suspected-vehicle event is judged to exist;
the connected region calculation module is used for marking the current image after a suspected-vehicle event is judged to exist, wherein a cell whose pixel gray value is 0 indicates that no suspected vehicle is in the cell and a cell whose pixel gray value is 1 indicates that a suspected vehicle is in the cell; whether each pixel in the current image equals some adjacent pixel around it is computed, connectivity is determined where the gray values are equal, and all pixels having connectivity are taken as one connected region;
and the vehicle judgment module is used for calculating the area Si of the connected region according to the obtained connected region, and comparing the area Si of the connected region with a preset threshold value:
if the area Si of the connected region is larger than the threshold S_threshold, it is judged that a vehicle is in the parking space;
if the area Si of the connected region is smaller than the threshold S_threshold, it is judged that no vehicle is in the parking space;
and the parking space information issuing module is used for judging according to the vehicles in each virtual parking space frame to obtain parking space occupation information in the parking lot and issuing the parking space occupation information through the communication module.
2. An electronic parking guidance system based on omnidirectional computer vision as recited in claim 1, wherein: the microprocessor further comprises:
and the parking space information updating module is used for comparing the current monitored parking space information with the last counted parking space information, and updating and publishing the parking space occupation information if the parking space occupation information is changed.
3. An electronic parking guidance system based on omnidirectional computer vision as recited in claim 2, wherein: the microprocessor further comprises:
and the parking space reservation processing module is used for presetting parking space reservation conditions according to the reserved parking space conditions by a manager, inputting the reservation information into the parking space information publishing module and comprehensively judging the parking space occupation information.
4. An electronic parking guidance system based on omnidirectional computer vision according to any of claims 1-3, characterized in that: the microprocessor further comprises:
and the color space conversion processing module is used for converting the image acquired by the image data reading module from the RGB color space to the HSI color space, the conversion being given by formula (18):

I = (R + G + B) / 3    (18)

H = (1/360) * [ 90 - arctan(F / √3) + { 0, G > B; 180, G < B } ]

S = 1 - min(R, G, B) / I

where F = (2R - G - B) / (G - B)

in the above formulas, H is the hue of the HSI color space, S is the saturation of the HSI color space, I is the luminance of the HSI color space, R is the red component of the RGB color space, G is the green component of the RGB color space, and B is the blue component of the RGB color space;
the input end of the color space conversion processing module is connected with the virtual parking-space frame detection module.
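For illustration only, formula (18) can be sketched as below; evaluating the arctangent in degrees and handling the G = B case separately (the formula's denominator G − B is then zero, a case the claim does not cover) are assumptions, as is the function name:

```python
import math

def rgb_to_hsi(r, g, b):
    """RGB -> HSI following formula (18): I = (R+G+B)/3,
    S = 1 - min(R,G,B)/I, and
    H = (1/360)[90 - arctan(F/sqrt(3)) + (0 if G > B else 180)]
    with F = (2R - G - B)/(G - B).  Inputs are in [0, 255]."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    if g == b:
        # G = B leaves F undefined (division by zero); returning hue 0
        # is a convention assumed here, not stated in the claim.
        h = 0.0
    else:
        f = (2.0 * r - g - b) / (g - b)
        offset = 0.0 if g > b else 180.0
        h = (90.0 - math.degrees(math.atan(f / math.sqrt(3))) + offset) / 360.0
    return h, s, i
```

For a pure green pixel (0, 255, 0) this yields H = 120/360, matching the conventional 120-degree hue of green, which suggests the reconstructed formula is internally consistent.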
5. An electronic parking guidance system based on omnidirectional computer vision according to any of claims 1-3, characterized in that: the microprocessor further comprises:
and the color space conversion processing module is used for converting the image acquired by the image data reading module from the RGB color space to the (Cr, Cb) color space model, the conversion being given by formula (19):

Y = 0.2990*R + 0.5870*G + 0.1140*B
Cr = 0.5000*R - 0.4187*G - 0.0813*B + 128
Cb = -0.1687*R - 0.3313*G + 0.5000*B + 128    (19)

in the above formulas, Y represents the luminance of the (Cr, Cb) color space model, and Cr and Cb are its two chrominance (color-difference) components; R represents the red component of the RGB color space, G represents the green component, and B represents the blue component.
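Formula (19) matches the standard BT.601 full-range RGB-to-YCrCb conversion. A minimal sketch, assuming the function name and taking the Cb red coefficient as -0.1687 (the standard value; the published text's 0.1787 appears to be a transcription error):

```python
def rgb_to_ycrcb(r, g, b):
    """RGB -> (Y, Cr, Cb) per formula (19), inputs in [0, 255].

    Y is luminance; Cr and Cb are the red and blue color-difference
    components, offset by 128 so a grey pixel maps to (Y, 128, 128).
    """
    y  =  0.2990 * r + 0.5870 * g + 0.1140 * b
    cr =  0.5000 * r - 0.4187 * g - 0.0813 * b + 128
    cb = -0.1687 * r - 0.3313 * g + 0.5000 * b + 128
    return y, cr, cb
```

A quick sanity check: black maps to (0, 128, 128) and white to approximately (255, 128, 128), i.e. both chrominance channels sit at their neutral midpoint for achromatic pixels.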
CNB2006100504719A 2006-04-21 2006-04-21 All-round computer vision-based electronic parking guidance system Expired - Fee Related CN100449579C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100504719A CN100449579C (en) 2006-04-21 2006-04-21 All-round computer vision-based electronic parking guidance system

Publications (2)

Publication Number Publication Date
CN101059909A true CN101059909A (en) 2007-10-24
CN100449579C CN100449579C (en) 2009-01-07

Family

ID=38866000

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100504719A Expired - Fee Related CN100449579C (en) 2006-04-21 2006-04-21 All-round computer vision-based electronic parking guidance system

Country Status (1)

Country Link
CN (1) CN100449579C (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789695B (en) * 2012-07-03 2014-12-10 大唐移动通信设备有限公司 Vehicular networking parking management access system and vehicular networking parking management system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6107942A (en) * 1999-02-03 2000-08-22 Premier Management Partners, Inc. Parking guidance and management system
JP3841621B2 (en) * 2000-07-13 2006-11-01 シャープ株式会社 Omnidirectional visual sensor
JP2004127162A (en) * 2002-10-07 2004-04-22 Kaa Tec Kk Parking lot managing system and its method
CN100363958C (en) * 2005-07-21 2008-01-23 深圳市来吉智能科技有限公司 Guide management system of intelligent parking position

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101425181B (en) * 2008-12-15 2012-05-09 浙江大学 Panoramic view vision auxiliary parking system demarcating method
CN101872548B (en) * 2009-04-23 2013-04-17 黄柏霞 Parking space guiding system and method based on images
CN101656023B (en) * 2009-08-26 2011-02-02 西安理工大学 Management method of indoor car park in video monitor mode
CN101807248A (en) * 2010-03-30 2010-08-18 上海银晨智能识别科技有限公司 System for detecting the existence of people in ATM machine video scene and method thereof
CN101894482A (en) * 2010-07-15 2010-11-24 东南大学 Video technology-based roadside vacant parking position wireless network detection system and method
CN102034365A (en) * 2010-11-28 2011-04-27 河海大学常州校区 Vehicle-mounted intelligent parking guidance system
CN102034365B (en) * 2010-11-28 2012-12-26 河海大学常州校区 Vehicle-mounted intelligent parking guidance system
CN102542839A (en) * 2010-12-10 2012-07-04 西安大昱光电科技有限公司 Parking information service platform
CN102110376B (en) * 2011-02-18 2012-11-21 汤一平 Roadside parking space detection device based on computer vision
CN102110376A (en) * 2011-02-18 2011-06-29 汤一平 Roadside parking space detection device based on computer vision
CN102110366A (en) * 2011-03-28 2011-06-29 长安大学 Block-based accumulated expressway vehicle parking event detecting method
CN104169988A (en) * 2012-03-13 2014-11-26 西门子公司 Apparatus and method for detecting a parking space
CN102663357A (en) * 2012-03-28 2012-09-12 北京工业大学 Color characteristic-based detection algorithm for stall at parking lot
CN102637360B (en) * 2012-04-01 2014-07-16 长安大学 Video-based road parking event detection method
CN102637360A (en) * 2012-04-01 2012-08-15 长安大学 Video-based road parking event detection method
CN104704321A (en) * 2012-10-09 2015-06-10 株式会社电装 Object detecting device
CN104704321B (en) * 2012-10-09 2017-08-08 株式会社电装 Article detection device
CN103778649A (en) * 2012-10-11 2014-05-07 通用汽车环球科技运作有限责任公司 Imaging surface modeling for camera modeling and virtual view synthesis
CN103778649B (en) * 2012-10-11 2018-08-31 通用汽车环球科技运作有限责任公司 Imaging surface modeling for camera modeling and virtual view synthesis
CN103824474A (en) * 2014-03-25 2014-05-28 宁波市江东元典知识产权服务有限公司 Stall prompting system based on image recognition technology
CN103824474B (en) * 2014-03-25 2015-08-19 宁波市江东元典知识产权服务有限公司 Based on the parking stall prompt system of image recognition technology
CN105989328A (en) * 2014-12-11 2016-10-05 由田新技股份有限公司 Method and device for detecting use of handheld device by person
CN105989739A (en) * 2015-02-10 2016-10-05 成都海存艾匹科技有限公司 Hybrid parking stall monitoring algorithm
CN104794164A (en) * 2015-03-26 2015-07-22 华南理工大学 Method for recognizing settlement parking spaces meeting social parking requirement on basis of open source data
CN104794164B (en) * 2015-03-26 2018-04-13 华南理工大学 Method based on the social parking demand of data identification settlement parking stall matching of increasing income
CN104751635A (en) * 2015-04-22 2015-07-01 成都逸泊科技有限公司 Intelligent parking monitoring system
CN106257563A (en) * 2015-06-17 2016-12-28 罗伯特·博世有限公司 The management in parking lot
CN107730970A (en) * 2017-02-14 2018-02-23 西安艾润物联网技术服务有限责任公司 Parking lot gateway outdoor scene methods of exhibiting and device
CN107730970B (en) * 2017-02-14 2021-03-02 西安艾润物联网技术服务有限责任公司 Parking lot entrance and exit live-action display method and device
CN106781680A (en) * 2017-02-20 2017-05-31 洪志令 A kind of curb parking intelligent control method based on the detection of image empty parking space
CN106781680B (en) * 2017-02-20 2019-07-30 洪志令 A kind of curb parking intelligent control method based on the detection of image empty parking space
CN108022431A (en) * 2017-12-04 2018-05-11 珠海横琴小可乐信息技术有限公司 A kind of method and system that parking stall is detected by video capture image
CN109918970A (en) * 2017-12-13 2019-06-21 中国电信股份有限公司 Recognition methods, device and the computer readable storage medium of free parking space
CN109918970B (en) * 2017-12-13 2021-04-13 中国电信股份有限公司 Method and device for identifying free parking space and computer readable storage medium
CN108346313A (en) * 2018-04-19 2018-07-31 济南浪潮高新科技投资发展有限公司 A kind of method for detecting vacant parking place and system
CN109785354A (en) * 2018-12-20 2019-05-21 江苏大学 A kind of method for detecting parking stalls based on background illumination removal and connection region
CN109784306A (en) * 2019-01-30 2019-05-21 南昌航空大学 A kind of intelligent parking management method and system based on deep learning
CN110751854A (en) * 2019-10-28 2020-02-04 奇瑞汽车股份有限公司 Parking guidance method and device for automobile and storage medium
CN110751854B (en) * 2019-10-28 2021-08-31 芜湖雄狮汽车科技有限公司 Parking guidance method and device for automobile and storage medium
CN111462522A (en) * 2020-04-04 2020-07-28 东风汽车集团有限公司 Visual parking space detection method capable of eliminating influence of strong ground reflected light
CN111462522B (en) * 2020-04-04 2021-10-29 东风汽车集团有限公司 Visual parking space detection method capable of eliminating influence of strong ground reflected light
CN111551196A (en) * 2020-04-23 2020-08-18 上海悠络客电子科技股份有限公司 Method for detecting whether vehicles exist on maintenance station
CN111551196B (en) * 2020-04-23 2024-06-18 上海悠络客电子科技股份有限公司 Detection method for detecting whether vehicle exists on overhaul station
CN111739336A (en) * 2020-04-26 2020-10-02 智慧互通科技有限公司 Parking management method and system
CN111739336B (en) * 2020-04-26 2022-09-20 智慧互通科技股份有限公司 Parking management method and system
WO2024098724A1 (en) * 2022-11-07 2024-05-16 重庆邮电大学 Driving system in area outside enhanced autonomous valet parking lot, and application method thereof

Also Published As

Publication number Publication date
CN100449579C (en) 2009-01-07

Similar Documents

Publication Publication Date Title
CN101059909A (en) All-round computer vision-based electronic parking guidance system
CN1912950A (en) Device for monitoring vehicle breaking regulation based on all-position visual sensor
CN1804927A (en) Omnibearing visual sensor based road monitoring apparatus
CN101064065A (en) Parking inducing system based on computer visual sense
CN1302438C (en) Method for monitoring a moving object and system regarding same
WO2018201835A1 (en) Signal light state recognition method, device and vehicle-mounted control terminal and motor vehicle
CN103824452B (en) A kind of peccancy parking detector based on panoramic vision of lightweight
CN100417223C (en) Intelligent safety protector based on omnibearing vision sensor
CN103345766B (en) A kind of signal lamp recognition methods and device
CN102708378B (en) Method for diagnosing fault of intelligent traffic capturing equipment based on image abnormal characteristic
CN100468245C (en) Air conditioner energy saving controller based on omnibearing computer vision
CN1851555A (en) Method for realizing two-dimensional panoramic true imaging
US9836881B2 (en) Heat maps for 3D maps
CN101183427A (en) Computer vision based peccancy parking detector
CN1931697A (en) Intelligent dispatcher for group controlled lifts based on image recognizing technology
CN107240079A (en) A kind of road surface crack detection method based on image procossing
CN109949231B (en) Method and device for collecting and processing city management information
CN1819652A (en) Watching device and system
CN1674643A (en) Apparatus for digital video processing and method thereof
CN1910623A (en) Image conversion method, texture mapping method, image conversion device, server-client system, and image conversion program
CN106442231A (en) Coarse aggregate angularity evaluation method based on digital image analysis technology
CN110910372B (en) Deep convolutional neural network-based uniform light plate defect detection method
CN1812570A (en) Vehicle antitheft device based on omnibearing computer vision
CN112511610A (en) Vehicle-mounted patrol intelligent method and system based on urban fine management conditions
JP2009163714A (en) Device and method for determining road sign

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090107

Termination date: 20120421