CN102163331A - Image-assisting system using calibration method - Google Patents


Info

Publication number
CN102163331A
CN102163331A (application CN2011100404406A / CN201110040440A)
Authority
CN
China
Prior art keywords
image
image sensor
sensor
subelement
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100404406A
Other languages
Chinese (zh)
Inventor
王炳立
Original Assignee
王炳立
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to CN201010110581.6 priority Critical
Priority to CN201010110581 priority
Application filed by 王炳立 filed Critical 王炳立
Priority to CN2011100404406A priority patent/CN102163331A/en
Publication of CN102163331A publication Critical patent/CN102163331A/en
Pending legal-status Critical Current

Abstract

The invention discloses an image-assisting system using a calibration method. The system comprises image sensor units, an image processing unit, an output unit, a calibration unit and a scene decision unit. The image sensors are calibrated in a calibration region to determine the attitude and position of each sensor; image correction, viewpoint transformation and image fusion are then performed on the basis of these determined parameters. The system markedly reduces the operator's blind zones, so that the operator can see as much intuitive, useful information about the surroundings as possible.

Description

Image-assisting system using a calibration method
Technical field
The present invention relates to an image-assisting system using a calibration method, for use in systems such as automobiles, mobile machinery and surveillance.
Background technology
In many mobile-machinery operations — motor vehicles travelling, ships docking, machinery passing through narrow areas, tower cranes grasping, moving and placing goods — relative displacement exists between different objects. Particularly when the permitted path of the moving object is narrow, the operator must watch the surroundings very carefully while moving, to avoid touching, colliding with or scraping against them.
A common example is driving a car, especially a large vehicle: the vehicle body and the limited viewing angles of the interior and exterior mirrors create blind zones. Because of these blind zones, high demands are placed on the driver's sense of the vehicle's extent; when passing through narrow areas, parking or reversing, the driver can easily scrape or collide with obstacles that cannot be fully observed.
One existing solution is the ultrasonic parking radar. A parking radar can detect the distance to obstacles in the blind zone reasonably well, and gives the driver limited prompts by sound and graphics. However, ultrasonic radar is neither intuitive nor conspicuous, and because the number of ranging probes is limited it cannot provide reliable detection at all angles and in all directions. Moreover, since ranging has a certain precision error, this kind of method cannot provide accurate and reliable distance measurement.
Another solution is the reversing camera, in which a wide-angle camera mounted at the tail of the vehicle captures and displays the rear view. Combined with a ranging radar this method can collect and display the rear image, but it has an obvious drawback: it can only capture and display the real-time image behind the vehicle and cannot provide a panoramic image, so it fails whenever the vehicle is not reversing.
With the development of image processing technology, panoramic reversing-view systems have recently appeared. Such a system corrects the distortion of the images from several cameras, stitches them together and displays the result to the driver, so that the driver can assess the surrounding environment. This approach effectively solves the problem of the field of view around the vehicle; see Chinese patents 200710106237.8, 200810236552.7, 200810163310.X and 200380101461.8.
These patents still leave room for improvement. For example, 200710106237.8 uses only 3 image channels, so that synthesising images of a large vehicle or trailer raises the field-of-view and resolution requirements on the sensors' optics; it also does not mention image distortion correction, and with wide-angle cameras the uncorrected distorted images degrade the spatial relationships the observer perceives. Patents such as 200380101461.8 do not mention calibrating the cameras' parameters, so the images carry no scale information and the operator cannot estimate distances; for the same reason, without camera calibration parameters the composite image cannot be fused with other sensor technologies such as range sensors to provide comprehensive information.
Summary of the invention
The object of the invention is to remedy the above defects of the prior art by providing an image-assisting system using a calibration method, which supplies comprehensive image information of the surroundings. The system can adapt the displayed scene to the motion trend of the monitored object and enhance the images so as to provide more effective information.
This object is achieved by the following technical solution.
The image-assisting system using a calibration method comprises the following units:
an image sensor unit, which obtains image information of the relevant surroundings of the monitored object;
an image processing unit, which collects and processes the image information of the monitored object's peripheral field of view obtained by the image sensor unit, and fuses the processed images into composite image information of that field of view;
an output unit, which outputs the composite image information of the monitored object's peripheral field of view produced by the image processing unit;
a calibration unit, which, before the system is put into operation, calibrates the optical parameters and distortion parameters of each image sensor in the image sensor unit by means of calibration templates, and feeds the resulting calibrated optical parameters and calibrated distortion parameters of each sensor to the image processing unit so that distorted images can be corrected; the calibration unit also calibrates the attitude parameters and position parameters of each image sensor within the calibration region formed by the calibration templates;
a scene decision unit, which receives operator instructions and/or the speed and motion trend of the monitored object, and outputs scene decision parameters to the image processing unit.
The image sensor unit comprises a plurality of image sensors mounted around the monitored object; the fields of view of adjacent sensors overlap, guaranteeing that no blind zone remains.
In the present invention, the monitored object is any equipment whose surroundings need to be monitored by the image-assisting system, including but not limited to vehicles and machinery.
The image sensors are colour image sensors, monochrome image sensors, infrared image sensors or combinations of them, for example monochrome combined with colour, monochrome combined with infrared, colour combined with infrared, or monochrome, colour and infrared sensors combined together.
The image processing unit of the invention comprises an image acquisition subunit, an image distortion correction subunit and an image fusion subunit.
The image acquisition subunit collects and buffers the image of each image sensor in the image sensor unit.
The image distortion correction subunit uses the calibrated optical parameters and calibrated distortion parameters of each sensor, obtained by the calibration unit from the calibration templates, to perform distortion correction on the image of each sensor in the image sensor unit, producing a distortion-corrected image for each sensor.
The image viewpoint transformation subunit, according to the scene parameters from the scene decision unit, performs viewpoint transformation and image scaling on the distortion-corrected image of each sensor, adjusting the viewpoint of the image and the proportions of its parts, to form the viewpoint image of each sensor.
The image fusion subunit fuses and stitches the viewpoint images delivered by the viewpoint transformation subunit together with the image of the monitored object, forming the composite image of the monitored object's peripheral field of view.
The image processing unit may further comprise an image anti-shake subunit arranged between the distortion correction subunit and the viewpoint transformation subunit. The anti-shake subunit performs shake detection on the distortion-corrected image of each sensor delivered by the distortion correction subunit and removes the inter-frame blur caused by slight jitter of the sensors, yielding an anti-shake image for each sensor; the viewpoint transformation subunit then performs viewpoint transformation and scaling on these anti-shake images according to the scene parameters, to form the viewpoint image of each sensor.
The image processing unit may instead comprise an image enhancement subunit arranged between the distortion correction subunit and the viewpoint transformation subunit. The enhancement subunit adjusts the distortion-corrected image of each sensor to improve the readability of the composite image, and delivers the enhanced image of each sensor to the viewpoint transformation subunit, which performs viewpoint transformation and scaling on the enhanced images according to the scene parameters, to form the viewpoint image of each sensor.
The image processing unit may also comprise both an image anti-shake subunit and an image enhancement subunit between the distortion correction subunit and the viewpoint transformation subunit: the anti-shake subunit removes from the distortion-corrected images the inter-frame blur caused by slight sensor jitter, yielding an anti-shake image for each sensor; the enhancement subunit then adjusts the anti-shake image of each sensor to improve the readability of the composite image, and delivers the enhanced image of each sensor to the viewpoint transformation subunit.
The distortion correction subunit computes the pixel data of each sensor's corrected image from the image data before correction, by interpolation or convolution.
The distortion correction covers tangential distortion, radial distortion, thin-prism distortion and decentering distortion.
The optical parameters comprise the lens optical parameters of each image sensor and the distortion parameters caused by imperfections in the optical system of the image sensor unit.
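As a minimal illustration of the distortion handling described above, the sketch below applies and inverts a radial-plus-tangential (decentering) distortion on normalized image coordinates. It assumes the common Brown–Conrady model with hypothetical coefficients `k1, k2, p1, p2`; the patent's correction subunit would apply such an inverse mapping to every pixel and resample by interpolation.

```python
# Hedged sketch: Brown-Conrady radial + tangential distortion, with the
# inverse recovered by fixed-point iteration (no closed form exists).
# Coefficients and function names are illustrative assumptions.

def distort(x, y, k1, k2, p1, p2):
    """Map an ideal normalized image point to its distorted position."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=20):
    """Invert the distortion iteratively, refining the ideal point estimate."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y
```

In a full correction pipeline this per-point mapping would be evaluated once per output pixel and the source image sampled by bilinear interpolation, as the subunit above describes.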
The anti-shake subunit comprises a shake detection subunit, a shake estimation subunit and a jitter compensation subunit. The shake detection subunit detects shake in the distortion-corrected image of each sensor delivered by the distortion correction subunit; the shake estimation subunit estimates the shake of those images, filters out the influence of random jitter to obtain an effective global motion vector, and from the motion trend of the frame sequence computes the actual image shift, rotation and zoom values; the jitter compensation subunit then uses the shift, rotation and zoom values computed by the shake estimation subunit to compensate each sensor's distortion-corrected image, yielding the anti-shake image of each sensor with random jitter removed.
Estimating the shake of the distortion-corrected images delivered by the distortion correction subunit means estimating the jitter parameters present between the distortion-corrected images of consecutive frames of each sensor.
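The separation of intentional motion trend from random jitter can be sketched in one dimension: the measured per-frame global positions are smoothed to recover the trend, and the residual is treated as jitter to be cancelled. The moving-average filter and all names below are our assumptions; the patent does not specify a particular filter.

```python
# Hedged sketch of jitter compensation: smooth the measured global motion
# to get the intentional trend, and output corrective shifts that cancel
# only the random residual. One axis shown; a real system does x, y,
# rotation and zoom.

def moving_average(values, window=5):
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def jitter_compensation(global_positions, window=5):
    """Per-frame corrective shifts that cancel random jitter, not the trend."""
    trend = moving_average(global_positions, window)
    return [t - m for m, t in zip(global_positions, trend)]
```

Applying the corrective shift to each frame re-centres it on the smoothed trajectory, so steady camera motion is preserved while the jitter spike is suppressed.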
The image adjustment performed by the enhancement subunit refers to adjusting image brightness, enhancing image contrast and outlining the image.
The enhancement subunit is one of, or any combination of two or more of, a brightness adjustment subunit, a contrast enhancement subunit and an image outlining subunit.
Since the photoelectric characteristics of the cameras differ, the images of different sensors may show obvious brightness differences. The brightness adjustment subunit corrects the brightness of the distortion-corrected image or anti-shake image of each sensor, so that the composite image of the monitored object's peripheral field of view formed by the image fusion subunit shows no obvious segmented areas caused by differing photometric characteristics.
The contrast enhancement subunit enhances the contrast of the blurred, degraded images taken by the sensors of the image sensor unit in rain, fog or poor lighting, to improve the visual effect of the image.
The image outlining subunit outlines image edges and highlights and delineates obstacles, so that the image is more recognisable and effectively prompts and guides the observer's gaze.
The image viewpoint transformation subunit performs the viewpoint transformation using the pinhole imaging model of formula (1), as follows:
[u′ v′ 1]ᵀ = A · R₂ · R₁⁻¹ · A⁻¹ · [u v 1]ᵀ    (1)
In formula (1):
[u v]ᵀ is the image element before the viewpoint transformation, i.e. the image at the original viewpoint;
[u′ v′]ᵀ is the image element after the viewpoint transformation;
R₁ is the original rotation homography matrix, formed from the rotation matrix and translation vector of the image sensor's position;
r₁ and r₂ are the first and second column vectors of the rotation matrix;
R₂ is the new-viewpoint rotation homography matrix, formed from the rotation matrix and translation vector of the new viewpoint position;
A is the internal parameter matrix of the camera, defined by the linear-model internal parameters αx, αy, u₀, v₀ and γ:
αx and αy are the scale factors of the u and v axes, also called the effective focal lengths, αx = f/dx and αy = f/dy, where dx and dy are the pixel pitches in the horizontal and vertical directions;
(u₀, v₀) is the optical centre;
γ is the non-perpendicularity (skew) factor between the u and v axes; in many cases γ = 0.
The viewpoint transformation requires the attitude parameter and position parameter of each image sensor; these are measured by the calibration unit in the "calibration region" through the calibration process after the system is installed.
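A numerical sketch of formula (1) follows: a pixel [u, v, 1]ᵀ at the original viewpoint is pushed through A R₂ R₁⁻¹ A⁻¹ to the new viewpoint. The intrinsic values and the pure z-axis rotations are hypothetical, chosen only to exercise the formula.

```python
# Hedged sketch of the viewpoint transformation of formula (1).
import numpy as np

def intrinsics(ax, ay, u0, v0, gamma=0.0):
    """Linear-model internal parameter matrix A."""
    return np.array([[ax, gamma, u0],
                     [0.0, ay,   v0],
                     [0.0, 0.0,  1.0]])

def rot_z(theta):
    """Rotation about the optical axis, standing in for a homography R."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def change_viewpoint(u, v, A, R1, R2):
    """Apply [u' v' 1]^T = A R2 R1^-1 A^-1 [u v 1]^T and dehomogenise."""
    q = A @ R2 @ np.linalg.inv(R1) @ np.linalg.inv(A) @ np.array([u, v, 1.0])
    return q[0] / q[2], q[1] / q[2]
```

Two sanity checks: if the old and new viewpoints coincide (R₁ = R₂) every pixel maps to itself, and a rotation about the optical axis leaves the optical centre (u₀, v₀) fixed.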
One fusion method of the image fusion subunit is direct stitching at the registration position: during stitching, the pixels are taken directly from one of the two sensors' images on each side of the registration position, the fused image being formed by switching from one image to the other at that position.
To make the fusion position more accurate, the registration method is to look for image features near the registration position, then use these features to match the images of the two sensors and find the best registration position; the images are then fused at this best registration position, avoiding ghosting and similar artefacts.
The image features are one of, or any two of, the corner features, contour features and edge features of the image.
The registration position is the stitching-line position; it is determined from the field-of-view and sharpness ranges of the two sensors on either side of the stitching line, so that the image sharpness of the two sensors is consistent on both sides of the line.
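The matching step can be sketched with a one-dimensional patch search: a small sample from sensor A is slid along a strip from sensor B, and the offset with the lowest sum of squared differences is taken as the best registration position. Real systems would match corner, edge or contour features as the text says; the SSD patch search is a simplifying assumption.

```python
# Hedged sketch of registration near the stitching line via 1-D SSD search.
# `patch` and `strip` are greyscale sample rows (illustrative assumption).

def best_offset(patch, strip):
    """Return the offset in `strip` where `patch` matches best."""
    best, best_score = 0, float("inf")
    for off in range(len(strip) - len(patch) + 1):
        score = sum((p - s) ** 2
                    for p, s in zip(patch, strip[off:off + len(patch)]))
        if score < best_score:
            best, best_score = off, score
    return best
```

In two dimensions the same search runs over a small window around the expected stitching line, which keeps the cost low while refining the registration.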
The stitching line may be an arc or a straight line.
The stitching line can either be displayed in a special colour and grey level, to help the observer locate the stitching-line position, or not displayed at all, so as not to break the integrity of the image.
Another fusion method of the image fusion subunit is to take, at each point of the fusion region, a weighted average of the images from the two adjacent sensors; the stitched region then contains no visible stitching line and the images correspond better.
The weighted average uses variable weights: within the stitched region, the farther a point of the fused image lies from one image sensor, the smaller the weight of that sensor's image at that point, and the larger the weight of the other sensor's image at the same position.
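The variable-weight fusion just described can be sketched on a single overlap row: sensor A's weight falls linearly across the overlap while sensor B's rises, so there is no visible seam. The linear ramp is our assumption; any monotone ramp would serve.

```python
# Hedged sketch of variable-weight fusion across one overlap row.
# row_a / row_b are the same overlap strip as seen by the two sensors.

def blend_row(row_a, row_b):
    """Weighted-average fusion of one overlap row from two adjacent sensors."""
    n = len(row_a)
    out = []
    for i in range(n):
        w_b = i / (n - 1) if n > 1 else 0.5  # weight grows toward sensor B
        out.append((1.0 - w_b) * row_a[i] + w_b * row_b[i])
    return out
```

At the edge nearest sensor A the output equals A's image, at the far edge B's, and values in between transition smoothly.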
The composite image of the monitored object's peripheral field of view also contains an image of the monitored object itself.
The image of the monitored object is a top plan view, a top-down three-dimensional view, or a three-dimensional perspective view consistent with the viewpoint output by the scene decision unit.
The image of the monitored object may have a certain transparency or a certain colour.
The transparency or colour of the monitored object's image can be set manually, so as to display the surrounding image information hidden by the monitored object.
The composite image of the monitored object's peripheral field of view may also contain obstacle information and/or obstacle distance information; such information can be measured by external sensors and merged into the final panoramic composite image for display.
The calibration unit uses a camera calibration method of a nonlinear model or a linear model: the optical parameters and distortion parameters of each sensor in the image sensor unit are calibrated with calibration templates, and the attitude parameters and position parameters of each sensor are calibrated in the calibration region formed by the templates.
Suitable nonlinear-model or linear-model camera calibration methods include:
the nonlinear camera calibration method based on the radial alignment constraint (RAC); reference: R. Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, vol. RA-3, no. 4, pp. 323-344, August 1987;
Zhang Zhengyou's camera calibration algorithm based on a 2D target; reference: Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000;
calibration methods for catadioptric and fisheye cameras; reference: D. Scaramuzza, "Omnidirectional Vision: from Calibration to Robot Motion Estimation", ETH Zurich thesis no. 17635, 2008.
The calibration of the optical parameters and distortion parameters of each sensor with calibration templates can be carried out during production of each sensor, or when the system of the invention is installed.
The method of calibrating the optical parameters and distortion parameters of each sensor with a calibration template is as follows: each sensor photographs the template from different positions and attitudes, producing calibration-template photographs of different attitudes; from these photographs the calibration unit computes the lens optical parameters and distortion parameters of each sensor, completing the calibration.
At least four calibration-template photographs are used.
Calibrating the attitude parameters and position parameters of each sensor in the calibration region formed by the templates proceeds as follows: a reference stop position is set in the calibration region; the monitored object is stopped at the reference stop position and then kept still; each sensor acquires an image; the attitude rotation and displacement of each sensor relative to each calibration template are computed (this is possible because the position and attitude of every template in the calibration region are known); and finally the global position and attitude of each sensor are computed, yielding the attitude parameter and displacement parameter of each sensor.
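The final step above — composing the known template pose with the measured sensor-to-template pose to get the sensor's global pose — can be sketched in two dimensions with (angle, x, y) poses. The 2-D simplification and all names are our assumptions; the patent works with full 3-D rotations and translations.

```python
# Hedged sketch of the extrinsic step: world<-template pose (known from the
# calibration-region layout) composed with template<-camera pose (measured
# from the template image) gives the camera's global pose.
import math

def compose(pose_world_t, pose_t_cam):
    """Compose two 2-D rigid poses given as (theta, x, y) tuples."""
    th1, x1, y1 = pose_world_t
    th2, x2, y2 = pose_t_cam
    c, s = math.cos(th1), math.sin(th1)
    return (th1 + th2,
            x1 + c * x2 - s * y2,
            y1 + s * x2 + c * y2)
```

For example, a template at world position (5, 0) rotated 90°, with the camera one unit along the template's x axis, places the camera at world (5, 1).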
The reference stop position is delimited by several reference stop lines, whose shapes and coordinates are predefined according to certain appearance features of the monitored object and match those features, so that the monitored object can easily be stopped accurately.
If no reference lines are set in the calibration region, the attitude parameters and position parameters of each sensor are calibrated as follows: the monitored object is placed so that each pair of adjacent image sensors can simultaneously photograph the pattern of the calibration template in their overlapping region; using the predefined attitude and position of each template, the adjacent sensors are localised in pairs, and the attitude parameter and displacement parameter of each sensor are finally computed in turn.
The predefined attitudes of the calibration templates include horizontal placement, vertical placement and inclined placement, or any combination of two or more of them.
The calibration region is the area covered by the placement of several calibration templates.
The calibration templates are templates placed inside the calibration region, or independent templates.
A calibration template is made up of a planar pattern with a special frame structure, a three-dimensional pattern or a wire pattern.
The structure and size of the calibration template are set in advance.
The calibration template may be a square template containing several straight-line, curve or corner features, such as a checkerboard, a grid, or a pattern of discrete squares; or a template composed of polygons, such as triangles; or a circle template, i.e. a template containing several circular patterns; or a template composed of straight lines.
The output unit outputs the image information processed by the image processing unit to a display device for display, and/or to a storage device for storage, and/or communicates with other equipment through a communication device.
The scene decision parameters output by the scene decision unit are scene parameters that help the observer understand the scene information.
The scene decision parameters output by the scene decision unit are best-viewpoint information and/or image scale information (if the image scale is a constant value, the scale information can be omitted and only the best-viewpoint information output).
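A toy scene decision rule illustrates how speed and steering trend could map to a best viewpoint and an image scale. The thresholds, viewpoint names and the rule itself are invented for illustration; the patent leaves the decision logic open.

```python
# Hedged sketch of a scene decision rule (all thresholds/names hypothetical):
# steering < 0 means a leftward trend, > 0 rightward, 0 none.

def scene_decision(speed_kmh, steering, reverse=False):
    """Return (best viewpoint, image scale) for the viewpoint subunit."""
    if reverse:
        view = ("rear-left" if steering < 0
                else "rear-right" if steering > 0 else "rear")
    elif steering < 0:
        view = "front-left"
    elif steering > 0:
        view = "front-right"
    else:
        view = "top-down"
    # zoom out at higher speed so more of the surroundings stays visible
    scale = 1.0 if speed_kmh < 20 else 0.5
    return view, scale
```

The viewpoint string would select the new-viewpoint matrix R₂ of formula (1), and the scale would drive the image-zoom step of the viewpoint transformation subunit.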
The invention thus provides an image-assisting system using a calibration method that supplies comprehensive image information of the surroundings; the system adapts the scene to the motion trend of the monitored object and enhances the images to provide more effective information, with good results.
Description of drawings
Fig. 1 is the structural diagram of the image-assisting system using a calibration method introduced in the specific embodiment of the invention.
Fig. 2A is a schematic diagram of one calibration template used by the system.
Fig. 2B is a schematic diagram of another calibration template used by the system.
Fig. 2C is a schematic diagram of a further calibration template used by the system.
Fig. 2D is a schematic diagram of yet another calibration template used by the system.
Fig. 3 is a schematic diagram of the viewpoint positions of the adaptive scene decision parameters output by the scene decision unit of the system.
Fig. 4 is a schematic diagram of the viewpoint positions of the non-adaptive scene decision output by the scene decision unit.
Fig. 5A is a schematic diagram of the stitching geometry regions when the vehicle travels forward while steering to the right.
Fig. 5B is a schematic diagram of the stitching geometry regions when the vehicle reverses to the left.
Fig. 5C is a schematic diagram of the stitching geometry regions when the vehicle travels forward with no steering trend.
Fig. 5D is a schematic diagram of the stitching geometry regions when the vehicle travels at low speed with no steering trend.
Fig. 6A is a schematic diagram of the planar calibration region used by the system. A1, A2, A3 and A4 are the calibration templates used to compute the attitude and position parameters of the image sensors, and point P is the intersection of two reference stop lines.
Fig. 6B is a schematic diagram of a non-planar calibration region: the calibration region is installed on the surrounding walls, the calibration templates are located on the walls around it, and the reference stop lines lie on the ground and the walls.
Fig. 7 is the structural diagram of the image anti-shake subunit and the image enhancement subunit of the system.
Fig. 8 is a schematic diagram of the equal-proportion scene display mode used by the scene decision unit of the image-assistance system using the calibration method according to the embodiment of the invention; the curves 5, 10, 15 and 20 are equidistant lines from the geometric center of the vehicle.
Fig. 9 is a schematic diagram of the non-equal-proportion scene display mode used by the scene decision unit of the image-assistance system using the calibration method according to the embodiment of the invention; the curves 5, 10, 15, 20 and 25 in the figure are non-equidistant lines from the geometric center of the vehicle.
Embodiment
Specific embodiments of the invention are described in detail below with reference to Figs. 1 to 9, to give a further understanding of the content of the invention.
The invention and its method of use are described by taking, as an example, an image-assistance system using the calibration method installed on an automobile.
The image sensor unit 100 in Fig. 1 comprises several image sensors 110, through which the image information of the effective surroundings is obtained. The image sensors 110 are distributed around the vehicle. In general the system comprises at least one image sensor unit 100; a smaller vehicle body can adopt a 4-way or 6-way arrangement of image sensors 110, while a larger vehicle body requires a greater number of image sensors 110. Adjacent image sensors 110 have mutually overlapping fields of view, leaving no blind zones. When the image sensors 110 are installed, part of the vehicle body may be included in the field of view, i.e. part of the body is photographed, to ensure that the stitched image shows the positional relationship between the vehicle body and obstacles.
The image processing unit 200 in Fig. 1 performs image acquisition, processing and fusion; it comprises an image acquisition subunit 210, an image distortion correction subunit 220, an image anti-shake subunit 230, an image enhancement subunit 240, an image viewpoint transformation subunit 250 and an image fusion subunit 260.
The image acquisition subunit 210 mainly performs acquisition and buffering of the image of each image sensor 110.
The image distortion correction subunit 220 uses the calibrated optical parameters and calibrated distortion parameters of each image sensor — produced by the calibration unit when it calibrates the optical parameters and distortion parameters of each image sensor 110 in the image sensor unit 100 with a calibration template — to perform distortion correction on the image of each image sensor 110 in the image sensor unit, obtaining the distortion-corrected image of each image sensor 110.
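For illustration only (the patent does not disclose code), the radial-distortion model commonly used in such correction can be sketched in Python. The coefficient values and the fixed-point inversion loop are assumptions, not parameters from the disclosure.

```python
def distort_point(x, y, k1, k2=0.0):
    """Forward radial-distortion model: x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort_point(xd, yd, k1, k2=0.0, iters=20):
    """Invert the model by fixed-point iteration, as is typically done
    when building the per-sensor correction lookup table."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y
```

In a real system this inversion would be evaluated once per output pixel to build a remapping table, after which each frame is corrected by table lookup and interpolation.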
The image anti-shake subunit 230 is mainly used to counteract inter-frame image blur caused by slight jitter. Because the vehicle body inevitably vibrates, the image sensors 110 shake continuously, so the captured images may shift in position between frames; without anti-shake processing this would cause image blur and block errors. The image anti-shake subunit 230 performs jitter detection and elimination; eliminating jitter improves the quality of image fusion and stitching.
The image anti-shake subunit 230 comprises a jitter detection sub-subunit, a jitter estimation sub-subunit and a jitter compensation sub-subunit. The jitter detection sub-subunit uses common motion-detection algorithms such as the projection algorithm (PA), representative point matching (RPM) or bit-plane matching (BPM) to detect the translation and the rotation or zoom level of the image.
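As a minimal sketch of the projection algorithm (PA) named above: each frame is reduced to a column-sum projection, and the inter-frame horizontal shift is the offset minimizing the mean absolute difference between the shifted projections. The image size, search range and pixel values below are invented for illustration.

```python
def column_projection(img):
    """Sum each column of a row-major grayscale image (list of rows)."""
    return [sum(col) for col in zip(*img)]

def estimate_shift(proj_a, proj_b, max_shift):
    """Return the shift s minimizing mean |proj_a[i] - proj_b[i+s]|."""
    best, best_err = 0, float("inf")
    n = len(proj_a)
    for s in range(-max_shift, max_shift + 1):
        err, cnt = 0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                err += abs(proj_a[i] - proj_b[j])
                cnt += 1
        err /= cnt
        if err < best_err:
            best_err, best = err, s
    return best
```

A full PA stabilizer would run this on both row and column projections to get the 2-D translation; rotation and zoom need the other listed methods.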
The jitter estimation sub-subunit estimates the effective motion: it estimates the motion parameters present between consecutive frames, filters out the influence of random motion to obtain an effective global motion vector, and derives the motion trend of the frame sequence, yielding the actual values of image shift, rotation and zoom.
The jitter compensation sub-subunit compensates the original image according to the shift, rotation and zoom computed by the jitter estimation sub-subunit, obtaining an image from which random jitter has been eliminated.
Referring to Fig. 7, the image enhancement subunit 240 performs image brightness adjustment and/or image contrast enhancement and/or image edge outlining by means of image enhancement algorithms, improving the readability of the composite image. The image enhancement subunit 240 comprises an image brightness adjustment sub-subunit 241, an image contrast enhancement sub-subunit 242 and an image edge outlining sub-subunit 243. Specifically:
The image brightness adjustment sub-subunit 241 is mainly used to correct brightness differences between images caused by the inconsistent photometric characteristics of the image sensors 110; such differences would produce regions of obviously inconsistent brightness in the fused image. The brightness adjustment coefficients of the sub-subunit 241 are obtained from brightness measurements in the calibration scene when the system is used for the first time, or are computed in real time while the system is running. After brightness adjustment, no regions of differing brightness appear at the stitching.
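One plausible form of these per-sensor brightness coefficients is a gain computed from the mean gray level each camera measures on the shared calibration scene. The choice of the brightest camera as reference and the clamping below are illustrative assumptions, not details from the disclosure.

```python
def brightness_gains(scene_means):
    """Per-camera gain so all cameras match the brightest reference.
    `scene_means` maps camera id -> mean gray level measured on the
    common calibration scene."""
    ref = max(scene_means.values())
    return {cam: ref / m for cam, m in scene_means.items()}

def apply_gain(img, gain, max_level=255):
    """Scale every pixel by `gain`, clamping to the valid range."""
    return [[min(max_level, round(p * gain)) for p in row] for row in img]
```

After this normalization, adjacent sensors produce comparable gray levels, so the stitched seams show no abrupt brightness steps.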
The image contrast enhancement sub-subunit 242 enhances the contrast of the acquired images. Common contrast enhancement algorithms include histogram equalization (HE), local adaptive histogram equalization (AHE), partially overlapped sub-block histogram equalization (POSHE), interpolated adaptive histogram equalization, and generalized histogram equalization. All of these can achieve adaptive contrast enhancement of local image information to increase image contrast; such methods improve the naked-eye recognizability of degraded images and improve the visual effect of the image.
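Classic global histogram equalization (HE), the first method in the list, can be sketched as follows; the 8-bit level count and the tiny test image are illustrative only.

```python
def equalize(img, levels=256):
    """Global histogram equalization of a row-major grayscale image."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, acc = [], 0
    for h in hist:                       # cumulative distribution
        acc += h
        cdf.append(acc)
    cdf_min = next(c for c in cdf if c > 0)

    def remap(p):
        if n == cdf_min:                 # flat image: nothing to stretch
            return p
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in img]
```

The adaptive variants (AHE, POSHE) apply the same remapping per sub-block rather than globally, which is what gives the local enhancement the text describes.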
The image edge outlining sub-subunit 243 outlines image edges and highlights obstacles, making the image more recognizable and effectively guiding the observer's attention. During operation, the operator sometimes has to observe the system with peripheral vision, so an appropriate method is needed to make the image stand out. Numerous edge-detection algorithms can be used to extract edge features, and feature detection can be applied to specific patterns according to the characteristics of obstacles so as to detect particular objects; after an obstacle is detected, its edges are outlined according to their position, or a special shape (circle, rectangle, etc.) is used to mark the obstacle in the image. Whether outlining is enabled, and the color and thickness of the outline lines, can be controlled by the operator; in addition, special areas can be delineated according to external sensor data such as obstacle distance.
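Standing in for the "numerous edge-detection algorithms" the text alludes to, a minimal gradient-threshold edge overlay might look like this; the threshold and marker value are invented.

```python
def outline_edges(img, thresh=40, mark=255):
    """Mark pixels whose horizontal or vertical gray-level difference
    exceeds `thresh`; returns a copy with edge pixels set to `mark`."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            gx = abs(img[y][x] - img[y][x - 1]) if x > 0 else 0
            gy = abs(img[y][x] - img[y - 1][x]) if y > 0 else 0
            if max(gx, gy) > thresh:
                out[y][x] = mark
    return out
```

A production system would use a proper operator (Sobel, Canny) and draw the overlay in a configurable color, per the operator controls the text mentions.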
The image anti-shake subunit 230 and the sub-subunits of the image enhancement subunit — the image brightness adjustment sub-subunit 241, the image contrast enhancement sub-subunit 242 and the image edge outlining sub-subunit 243 — can be combined as needed. When the vibration of the image sensors 110 is negligible, the image anti-shake subunit 230 can be omitted. Likewise, when edge outlining is not needed, the image edge outlining sub-subunit 243 can be removed; the image brightness adjustment sub-subunit 241 and the image contrast enhancement sub-subunit 242 can also be removed selectively.
The scene decision of the scene decision unit 300 generates scene configuration parameters according to the motion-trend information of the vehicle or the operator's intent: the scene decision unit 300 receives the motion-trend information of the monitored object or the operator's intent and computes adaptive scene parameters. In this example the scene parameters are the viewpoint direction and the proportion of each image region; an example method is as follows.
When the system is equipped with a speedometer and a steering-angle sensor, the scene decision can select the scene viewpoint adaptively. If the detected vehicle speed is v, the virtual viewpoint of the system is placed at a distance L along the direction of motion, with L = f(v·Ts), where Ts is the display time and f(x) is a function of the argument x; depending on system requirements, f may be a constant, a piecewise function or an analytic function. At the same time, L must not exceed the maximum field of view of the image sensor in that direction, and the choice of L is also related to whether obstacles exist within the scene at distance L. The scene display effect is shown in Fig. 3, i.e. the viewpoint is related to the vehicle speeds V1, V2 and the steering angle between L1 and L2.
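The relation L = f(v·Ts), clamped to the sensor's field depth, might look like the piecewise sketch below; the breakpoints, slopes and maximum field depth are invented values, since the patent leaves f unspecified.

```python
def viewpoint_distance(v, t_s, fov_max):
    """L = f(v * Ts): piecewise-linear look-ahead distance, never
    beyond the field of view of the sensor in the motion direction."""
    d = v * t_s                  # distance covered in one display period
    if d < 1.0:                  # crawling: fixed short look-ahead
        L = 1.0
    elif d < 5.0:                # low speed: follow d directly
        L = d
    else:                        # higher speed: grow more slowly
        L = 5.0 + 0.5 * (d - 5.0)
    return min(L, fov_max)
```

The final clamp implements the text's requirement that L never exceed the maximum range of the sensor looking in the motion direction.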
The scene decision of the scene decision unit 300 may also select among several fixed points. As shown in Fig. 4, the scene viewpoint can be chosen from the fixed bearing points P0, P1, P2, P3, P4, P5; the scene decision picks a suitable viewpoint according to the direction of vehicle motion and the steering trend.
When the vehicle has no steering trend and is reversing, the viewpoint P4 is selected. When the vehicle is moving forward with a rightward steering trend, P2 is selected; when moving forward with a leftward steering trend, P1 is selected. Likewise, when the vehicle is reversing with a rightward steering trend, P5 is selected; when reversing with a leftward steering trend, P3 is selected.
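The fixed-viewpoint selection just described reduces to a small lookup table. The dictionary below encodes exactly the pairings in the text; the forward/no-trend case is not specified in the text, so here it falls back — like the no-information case — to the default top view PT described later, which is an assumption of this sketch.

```python
VIEWPOINT = {
    ("forward", "left"):  "P1",
    ("forward", "right"): "P2",
    ("reverse", "left"):  "P3",
    ("reverse", "none"):  "P4",
    ("reverse", "right"): "P5",
}

def select_viewpoint(direction, steering):
    """Pick the fixed scene viewpoint; default to the top view PT
    above the vehicle center when no rule matches."""
    return VIEWPOINT.get((direction, steering), "PT")
```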
If the system is not fitted with a vehicle speed sensor, a gear-position sensor can determine the current direction of travel: when the gear is in a forward gear, it can be judged that the vehicle is moving forward or that the operator intends to move forward; when the gear is in reverse, it can be judged that the vehicle is reversing or that the operator intends to reverse.
If the system is not fitted with a steering-wheel angle sensor, the steering trend can be judged from the state of the turn signals: when the left turn signal is on, a leftward steering trend is judged to exist; likewise, when the right turn signal is on, a rightward steering trend is judged to exist.
When the system is fitted with an acceleration sensor whose sensitive axis is perpendicular to the longitudinal axis of the vehicle body and lies in the horizontal plane, the steering trend of the vehicle body can be judged from the output of the acceleration sensor. The acceleration signal is first filtered to remove noise; when the filtered signal indicates a leftward acceleration, a leftward steering trend is judged; when it indicates a rightward acceleration, a rightward steering trend is judged; when the filtered signal is below a certain threshold, no effective steering trend is judged to exist.
When several steering-trend sensors are present — for example a turn-signal sensor, a steering-angle sensor and an acceleration sensor at the same time — the system judges the steering trend by combining the information of these sensors, or a priority order may be adopted. One possible priority is: the turn-signal information takes precedence over the steering-wheel angle sensor information, and the steering-wheel angle sensor information takes precedence over the acceleration sensor.
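The priority order suggested above (turn signal > steering-wheel angle > accelerometer) amounts to taking the first available reading. The parameter names, sign conventions (positive = leftward) and threshold are illustrative assumptions.

```python
def steering_trend(turn_signal=None, wheel_angle=None, lateral_accel=None,
                   accel_threshold=0.5):
    """Return 'left', 'right' or 'none' from the highest-priority
    sensor that reports a usable value."""
    if turn_signal in ("left", "right"):      # highest priority
        return turn_signal
    if wheel_angle is not None:               # next: steering angle
        if wheel_angle > 0:                   # assumed: positive = left
            return "left"
        if wheel_angle < 0:
            return "right"
        return "none"
    if lateral_accel is not None and abs(lateral_accel) >= accel_threshold:
        return "left" if lateral_accel > 0 else "right"
    return "none"                             # below threshold / no data
```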
When no information is available for the decision, the scene decision selects the top viewpoint; PT in Fig. 4 is the default viewpoint, located above the geometric center of the vehicle. Besides adaptive scene switching, the scene can also be selected manually by the scene decision unit 300 according to the operator's needs. In addition to adjusting the viewing angle, the scene decision unit also produces scale information controlling the display scale. When this scale is a constant, the proportions in the stitched picture are consistent, as shown in Fig. 8, where the curves 5, 10, 15, 20 are equidistant lines from the geometric center of the vehicle. For a viewpoint with a larger field of view, the display size is limited, so the scale is selected not as a constant but as a function of distance: the farther from the vehicle, the more severe the image compression, while within the range near the vehicle the picture is compressed less than at the far end, as shown in Fig. 9, where the curves 5, 10, 15, 20, 25 are non-equidistant lines from the geometric center of the vehicle.
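The distance-dependent scale for the wide view can be any monotone mapping that compresses the far range more than the near range; one simple saturating curve, with invented constants, is:

```python
def display_radius(d, r_max=240.0, d0=10.0):
    """Map ground distance d (meters) to a display radius in pixels.
    Near the vehicle the mapping is almost linear; far away it
    saturates toward r_max, so distant scenery is compressed."""
    return r_max * d / (d + d0)
```

With this curve the rings at 5, 10, 15, 20 m land at decreasing spacing on screen, which is exactly the non-equidistant pattern of Fig. 9.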
The image viewpoint transformation subunit 250 performs the viewpoint transformation. The change of viewpoint position is obtained from the output of the scene decision unit 300. The image viewpoint transformation subunit 250 determines the mapping between images from the known positions and attitudes of the image sensors and the new viewpoint position and attitude, and completes the viewpoint transformation by pixel interpolation mapping.
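For ground-plane points, the mapping between a sensor image and the virtual-viewpoint image reduces to a 3×3 homography applied in homogeneous coordinates; the pixel-interpolation step then samples the source image at each mapped position. The matrices below are arbitrary examples, not calibrated ones.

```python
def apply_homography(H, x, y):
    """Map (x, y) through a 3x3 homography with the homogeneous divide."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w
```

In practice H is composed from the calibrated sensor pose and the virtual viewpoint pose; the transformation is evaluated per output pixel and the source is sampled with bilinear interpolation.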
The image fusion subunit 260 combines the images A, B, C, D that have undergone viewpoint transformation in the image viewpoint transformation subunit 250 according to the viewpoint information. Figs. 5A to 5D show the different stitching and fusion schemes formed for different viewpoints and sight directions. In Fig. 5A, the vehicle motion trend is toward the right front, so the sight direction runs from the front right of the vehicle toward the left rear; in the stitched picture, the front image B and the right image C are displayed preferentially, ensuring that the driver obtains the effective information of the right front. Similarly, in Fig. 5B the motion trend is toward the left rear, so the sight direction runs from the left rear toward the right front, and the left image A and the rear image D are displayed preferentially, ensuring that the driver obtains the effective information of the left rear. In Fig. 5C the motion trend is straight ahead, so the sight direction is toward the front of the vehicle, and the front image B is displayed preferentially, ensuring that the driver obtains the effective information in the direction of travel. Fig. 5D shows a scene with no preferentially displayed information: the image viewpoint is located above the geometric center of the vehicle, and the images A, B, C, D around the vehicle are all displayed.
Besides fusing the image information of the image sensors, the image fusion subunit 260 can also add auxiliary information such as obstacle distance; the signals of external sensors can be fused into the final panoramic scene image, e.g. distance information and obstacle information can be merged as external parameters. Taking a ranging system as an example: according to the position of the probe and the range measurement, the panorama system draws a corresponding warning-zone line in the fused image; this line is highlighted as an arc consistent with the detection fan, the edge of the zone is enhanced, or the distance is marked in numeric form.
The calibration unit 500 calibrates the attitude and position parameters of the image sensors when the system is first installed or during a dedicated calibration. This example uses a planar "calibration area" with the vehicle parked at a "reference parking position". The attitude and position parameters of each image sensor are calibrated using the calibration templates of Fig. 2 (i.e. the calibration templates A1, A2, A3, A4 in Fig. 6A); from the attitude and position of each image sensor relative to the templates and the data of templates A1, A2, A3, A4, the position and attitude of each image sensor relative to the whole calibration area are obtained, i.e. the original position information of the image sensors. In the figure, Ls is the length of the calibration area, Ws its width, Wc the width of the vehicle under test, Lc its length, Lp the distance of reference parking line 510 from the lower edge of the calibration area, and Wp the distance of reference parking line 520 from the left edge of the calibration area.
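Chaining the sensor-to-template pose (from extrinsic calibration) with the known template-in-area pose to get each sensor's pose in the calibration area is a product of 4×4 rigid transforms. The translations below are invented; a real chain would also carry rotation blocks.

```python
def mat4_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Rigid transform with identity rotation and the given translation."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# pose of template A1 in the calibration-area frame (known by construction)
area_T_template = translation(2.0, 0.5, 0.0)
# pose of the camera relative to template A1 (from extrinsic calibration)
template_T_camera = translation(0.0, -1.5, 1.2)
# camera pose in the calibration-area frame, as the text describes
area_T_camera = mat4_mul(area_T_template, template_T_camera)
```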
Fig. 6A illustrates a planar calibration area, characterized in that the calibration templates A1, A2, A3, A4 are placed in a horizontal area; during calibration the image sensors photograph the templates A1, A2, A3, A4 lying in the horizontal plane.
Fig. 6B illustrates a calibration area in which the calibration templates A1, A2, A3, A4 are placed perpendicular to the horizontal plane, i.e. on vertical walls: template A1 is placed on the rear wall C3, template A2 on the right wall C2, template A3 on the front wall C1, and template A4 on the left wall C4. In fact, according to system requirements, the templates A1, A2, A3, A4 may be placed vertically, horizontally, or in a combination of specific attitudes.
The calibration unit 500 may adopt Zhang Zhengyou's camera calibration algorithm based on a 2D target, or a calibration method for catadioptric and fisheye cameras, to calibrate the lens optical parameters and image-sensor distortion parameters of each image sensor and to compute the attitude parameters and position parameters of each image sensor.
After the calibration unit 500 completes the extrinsic calibration of the cameras, it computes the coordinates of the overlapping region of each pair of adjacent image sensors and obtains the brightness adjustment coefficients of the overlapping regions.
The output unit 400 outputs the processed image signal: it outputs to an image display for display, and/or to a storage device for storage, and/or communicates with other equipment through a communication device.
The adaptive scene image assistance method provided by the invention makes adaptive decisions about the environment around the vehicle, so that the operator can see as much intuitive, effective information as possible, effectively improving safety; equally, in the various fields of mobile machinery it can improve productivity by providing the operator with as much effective information as possible.
The basic principles, principal features and advantages of the invention have been shown and described above. Those skilled in the art should understand that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principles of the invention. Various changes and improvements can be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention.

Claims (12)

1. An image-assistance system using a calibration method, characterized by comprising the following units:
an image sensor unit for obtaining image information of the effective surroundings of a monitored object;
an image processing unit for acquiring and processing the image information of the monitored object's surrounding field of view obtained by the image sensor unit, and fusing it to form composite image information of the monitored object's surrounding field of view;
an output unit for outputting the composite image information of the monitored object's surrounding field of view processed by the image processing unit;
a calibration unit which, before the system is put into operation, calibrates the optical parameters and distortion parameters of each image sensor in the image sensor unit by means of a calibration template, produces the calibrated optical parameters and calibrated distortion parameters of each image sensor, and inputs them to the image processing unit to complete the correction of distorted images; and which calibrates the attitude parameters and position parameters of each image sensor in the image sensor unit within a calibration area formed by calibration templates;
a scene decision unit which receives operator instructions and/or the moving speed and motion-trend information of the monitored object, and outputs scene decision parameters to the image processing unit.
2. The image-assistance system using a calibration method as claimed in claim 1, characterized in that the image sensor unit comprises a plurality of image sensors mounted around the periphery of the monitored object, two adjacent image sensors having mutually overlapping fields of view;
the monitored object refers to equipment whose surrounding environment is to be monitored by the image-assistance system using the calibration method;
each image sensor is a color image sensor, a monochrome image sensor, an infrared image sensor, or a combination thereof.
3. The image-assistance system using a calibration method as claimed in claim 1, characterized in that the image processing unit comprises an image acquisition subunit, an image distortion correction subunit, an image viewpoint transformation subunit and an image fusion subunit;
the image acquisition subunit acquires and buffers the image of each image sensor in the image sensor unit;
the image distortion correction subunit uses the calibrated optical parameters and calibrated distortion parameters of each image sensor, produced by the calibration unit when calibrating the optical parameters and distortion parameters of each image sensor in the image sensor unit with a calibration template, to perform distortion correction on the image of each image sensor in the image sensor unit, obtaining the distortion-corrected image of each image sensor;
the image viewpoint transformation subunit processes the distortion-corrected image of each image sensor according to the scene parameters of the scene decision unit, performing viewpoint transformation and image scaling on the viewpoint of the distortion-corrected image of each image sensor and the proportions of its parts, to form the viewpoint image of each specific image sensor;
the image fusion subunit fuses and stitches the viewpoint images of the image sensors delivered by the image viewpoint transformation subunit together with the image of the monitored object, forming the composite image of the monitored object's surrounding field of view.
4. The image-assistance system using a calibration method as claimed in claim 3, characterized in that the image processing unit comprises an image anti-shake subunit arranged between the image distortion correction subunit and the image viewpoint transformation subunit; the image anti-shake subunit performs jitter detection on the distortion-corrected image of each image sensor delivered by the image distortion correction subunit and processes the inter-frame image blur caused by slight jitter of the image sensors in the image sensor unit, obtaining the anti-shake processed image of each image sensor; the image viewpoint transformation subunit then processes the anti-shake processed image of each image sensor according to the scene parameters of the scene decision unit, performing viewpoint transformation and image scaling on the viewpoint of the anti-shake processed image of each image sensor and the proportions of its parts, to form the viewpoint image of each specific image sensor;
alternatively, the image processing unit comprises an image enhancement subunit arranged between the image distortion correction subunit and the image viewpoint transformation subunit; the image enhancement subunit performs image adjustment on the distortion-corrected image of each image sensor to improve the readability of the composite image, and the enhanced image of each image sensor is delivered to the image viewpoint transformation subunit; the image viewpoint transformation subunit then processes the enhanced image of each image sensor according to the scene parameters of the scene decision unit, performing viewpoint transformation and image scaling on the viewpoint of the enhanced image of each image sensor and the proportions of its parts, to form the viewpoint image of each specific image sensor;
alternatively, the image processing unit comprises both an image anti-shake subunit and an image enhancement subunit arranged between the image distortion correction subunit and the image viewpoint transformation subunit; the image anti-shake subunit performs jitter detection on the distortion-corrected image of each image sensor delivered by the image distortion correction subunit and processes the inter-frame image blur caused by slight jitter of the image sensors in the image sensor unit, obtaining the anti-shake processed image of each image sensor; the image enhancement subunit performs image adjustment on the anti-shake processed image of each image sensor to improve the readability of the composite image, and the enhanced image of each image sensor is delivered to the image viewpoint transformation subunit.
5. The image-assistance system using a calibration method as claimed in claim 4, characterized in that the image distortion correction subunit applies interpolation or convolution to the image data of each image sensor before distortion correction to obtain the distortion-corrected image data of each image sensor;
the distortion correction comprises one of, or a combination of two or more of, tangential distortion correction, radial distortion correction, thin-prism distortion correction and decentering distortion correction;
the optical parameters comprise the lens optical parameters of each image sensor and the distortion parameters caused by imperfections of the optical system of the image sensor unit.
6. The image-assistance system using a calibration method as claimed in claim 5, characterized in that the image anti-shake subunit comprises a jitter detection sub-subunit, a jitter estimation sub-subunit and a jitter compensation sub-subunit, wherein the jitter detection sub-subunit detects the jitter of the distortion-corrected image of each image sensor delivered by the image distortion correction subunit; the jitter estimation sub-subunit performs jitter estimation on the distortion-corrected image of each image sensor delivered by the image distortion correction subunit, obtains an effective global motion vector by filtering out the influence of random jitter, derives therefrom the motion trend of the frame sequence, and computes the actual values of image shift, rotation and zoom; the jitter compensation sub-subunit compensates the distortion-corrected image of each image sensor delivered by the image distortion correction subunit according to the values of image shift, rotation and zoom computed by the jitter estimation sub-subunit, obtaining the anti-shake processed image of each image sensor with random jitter eliminated;
the jitter estimation performed on the distortion-corrected images of the image sensors delivered by the image distortion correction subunit means estimating the jitter parameters present in the distortion-corrected images of each image sensor in consecutive frames;
the image adjustment of the image enhancement subunit refers to adjustment of image brightness, enhancement of image contrast and image outlining;
the image enhancement subunit is any one of, or a combination of two or more of, an image brightness adjustment sub-subunit, an image contrast enhancement sub-subunit and an image outlining sub-subunit;
the image contrast enhancement sub-subunit performs contrast enhancement on the blurred, degraded images captured by the image sensors in the image sensor unit in rain, in fog or under poor lighting conditions, to improve the visual effect of the images;
the image outlining sub-subunit outlines image edges and highlights obstacles, so that the image has stronger recognizability and effectively guides the observer's attention;
the fusion method of the image fusion subunit is direct stitching of the image pixels from the two image sensors at a registration position; during stitching, the images from the two image sensors are selected directly at the registration position to form the fused image.
7. The image-assisting system using a calibration method according to claim 6, wherein the registration method for the registration position is to search for image features near the registration position, use these features to match the images of the two image sensors so as to find the best registration position, and then fuse the images at this best registration position, so as to avoid artifacts such as ghosting;
The image features are any one of, or any combination of two of, corner features, contour features, and edge features of the image;
The registration position is the splicing-line position, which is determined from the fields of view and sharpness ranges of the two image sensors on either side of the splicing line so that the image sharpness of the two sensors is consistent on both sides of the splicing line;
The splicing line is an arc or a straight line;
The splicing line may be displayed in a special color and gray level so that the observer can locate it, or displayed without color or gray level so as not to break the integrity of the image;
Alternatively, the fusion method of the image fusion unit is to compute, within the fusion region, a weighted average of the images from the two adjacent image sensors at each common fusion point, so that no splicing line appears in the splicing region and the images correspond better;
The weighted average uses variable weights: within the splicing region, the farther a point of the fused image lies from one image sensor, the smaller the weight of that sensor's image becomes, while the weight of the other image sensor's image at the same position grows correspondingly larger.
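The variable-weight averaging above can be sketched as a distance-based blend across the overlap region of two already-aligned images. This is an illustrative NumPy sketch under the assumption of a linear weight ramp; the helper name and ramp shape are not from the patent.

```python
import numpy as np

def blend_overlap(img_a, img_b):
    """Blend the overlap of two aligned images with distance-varying weights.

    The weight of img_a falls linearly from 1 at the left edge of the
    overlap (nearest sensor A) to 0 at the right edge (nearest sensor B),
    and img_b's weight rises accordingly, so no visible seam line remains.
    (Hypothetical helper; the patent does not prescribe the ramp.)
    """
    h, w = img_a.shape[:2]
    # Column-wise weight for img_a: 1.0 at x=0 down to 0.0 at x=w-1.
    w_a = np.linspace(1.0, 0.0, w)[np.newaxis, :]
    if img_a.ndim == 3:  # broadcast over color channels
        w_a = w_a[:, :, np.newaxis]
    fused = w_a * img_a.astype(np.float64) + (1.0 - w_a) * img_b.astype(np.float64)
    return fused.astype(img_a.dtype)
```

In contrast to hard splicing along a seam line, every overlap pixel here mixes both sensors, which hides small registration errors at the cost of possible blurring of moving objects.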
8. The image-assisting system using a calibration method according to claim 1, wherein the composite image of the surroundings of the monitored object further includes an image of the monitored object;
The image of the monitored object is a top plan view, a top-down three-dimensional view, or a three-dimensional perspective image whose viewing angle matches that output by the scene decision unit;
The image of the monitored object is an image with a certain transparency or a certain color;
The transparency or color of the image of the monitored object can be set manually, so as to reveal the surrounding-environment image information occluded by the monitored object.
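Rendering the monitored object semi-transparently over the fused panorama is ordinary alpha compositing. The sketch below assumes a precomputed footprint mask; the function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def overlay_vehicle(panorama, vehicle_img, mask, alpha=0.5):
    """Composite a semi-transparent image of the monitored object onto the
    fused panorama.

    Inside the boolean footprint `mask`, output = (1-alpha)*panorama +
    alpha*vehicle_img, so surroundings under the object's footprint stay
    visible; pixels outside the mask are untouched.
    (Illustrative sketch; names and the fixed alpha are assumptions.)
    """
    out = panorama.astype(np.float64)
    m = mask.astype(bool)
    out[m] = (1.0 - alpha) * out[m] + alpha * vehicle_img.astype(np.float64)[m]
    return out.astype(panorama.dtype)
```

Setting `alpha=1.0` reproduces the opaque solid-color variant of the claim, while smaller values expose more of the occluded ground area.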
9. The image-assisting system using a calibration method according to claim 1, wherein the composite image of the surroundings of the monitored object further includes obstacle information and/or obstacle distance information; the obstacle information and/or obstacle distance information may be measured by external sensors and merged into the final panoramic composite image of the surroundings of the monitored object.
10. The image-assisting system using a calibration method according to claim 1, wherein the calibration unit adopts a camera calibration method based on a nonlinear or linear model, calibrates the optical parameters and distortion parameters of each image sensor of the image sensor unit by means of calibration templates, and calibrates the attitude parameters and position parameters of each image sensor within the calibration region formed by the calibration templates;
The calibration of the optical parameters and distortion parameters of each image sensor by means of calibration templates may be carried out during the production of each image sensor, or when the image-assisting system using a calibration method is installed;
The method of calibrating the optical parameters and distortion parameters of each image sensor by means of calibration templates is as follows: each image sensor photographs the calibration template from different orientations and attitudes to obtain calibration-template photographs at different attitudes, and the calibration unit computes the lens optical parameters and distortion parameters of each image sensor from these photographs, completing the calibration;
The calibration of the attitude parameters and position parameters of each image sensor within the calibration region is performed by setting a reference stop position in the calibration region, stopping the monitored object at the reference stop position, keeping the pose of the monitored object unchanged while each image sensor acquires images, computing the attitude rotation and displacement of each image sensor relative to each calibration template, and then computing the global position and attitude of each image sensor to obtain its attitude parameters and displacement parameters;
The reference stop position is delimited by several reference stop lines whose shapes and coordinates are predefined to match certain appearance features of the monitored object, so that the monitored object can easily be stopped accurately;
Alternatively, the attitude parameters and position parameters of each image sensor are calibrated by placing the monitored object in the calibration region so that each pair of adjacent image sensors can simultaneously photograph the pattern of the calibration template located in their overlap region; using the predefined attitude and position of each calibration template, adjacent image sensors are localized pairwise, and the attitude parameters and displacement parameters of each image sensor are then computed in turn; the predefined attitude of each calibration template is any one of, or any combination of two or more of, a horizontal attitude, a vertical attitude, and a tilted attitude.
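The per-template pose step above (computing a sensor's rotation and translation relative to a calibration template of known geometry, then chaining such pairwise poses into global attitude and position) can be sketched with the classic SVD-based rigid alignment of matched point sets. The patent names no specific algorithm; the Kabsch-style solver below is one standard choice and the names are assumptions.

```python
import numpy as np

def rigid_pose(template_pts, camera_pts):
    """Recover rotation R and translation t mapping calibration-template
    coordinates into a sensor's frame, given matched 3-D points
    (Kabsch/SVD method). Chaining such pairwise poses template by
    template yields each sensor's global attitude and position.
    (Illustrative sketch under the stated assumptions.)
    """
    ct = template_pts.mean(axis=0)            # template centroid
    cc = camera_pts.mean(axis=0)              # camera-frame centroid
    H = (template_pts - ct).T @ (camera_pts - cc)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ ct
    return R, t
```

In practice the "camera-frame" 3-D points would themselves come from the template's known structure and the sensor's calibrated intrinsics (a perspective-n-point step); the solver above isolates only the rigid-alignment part of the chain.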
11. The image-assisting system using a calibration method according to claim 1, wherein the calibration region is an area in which several calibration templates are placed;
Each calibration template is either a template placed within the calibration region or an independent calibration template;
Each calibration template consists of a planar pattern with a special frame structure, a three-dimensional pattern, or a line pattern;
The structure and dimensions of each calibration template are predefined;
Each calibration template is a square template containing several straight-line, curve, or corner features; or a template composed of several polygons; or a template composed of straight lines.
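Because a template's structure and dimensions are predefined, its feature points have known world coordinates, which is exactly the "object points" input that an intrinsic or extrinsic calibration routine consumes. The helper below builds that array for a square corner grid; the function name and layout are illustrative, not the patent's specification.

```python
import numpy as np

def template_object_points(cols, rows, square_size):
    """World coordinates (in meters) of the corner features of a square
    calibration template with `cols` x `rows` corners spaced `square_size`
    apart, lying in the template plane z = 0.

    This is the standard object-point array paired with detected image
    corners for calibration. (Illustrative helper; the patent does not
    fix a grid layout.)
    """
    xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
    pts = np.zeros((rows * cols, 3))
    pts[:, 0] = xs.ravel() * square_size   # x along template columns
    pts[:, 1] = ys.ravel() * square_size   # y along template rows
    return pts                              # z stays 0: planar template
```

For a three-dimensional or line-pattern template, the same idea applies with nonzero z coordinates or sampled points along the known lines.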
12. The image-assisting system using a calibration method according to claim 1, wherein the output unit outputs the image information processed by the image processing unit to a display device for display, and/or to a storage device for storage, and/or communicates with other equipment through a communication device.
CN2011100404406A 2010-02-12 2011-02-12 Image-assisting system using calibration method Pending CN102163331A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201010110581.6 2010-02-12
CN201010110581 2010-02-12
CN2011100404406A CN102163331A (en) 2010-02-12 2011-02-12 Image-assisting system using calibration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011100404406A CN102163331A (en) 2010-02-12 2011-02-12 Image-assisting system using calibration method

Publications (1)

Publication Number Publication Date
CN102163331A true CN102163331A (en) 2011-08-24

Family

ID=44464543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100404406A Pending CN102163331A (en) 2010-02-12 2011-02-12 Image-assisting system using calibration method

Country Status (1)

Country Link
CN (1) CN102163331A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2461006Y (en) * 2000-09-05 2001-11-21 阮跃清 Microcomputer video radar on vehicle
CN1753489A (en) * 2005-10-28 2006-03-29 沈阳理工大学 Fog interference resistant camera system
CN101110122A (en) * 2007-08-31 2008-01-23 北京工业大学 Large cultural heritage picture pattern split-joint method based on characteristic
CN101268437A (en) * 2005-11-02 2008-09-17 松下电器产业株式会社 Display-object penetrating apparatus


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Zhou Guoqing et al., "On intrinsic and extrinsic factors in CCD camera calibration: distortion model and signal-to-noise ratio", Acta Electronica Sinica *
Xu Lidong et al., "Estimation of global motion parameters in video stabilization", Journal of Tsinghua University (Science and Technology) *
Wang Liang et al., "Multi-camera calibration based on a one-dimensional calibration object", Acta Automatica Sinica *
Miao Ligang, "Research on image stitching and composition algorithms in video surveillance", Chinese Journal of Scientific Instrument *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208110A (en) * 2012-01-16 2013-07-17 展讯通信(上海)有限公司 Video image converting method and device
CN103218799A (en) * 2012-01-18 2013-07-24 三星电子株式会社 Method and apparatus for camera tracking
CN103218799B (en) * 2012-01-18 2017-09-29 三星电子株式会社 The method and apparatus tracked for camera
CN104303497A (en) * 2012-05-15 2015-01-21 日立建机株式会社 Display device for self-propelled industrial machine
US9845051B2 (en) 2012-05-15 2017-12-19 Hitachi Construction Machinery Co., Ltd. Display device for self-propelled industrial machine
CN104303497B (en) * 2012-05-15 2017-09-12 日立建机株式会社 The display device of self-propelled industrial machinery
CN102736634A (en) * 2012-07-10 2012-10-17 浙江捷尚视觉科技有限公司 Camera angle regulation method for vehicle panorama
CN102736634B (en) * 2012-07-10 2014-09-03 浙江捷尚视觉科技股份有限公司 Camera angle regulation method for vehicle panorama
CN105283903A (en) * 2013-04-09 2016-01-27 微软技术许可有限责任公司 Multi-sensor camera recalibration
CN105473393A (en) * 2013-08-14 2016-04-06 胡夫·许尔斯贝克和福斯特有限及两合公司 Sensor array for detecting control gestures on vehicles
CN105473393B (en) * 2013-08-14 2018-01-02 胡夫·许尔斯贝克和福斯特有限及两合公司 The sensor mechanism of posture is manipulated on vehicle for detecting
CN103607584A (en) * 2013-11-27 2014-02-26 浙江大学 Real-time registration method for depth maps shot by kinect and video shot by color camera
TWI507014B (en) * 2013-12-02 2015-11-01 Nat Taichung University Science & Technology System for calibrating three dimension perspective image and method thereof
CN103983249A (en) * 2014-04-18 2014-08-13 北京农业信息技术研究中心 Plant growth detailed-process image continuous-acquisition system
CN107079089B (en) * 2015-03-31 2020-04-17 株式会社小松制作所 Periphery monitoring device for working machine
CN107079089A (en) * 2015-03-31 2017-08-18 株式会社小松制作所 The periphery monitoring apparatus of Work machine
CN107533640A (en) * 2015-04-28 2018-01-02 微软技术许可有限责任公司 Sight corrects
CN107533640B (en) * 2015-04-28 2021-06-15 微软技术许可有限责任公司 Method, user equipment and storage medium for gaze correction
CN105216715A (en) * 2015-10-13 2016-01-06 湖南七迪视觉科技有限公司 A kind of motorist vision assists enhancing system
CN105701808A (en) * 2016-01-11 2016-06-22 南京邮电大学 Full-automatic medical image registration method based on combined point matching
CN105799594B (en) * 2016-04-14 2019-03-12 京东方科技集团股份有限公司 A kind of method that image is shown, display device for mounting on vehicle, sunshading board and automobile
CN105799594A (en) * 2016-04-14 2016-07-27 京东方科技集团股份有限公司 Image display method, vehicle-mounted display device, sun visor and automobile
WO2017177716A1 (en) * 2016-04-14 2017-10-19 Boe Technology Group Co., Ltd. Image display method, vehicle display device, vehicle sun visor, and related vehicle
US10616488B2 (en) 2016-04-14 2020-04-07 Boe Technology Group Co., Ltd. Image display method, vehicle display device, vehicle sun visor, and related vehicle
CN106447602B (en) * 2016-08-31 2020-04-03 浙江大华技术股份有限公司 Image splicing method and device
CN107784627A (en) * 2016-08-31 2018-03-09 车王电子(宁波)有限公司 The method for building up of vehicle panorama image
CN106447602A (en) * 2016-08-31 2017-02-22 浙江大华技术股份有限公司 Image mosaic method and device
CN107886544A (en) * 2016-09-30 2018-04-06 法乐第(北京)网络科技有限公司 IMAQ control method and device for vehicle calibration
CN107086980B (en) * 2017-01-20 2018-01-12 江苏南盐电子商务研究院有限责任公司 The vehicle of big data distribution platform based on internet is set
CN107086980A (en) * 2017-01-20 2017-08-22 刘萍 Big data distribution platform based on internet
CN106937910A (en) * 2017-03-20 2017-07-11 杭州视氪科技有限公司 A kind of barrier and ramp detecting system and method
CN106937910B (en) * 2017-03-20 2019-07-02 杭州视氪科技有限公司 A kind of barrier and ramp detection system and method
CN107424194A (en) * 2017-04-21 2017-12-01 苏州德创测控科技有限公司 The detection method of keyboard profile tolerance
CN109643455B (en) * 2017-06-16 2021-05-04 深圳市柔宇科技股份有限公司 Camera calibration method and terminal
CN109643455A (en) * 2017-06-16 2019-04-16 深圳市柔宇科技有限公司 Camera calibration method and terminal
WO2018227580A1 (en) * 2017-06-16 2018-12-20 深圳市柔宇科技有限公司 Camera calibration method and terminal
CN107627959A (en) * 2017-09-20 2018-01-26 鹰驾科技(深圳)有限公司 The panoramic video monitoring method and system of motor vehicle
CN108629811A (en) * 2018-04-04 2018-10-09 广州市安晓科技有限责任公司 A kind of automobile looks around the automatic calibration method and system of panorama
CN108629811B (en) * 2018-04-04 2021-08-20 广州市安晓科技有限责任公司 Automatic calibration method and system for panoramic view of automobile
CN108995591A (en) * 2018-08-01 2018-12-14 北京海纳川汽车部件股份有限公司 Vehicle panoramic has an X-rayed display methods, system and the vehicle with it
CN109754436B (en) * 2019-01-07 2020-10-30 北京工业大学 Camera calibration method based on lens partition area distortion function model
CN109754436A (en) * 2019-01-07 2019-05-14 北京工业大学 A kind of camera calibration method based on camera lens subregion distortion function model
CN111856871A (en) * 2019-04-26 2020-10-30 东莞潜星电子科技有限公司 Vehicle-mounted 3D panoramic all-around display method
CN110950005B (en) * 2019-11-22 2021-07-20 常州联力自动化科技有限公司 Chute position correction method and system for single-camera scraper conveyor
CN110950005A (en) * 2019-11-22 2020-04-03 常州联力自动化科技有限公司 Chute position correction method and system for single-camera scraper conveyor

Similar Documents

Publication Publication Date Title
CN102163331A (en) Image-assisting system using calibration method
CN102158684A (en) Self-adapting scene image auxiliary system with image enhancement function
CN202035096U (en) Mobile operation monitoring system for mobile machine
US8842181B2 (en) Camera calibration apparatus
US9738223B2 (en) Dynamic guideline overlay with image cropping
JP4695167B2 (en) Method and apparatus for correcting distortion and enhancing an image in a vehicle rear view system
US9280824B2 (en) Vehicle-surroundings monitoring device
US20160098815A1 (en) Imaging surface modeling for camera modeling and virtual view synthesis
US20140267415A1 (en) Road marking illuminattion system and method
US7728879B2 (en) Image processor and visual field support device
JP4696248B2 (en) MOBILE NAVIGATION INFORMATION DISPLAY METHOD AND MOBILE NAVIGATION INFORMATION DISPLAY DEVICE
US20090015675A1 (en) Driving Support System And Vehicle
EP3032818B1 (en) Image processing device
US10183621B2 (en) Vehicular image processing apparatus and vehicular image processing system
US20100245573A1 (en) Image processing method and image processing apparatus
US20110169957A1 (en) Vehicle Image Processing Method
EP1462762A1 (en) Circumstance monitoring device of a vehicle
EP2348279B1 (en) Road measurement device and method for measuring road
JP4796676B2 (en) Vehicle upper viewpoint image display device
US20090303024A1 (en) Image Processing Apparatus, Driving Support System, And Image Processing Method
US20090179916A1 (en) Method and apparatus for calibrating a video display overlay
CN109360245B (en) External parameter calibration method for multi-camera system of unmanned vehicle
CN202111802U (en) Calibration device for monitoring apparatus with multiple image sensors
KR20180112010A (en) A method of detecting an object on the road side of a car, a computing device, a driver assistance system and an automobile
Ehlgen et al. Omnidirectional cameras as backing-up aid

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110824