CN108257092A - Vehicle surround-view image underbody display method - Google Patents

Vehicle surround-view image underbody display method

Info

Publication number
CN108257092A
CN108257092A (application number CN201810048527.XA)
Authority
CN
China
Prior art keywords
vehicle
image
point
angle
vehicle body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810048527.XA
Other languages
Chinese (zh)
Inventor
王艳明
王波
胡振程
Current Assignee
New Software Technology (shanghai) Co Ltd
Original Assignee
New Software Technology (shanghai) Co Ltd
Priority date
Filing date
Publication date
Application filed by New Software Technology (Shanghai) Co Ltd
Priority to CN201810048527.XA
Publication of CN108257092A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Abstract

The present invention relates to a vehicle surround-view image underbody display method comprising the following steps: S1, obtaining a surround-view image of the vehicle; S2, estimating the motion trajectory of the vehicle; S3, estimating, from the vehicle trajectory, the trajectories of the four vertices of the missing underbody region in the surround-view image; S4, obtaining a historical image of the missing underbody region from the trajectories of the four vertices; S5, filling the historical image into the current frame of the surround-view image and performing stitching and fusion. In the method of the present invention, the vehicle trajectory is estimated, the trajectory of the shadow region beneath the vehicle is derived from it, and a historical image of the underbody is obtained accordingly; this image is then filled into the current frame and fused, so that the surround-view image is completed and the region beneath the vehicle can be displayed in real time.

Description

Vehicle surround-view image underbody display method
Technical field
The present invention relates to the field of vehicle surround-view imaging, and in particular to a vehicle surround-view image underbody display method.
Background technology
In the panoramic (surround-view) display system of a traditional vehicle, the stitched and fused image can only show the visible range captured by the cameras mounted around the vehicle body; the region beneath the vehicle cannot be displayed, so the user cannot obtain real-time information about the environment under the vehicle.
Current vehicle surround-view systems mainly use four cameras distributed around the vehicle body to capture images of the surroundings. The implementation principle is as follows: the partial images captured by the cameras are stitched by an image algorithm into a complete image of the vehicle's surroundings, which is then displayed on the user's screen together with a vehicle model. However, because the cameras cannot capture the region beneath the vehicle model, a shadow region that cannot be displayed appears at the bottom of the model. This not only degrades the display, but also hinders the driver's judgement of the vehicle's position and analysis of the road conditions while driving.
Summary of the invention
The object of the present invention is to solve the above technical problem by providing a vehicle surround-view image underbody display method that allows the shadow region at the bottom of the surround-view image to be displayed.
To achieve the above object, the present invention provides a vehicle surround-view image underbody display method comprising: S1, obtaining a surround-view image of the vehicle; S2, estimating the motion trajectory of the vehicle; S3, estimating, from the vehicle trajectory, the trajectories of the four vertices of the missing underbody region in the surround-view image; S4, obtaining a historical image of the missing underbody region from the trajectories of the four vertices; S5, filling the historical image into the current frame of the surround-view image and performing stitching and fusion.
Preferably, step S2 comprises: S21, detecting corner points in the surround-view image; S22, tracking the corner points with the Lucas-Kanade (LK) optical flow method to obtain the tracked positions of the corner points in the next frame; S23, obtaining first motion information of the vehicle from on-board sensors and a vehicle motion model; S24, screening the corner points based on the first motion information; S25, performing a second screening of the corner points to obtain a best matrix model, and computing second motion information of the vehicle from that matrix model; S26, fusing the first motion information and the second motion information with a Kalman filter to obtain the motion trajectory of the vehicle.
Preferably, step S21 comprises: S211, computing the absolute pixel differences between a pixel under test and multiple pixels on a circle of predetermined radius around it; S212, if a predetermined number of these absolute differences exceed a threshold, taking the pixel under test as a feature point; S213, determining whether the feature point is the only feature point in a neighborhood centered on it, and if so, taking the feature point as a corner point.
Preferably, step S21 further comprises: if there are multiple feature points in the neighborhood centered on the feature point, computing a score for each feature point, the score being the sum of the absolute pixel differences between the feature point and the multiple circle pixels; if the feature point has the largest score, taking it as the corner point.
Preferably, step S23 comprises: obtaining the steering wheel angle and speed of the vehicle from the on-board sensors; computing the turning radius of the vehicle from the vehicle motion model and the steering wheel angle; and computing the travelled distance and the heading deviation angle of the vehicle from the turning radius, the steering wheel angle and the speed.
Preferably, after the travelled distance and heading deviation angle of the vehicle have been computed, they are converted into an image translation and rotation according to the relationship between the world coordinate system and the image coordinate system.
Preferably, step S24 comprises: S241, setting a predetermined value based on the image translation and rotation; S242, predicting the position of each corner point in the next frame with the vehicle motion model; S243, determining whether the tracked point lies within a circle of radius equal to the predetermined value centered on the predicted position; S244, if the tracked point lies within that region, keeping the corner point, and otherwise discarding it.
Preferably, after step S24 and before step S25, the corner points remaining after the screening may additionally be screened with an LK optical-flow tracking check, comprising: using forward LK optical-flow tracking to determine, for each corner point of the previous frame, its forward-tracked corner point in the current frame; using backward LK optical-flow tracking to determine the backward-tracked point of that forward-tracked corner point in the previous frame; and computing the distance between the original corner point in the previous frame and the backward-tracked point, keeping the corner point if the distance is less than a predetermined threshold.
Preferably, in step S25, the screened corner points undergo a second screening with the RANSAC algorithm, comprising: randomly selecting 3 matched, non-collinear corner-point pairs from the current frame and the previous frame and computing a transformation matrix model from them; computing the projection error of all remaining corner points under this transformation matrix model and, whenever the projection error is below a set threshold, adding the corresponding corner-point pair to the inlier set of that model; re-selecting 3 matched pairs to obtain a new transformation matrix model and again collecting its inliers in the same way; repeating the selection and projection-error steps to obtain multiple inlier sets; and choosing the inlier set containing the most corner points as the optimal inlier set, the transformation matrix model corresponding to the optimal inlier set being the best matrix model.
Preferably, the best matrix model obtained by the RANSAC algorithm is denoted H. From the best matrix model H and the coordinates (xc, yc) of the midpoint of the vehicle's rear axle in the surround-view image, the vehicle rotation angle δ and the move distances dx (horizontal) and dy (vertical) of the vehicle in the surround-view image are computed. With reference to the time difference Δt between the two frames and the actual distance pixel_d represented by each pixel of the surround-view image, the move distance D and the movement velocity V of the vehicle are then computed.
Preferably, step S26 comprises: establishing state parameters of the vehicle from the first motion information and from the second motion information, respectively; setting the matrix parameters of the Kalman-filter fusion equations; and substituting the state parameters of the vehicle into the Kalman-filter fusion equations to compute the motion trajectory of the vehicle.
Preferably, steps S1-S5 are repeated until the vehicle stops, and the last image is saved as the historical image for the next vehicle start.
With the vehicle surround-view image underbody display method according to the present invention, the trajectory of the shadow region beneath the vehicle is obtained and a historical image of the underbody is derived from it; this image is then filled into the current frame and fused, so that the surround-view image is completed. The region beneath the vehicle can thus be displayed in real time, which helps the driver judge the vehicle's position and analyze the road conditions while driving.
In addition, the travelled distance and heading deviation angle of the vehicle are obtained from the motion information provided by the on-board sensors and converted into an image translation and rotation; corner points are then detected and screened, a best matrix model is obtained after the screening, and the second motion information of the vehicle, i.e. the motion information of the image, is computed from the best matrix model; finally the sensor-based image translation and rotation are fused with the image motion information to obtain the motion trajectory of the vehicle. Compared with prior-art methods that estimate the vehicle trajectory using either on-board sensors alone or the image optical-flow method alone, the two methods complement each other well and the shortcomings of each are avoided, so that the vehicle trajectory can be estimated with high precision whether the vehicle travels at high or low speed. This guarantees the accuracy of the estimated trajectory of the underbody shadow region, and hence the accuracy of the acquired underbody historical image.
Description of the drawings
In order to explain the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart schematically showing the vehicle surround-view image underbody display method according to the present invention;
Fig. 2 schematically shows the camera arrangement of the vehicle;
Fig. 3 schematically shows a surround-view image before completion of the underbody region;
Fig. 4 is a flow chart schematically showing the vehicle trajectory estimation method;
Fig. 5 is a flow chart schematically showing corner detection with the FAST corner detection method according to the present invention;
Fig. 6 schematically illustrates the FAST corner detection method;
Fig. 7 (a) schematically shows the two-track vehicle motion model;
Fig. 7 (b) schematically shows the single-track vehicle motion model;
Fig. 8 schematically shows how the present invention computes the vehicle motion with the single-track motion model;
Fig. 9 is a flow chart schematically showing the corner screening method;
Fig. 10 schematically illustrates the corner screening method;
Fig. 11 schematically illustrates corner screening with the LK optical-flow tracking method;
Fig. 12 schematically shows how the captured historical image of the black block is filled into the current frame;
Fig. 13 schematically shows the surround-view image after completion of the underbody region.
Specific embodiment
The description of the embodiments in this specification should be read together with the corresponding drawings, which are to be considered part of the complete specification. In the drawings, shapes or thicknesses may be exaggerated to simplify the illustration or facilitate labelling. Parts of the structures in the drawings are described separately; elements not shown in the drawings or not described in the text take the forms known to a person of ordinary skill in the art.
In the description of the embodiments, any reference to direction or orientation is for convenience of description only and must not be understood as limiting the scope of the present invention. The explanation of the preferred embodiments may involve combinations of features; these features may be present individually or in combination, and the present invention is not limited to the preferred embodiments. The scope of the present invention is defined by the claims.
Fig. 1 is a flow chart schematically showing the vehicle surround-view image underbody display method according to the present invention. As shown in Fig. 1, the method comprises the following steps: S1, obtaining a surround-view image of the vehicle; S2, estimating the motion trajectory of the vehicle; S3, estimating, from the vehicle trajectory, the trajectories of the four vertices of the missing underbody region in the surround-view image; S4, obtaining a historical image of the missing underbody region from the trajectories of the four vertices; S5, filling the historical image into the current frame of the surround-view image and performing stitching and fusion.
In the method for the invention, the vehicle body for obtaining vehicle in step sl first looks around image.Specifically, vehicle is obtained Vehicle body look around image and need through multiple cameras for being mounted on vehicle body come the image of collection vehicle surrounding, so and to acquisition To image carry out calibration splicing fusion and obtain vehicle body and look around image.
Fig. 2 schematically shows the camera arrangement of the vehicle according to the invention. Fig. 3 schematically shows a surround-view image before completion of the underbody region.
As shown in Fig. 2, images of the vehicle's surroundings can be captured by four cameras installed around the vehicle body, where L is the left camera, F the front camera, R the right camera and B the rear camera. The images collected by the four cameras are stitched and fused using the calibrated image distortion parameters, the image distortion is corrected, features in the images are extracted, and the surround-view image is generated. For example, as shown in Fig. 3, the generated surround-view image is a top view of the vehicle's surroundings. Note that installing multiple cameras and generating a surround-view image can be done in various ways known in the prior art, which are not described in detail here.
The surround-view image generated at this point contains a black block, i.e. the bottom of the vehicle, which cannot be displayed in real time. To display the underbody image in real time, the surround-view image must be processed further. After the surround-view image has been obtained, step S2 is therefore carried out to estimate the motion trajectory of the vehicle.
Fig. 4 is a flow chart schematically showing the vehicle trajectory estimation method according to the present invention. As shown in Fig. 4, estimating the motion trajectory of the vehicle may comprise the following steps: S21, detecting corner points in the surround-view image; S22, tracking the corner points with the LK optical flow method to obtain the tracked positions of the corner points in the next frame; S23, obtaining first motion information of the vehicle from the on-board sensors and the vehicle motion model; S24, screening the corner points based on the first motion information; S25, performing a second screening of the corner points to obtain a best matrix model, and computing second motion information of the vehicle from that matrix model; S26, fusing the first motion information and the second motion information with a Kalman filter to obtain the motion trajectory of the vehicle.
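Step S26 fuses two independent estimates of the same motion. As a minimal scalar illustration of the underlying idea (the variances and measurement values below are assumptions for illustration, not the patent's actual filter parameters), a Kalman-style measurement update weights each estimate by the inverse of its variance:

```python
def fuse(z1, var1, z2, var2):
    """Variance-weighted fusion of two scalar measurements of the same
    quantity (the scalar form of a Kalman measurement update)."""
    k = var1 / (var1 + var2)          # gain toward the second measurement
    z = z1 + k * (z2 - z1)            # fused estimate
    var = (1.0 - k) * var1            # fused variance, <= min(var1, var2)
    return z, var

# Sensor-based displacement estimate (noisier here) vs. image-based
# estimate (more reliable here), both in meters.
z, var = fuse(1.00, 0.04, 1.20, 0.01)
print(round(z, 2), round(var, 3))  # 1.16 0.008
```

The fused estimate leans toward the lower-variance measurement, which matches the complementarity argument of the patent: the sensor-based and image-based estimates dominate in different speed regimes.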
Specifically, in step S21, according to one embodiment of the present invention, the FAST corner detection method may be used to detect the corner points in the surround-view image.
Fig. 5 is a flow chart schematically showing corner detection with the FAST corner detection method according to the present invention. Fig. 6 schematically illustrates the FAST corner detection method.
As shown in Fig. 5, detecting corner points in the surround-view image with the FAST corner detection method can comprise: S211, computing the absolute pixel differences between a pixel under test and multiple pixels on a circle of predetermined radius; S212, if a predetermined number of these absolute differences exceed a threshold, taking the pixel under test as a feature point; S213, determining whether the feature point is the only feature point in a neighborhood centered on it, and if so, taking it as a corner point. The FAST corner detection method is illustrated below with reference to Fig. 6.
Specifically, as shown in Fig. 6, a circular neighborhood of radius 3 (the radius can be configured as needed) centered on the pixel under test p contains 16 pixels (p1-p16). A threshold is set, and the absolute pixel differences between p1-p16 and the pixel under test p are computed. If at least 9 of the 16 absolute differences exceed the set threshold, the pixel under test p is taken as a feature point; otherwise p is not a feature point, and the next pixel is examined.
In a concrete corner detection, the absolute pixel differences between p and p1, p9 can also be computed first: if both values are less than the threshold, p is not a corner point. If at least one of the two exceeds the threshold, the absolute pixel differences between p and p1, p9, p5, p13 are computed; if three of them exceed the threshold, the absolute pixel differences between p and all of p1-p16 are computed. If 9 of these exceed the threshold, p is determined to be a feature point.
After the feature points have been determined, it must also be determined whether the neighborhood (for example 3×3 or 5×5) centered on pixel p contains multiple feature points. If it does, the score of each feature point is computed, and the pixel under test p is taken as a corner point only if its score is the largest. Specifically, the score of a feature point is computed as the sum of the absolute pixel differences between the feature point and the multiple circle pixels, for example the sum of the absolute differences between p and p1-p16. If p is the only feature point in the neighborhood centered on it, p is taken directly as a corner point.
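A simplified version of steps S211-S213 can be sketched as follows (the 16 circle offsets are the standard Bresenham circle of radius 3; the synthetic test image and the threshold values are assumptions for illustration):

```python
# Standard 16-pixel Bresenham circle of radius 3 (row, col offsets).
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_feature(img, r, c, thresh=20, count=9):
    """S211/S212: p is a feature point if at least `count` of the 16 circle
    pixels differ from p by more than `thresh` in absolute value."""
    p = img[r][c]
    return sum(abs(img[r + dr][c + dc] - p) > thresh for dr, dc in CIRCLE) >= count

def score(img, r, c):
    """S213 tie-break: a feature point's score is the sum of its absolute
    differences over the circle; the highest-scoring point becomes the corner."""
    p = img[r][c]
    return sum(abs(img[r + dr][c + dc] - p) for dr, dc in CIRCLE)

# Synthetic image: a bright quadrant whose corner is at (5, 5) on a dark background.
img = [[200 if r >= 5 and c >= 5 else 0 for c in range(12)] for r in range(12)]

print(is_feature(img, 5, 5))  # True: corner of the bright quadrant
print(is_feature(img, 8, 8))  # False: interior point, circle is uniform
```

At the quadrant corner, 11 of the 16 circle pixels fall in the dark region, so the test passes; in the interior, all 16 circle pixels equal the center pixel and no differences exceed the threshold.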
The method then proceeds to step S22, in which the corner points are tracked with the LK optical flow method to obtain their tracked positions in the next frame, and then to step S23, in which the first motion information of the vehicle is obtained from the on-board sensors and the vehicle motion model. From the first motion information, the travelled distance and heading deviation angle of the vehicle are obtained. The vehicle sensors can include a steering wheel angle sensor and a speed sensor. Specifically, step S23 can comprise: obtaining the steering wheel angle and speed of the vehicle from the steering wheel angle sensor and the speed sensor; computing the turning radius of the vehicle from the vehicle motion model, the steering wheel angle and the speed; and computing the travelled distance and heading deviation angle of the vehicle from the obtained turning radius, steering wheel angle and speed.
A concrete description follows with reference to Fig. 7 and Fig. 8. Fig. 7 schematically shows the two-track and single-track vehicle motion models. Fig. 8 schematically shows how the present invention computes the vehicle motion with the single-track motion model.
In the present embodiment, the vehicle motion model is based on the single-track (bicycle) model. In the two-track motion model shown in Fig. 7 (a), the two front wheels can be approximated by a single wheel located midway between them (illustrated at W/2, where W is the spacing between the left and right wheels), which is taken as the front wheel of the vehicle; likewise, the two rear wheels are approximated by a single wheel located midway between them, which is taken as the rear wheel. This yields the single-track model shown in Fig. 7 (b), where L is the distance between the front and rear wheels.
Fig. 8 shows the single-track model of the vehicle at times k and k+1. R1 and R2 are the turning radii of the rear and front wheels respectively; the dashed box is the single-track model position of the vehicle at time k+1, the solid box is its position at time k; δ is the steering wheel angle, and γ is the heading deviation angle of the vehicle.
What must now be obtained by computation are the travelled distance and heading deviation angle of the vehicle, i.e. the distance from the vehicle position (x, y)_k to the vehicle position (x, y)_{k+1} and the value of the angle γ. During the computation, the turning radii R2 and R1 of the front and rear wheels of the vehicle must first be computed from the wheelbase L and the steering angle δ:

R1 = L / tan δ

R2 = L / sin δ
Then the travelled distances dx, dy and the heading deviation angle γ of the vehicle body are computed from the obtained turning radius, steering wheel angle and speed, using the following formulas:

d = v*dt

γ = v*dt/R2

dx = R1*sin γ

dy = R1*(1 − cos γ)

where v is the vehicle speed, dt is the vehicle travel time, d is the travelled distance of the vehicle, dx is the travelled distance of the vehicle in the x direction, and dy is the travelled distance of the vehicle in the y direction.
After the travelled distance and heading deviation angle of the vehicle have been computed, they are converted into the image translation and rotation according to the correspondence between the world coordinate system and the image coordinate system. Specifically, the correspondence between the world coordinate system and the image coordinate system is first made explicit, i.e. the actual distance a represented by each pixel of the calibrated image; the corresponding image translation Dx, Dy and rotation angle θ are then computed:

Dx = dx/a

Dy = dy/a

θ = γ
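The sensor-based motion computation can be sketched end to end as follows. Note that the turning-radius and displacement formulas used here are the standard single-track (bicycle) model relations, assumed where the source text omits the original formulas, and all numeric inputs are illustrative assumptions:

```python
import math

def first_motion(v, dt, delta, wheelbase, a):
    """Sensor-based (first) motion information: travelled distance and heading
    change from speed v, time step dt and steering angle delta, converted to
    image units with a = meters represented by one pixel. Standard
    single-track (bicycle) model relations are assumed."""
    R2 = wheelbase / math.sin(delta)      # front-wheel turning radius
    R1 = wheelbase / math.tan(delta)      # rear-wheel turning radius
    gamma = v * dt / R2                   # heading deviation angle
    dx = R1 * math.sin(gamma)             # displacement along heading
    dy = R1 * (1.0 - math.cos(gamma))     # lateral displacement
    return dx / a, dy / a, gamma          # image translation Dx, Dy, rotation theta

# 2 m/s for 0.1 s with a 0.1 rad steering angle, 2.5 m wheelbase, 2 cm/pixel.
Dx, Dy, theta = first_motion(v=2.0, dt=0.1, delta=0.1, wheelbase=2.5, a=0.02)
print(round(Dx, 2), round(Dy, 4), round(theta, 4))
```

The returned Dx, Dy and theta are the image-space quantities used in step S24 to set the screening radius and to predict corner positions in the next frame.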
After the first motion information of the vehicle has been obtained from the on-board sensors and the vehicle motion model, the previously detected corner points can be screened. This is described in detail with reference to Fig. 9 and Fig. 10.
Fig. 9 is a flow chart schematically showing the corner screening method. Fig. 10 schematically illustrates the corner screening method. The screening process shown in Fig. 9 and Fig. 10 is the first screening of the corner points.
As shown in Fig. 9, the first screening of the corner points can comprise: S241, setting a predetermined value based on the image translation and rotation; S242, predicting the position of each corner point in the next frame with the vehicle motion model; S243, determining whether the tracked point obtained in step S22 lies within a circle of radius equal to the predetermined value centered on the predicted position; S244, if the tracked point lies within that region, keeping the corner point, and otherwise discarding it. The corner screening method is illustrated below with reference to Fig. 10. In step S241, those skilled in the art set the predetermined value used for screening based on factors such as noise (fluctuation) and on experience.
Specifically, as shown in Fig. 10, P0 is a corner point of the previous frame image, r is the predetermined value set from the image translation and rotation, P1 is the position of corner point P0 in the next frame as predicted by the vehicle motion model, and P2 is the tracked point obtained in step S22. It is judged whether the tracked point P2 lies within the circle of radius r centered on P1: as shown in Fig. 10, if the tracked point P2 is not in this region, the corner point P0 is discarded; if the tracked point is in the region, the corner point P0 is kept.
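The radius check of steps S241-S244 reduces to a distance comparison per corner (a minimal sketch; the coordinates and radius below are illustrative assumptions):

```python
import math

def screen_corners(corners, predicted, tracked, r):
    """First screening (S241-S244): keep a corner P0 only if its LK-tracked
    position P2 lies within radius r of the motion-model prediction P1."""
    kept = []
    for p0, p1, p2 in zip(corners, predicted, tracked):
        if math.hypot(p2[0] - p1[0], p2[1] - p1[1]) <= r:
            kept.append(p0)
    return kept

corners   = [(10, 10), (40, 40), (70, 70)]    # P0: previous-frame corners
predicted = [(12, 10), (42, 40), (72, 70)]    # P1: model-predicted positions
tracked   = [(12, 11), (50, 49), (72, 69)]    # P2: LK-tracked positions
print(screen_corners(corners, predicted, tracked, r=3.0))  # [(10, 10), (70, 70)]
```

The middle corner is rejected because its tracked position is far from where the vehicle motion model says it should be, which is exactly how the first motion information prunes unreliable optical-flow tracks.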
Thereafter, in the vehicle surround-view image underbody display method according to the present invention, the corner points must also undergo a second screening to obtain the best matrix model, thereby improving the precision of the vehicle trajectory estimation. Naturally, to further improve the precision of the best matrix model obtained from the corner points, the corner points can first be screened with other methods before the second screening; that is, the corner points can be screened repeatedly after step S24 and before step S25.
With reference to Fig. 11, for example, the corner points can be screened with the LK optical-flow tracking method after step S24 and before step S25. The detailed process can be as follows: first, using the pyramidal LK forward optical-flow tracking algorithm, determine for each corner point of the previous frame (for example T0 in Fig. 11) its forward-tracked corner point in the current frame image (for example T01 in Fig. 11); then, using the pyramidal LK backward tracking algorithm, determine the backward-tracked point of the forward-tracked corner point (T01) in the previous frame (for example T10 in Fig. 11); finally, compute the distance between the corner point (T0) and the backward-tracked point (T10). If this distance is less than a predetermined threshold d, the corner point is kept and passed to the next step; if it exceeds the predetermined threshold d, the corner point is removed. Note that obtaining forward-tracked and backward-tracked corner points with LK optical-flow tracking algorithms is well known to those skilled in the art and is therefore not described further here.
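The forward-backward consistency check can be sketched independently of the optical-flow implementation itself (the `track_forward`/`track_backward` stand-ins below replace pyramidal LK tracking and are assumptions for illustration):

```python
import math

def fb_check(t0, track_forward, track_backward, threshold):
    """Keep a corner point T0 only if tracking it forward to T01 and then
    backward to T10 returns within `threshold` of the original T0."""
    t01 = track_forward(t0)           # previous frame -> current frame
    t10 = track_backward(t01)         # current frame -> previous frame
    return math.hypot(t10[0] - t0[0], t10[1] - t0[1]) < threshold

# Stand-ins for LK tracking: a pure translation by (5, 2) and its inverse,
# with a deliberate tracking error injected for one point.
forward  = lambda p: (p[0] + 5, p[1] + 2)
backward = lambda p: (p[0] - 5, p[1] - 2) if p != (25, 12) else (24, 6)

print(fb_check((10, 10), forward, backward, threshold=1.0))  # True
print(fb_check((20, 10), forward, backward, threshold=1.0))  # False
```

A point whose backward track does not return close to its starting position was tracked unreliably in at least one direction, so it is discarded before the RANSAC stage.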
After the corner points have been screened based on the first motion information of the vehicle, or based on the first motion information together with the LK optical-flow tracking check, a second screening is carried out with the RANSAC algorithm, which may comprise the following steps: randomly select 3 matched, non-collinear corner-point pairs from the current frame image and the previous frame image to obtain a transformation matrix model; compute the projection error of all other corner points under this transformation matrix model, and if the projection error is less than the set threshold, add the corresponding corner-point pair to the inlier set of that model; select 3 matched pairs again to obtain a new transformation matrix model, again compute the projection errors of all other corner points, and add the corner-point pairs whose projection error is below the set threshold to that model's inlier set; repeat the steps of selecting matched corner points and computing projection errors to obtain multiple inlier sets; choose the inlier set containing the most corner points as the optimal inlier set, and take the transformation matrix model corresponding to the optimal inlier set as the best matrix model.
Generally speaking, the secondary screening of the corner points searches, by means of the RANSAC algorithm, for the optimal transformation matrix model, i.e. the model satisfied by the largest number of corner points. Specifically, the transformation matrix H can be defined as follows:

H = | r1  r2  r3 |
    | r4  r5  r6 |
Assume that a forward-tracked corner point in the current frame has coordinates (x', y') and its matched corner point in the previous frame has coordinates (x, y); then:

x' = r1*x + r2*y + r3
y' = r4*x + r5*y + r6
As can be seen from the above matrix, each pair of matched corner points yields 2 equations, while the matrix has 6 unknown parameters; at least 3 pairs of matched corner points are therefore required, and the transformation matrix H can be obtained from 3 such pairs. The other matched corner points retained by the preceding LK optical-flow screening are then substituted into the matrix and the projection error is calculated according to the following relation:

sqrt((x' - (r1*x + r2*y + r3))^2 + (y' - (r4*x + r5*y + r6))^2) < t
where t denotes the set threshold; a corner pair satisfying the above relation is added to the inlier set. The steps of selecting corner pairs and calculating projection errors are then repeated to obtain multiple inlier sets. By comparing the number of corner points in the inlier sets, the set containing the largest number is taken as the optimal inlier set. For example, if a matrix model H1 obtained from a certain sample of matched corner pairs has the inlier set with the largest number of corner points, then H1 is the best matrix model. Note that how to obtain a matrix model H with the RANSAC algorithm is well known to those skilled in the art and is therefore not repeated here.
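The RANSAC loop above can be sketched compactly over 2x3 affine models. This is a minimal illustration, not the patent's code; the iteration count, threshold, and random seed are illustrative choices.

```python
import numpy as np

def ransac_affine(src, dst, iters=500, t=3.0, seed=0):
    """RANSAC secondary screening over affine models H = [[r1,r2,r3],[r4,r5,r6]].
    src/dst are (n, 2) matched corner coordinates from the previous/current frame."""
    rng = np.random.default_rng(seed)
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])      # rows [x, y, 1]
    best_inliers, best_H = np.zeros(n, bool), None
    for _ in range(iters):
        idx = rng.choice(n, 3, replace=False)  # 3 matched pairs
        M = A[idx]
        if abs(np.linalg.det(M)) < 1e-6:       # reject collinear samples
            continue
        H = np.linalg.solve(M, dst[idx]).T     # (2, 3): solves M @ H.T = dst[idx]
        err = np.linalg.norm(A @ H.T - dst, axis=1)   # projection error per pair
        inliers = err < t
        if inliers.sum() > best_inliers.sum(): # keep the largest inlier set
            best_inliers, best_H = inliers, H
    return best_H, best_inliers
```

The returned model plays the role of the best matrix model H, and `best_inliers` the optimal inlier set.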
Thereafter, the second motion information of the vehicle is calculated based on the best matrix model obtained after the RANSAC screening of the corner points. From the best matrix model H and the coordinates (xc, yc) of the rear-axle midpoint of the vehicle in the surround-view image, the vehicle rotation angle δ, the horizontal movement distance dx of the vehicle in the surround-view image, and its vertical movement distance dy can be calculated.
Specifically, it is known that during motion a vehicle turns about the midpoint of its two rear wheels (the rear-axle midpoint). Since the size of the vehicle model in the surround-view image corresponds to the actual size of the vehicle in a fixed ratio (that is, the rear-wheel track of the vehicle in the surround-view image and the actual rear-wheel track are proportional), the coordinates (xc, yc) of the rear-axle center of the vehicle in the surround-view image can be obtained.
In addition, the positional relationship between the previous frame and the current frame of the surround-view image can also be expressed by the following transformation matrix model H1:
Assume that in the surround-view image the rotation angle of the vehicle is δ, its horizontal movement distance is dx, and its vertical movement distance is dy (note: the distances here are all pixel distances); then:

x1 = scale*cos(δ)
x2 = -scale*sin(δ)
x3 = (dx - xc)*x1 + (dy - yc)*x2 + xc
x4 = scale*sin(δ)
x5 = scale*cos(δ)
x6 = (dx - xc)*x4 + (dy - yc)*x5 + yc
In the above six formulas, scale is a scale-conversion factor. Comparing H and H1, it can be seen that the relationship between the previous frame and the current frame of the surround-view image can in fact be solved directly by the RANSAC algorithm; that is, x1 through x6 are equal to r1 through r6. Therefore, r1 to r6 can be substituted into the above formulas to calculate the vehicle rotation angle δ, the horizontal movement distance dx of the vehicle in the surround-view image, and the vertical movement distance dy.
Thereafter, from the time interval Δt between the two frames and the actual distance pixel_d represented by one pixel in the surround-view image, the movement distance D and movement velocity V of the vehicle can be calculated:

D = sqrt(dx^2 + dy^2) * pixel_d
V = D / Δt
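Using the equality x1 through x6 = r1 through r6, the rotation angle, pixel translation, distance and speed can be recovered from the best model. The sketch below assumes the rotation-about-(xc, yc) form given in the formulas above; the function and variable names are my own, not the patent's.

```python
import numpy as np

def decompose_motion(H, xc, yc, dt, pixel_d):
    """Recover delta, dx, dy from H = [[r1,r2,r3],[r4,r5,r6]] and then the
    movement distance D and velocity V. (xc, yc): rear-axle midpoint in the
    image; dt: frame interval; pixel_d: metres represented by one pixel."""
    r1, r2, r3 = H[0]
    r4, r5, r6 = H[1]
    delta = np.arctan2(-r2, r1)               # r1 = s*cos(d), r2 = -s*sin(d)
    # r3 = (dx-xc)*r1 + (dy-yc)*r2 + xc and r6 = (dx-xc)*r4 + (dy-yc)*r5 + yc
    # form a 2x2 linear system in (dx - xc, dy - yc):
    R = np.array([[r1, r2], [r4, r5]])
    dx, dy = np.linalg.solve(R, [r3 - xc, r6 - yc]) + np.array([xc, yc])
    D = np.hypot(dx, dy) * pixel_d            # pixel displacement -> metres
    return delta, dx, dy, D, D / dt           # V = D / dt
```

Round-tripping a model built from known δ, dx, dy returns exactly those values, which is a quick sanity check on the formulas.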
In addition, it should be pointed out that besides calculating the rotation-angle information of the vehicle through the best matrix model, it can also be obtained from the corner points in the optimal inlier set, as follows. Choose two corner points far apart in the previous frame, for example A(x0, y0) and B(x1, y1); if the distance AB between the two corner points is greater than a predetermined value d, calculate the angle α of the line AB. At the same time calculate the angle β of the corresponding line A'B' in the current frame, where A' is the forward-tracked corner point matched with A and B' is the forward-tracked corner point matched with B. The rotation angle of the vehicle is then δ = |β - α|. When there are multiple well-separated pairs AB, multiple rotation angles δ are obtained; the main treatment is to take a weighted average of them and use the average as the final vehicle rotation angle.
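The line-angle method above can be sketched as follows. This illustration takes a plain average over all well-separated pairs (the text suggests a weighted average) and assumes `prev_pts[i]` matches `curr_pts[i]`; the separation threshold stands in for the predetermined value d.

```python
import numpy as np

def rotation_from_pairs(prev_pts, curr_pts, min_dist=20.0):
    """Estimate the vehicle rotation angle by comparing the direction of
    lines AB in the previous frame with their matched lines A'B' in the
    current frame, then averaging |beta - alpha| over all usable pairs."""
    deltas = []
    n = len(prev_pts)
    for i in range(n):
        for j in range(i + 1, n):
            ab = prev_pts[j] - prev_pts[i]
            if np.hypot(*ab) <= min_dist:        # use only well-separated pairs
                continue
            apbp = curr_pts[j] - curr_pts[i]
            alpha = np.arctan2(ab[1], ab[0])     # angle of AB
            beta = np.arctan2(apbp[1], apbp[0])  # angle of A'B'
            deltas.append(abs(beta - alpha))
    return float(np.mean(deltas)) if deltas else 0.0
```

Note that near the ±π wrap-around of arctan2 the raw difference |β - α| would need angle normalization; the sketch omits that for brevity.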
It should be particularly noted that in the technical solution of the present invention, as the vehicle travels, the number of corner points detected in earlier images decreases, because a detected corner point may no longer appear in the next frame image. Therefore, a threshold on the number of corner points can be preset: when the number of corner points in a frame falls below this threshold, corner detection, corner screening and the other operations are performed again while the existing corner points are retained, so that new corner points are added and the accuracy of the motion-parameter estimation is guaranteed.
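A simple form of this replenishment logic is sketched below; `detect` is a stand-in for the corner detector of step S21, and the count and separation thresholds are illustrative assumptions.

```python
import numpy as np

def maybe_redetect(corners, detect, frame, min_corners=50, min_sep=5.0):
    """Replenish the corner pool when tracking has thinned it below a preset
    threshold. Existing corners are kept; freshly detected ones are appended
    only if they are not too close to a surviving corner."""
    if len(corners) >= min_corners:
        return corners                         # still enough corners
    fresh = [c for c in detect(frame)
             if all(np.hypot(*(c - k)) > min_sep for k in corners)]
    return np.vstack([corners] + fresh) if fresh else corners
```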
Thereafter, the first motion information obtained in step S23 and the second motion information obtained in step S25 are fused by Kalman filtering to obtain the motion trajectory of the vehicle. The Kalman filtering fusion mainly consists of two parts, a prior part and a posterior part. In the embodiment of the present invention, the data of the prior part are obtained from the vehicle motion model, i.e. the first motion information described above, and the data of the posterior part are obtained from the corner points, i.e. the second motion information described above.
Specifically, the fusion may include: establishing the state parameters of the vehicle from the first motion information and the second motion information respectively; setting the matrix parameters of the Kalman filtering fusion equations (for example, the state-transition matrix, the observation matrix, the covariance matrix of the predictive estimate, the process-noise covariance matrix, the measurement-noise covariance matrix, etc.); and substituting the state parameters of the vehicle into the Kalman filtering fusion equations to calculate the motion trajectory of the vehicle.
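A tiny Kalman-style fusion of the two motion estimates could look like the sketch below. It is only an illustration of the prior/posterior split: the vehicle-model increment (first motion information) drives the prediction and the corner/RANSAC pose (second motion information) serves as the measurement; the state layout and all noise covariances are assumptions, not values from the patent.

```python
import numpy as np

class MotionFuser:
    """Minimal Kalman fusion sketch with state (x, y, heading) and an
    identity observation matrix."""
    def __init__(self, q=0.05, r=0.2):
        self.x = np.zeros(3)            # state: x, y, heading
        self.P = np.eye(3)              # state covariance
        self.Q = q * np.eye(3)          # process noise (vehicle model)
        self.R = r * np.eye(3)          # measurement noise (corner estimate)

    def step(self, model_delta, corner_pose):
        # Predict with the vehicle-model increment (prior part)
        self.x = self.x + model_delta
        self.P = self.P + self.Q
        # Update with the corner-based pose estimate (posterior part)
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain, H = I
        self.x = self.x + K @ (corner_pose - self.x)
        self.P = (np.eye(3) - K) @ self.P
        return self.x.copy()
```

Calling `step` once per frame yields the fused pose sequence, i.e. the motion trajectory of the vehicle.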
Note that the matrix parameters, equations and specific fusion calculations used in Kalman filtering fusion are well known to those skilled in the art and are therefore not repeated here.
This completes step S2. Proceeding to step S3, the motion trajectories of the four vertices of the underbody missing image in the surround-view image are estimated from the obtained vehicle motion trajectory. It should be noted that the present invention is not limited to the trajectory-estimation method described above; any other trajectory-estimation method in the art may also be used to achieve the completion of the underbody image.
Taking Fig. 3 as an example, the trajectory information of the four vertices of the black block shown in Fig. 3 can be obtained from the motion trajectory of the vehicle. The history image of the underbody missing image is then obtained from the trajectories of the four vertices of the black block, that is, the image information that the black-block region of the current frame had in a history image; the history images can be stored by the program.
Finally, the history image of the vehicle bottom (the history image information of the black block) is filled into the current-frame surround-view image and stitched and fused with it. That is, as the vehicle travels, the history image of the black block is continuously intercepted from the previous frame image according to the currently predicted position of the black block, filled into the current frame image, and stitched and fused.
Figure 12 schematically shows the interception of the black-block history image and its filling into the current frame image. Figure 13 schematically shows the completion of the bottom of the surround-view image.
As shown in Figure 12, the black-block part in the figure represents the underbody missing image of the current frame. The motion trajectories of the four vertices of the black block are estimated in the manner described above, the history image of the black block is obtained, the dashed-box portion in the figure is intercepted and filled into the black-block region of the current frame, and the combination of the current frame image and the black-block history image is obtained. The specific stitching and fusion can be carried out in a manner well known to those skilled in the art and is not described in detail here. During the motion of the vehicle, steps S1-S5 are repeated continuously, so that the bottom of the surround-view image is displayed in real time; the display result is shown in Figure 13.
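The fill step can be sketched as follows. This is an axis-aligned simplification: the patent tracks four arbitrary vertices, which would require an affine or perspective warp of the patch rather than a rectangular crop, and the placement of the underbody region at the frame center is an assumption for illustration.

```python
import numpy as np

def fill_underbody(curr_frame, prev_frame, block):
    """Fill the underbody blind area of the current surround-view frame with
    the pixels the same ground patch occupied in the previous frame.
    `block` is (x0, y0, x1, y1) in the previous frame, axis-aligned here."""
    x0, y0, x1, y1 = block
    patch = prev_frame[y0:y1, x0:x1]           # history image of the black block
    out = curr_frame.copy()
    h, w = patch.shape[:2]
    # Paste at the current underbody region (assumed centered, same size)
    cy, cx = out.shape[0] // 2, out.shape[1] // 2
    out[cy - h // 2: cy - h // 2 + h, cx - w // 2: cx - w // 2 + w] = patch
    return out
```

Repeating this crop-and-paste per frame, with the block position re-predicted from the vehicle trajectory, is what keeps the underbody area displayed in real time.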
In addition, in the surround-view image bottom display method according to the present invention, steps S1-S5 are repeated during travel to display the vehicle-bottom image in real time, and when the vehicle stops, the last image is saved as the history image for the next vehicle start.
According to the surround-view image bottom display method of the present invention, the motion trajectory of the vehicle-bottom blind area is obtained and hence the history image of the vehicle bottom, which is then filled into the current frame image and stitched and fused, so that the surround-view image is completed. This ensures that the vehicle bottom can also be displayed in real time, helping the driver to judge the position or road conditions during travel.
In addition, the method obtains the movement distance and deflection angle of the vehicle from the vehicle motion information provided by the on-board sensors and converts them into the movement amount and rotation angle of the image; it also detects and screens corner points and calculates the motion information of the image from the corner points remaining after screening; finally, the movement amount and rotation angle of the image are fused with the image motion information to obtain the motion trajectory of the vehicle. Compared with prior-art methods that estimate the vehicle trajectory using on-board sensors alone or the image optical-flow method alone, the two approaches complement each other well and the shortcomings of each are avoided, so that the vehicle trajectory can be estimated with high precision whether the vehicle is at high or low speed. This guarantees the accuracy of the trajectory estimation for the vehicle-bottom blind-area image, and hence the accuracy of the obtained vehicle-bottom history image.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (12)

1. a kind of vehicle body looks around image base display methods, which is characterized in that the described method comprises the following steps:
S1, the vehicle body for obtaining vehicle look around image;
S2 estimates the movement locus of the vehicle;
S3 according to the movement locus of the vehicle, estimates that the vehicle body looks around four vertex of the underbody missing image in image Movement locus;
S4 according to the movement locus on four vertex, obtains the history image of the underbody missing image;
S5, is filled into the vehicle body by the history image and looks around in the present frame of image and carry out splicing fusion.
2. The surround-view image bottom display method according to claim 1, characterized in that step S2 comprises:
S21, detecting corner points in the surround-view image;
S22, tracking the corner points by an LK tracking optical-flow method to obtain the tracked points of the corner points in the next frame image;
S23, obtaining first motion information of the vehicle through on-board sensors and a vehicle motion model;
S24, screening the corner points based on the first motion information;
S25, performing secondary screening on the corner points to obtain a best matrix model, and calculating second motion information of the vehicle through the matrix model;
S26, performing Kalman filtering fusion on the first motion information and the second motion information to obtain the motion trajectory of the vehicle.
3. The surround-view image bottom display method according to claim 2, characterized in that step S21 comprises:
S211, calculating the absolute pixel differences between a pixel under test and multiple pixels on a predetermined radius;
S212, if more than a predetermined number of the absolute pixel differences exceed a threshold, taking the pixel under test as a feature point;
S213, judging whether the feature point is the only feature point in a neighborhood centered on it; if so, taking the feature point as a corner point.
4. The surround-view image bottom display method according to claim 3, characterized in that step S21 further comprises:
if there are multiple feature points in the neighborhood centered on the feature point, calculating a score value for each feature point, the score value being the sum of the absolute pixel differences between the feature point and the multiple pixels;
if the score value of the feature point is the largest, taking the feature point as a corner point.
5. The surround-view image bottom display method according to claim 2, characterized in that step S23 comprises:
obtaining the steering-wheel angle and speed information of the vehicle through the on-board sensors;
calculating the turning radius of the vehicle based on the vehicle motion model and the steering-wheel angle;
calculating the movement distance and deflection angle of the vehicle based on the turning radius, the steering-wheel angle and the speed information.
6. The surround-view image bottom display method according to claim 5, characterized in that, after the movement distance and deflection angle of the vehicle are calculated, they are converted into the movement amount and rotation angle of the image according to the relationship between the world coordinate system and the image coordinate system.
7. The surround-view image bottom display method according to claim 6, characterized in that step S24 comprises:
S241, setting a predetermined value based on the movement amount and rotation angle of the image;
S242, estimating the location of a corner point in the next frame image through the vehicle motion model;
S243, determining whether the tracked point lies within the region of radius equal to the predetermined value, centered on the estimated location;
S244, if the tracked point lies within the region, retaining the corner point; otherwise, deleting it.
8. The surround-view image bottom display method according to claim 2, characterized in that, after step S24 and before step S25, the screened corner points may further be screened with an LK optical-flow tracking method, comprising:
using an LK optical-flow forward-tracking algorithm to determine the forward-tracked corner point in the current frame image of a corner point in the previous frame;
using an LK optical-flow backward-tracking algorithm to determine the backward-tracked corner point of the forward-tracked corner point in the previous frame;
calculating the distance between the corner point in the previous frame and the backward-tracked corner point, and retaining the corner point if the distance is smaller than a predetermined threshold.
9. The surround-view image bottom display method according to claim 2 or 8, characterized in that, in step S25, a secondary screening is performed on the screened corner points with the RANSAC algorithm, comprising:
randomly selecting 3 pairs of matched corner points, not collinear, from the current frame image and the previous frame image, and obtaining a transformation matrix model;
calculating the projection errors of all other corner points against the transformation matrix model, and adding a corner pair to the inlier set of the model if its projection error is smaller than a set threshold;
reselecting 3 pairs of matched corner points, obtaining a new transformation matrix model, calculating the projection errors of all other corner points against it, and adding a corner pair to the inlier set of that model if its projection error is smaller than the set threshold;
repeating the above steps of selecting matched corner points and calculating projection errors to obtain multiple inlier sets;
selecting the inlier set containing the largest number of corner points as the optimal inlier set, and taking the transformation matrix model corresponding to the optimal inlier set as the best matrix model.
10. The surround-view image bottom display method according to claim 9, characterized in that the best matrix model obtained by the RANSAC algorithm is:

H = | r1  r2  r3 |
    | r4  r5  r6 |

the vehicle rotation angle δ, the horizontal movement distance dx of the vehicle in the surround-view image and the vertical movement distance dy are calculated from the best matrix model H and the coordinates (xc, yc) of the rear-axle midpoint of the vehicle in the surround-view image; and
the movement distance D and movement velocity V of the vehicle are calculated from the time difference Δt between the two frame images and the actual distance pixel_d represented by one pixel in the surround-view image:

D = sqrt(dx^2 + dy^2) * pixel_d
V = D / Δt
11. The surround-view image bottom display method according to claim 2, characterized in that step S26 comprises:
establishing the state parameters of the vehicle from the first motion information and the second motion information respectively;
setting the matrix parameters of the Kalman filtering fusion equations, and substituting the state parameters of the vehicle into the Kalman filtering fusion equations to calculate the motion trajectory of the vehicle.
12. The surround-view image bottom display method according to claim 1, characterized in that steps S1-S5 are repeated until the vehicle stops, and the last image is saved as the history image for the next vehicle start.
CN201810048527.XA 2018-01-18 2018-01-18 A kind of vehicle body looks around image base display methods Withdrawn CN108257092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810048527.XA CN108257092A (en) 2018-01-18 2018-01-18 A kind of vehicle body looks around image base display methods


Publications (1)

Publication Number Publication Date
CN108257092A true CN108257092A (en) 2018-07-06

Family

ID=62726886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810048527.XA Withdrawn CN108257092A (en) 2018-01-18 2018-01-18 A kind of vehicle body looks around image base display methods

Country Status (1)

Country Link
CN (1) CN108257092A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969574A (en) * 2018-09-29 2020-04-07 广州汽车集团股份有限公司 Vehicle-mounted panoramic map creation method and device
CN110636263A (en) * 2019-09-20 2019-12-31 黑芝麻智能科技(上海)有限公司 Panoramic annular view generation method, vehicle-mounted equipment and vehicle-mounted system
CN110636263B (en) * 2019-09-20 2022-01-11 黑芝麻智能科技(上海)有限公司 Panoramic annular view generation method, vehicle-mounted equipment and vehicle-mounted system
CN114333105A (en) * 2020-09-30 2022-04-12 比亚迪股份有限公司 Image processing method, apparatus, device, vehicle, and medium
CN114333105B (en) * 2020-09-30 2023-04-07 比亚迪股份有限公司 Image processing method, apparatus, device, vehicle, and medium
US11910092B2 (en) 2020-10-01 2024-02-20 Black Sesame Technologies Inc. Panoramic look-around view generation method, in-vehicle device and in-vehicle system
US20220327659A1 (en) * 2021-04-08 2022-10-13 Raytheon Company Mitigating transitions in mosaic images
US11769224B2 (en) * 2021-04-08 2023-09-26 Raytheon Company Mitigating transitions in mosaic images
CN113674363A (en) * 2021-08-26 2021-11-19 龙岩学院 Panoramic parking image splicing calibration method and calibration object thereof
CN113674363B (en) * 2021-08-26 2023-05-30 龙岩学院 Panoramic parking image stitching calibration method and calibration object thereof

Similar Documents

Publication Publication Date Title
CN108257092A (en) A kind of vehicle body looks around image base display methods
CN108280847A (en) A kind of vehicle movement track method of estimation
US10640041B2 (en) Method for dynamically calibrating vehicular cameras
CN108198248A (en) A kind of vehicle bottom image 3D display method
CN107229908B (en) A kind of method for detecting lane lines
CN109813335B (en) Calibration method, device and system of data acquisition system and storage medium
CN104854637B (en) Moving object position attitude angle estimating device and moving object position attitude angle estimating method
CN106054191A (en) Wheel detection and its application in object tracking and sensor registration
CN107284455B (en) A kind of ADAS system based on image procossing
EP1391845B1 (en) Image based object detection apparatus and method
CN108638999A (en) A kind of collision early warning system and method for looking around input based on 360 degree
CN110546456B (en) Method and apparatus for chassis surveying
WO2018202464A1 (en) Calibration of a vehicle camera system in vehicle longitudinal direction or vehicle trans-verse direction
CN109596121A (en) A kind of motor-driven station Automatic Targets and space-location method
CN110223354A (en) A kind of Camera Self-Calibration method based on SFM three-dimensional reconstruction
CN109871739A (en) Motor-driven station Automatic Targets and space-location method based on YOLO-SIOCTL
Smuda et al. Multiple cue data fusion with particle filters for road course detection in vision systems
WO2006087542A1 (en) Vehicle location
Yao et al. Selective stabilization of images acquired by unmanned ground vehicles
CN108765462A (en) A kind of car speed identification method
CN108256484A (en) A kind of vehicle movement parameter evaluation method
CN111316337A (en) Method and equipment for determining installation parameters of vehicle-mounted imaging device and controlling driving
CN112308786B (en) Method for resolving target vehicle motion in vehicle-mounted video based on photogrammetry
Rabe Detection of moving objects by spatio-temporal motion analysis
CN107992677A (en) Infrared small and weak method for tracking moving target based on inertial navigation information and brightness correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201203 Shanghai Pudong New Area free trade trial area, 1 spring 3, 400 Fang Chun road.

Applicant after: Shanghai Sen Sen vehicle sensor technology Co., Ltd.

Address before: 201210 301B room 560, midsummer Road, Pudong New Area Free Trade Zone, Shanghai

Applicant before: New software technology (Shanghai) Co., Ltd.

WW01 Invention patent application withdrawn after publication

Application publication date: 20180706
