CN102269595A - Embedded monocular vision guidance system based on guidance line identification - Google Patents


Publication number
CN102269595A
CN102269595A · CN2010101894658A · CN201010189465A
Authority
CN
China
Prior art keywords
formula
image
intelligent vehicle
row
leading line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010101894658A
Other languages
Chinese (zh)
Inventor
张云洲
王贺
袁泉
吴昊
师恩义
俞雪婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN2010101894658A
Publication of CN102269595A
Legal status: Pending

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention relates to an embedded monocular vision guidance system based on guidance-line identification. The system comprises the following processing steps: 1, video signal acquisition: a single-chip microcomputer acquires the video signal; 2, distortion correction: vertical and horizontal distortion correction is applied to the images captured by the camera; 3, path identification and extraction: the guidance line is extracted from the background image; 4, turning control: a turning model is built, and the relation between the turning radius of the intelligent vehicle and the deflection angle of the steering servo is calculated to provide data for turning control; 5, steering-servo control, so that the intelligent vehicle travels along the guidance line; and 6, velocity control: the travelling speed of the intelligent vehicle is regulated by a PID control method. Because the guidance-line information is acquired by a camera, the system has good visual look-ahead, which improves guidance performance, and it realizes dynamic threshold calculation and self-adaptive image adjustment in complex environments, giving good stability under complex lighting.

Description

Embedded monocular vision navigation system based on leading line identification
Technical field
The present invention relates to the field of vehicle automation, and specifically to an embedded monocular vision navigation system that enables a vehicle to autonomously follow a ground guidance line.
Background technology
With the development of science, technology and the economy, autonomous guided vehicles (AGVs) have been widely applied in industry, and vision-based navigation systems have become a research focus in the AGV field both at home and abroad. Although PC-based vision navigation systems currently occupy the mainstream, they have a number of critical drawbacks, and it is an inevitable trend for embedded vision navigation systems to replace PC-board systems. However, limited by software and hardware resources, embedded vision navigation systems adapt poorly to complex environments, which has seriously restricted their application in many fields.
Summary of the invention
To solve the poor adaptability of present embedded vision navigation systems to complex environments under software/hardware resource limits, the present invention adopts a CCD camera as the vision sensor, realizes dynamic image-threshold calculation and self-adaptive adjustment for complex environments together with a long-range image acquisition and distortion-correction method based on limited resources, and thus implements an embedded monocular vision navigation system that runs stably under variable lighting and acquires long-range information in real time.
The technical solution adopted by the present invention is: an embedded monocular vision navigation system based on leading-line identification, comprising the following processing steps:
Step 1, video signal acquisition: the single-chip microcomputer acquires the video signal, first sampling the information vertically and then horizontally;
Step 2, distortion correction: vertical and horizontal distortion correction is applied to the images captured by the camera;
Step 3, path identification and extraction: the leading line is separated from the captured image, i.e. extracted from the background image;
Step 4, turning control: a steering model is built and the relation between the turning radius of the intelligent vehicle and the steering-servo deflection angle is calculated, providing data for turning control;
Step 5, steering-servo control, so that the intelligent vehicle travels along the leading line;
Step 6, speed control: a PID control law regulates the travelling speed so that the intelligent vehicle follows the leading line smoothly;
In step 1 the single-chip microcomputer acquires the video signal as follows.
First, the resolution of the camera image is determined.
Vertically, each field of a conventional PAL camera carries about 280 lines of video signal; the present invention samples and processes 40 of these lines, which meets the control requirement while saving MCU resources.
Horizontally, the lateral resolution must be raised. A phase-locked loop raises the AD conversion frequency to 24 MHz, speeding up the ATD clock (the single-chip microcomputer's AD converter), and conversions are performed in sequences of 8 consecutively acquired points. This shortens the sampling time so that 6 sequences can be converted within the 58 µs of active line time, i.e. the lateral resolution reaches 48 points;
Next, video acquisition proceeds according to this image resolution:
Step (1): the odd/even field sync signal separated by the video sync separator chip is captured and the corresponding interrupt service routine is entered. To guarantee that correct data are collected (skipping the field blanking interval), a timer is set to 1.2 ms; when it expires, the line interrupt is enabled in preparation for acquisition;
Step (2): when the timer expires, the line channel is opened, interrupts are enabled, and the line counter controlling the ATD conversion is zeroed to mark the start of a frame;
Step (3): after about 32 µs a line interrupt arrives. Real image data appears 6 µs after the rising edge of the line sync signal and lasts 52 µs until the end of the line; therefore, in the line interrupt a timer is set to 6 µs, and acquisition begins only after this delay;
Step (4): when the 6 µs timer expires, ATD acquisition starts immediately. Because the ATD samples in sequences of 8 consecutive points and raises an interrupt after each completed sequence, the ATD is opened in scan mode, and acquiring 48 points requires 6 ATD interrupts;
Step (5): about every 8 µs an ATD interrupt occurs and the converted data are stored into the image array; when the sequence count reaches 6, the line has been acquired and ATD interrupts are stopped;
Step (6): ATD interrupts keep writing data into the image array until the line counter reaches 40; a full frame of image data has then been acquired, and the line and ATD interrupts are turned off.
The camera distortion correction method of step 2 is as follows:
First, distortion in the vertical direction is corrected. Let h be the camera lens height, a and b the nearest and farthest points of the field of view, x an arbitrary point in the field of view, and α, β, θ the line-of-sight elevation angles of a, b, x respectively. From the geometry they satisfy:
a·tanα = b·tanβ = x·tanθ = h    (Formula 1)
On the photographic plate, let d be the length of the photosensitive area, equivalent to the total number of image lines, and let r be the row in which the image of x lies. From the geometry:
(r − d/2) / tan(θ − (α+β)/2) = (d/2) / tan((β−α)/2)    (Formula 2)
From the two formulas above, the relation between r and x can be solved:
r = (d/2) · tan[arctan((a·tanα)/x) − (α+β)/2] / tan((β−α)/2) + d/2    (Formula 3)
Substituting the actual camera parameters a = 9 cm, b = 89 cm, h = 25 cm and d = 270 gives the acquisition line numbers of the camera;
Next, distortion in the horizontal direction is corrected. Taking the distortion of the centre line as 0, the length x corresponding to any chosen row y of the image satisfies, in the real image, the following formula:
L/34 = (L + 80)/102 = (L + y)/x    (Formula 4)
The offset of any point on row y can then be obtained.
Through these coordinate transforms in the horizontal and vertical directions the real image can be restored, making the acquired image closer to the real scene.
Path recognition in step 3 adopts a thresholding method:
Because the running environment of this navigation system contains only white (track) and black (line), the maximum and minimum values in the image are found, and their midpoint is taken as the image threshold;
To increase the adaptability of this navigation system, two maxima and two minima are taken in the first row of the first image, and the mean of these four values is used as the image threshold;
Path extraction in step 3 combines a binarization algorithm with a line-following edge detection method:
The binarization algorithm proceeds as follows:
First a threshold is set; for each row of the video matrix, each pixel value is compared with the threshold from left to right. If the pixel value is greater than or equal to the threshold, the pixel is judged to belong to the white track; otherwise it is judged to belong to the target guidance line. The column numbers of the first and last pixels below the threshold are recorded, and their mean is taken as the position of the guidance line in that row;
The line-following edge detection method proceeds as follows:
Since the leading line is a continuous segment, the edge points of adjacent rows are adjacent; the edge-following search exploits this property to find the path. Once the edge of one row has been found, the next search is carried out near the previous edge. This algorithm excludes unfavourable interference;
The two methods are combined as follows:
(1) the raw data from the camera are filtered: a point lying outside the normal range is judged to be noise and replaced by the mean of the neighbouring data, excluding interference signals;
(2) the camera data are binarized; the number of black segments in each row and the centre of each is recorded in preparation for the path search;
(3) the nearest row containing exactly one black segment is taken as the reference, and the search proceeds row by row from this black line. Because the leading line is continuous, the black segment nearest to the reference in each row is the correct leading line. Interference is thus excluded and the true black line, i.e. the path position, is found.
In step 4, turning control builds the steering model and calculates the relation between the turning radius of the intelligent vehicle and the steering-servo deflection angle, providing data for turning control. The specific implementation is as follows:
The tangent lines of the left and right front wheel deflections of the intelligent vehicle meet the extended rear-axle line at a single point O. Taking the midpoint of the two front wheels as the reference point, the distance from this reference point to O is the turning radius of the vehicle; the geometric turning model of the vehicle is derived from this, giving the relation between the turning radius and the servo deflection angle;
The left-turn formula of the vehicle is as follows (a right turn is obtained analogously):
200/tanθ_R − 200/tanθ_L = 130    (Formula 5)
Taking the front-of-vehicle centre as the reference point, the turning radius of the car body is:
R = √[(200/tanθ_L + 63)² + 200²] = √[(200/tanθ_R − 67)² + 200²]    (Formula 6)
where θ_L is the left front wheel deflection angle and θ_R is the right front wheel deflection angle. MATLAB is used to plot the relation between the turning radius and the left wheel deflection angle; the right wheel is obtained analogously.
In step 5, steering-servo control makes the intelligent vehicle travel along the leading line as follows:
First the control target is defined as keeping the vehicle "travelling on the line" as far as possible, where travelling on the line means keeping the body centre point on the leading line. Taking the front-wheel centre point as the reference point, a route is planned that starts at this reference point and ends at the end of the leading line in the image. To achieve the control target, the end of the planned route must coincide with the end of the leading line and their tangent directions must agree; within the camera's field of view, the angle turned by the intelligent vehicle over the distance from point A to point C is a + b;
The motion model includes an approximate integration process: the servo output angle, integrated over the distance travelled, is proportional to the angle turned by the intelligent vehicle, expressed as:
∫ θ(t) dS = kα    (Formula 7)
where θ is the servo output angle, S is the distance travelled by the vehicle, k is a fixed constant and α is the angle turned by the vehicle. Within one 20 ms cycle the servo deflection can be considered constant, so the formula reduces to:
θ·S = kα    (Formula 8)
With the angle θ fixed, the distance S is an arc of radius R subtending the central angle α; substituting into Formula 8 gives:
θ = kα/S = kα/(R·α) = k/R = k / (l/sin(α/2)) = k·sin(α/2)/l    (Formula 9)
where l is half of the chord length AC. Both α and l can be computed from the image and k is chosen by actual measurement, so the servo control angle θ is obtained.
In step 6, a PID control law regulates the travelling speed so that the intelligent vehicle follows the leading line smoothly;
The vehicle speed is first measured and placed under closed-loop feedback control; according to the correspondence between the controller output and the actuator, the positional PID control algorithm is adopted, with the formula:
u(t) = K_p [ e(t) + (1/T_i) ∫₀ᵗ e(t)dt + T_d · de(t)/dt ]    (Formula 10)
where
u(t) is the output of the controller (also called the regulator);
e(t) is the input of the controller (usually the difference between the set value and the controlled variable, i.e. e(t) = r(t) − c(t));
K_p is the proportional gain of the controller;
T_i is the integral time of the controller;
T_d is the derivative time of the controller.
Let u(k) be the controller output at the k-th sampling instant; discretizing gives the PID formula:
u(k) = K_p·e(k) + K_i·Σ_{j=0..k} e(j) + K_d·[e(k) − e(k−1)]    (Formula 11)
where K_i is the integral coefficient and K_d is the differential coefficient;
The PID control law thus regulates the travelling speed so that the intelligent vehicle follows the leading line smoothly.
An implementation device of the embedded monocular vision navigation system based on leading-line identification comprises: a camera, a single-chip microcomputer (Freescale MC9S12) and a video sync separation unit. The processor input is connected to the CCD camera and its output to the front-wheel steering servo; the processor input is also connected to the video sync separation unit, and its output drives the rear-wheel motor through a DC-motor driver module; the CCD camera is connected to the video sync separation unit; the processor is powered by a power module. The video sync separation unit is connected to the processor's PH0 and PH1 pins to provide interrupt signals for video acquisition. The processor uses an external 16 MHz crystal oscillator, and its internal phase-locked loop raises the bus frequency to 24 MHz to satisfy the needs of video acquisition.
The advantages of the present invention are:
1. A camera acquires the leading-line information, giving good visual look-ahead and effectively improving navigation performance.
2. Based on the limited resources of an embedded system, the present invention realizes real-time long-range image acquisition and distortion correction.
3. The present invention realizes dynamic image-threshold calculation and self-adaptive adjustment in complex environments, giving the navigation system better stability under complex lighting.
Description of drawings
The present invention is further described below in conjunction with the drawings and embodiments.
Fig. 1 is the overall structure block diagram of the present invention.
Fig. 2 is the dynamic threshold schematic diagram of the present invention.
Fig. 3 is the power supply circuit diagram.
Fig. 4 is the video signal acquisition flow chart.
Fig. 5 is the software processing structure diagram.
Fig. 6 is the CCD camera image distortion schematic diagram.
Fig. 7 is the simplified imaging model of the CCD camera's vertical distortion.
Fig. 8 is the plot of the acquired line numbers, drawn with MATLAB.
Fig. 9-1 is the schematic diagram before distortion correction.
Fig. 9-2 is the schematic diagram after distortion correction.
Figure 10 is the horizontal distortion correction model diagram.
Figure 11 is the result after distortion correction.
Figure 12-1 is the front-wheel steering model diagram of the intelligent vehicle of the present invention.
Figure 12-2 is the relation curve between the left and right wheel angles of the intelligent vehicle.
Figure 12-3 is the relation curve between the turning radius of the intelligent vehicle and the left wheel deflection angle.
Figure 12-4 is the car-body turning model diagram of the intelligent vehicle.
Figure 12-5 is the control result before the control improvement of the intelligent vehicle.
Figure 12-6 is the control result after the control improvement of the intelligent vehicle.
Figure 13 is the improved PID speed control flow chart of the intelligent vehicle.
Embodiment
As shown in Figures 1, 2 and 3, an implementation device of the embedded monocular vision navigation system based on leading-line identification comprises: a video acquisition unit, a processor (Freescale MC9S12) and a video sync separation unit. The processor input is connected to the CCD camera and its output to the front-wheel steering servo; the processor input is also connected to the video sync separation unit, and its output drives the rear-wheel motor through a DC-motor driver module; the CCD camera is connected to the video sync separation unit; the processor is powered by a power module.
The video acquisition unit uses a CCD camera that outputs a standard PAL video signal. In the PAL standard there are 25 odd and 25 even fields per second, each field lasting 20 ms. Each odd or even field comprises 312.5 lines, of which 287.5 are data lines; each line lasts 64 µs, of which 12 µs are the horizontal blanking signal, so the image signal lasts 52 µs. The image resolution of the CCD camera output can reach 575 × 767, but limited by the processor's memory and computing speed, such high-resolution images cannot be processed. This system sets the processor frequency to 24 MHz, so at most 48 points can be acquired within the 52 µs of active line time in one row, which satisfies the path-detection requirement. The acquisition resolution is therefore finally set to 48 × 40: 48 points per line, with 40 lines acquired per image. The video sync separation unit is an LM1881 video sync separator chip (National Semiconductor); it extracts the line sync and field sync signals and feeds them to the PH0 and PH1 ports of the processor, providing interrupt signals for video acquisition; the video signal is AD-converted through the processor's ATD0 port.
The processor uses an external 16 MHz crystal oscillator; to satisfy the needs of video acquisition, the microcontroller's internal phase-locked loop raises the final bus frequency to 24 MHz. The road information acquired by the CCD camera must be converted into data through the microcontroller's AD port: the CCD camera delivers its information as an analogue voltage, so A/D conversion is required to obtain the needed digital data. Because the CCD camera produces 25 frames per second, each frame takes 40 ms; each frame is divided into an odd and an even field, so each scan period is 20 ms. Here only the lines of the odd field are acquired: 40 lines of 48 columns. Each AD acquisition of one field of data takes 20 ms, so a timer is set to trigger an AD data acquisition every 20 ms.
An embedded monocular vision navigation system based on leading-line identification comprises the following processing steps:
Step 1, video signal acquisition: the single-chip microcomputer acquires the video signal, first vertically and then horizontally. Timing tests of the camera signal show that an ordinary PAL CCD camera on the market has about 320 lines per field, of which more than 280 lines carry effective video signal; the sync interval of each line is 64 µs, of which the effective video signal lasts about 58 µs.
Acquiring the video signal first requires determining the image resolution. Vertically, 280 lines of signal are more than enough for the control requirement, but the single-chip microcomputer's resources cannot handle 280 lines of data, so about 40 of them are sampled and processed, which meets the control requirement while saving MCU resources. Horizontally, the video signal of each line lasts only about 58 µs. Without overclocking, the AD conversion time of the single-chip microcomputer is 7 µs, so the effective number of sampled points per line is about 57.3/7 ≈ 8; a lateral image-sensing resolution of 8 pixels falls far short of the control system's requirement, so the lateral resolution must be raised. A phase-locked loop raises the AD conversion frequency to 24 MHz, speeding up the ATD clock, and conversions are performed in sequences of 8 consecutively acquired points. This shortens the sampling time so that 6 sequences (48 points) can be converted within 58 µs; a lateral resolution of 48 points meets the control requirement. See Table 1.
Table 1 camera time sequence parameter table [11]
The video sync separator chip extracts the timing information from the CCD camera signal, as shown in Fig. 2. Pin B is the line sync signal output. Pin C is the field sync signal output: when the field sync pulse of the camera signal arrives, this pin goes low, generally for 230 µs, and then returns high. Pin D is the odd/even field sync output: it is high while the camera signal is in an odd field and low in an even field.
The video acquisition program is designed around the acquisition timing of the CCD camera:
(1) The input-capture function catches the odd/even field sync signal separated by the LM1881 and enters the corresponding interrupt service routine. To guarantee that correct data are collected (skipping the field blanking interval), a timer is set to 1.2 ms; when it expires, the line interrupt is enabled in preparation for acquisition.
(2) When the timer expires, the line channel is opened, interrupts are enabled, and the line counter controlling the ATD conversion is zeroed to mark the start of a frame.
(3) After about 32 µs a line interrupt occurs. Real image data appears 6 µs after the rising edge of the line sync signal and lasts 52 µs until the end of the line; therefore, in this line's interrupt a timer is set to 6 µs, and acquisition begins only after this delay.
(4) When the 6 µs timer expires, ATD acquisition starts immediately. Because the ATD samples in sequences of 8 consecutive points and raises an interrupt after each completed sequence, the ATD is opened in scan mode; acquiring 48 points requires 6 ATD interrupts.
(5) About every 8 µs an ATD interrupt occurs and the converted data are transferred into the image array. When the sequence count reaches 6, the line has been acquired and the ATD is stopped.
(6) ATD interrupts keep writing data into the image array until the line counter reaches 40; the line interrupt is then turned off, and a full frame of image data has been acquired. The flow chart of the acquisition program is shown in Fig. 4; according to the system requirements, the system software structure is shown in Fig. 5.
Step 2, distortion correction: vertical and horizontal distortion correction is applied to the images captured by the camera.
Images are distorted during camera imaging, as shown in Fig. 6: the image formed by an originally straight leading line becomes curved, which hampers leading-line identification; this must be avoided as far as possible, so the camera is calibrated to correct the distortion.
The distortion produced by the camera can be reduced to three kinds: 1, in the vertical direction, the camera compresses the distant horizontal plane, the more severely the farther away; 2, in the horizontal direction, the camera compresses horizontal planes at different distances to different degrees, again more severely with distance; 3, on both sides of the axis, the camera squeezes the nearby horizontal plane towards the centre: very slightly close by, but increasingly severely in the distance as points move away from the axis. Because the third distortion has little influence nearby and is complex to handle, its influence is ignored and only the first two kinds are corrected.
First, distortion in the vertical direction is corrected. As shown in Fig. 7, h is the camera lens height, a and b are the nearest and farthest points of the field of view, x is an arbitrary point in the field of view, and α, β, θ are the line-of-sight elevation angles of a, b, x respectively. From the geometry they satisfy:
a·tanα = b·tanβ = x·tanθ = h    (Formula 1)
On the photographic plate, let d be the length of the photosensitive area, equivalent to the total number of image lines, and let r be the row in which the image of x lies. From the geometry:
(r − d/2) / tan(θ − (α+β)/2) = (d/2) / tan((β−α)/2)    (Formula 2)
From the two formulas above, the relation between r and x can be solved:
r = (d/2) · tan[arctan((a·tanα)/x) − (α+β)/2] / tan((β−α)/2) + d/2    (Formula 3)
Substituting the actual camera parameters a = 9 cm, b = 89 cm, h = 25 cm and d = 270 gives the acquisition line numbers of the camera, listed in Table 4.2.
Table 4.2 Acquired line numbers
The acquired line numbers are plotted with MATLAB, as shown in Fig. 8. Through this acquisition the vertical distortion is corrected, as shown in Fig. 9-1; Fig. 9-2 shows the result after vertical correction.
The horizontal distortion can likewise be corrected by geometric transformation. As in Figure 10, taking the distortion of the centre line as 0, the length x corresponding to any chosen row y of the image satisfies, in the real image, the following formula:
L/34 = (L + 80)/102 = (L + y)/x    (Formula 4)
The offset of any point on row y can then be obtained.
Through these coordinate transforms in the horizontal and vertical directions the real image can be restored, making the acquired image closer to the real scene and laying the groundwork for the subsequent parameter calculations. The final result is shown in Figure 11.
Step 3, path identification and extraction: the leading line is separated from the captured image, i.e. extracted from the background image. The recognition goal of this system is to separate the correct leading line from the white background, which is in fact an image segmentation process. The basic methods of image segmentation fall into two broad classes: edge-detection-based segmentation and region-based segmentation. Edge-based segmentation is computationally heavy, and for an intelligent vehicle this computational load is unaffordable on a single-chip microcomputer system. Thresholding is a region-based technique that is particularly useful for segmenting objects from a strongly contrasting background, and is therefore well suited to this system.
The key to thresholding is the choice of threshold. Under normal conditions the running environment of the system contains only white and black and is essentially free of other influences, so it suffices to find the maximum and minimum values in the image and take their midpoint as the threshold. To increase the adaptability of the system, we take the two largest and the two smallest values in the first row of the first image and use the mean of these four values as the image threshold.
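The dynamic threshold described above fits in a few lines (a minimal illustration; the function name is ours):

```python
def dynamic_threshold(first_row):
    """Threshold from the first row of the first image: the mean of the
    two largest and the two smallest pixel values (four values in all)."""
    s = sorted(first_row)
    return (s[0] + s[1] + s[-2] + s[-1]) / 4.0
```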
Path extraction is the key component of the entire image-recognition system: only when the valid path is extracted correctly and unfavorable interference is excluded can control be correct. Commonly used path-extraction algorithms include the following:
(1) Binarization. Set a threshold and, for each row of the video matrix, compare every pixel value with the threshold from left to right. If the pixel value is greater than or equal to the threshold, the pixel is judged to be white track; otherwise it is judged to be the target guide line. Record the column numbers of the first and the last pixel whose value falls below the threshold, and take their mean as the position of the guide line in that row. The idea is simple, but its immunity to interference is poor.
(2) Direct edge detection. Set a threshold (the black-to-white difference) and, for each row of the video matrix, compute the difference of adjacent pixel values from left to right. If the difference is greater than or equal to the threshold, the next pixel is judged to be the left edge of the target guide line; this point is taken as the feature point of the row, and its column number is recorded as the position of the guide line in that row. This algorithm resists environmental interference better than binarization.
(3) Edge tracking. Because the guide line is a continuous curve, the edge points of two adjacent rows must be adjacent. Edge tracking exploits exactly this property to search for the path: once the edge of a row has been found, the next search is confined to the neighborhood of the previous edge. This algorithm greatly saves computation time and at the same time excludes unfavorable interference.
In actual debugging, various kinds of interference are often found around the guide line. Some are inherent, such as crossing lines; others arise at random, such as noise produced during transmission of the camera signal, or distant scenery entering the frame because the camera sees too far. We therefore combine the first and the third methods to extract the path, and design corresponding routines to exclude these disturbances; only then can stable control be guaranteed.
The concrete procedure is as follows:
(1) Filter the raw data captured by the camera: a point that falls outside the normal range is treated as noise and replaced by the mean of the data before and after it. This removes signal interference.
(2) Binarize the camera data with the first method, and record the number of black segments appearing in each row together with their centers, in preparation for the path search of the next step.
(3) Find the nearest row that contains exactly one black segment and treat it as a correct row; then, taking the black line of this row as the reference, search upward and downward. Because the guide line is distributed continuously, the black segment nearest to the reference is in general part of the correct guide line. In this way interference is excluded and the real black line is found.
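The three-step procedure above can be sketched as follows; this is a minimal illustration under our assumptions (8-bit gray values, the bottom image row being the nearest), not the patent's actual MCU code:

```python
def filter_outliers(row, lo=0, hi=255):
    """(1) Replace any sample outside [lo, hi] with the mean of its neighbours."""
    out = list(row)
    for i, v in enumerate(out):
        if not (lo <= v <= hi):
            left = out[i - 1] if i > 0 else out[i + 1]
            right = out[i + 1] if i + 1 < len(out) else out[i - 1]
            out[i] = (left + right) / 2.0
    return out

def black_segments(row, threshold):
    """(2) Binarise one row; return (start, end, centre) of each black run."""
    segs, start = [], None
    for i, v in enumerate(row):
        if v < threshold and start is None:
            start = i
        elif v >= threshold and start is not None:
            segs.append((start, i - 1, (start + i - 1) / 2.0))
            start = None
    if start is not None:
        segs.append((start, len(row) - 1, (start + len(row) - 1) / 2.0))
    return segs

def extract_guide_line(image, threshold):
    """(3) Use the nearest row with exactly one black segment as the reference,
    then keep, in every other row, the segment centre closest to its neighbour."""
    segs = [black_segments(filter_outliers(r), threshold) for r in image]
    ref = None
    for i in range(len(image) - 1, -1, -1):   # nearest row = bottom of image
        if len(segs[i]) == 1:
            ref = i
            break
    if ref is None:
        return None
    centres = {ref: segs[ref][0][2]}
    for i in range(ref - 1, -1, -1):          # search upward from the reference
        if not segs[i]:
            break
        prev = centres[i + 1]
        centres[i] = min((s[2] for s in segs[i]), key=lambda c: abs(c - prev))
    for i in range(ref + 1, len(image)):      # search downward from the reference
        if not segs[i]:
            break
        prev = centres[i - 1]
        centres[i] = min((s[2] for s in segs[i]), key=lambda c: abs(c - prev))
    return centres
```

On a row with two black runs (e.g. the guide line plus a crossing-line fragment), the run whose centre lies nearest the neighbouring row's centre is kept, which is exactly how the reference-row search rejects interference.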
In the fourth step, steering control, a steering model is built and the relation between the turning radius of the intelligent vehicle and the servo deflection angle is derived, providing data for the turning control of the intelligent vehicle. This module ultimately serves visual navigation and control of intelligent vehicles or robots; here the intelligent vehicle is taken as the example for the control model and strategy. Assume the wheels do not slip while turning, and ignore deformation caused by compression. The tangent lines of the left and right front-wheel deflections of the intelligent vehicle meet the extension of the rear-axle line at a single point O. Taking the midpoint of the front wheels as the reference point, and the distance from this point to O as the turning radius of the car, the turning model of the car can be derived geometrically, yielding the relation between the turning radius of the car and the servo deflection angle and providing the basis for the subsequent control.
From the front-wheel steering model shown in Figure 12-1 the left-turn formula of the car is obtained; the right turn follows by symmetry.
$\dfrac{200}{\tan\theta_L} - \dfrac{200}{\tan\theta_R} = 130$  (Formula 5)
The steering curves of the left and right wheels for left and right turns, drawn with MATLAB, are shown in Figure 12-2.
Taking the center of the vehicle nose as the reference point, the turning radius of the body is calculated:
$R = \sqrt{\left(\dfrac{200}{\tan\theta_L} + 63\right)^2 + 200^2} = \sqrt{\left(\dfrac{200}{\tan\theta_R} - 67\right)^2 + 200^2}$  (Formula 6)
The relation between turning radius and left-wheel deflection angle, likewise drawn with MATLAB, is shown in Figure 12-3; the right wheel is analogous and omitted.
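Formula 6 can be evaluated directly; a small sketch (angles in degrees for convenience; the constants 200, 63, 67 are the geometric lengths from the patent's figures, whose units the patent does not state explicitly):

```python
import math

def turning_radius_left(theta_l_deg):
    """Formula 6 evaluated from the left-front-wheel deflection angle."""
    t = math.radians(theta_l_deg)
    return math.hypot(200.0 / math.tan(t) + 63.0, 200.0)

def turning_radius_right(theta_r_deg):
    """Formula 6 evaluated from the right-front-wheel deflection angle."""
    t = math.radians(theta_r_deg)
    return math.hypot(200.0 / math.tan(t) - 67.0, 200.0)
```

As expected, the turning radius shrinks monotonically as the wheel deflection grows, which is the curve Figure 12-3 plots.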
In the fifth step, servo control makes the intelligent vehicle travel along the guide line. One field of the image is taken as one control cycle, and the image within the camera's field of view is processed. An arbitrary field of image information, such as Figure 12-4, is used to describe the processing of this image.
The first thing to determine is the control objective. When corner-cutting is not considered, the objective is to make the car "follow the line" as closely as possible, which reduces to two conditions: first, the body heading coincides with the tangent direction of the nearest point of the guide line; second, the body center lies on the guide line. Taking the midpoint of the front wheels as the reference point, a route can be planned that starts at the body reference point and ends at the far end of the guide line in the image. To reach the control objective, the position of the car in the following cycle must still coincide with the tangent at the nearest guide-line point; that is, the end of the planned route must coincide with the end of the guide line and their tangent directions must agree. To achieve this, the car must turn through a certain angle within the range the camera sees; in the figure, the car must turn through the angle a+b within the distance from point A to point C.
The motion model contains an approximate integration: the integral of the servo output angle over the distance traveled is proportional to the angle the car turns through. Expressed as an equation:
$\int \theta\,\mathrm{d}S = k\alpha$  (Formula 7)
where θ is the servo output angle, S the distance the car travels, k a fixed constant, and α the angle the car turns through. Within one 20 ms cycle the servo deflection can be regarded as constant, so the formula simplifies to
$\theta S = k\alpha$  (Formula 8)
With the turning angle θ fixed, the distance S is an arc of central angle α and radius R. Substituting into the formula gives:
$\theta = \dfrac{k\alpha}{S} = \dfrac{k\alpha}{R\alpha} = \dfrac{k}{R} = \dfrac{k}{l/\sin\frac{\alpha}{2}} = \dfrac{k\sin\frac{\alpha}{2}}{l}$  (Formula 9)
where l is half the chord length AC. α and l can be computed from the image, and k can be chosen by actual measurement; the servo control angle θ can then be calculated.
Implementing the algorithm, i.e. obtaining the control angle θ, mainly requires the tangent direction of the distant guide line and its distance to the car in the image. Because the slope fluctuates strongly in the distance and is hard to compute directly, an indirect calculation is adopted. It is easy to see that the angle between the distant tangent and the center line is the sum of a and b, where a is half the central angle of the guide-line arc and b is the angle between the chord and the center line. By computing the guide-line curvature and the overall slope, α is obtained easily. With the distance h from the body reference point to the near edge of the image also known, the chord length the car will travel is readily computed, and thus the servo angle.
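The final relation of Formula 9 is a one-liner; a sketch under the assumption that α is supplied in radians and l in image units:

```python
import math

def servo_angle(alpha, half_chord, k):
    """Formula 9: theta = k * sin(alpha/2) / l, where alpha is the angle the
    car must turn through and half_chord = l is half the chord length AC.
    Equivalent to theta = k / R with turning radius R = l / sin(alpha/2)."""
    return k * math.sin(alpha / 2.0) / half_chord
```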
In the sixth step, speed control, a PID control law regulates the travel speed so that the intelligent vehicle runs smoothly along the guide line. To run smoothly along the guide line the vehicle speed must be controlled. The speed can be controlled through the average voltage applied to the drive motor, but open-loop control of motor speed is affected by many factors, for example battery voltage, internal motor friction, road friction, and front-wheel steering angle; these factors cause unstable operation. By measuring the speed and applying closed-loop feedback control of the vehicle speed, the influence of these factors can be eliminated. We adopt the classical PID control method.
PID control is the regulator control law most widely used in engineering practice. In the more than seventy years since its introduction, its simple structure, good stability, reliable operation, and easy tuning have made it one of the principal techniques of industrial control.
According to the correspondence between the controller output and the actuator, basic digital PID algorithms are usually divided into two kinds: positional PID and incremental PID.
1. Positional PID control algorithm
The ideal formula of the basic PID controller is
$u(t) = K_p\left[e(t) + \dfrac{1}{T_i}\int_0^t e(t)\,\mathrm{d}t + T_d\dfrac{\mathrm{d}e(t)}{\mathrm{d}t}\right]$  (Formula 10)
where
u(t) — output of the controller (also called the regulator);
e(t) — input of the controller (usually the difference between the setpoint and the controlled variable, i.e. e(t) = r(t) − c(t));
K_p — proportional gain of the controller;
T_i — integral time of the controller;
T_d — derivative time of the controller.
Let u(k) be the output of the controller at the k-th sampling instant; the discretized PID formula is
$u(k) = K_p e(k) + K_i\displaystyle\sum_{j=0}^{k} e(j) + K_d\left[e(k) - e(k-1)\right]$  (Formula 11)
where K_i is the integral coefficient and K_d the derivative coefficient.
2. Incremental PID control algorithm
Incremental PID means that the digital controller outputs the increment Δu(k) of the control quantity. With the incremental algorithm, the Δu(k) output by the computer corresponds to the increment of the actuator position rather than to its absolute position; the actuator must therefore accumulate the control increments in order to complete the control of the controlled object. The accumulation can be implemented in hardware, or in software, for example by programming u(k) = u(k−1) + Δu(k).
From Formula 11 the incremental PID control formula is obtained:
$\Delta u(k) = u(k) - u(k-1) = K_p\,\Delta e(k) + K_i\,e(k) + K_d\left[\Delta e(k) - \Delta e(k-1)\right]$  (Formula 12)
where Δe(k) = e(k) − e(k−1).
Comparing the two algorithms, it is easy to see that the incremental formula needs no accumulation: the control increment Δu(k) depends only on the three most recent samples, and a good control effect is readily obtained by weighting. However, the incremental form requires the accumulation to be completed outside the computer, by hardware or by the controlled object itself; our hardware lacks this capability, so only the positional PID algorithm can be adopted.
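A positional PID of Formula 11 can be sketched as follows (a minimal illustration; class and variable names are ours):

```python
class PositionalPID:
    """Positional PID, Formula 11:
    u(k) = Kp*e(k) + Ki*sum_{j<=k} e(j) + Kd*(e(k) - e(k-1))."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e_sum = 0.0    # running sum replaces the integral term
        self.e_prev = 0.0   # previous error for the difference term

    def update(self, setpoint, measured):
        e = setpoint - measured            # e(k) = r(k) - c(k)
        self.e_sum += e
        u = self.kp * e + self.ki * self.e_sum + self.kd * (e - self.e_prev)
        self.e_prev = e
        return u
```

Note that the controller keeps the full error sum internally, which is exactly the accumulation the incremental form would push out to the actuator.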
Acceleration and deceleration improvement: although conventional PID regulation can adjust the speed, the settling time is long and overshoot occurs. When the intelligent vehicle runs at high speed, sluggish regulation during either acceleration or deceleration harms the overall performance. To solve this problem, the conventional PID algorithm is improved so that the acceleration and deceleration times are greatly reduced without over-regulation, with satisfactory results. The improvement is described as follows:
Because the PID algorithm satisfies the control requirements of the system during steady-speed regulation, that part is left unchanged; the improvement targets sudden speed changes. Before regulating, the current speed setpoint and the speed feedback are compared. When the setpoint exceeds the feedback by more than the acceleration threshold, the state is judged to be acceleration; conversely, when the setpoint is far below the feedback, the deceleration state is entered. To shorten the settling time as much as possible, the output during acceleration and deceleration is set to the maximum attainable value. Note that this maximum output is determined by the mechanical characteristics of the motor: since the maximum static friction between wheel and ground is fixed, the torque the wheel can transmit is fixed, and the output torque of the motor must not exceed this limit, otherwise the wheels will slip. The maximum output torque is therefore determined from the motor's mechanical characteristics, so that the vehicle speed reaches the setpoint rapidly. Once the feedback speed approaches the setpoint, the acceleration or deceleration state is exited and conventional PID regulation resumes. The program flow is shown in Figure 13.
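The switching logic described above can be sketched like this; the threshold and output limit are illustrative placeholders, and `pid_update` stands for one step of the conventional positional PID:

```python
def speed_control_step(pid_update, setpoint, feedback,
                       accel_threshold=50, u_max=1000):
    """Improved speed regulation: saturate the output while the speed error
    exceeds the acceleration/deceleration threshold, otherwise fall back to
    the conventional PID step (threshold and limit values are illustrative)."""
    error = setpoint - feedback
    if error > accel_threshold:    # far below the setpoint: full acceleration
        return u_max
    if error < -accel_threshold:   # far above the setpoint: full deceleration
        return -u_max
    return pid_update(setpoint, feedback)
```

In a real system `u_max` would be chosen from the motor's maximum output torque, as the text notes, so the wheels do not slip during the saturated phase.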
Speed curves after the improved PID regulation are shown in Figures 12-5 and 12-6. Compared with the unimproved PID algorithm, the settling time is reduced and the overshoot decreases; the effect is significant.

Claims (10)

1. An embedded monocular vision navigation system based on guide-line recognition, characterized in that the system comprises the following processing steps:
Step 1: video signal acquisition; the single-chip microcomputer acquires the video signal, first collecting the information in the vertical direction and then the information in the horizontal direction;
Step 2: distortion correction; the image captured by the camera is corrected for vertical and horizontal distortion;
Step 3: path recognition and extraction; the captured image is processed to separate the guide line, i.e. the guide line is separated and extracted from the background image;
Step 4: steering control; a steering model is built and the relation between the turning radius of the intelligent vehicle and the servo deflection angle is calculated, providing data for the turning control of the intelligent vehicle;
Step 5: servo control, making the intelligent vehicle travel along the guide line;
Step 6: speed control; a PID control law regulates the travel speed so that the intelligent vehicle travels smoothly along the guide line.
2. The embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized in that
in said step 1 the single-chip microcomputer acquires the video signal as follows:
first, determine the resolution of the camera image;
vertically, each field of a conventional PAL camera carries 280 lines of video signal; the present invention selects 40 of these lines for acquisition and processing, which meets the control demand while saving MCU resources;
horizontally, the lateral resolution needs to be increased: a phase-locked loop raises the AD conversion frequency to 24 MHz, increasing the ATD clock (the AD converter of the single-chip microcomputer), and sequences of 8 conversions are acquired continuously to shorten the sampling time, so that 6 sequences are converted within 58 μs, i.e. the lateral resolution reaches 48 points;
next, perform the video-signal acquisition process according to the image resolution of the camera:
step 1: capture the odd/even field sync signals separated by the video sync separator chip, then enter the corresponding interrupt service routine; to skip the field-blanking interval and guarantee that correct data are collected, the timer is set to 1.2 ms, and after it expires the line interrupt is enabled in preparation for data acquisition;
step 2: when the timer expires, the line channel is opened and its interrupt enabled, and the line counter that controls the ATD conversion is cleared to mark the beginning of a frame;
step 3: after about 32 μs the line interrupt arrives; real image data appear 6 μs after the rising edge of the line sync signal and last 52 μs until the end of the line, so within the line interrupt the timer is set to 6 μs and acquisition can begin only after it expires;
step 4: when the 6 μs timer expires, ATD acquisition starts immediately; since the ATD samples by sequence, each sequence acquiring 8 points continuously and raising an ATD interrupt when finished, the ATD conversion is opened in scan mode at this moment, and acquiring 48 points requires 6 ATD interrupts;
step 5: about every 8 μs an ATD interrupt occurs and the ATD data are transferred into the image array in memory; when the sequence count reaches 6, the acquisition of one line is finished and the ATD interrupt is stopped;
step 6: the ATD interrupts keep writing data into the image array until the line counter reaches 40 lines, at which point one frame of video data has been acquired and the line and ATD interrupts are turned off.
3. The embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized in that
the camera distortion correction method in said step 2 is:
first correct the distortion in the vertical direction: h is the height of the camera lens, a and b are the nearest and farthest points of the field of view, x is an arbitrary point in the field of view, and α, β, θ are the line-of-sight elevation angles corresponding to a, b, x respectively; by geometry the following relation holds between them:
$a\tan\alpha = b\tan\beta = x\tan\theta = h$  (Formula 1)
on the sensor plate, d is the length of the photosensitive area, equivalent to the total number of image lines, and r is the line on which the image of x falls; by geometry:
$\dfrac{r - \frac{d}{2}}{\tan\left(\theta - \frac{\beta+\alpha}{2}\right)} = \dfrac{\frac{d}{2}}{\tan\left(\frac{\beta-\alpha}{2}\right)}$  (Formula 2)
from the two formulas above, the relation between r and x can be solved:
$r = \dfrac{d}{2}\cdot\dfrac{\tan\left[\arctan\left(\frac{a\tan\alpha}{x}\right) - \frac{\beta+\alpha}{2}\right]}{\tan\left(\frac{\beta-\alpha}{2}\right)} + \dfrac{d}{2}$  (Formula 3)
substituting the actual camera parameters, a=9 cm, b=89 cm, h=25 cm, d=270, gives the row numbers sampled by the camera;
next correct the distortion in the horizontal direction: let the distortion of the center line be 0; then the real length x corresponding to any row y of the image satisfies the formula below:
$\dfrac{L}{34} = \dfrac{L+80}{102} = \dfrac{L+y}{x}$  (Formula 4)
the offset of any point on row y can then be obtained;
through the coordinate transforms in the horizontal and vertical directions, the captured image can be restored so that it is closer to the real scene.
4. The embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized in that
path recognition in said step 3 uses thresholding:
because the running environment of this navigation system contains only white and black, the maximum and minimum values in the image are found, and their midpoint is taken as the image threshold;
to increase the adaptability of this navigation system, the two largest and the two smallest values in the first row of the first image are taken, and the mean of these four values is used as the image threshold;
path extraction in said step 3 combines the binarization algorithm with the edge-tracking method:
the binarization algorithm proceeds as follows:
first set a threshold and, for each row of the video matrix, compare every pixel value with the threshold from left to right; if the pixel value is greater than or equal to the threshold, the pixel is judged to be white track; otherwise it is judged to be the target guide line; record the column numbers of the first and the last pixel whose value falls below the threshold, and take their mean as the position of the guide line in that row;
the edge-tracking method proceeds as follows:
first, the guide line is a continuous curve, so the edge points of two adjacent rows are adjacent; edge tracking uses this property to search for the path: once the edge of a row has been found, the next search is confined to the neighborhood of the previous edge; this algorithm excludes unfavorable interference;
the two methods above are used in combination, with the concrete procedure as follows:
1. filter the raw data captured by the camera: a point outside the normal range is judged to be noise and replaced by the mean of the data before and after it, excluding interference signals;
2. binarize the camera data; record the number of black segments appearing in each row and their centers, in preparation for the path search of the next step;
3. take the nearest row containing exactly one black segment, then search upward and downward with the black line of this row as the reference; because the guide line is distributed continuously, the black segment nearest to the reference belongs to the correct guide line; in this way interference is excluded and the real black line, i.e. the path position, is found.
5. The embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized in that
in said step 4, through steering control, a steering model is built and the relation between the turning radius of the intelligent vehicle and the servo deflection angle is calculated, providing data for the turning control of the intelligent vehicle; the concrete implementation is as follows:
the tangent lines of the left and right front-wheel deflections of the intelligent vehicle meet the extension of the rear-axle line at a single point O; taking the midpoint of the two front wheels as the reference point, and the distance from this reference point to O as the turning radius of the car, the geometric turning model of the car is derived, yielding the relation between the turning radius of the car and the servo deflection angle;
the left-turn formula of the car is as follows, and the right turn of the intelligent vehicle follows by symmetry:
$\dfrac{200}{\tan\theta_L} - \dfrac{200}{\tan\theta_R} = 130$  (Formula 5)
taking the center of the vehicle nose as the reference point, the turning radius of the body is calculated:
$R = \sqrt{\left(\dfrac{200}{\tan\theta_L} + 63\right)^2 + 200^2} = \sqrt{\left(\dfrac{200}{\tan\theta_R} - 67\right)^2 + 200^2}$  (Formula 6)
where θ_L is the deflection angle of the left front wheel and θ_R that of the right front wheel; the relation between the turning radius and the left-wheel deflection angle is drawn with the MATLAB mathematical software, the right wheel being analogous.
6. The embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized in that
in said step 5, through servo control, the intelligent vehicle travels along the guide line as follows:
first determine the control objective: make the car "follow the line" as closely as possible, where following the line means that the body center point lies on the guide line; taking the front-wheel midpoint of the intelligent vehicle as the reference point, a route is planned that starts at the body reference point and ends at the far end of the guide line in the image; to reach the control objective, the end of the planned route of the intelligent vehicle must coincide with the end of the guide line and their tangent directions must agree; the intelligent vehicle turns through a certain angle within the range the camera sees, namely the angle a+b within the distance from point A to point C;
the motion model contains an approximate integration: the integral of the servo output angle over the distance traveled is proportional to the angle the intelligent vehicle turns through, expressed as an equation:
$\int \theta\,\mathrm{d}S = k\alpha$  (Formula 7)
where θ is the servo output angle, S the distance the car travels, k a fixed constant, and α the angle the car turns through; within one 20 ms cycle the servo deflection can be regarded as constant, so the formula simplifies to:
$\theta S = k\alpha$  (Formula 8)
with the turning angle θ fixed, the distance S is an arc of central angle α and radius R; substituting into Formula 8 gives:
$\theta = \dfrac{k\alpha}{S} = \dfrac{k\alpha}{R\alpha} = \dfrac{k}{R} = \dfrac{k}{l/\sin\frac{\alpha}{2}} = \dfrac{k\sin\frac{\alpha}{2}}{l}$  (Formula 9)
where l is half the chord length AC; α and l can be computed from the image, k can be chosen by actual measurement, and the servo control angle θ is thus calculated.
7. The embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized in that
in said step 6, a PID control law regulates the travel speed so that the intelligent vehicle travels smoothly along the guide line:
first the speed is measured and closed-loop feedback control of the vehicle speed is applied; according to the correspondence between the controller output and the actuator, the positional PID control algorithm is adopted, with the formula:
$u(t) = K_p\left[e(t) + \dfrac{1}{T_i}\int_0^t e(t)\,\mathrm{d}t + T_d\dfrac{\mathrm{d}e(t)}{\mathrm{d}t}\right]$  (Formula 10)
where
u(t) — output of the controller (also called the regulator);
e(t) — input of the controller (usually the difference between the setpoint and the controlled variable, i.e. e(t) = r(t) − c(t));
K_p — proportional gain of the controller;
T_i — integral time of the controller;
T_d — derivative time of the controller.
Let u(k) be the output of the controller at the k-th sampling instant; the discretized PID formula is
$u(k) = K_p e(k) + K_i\displaystyle\sum_{j=0}^{k} e(j) + K_d\left[e(k) - e(k-1)\right]$  (Formula 11)
where K_i is the integral coefficient and K_d the derivative coefficient;
the PID control law regulates the travel speed so that the intelligent vehicle travels smoothly along the guide line.
8. An implementation device of the embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized by comprising: a camera, a single-chip microcomputer (Freescale MC9S12), and a video sync separation unit;
the input of said processor is connected to the CCD camera, and its output is connected to the front-wheel steering servo;
the input of said processor is also connected to the video sync separation unit, and its output is connected to the rear-wheel motor through a DC-motor driver chip module;
said CCD camera is connected to the video sync separation unit;
said processor is powered by a power module.
9. The implementation device of the embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized in that
said processor is connected to the video sync separation unit through its PH0 and PH1 pins, which provide the interrupt signals for video acquisition.
10. The implementation device of the embedded monocular vision navigation system based on guide-line recognition according to claim 1, characterized in that
said processor uses an external 16 MHz crystal, and its internal phase-locked loop raises the bus frequency of the processor to 24 MHz so as to satisfy the needs of video acquisition.
CN2010101894658A 2010-06-02 2010-06-02 Embedded monocular vision guidance system based on guidance line identification Pending CN102269595A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101894658A CN102269595A (en) 2010-06-02 2010-06-02 Embedded monocular vision guidance system based on guidance line identification


Publications (1)

Publication Number Publication Date
CN102269595A true CN102269595A (en) 2011-12-07

Family

ID=45051965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101894658A Pending CN102269595A (en) 2010-06-02 2010-06-02 Embedded monocular vision guidance system based on guidance line identification

Country Status (1)

Country Link
CN (1) CN102269595A (en)

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102393741A (en) * 2011-08-25 2012-03-28 东南大学 Control system and control method for visual guiding mobile robot
CN102393741B (en) * 2011-08-25 2013-07-10 东南大学 Control system and control method for visual guiding mobile robot
CN102662402B (en) * 2012-06-05 2014-04-09 北京理工大学 Intelligent camera tracking car model for racing tracks
CN102662402A (en) * 2012-06-05 2012-09-12 北京理工大学 Intelligent camera tracking car model for racing tracks
CN102788591A (en) * 2012-08-07 2012-11-21 郭磊 Visual information-based robot line-walking navigation method along guide line
CN103124334A (en) * 2012-12-19 2013-05-29 四川九洲电器集团有限责任公司 Lens distortion correction method
CN103124334B (en) * 2012-12-19 2015-10-21 四川九洲电器集团有限责任公司 Lens distortion correction method
CN103926922B (en) * 2014-03-06 2018-03-23 杭州银江智慧城市技术集团有限公司 Control and monitoring system for an intelligent vehicle
CN103926922A (en) * 2014-03-06 2014-07-16 杭州银江智慧城市技术集团有限公司 Control and monitor system of smart vehicle
CN103914071A (en) * 2014-04-02 2014-07-09 中国农业大学 Visual navigation path recognition system of grain combine harvester
CN103970141A (en) * 2014-05-30 2014-08-06 芜湖蓝宙电子科技有限公司 Miniature intelligent upright vehicle embedded control system and method for teaching
CN103970141B (en) * 2014-05-30 2016-07-27 芜湖蓝宙电子科技有限公司 Miniature intelligent upright vehicle embedded control system and method for teaching
CN104166400A (en) * 2014-07-11 2014-11-26 杭州精久科技有限公司 Multi-sensor fusion-based visual navigation AGV system
CN104166400B (en) * 2014-07-11 2017-02-22 杭州精久科技有限公司 Multi-sensor fusion-based visual navigation AGV system
CN104238558A (en) * 2014-07-16 2014-12-24 宁波韦尔德斯凯勒智能科技有限公司 Tracking robot quarter turn detecting method and device based on single camera
CN104238558B (en) * 2014-07-16 2017-01-25 宁波韦尔德斯凯勒智能科技有限公司 Tracking robot quarter turn detecting method and device based on single camera
CN105563449B (en) * 2014-10-13 2017-10-24 航天科工智能机器人有限责任公司 Road following method for mobile robot
CN105563449A (en) * 2014-10-13 2016-05-11 北京自动化控制设备研究所 Road following method for mobile robot
CN104567872A (en) * 2014-12-08 2015-04-29 中国农业大学 Extraction method and system of agricultural implements leading line
CN104567872B (en) * 2014-12-08 2018-09-18 中国农业大学 Extraction method and system for agricultural implement guidance line
CN104597904B (en) * 2014-12-26 2017-05-17 中智科创机器人有限公司 Simulating device and method for following-track algorithm experiment
CN104597904A (en) * 2014-12-26 2015-05-06 深圳市科松电子有限公司 Simulating device and method for following-track algorithm experiment
CN106054886B (en) * 2016-06-27 2019-03-26 常熟理工学院 Automated guided vehicle route identification and control method based on visible-light images
CN106054886A (en) * 2016-06-27 2016-10-26 常熟理工学院 Automatic guiding transport vehicle route identification and control method based on visible light image
CN106444762A (en) * 2016-10-18 2017-02-22 北京京东尚科信息技术有限公司 Automatic guide transport vehicle AGV, and motion control method and apparatus thereof
WO2018072635A1 (en) * 2016-10-18 2018-04-26 北京京东尚科信息技术有限公司 Automated guided vehicle and motion control method and device
CN106774326A (en) * 2016-12-23 2017-05-31 湖南晖龙股份有限公司 Shopping guide robot and shopping guide method therefor
CN106950950A (en) * 2017-03-02 2017-07-14 广东工业大学 Camera-based automobile lane-merging assistance system and control method
CN106970628A (en) * 2017-05-19 2017-07-21 苏州寅初信息科技有限公司 Control method for automated transactions of an intelligent unmanned boat, and unmanned boat therefor
CN107505946A (en) * 2017-10-11 2017-12-22 安徽建筑大学 Intelligent carriage path identifying system based on black and white camera
CN107505946B (en) * 2017-10-11 2021-01-29 安徽建筑大学 Intelligent trolley path identification method based on black and white camera
CN109358632A (en) * 2018-12-07 2019-02-19 浙江大学昆山创新中心 AGV control system based on visual navigation
CN113066294A (en) * 2021-03-16 2021-07-02 东北大学 Intelligent parking lot system based on cloud edge fusion technology
CN115143887A (en) * 2022-09-05 2022-10-04 常州市建筑科学研究院集团股份有限公司 Method for correcting measurement result of visual monitoring equipment and visual monitoring system

Similar Documents

Publication Publication Date Title
CN102269595A (en) Embedded monocular vision guidance system based on guidance line identification
CN109684921B (en) Road boundary detection and tracking method based on three-dimensional laser radar
CN108230254B (en) Automatic detection method for high-speed traffic full lane line capable of self-adapting scene switching
US11157753B2 (en) Road line detection device and road line detection method
CN107895375B (en) Complex road route extraction method based on visual multi-features
CN101887586A (en) Self-adaptive angular-point detection method based on image contour sharpness
CN106128121A (en) Vehicle queue length fast algorithm of detecting based on Local Features Analysis
CN103731652A (en) Movement surface line recognition apparatus, movement surface line recognition method and movement member equipment control system
CN102662402B (en) Intelligent camera tracking car model for racing tracks
JP2021128612A (en) Road shape estimation device
US9830518B2 (en) Lane mark recognition device
CN105718865A (en) System and method for road safety detection based on binocular cameras for automatic driving
CN103472824A (en) Camera-based navigation system and method for automatic navigation vehicle
CN110197494B (en) Pantograph contact point real-time detection algorithm based on monocular infrared image
CN109813334A (en) Real-time high-precision vehicle mileage calculation method based on binocular vision
CN106950950A (en) Camera-based automobile lane-merging assistance system and control method
CN105059184B (en) Passenger bus curve rollover early warning and active prevention-and-control device and judgment method therefor
CN112241175B (en) Road full-traversal sweeping path planning method for unmanned sweeper
CN104268860A (en) Lane line detection method
CN109033932B (en) Track identification method, track identification system, intelligent vehicle track patrol method and track patrol system
CN110502971A (en) Road vehicle recognition method and system based on monocular vision
CN102663737B (en) Vanishing point detection method for video signals rich in geometry information
CN110502004B (en) Driving-area importance weight distribution modeling method for intelligent vehicle lidar data processing
CN102902975A (en) Sun positioning method based on complementary metal-oxide-semiconductor transistor (CMOS) navigation camera
KR102231560B1 (en) Method for improving fuel economy using road gradient extraction from front-view driving images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20111207