CN101147151B - System and method for guiding a vehicle - Google Patents


Info

Publication number
CN101147151B
CN101147151B CN2005800459562A CN200580045956A
Authority
CN
China
Prior art keywords
vision
data
error
gps
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2005800459562A
Other languages
Chinese (zh)
Other versions
CN101147151A (en)
Inventor
S. Han
J. F. Reid
F. Rovira-Más
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deere and Co
Original Assignee
Deere and Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/106,782 external-priority patent/US7610123B2/en
Application filed by Deere and Co filed Critical Deere and Co
Publication of CN101147151A publication Critical patent/CN101147151A/en
Application granted granted Critical
Publication of CN101147151B publication Critical patent/CN101147151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A method and system for guiding a vehicle comprises a location-determining receiver for collecting location data for the vehicle. A vision module collects vision data for the vehicle. A location quality estimator estimates location quality data for the location data during an evaluation time window. A vision quality estimator estimates vision quality data for the vision data during the evaluation time window. A supervisor module selects a mixing ratio for the vision data and the location data (or error signals associated therewith) based on the quality data.

Description

System and Method for Guiding a Vehicle
Technical field
The present invention relates to a vision-augmented guidance system and a method for guiding a vehicle.
Background Art
Global Positioning System (GPS) receivers have been used to provide position data for vehicular guidance applications. Although certain GPS receivers with differential correction may achieve a general positioning error of approximately 10 centimeters (4 inches) during a majority of their operating time, an absolute positioning error of more than 50 centimeters (20 inches) is typical for about five percent of their operating time. Further, GPS signals may be blocked by buildings, trees or other obstructions in certain locations or environments, which makes GPS-only navigation systems unreliable there. Accordingly, there is a need to supplement or augment a GPS-based navigation system with one or more additional sensors to improve accuracy and robustness.
Summary of the invention
A method and system for guiding a vehicle comprises a location module (e.g., a location-determining receiver) for collecting location data for the vehicle. A vision module collects vision data for the vehicle. A location quality estimator estimates location quality data for the location data collected during an evaluation time window. A vision quality estimator estimates vision quality data for the vision data collected during the evaluation time window. A supervisor module selects a location data weight, a vision data weight, or a mixing ratio based on the quality data, for application during an interval that is coextensive with or trails the evaluation time window.
Brief Description of the Drawings
Fig. 1 is a block diagram of a system for guiding a vehicle based on location data and vision data in accordance with the invention.
Fig. 2 is a flow chart of a method for guiding a vehicle based on location data and vision data in accordance with the invention.
Fig. 3 is a flow chart of a method for determining the relative contributions (e.g., weights) of location data and vision data for vehicular guidance in accordance with the invention.
Fig. 4 is a flow chart of another method for determining the relative contributions (e.g., weights) of location data and vision data for vehicular guidance in accordance with the invention.
Fig. 5 is a flow chart of a method for generating a control signal (e.g., an error signal) based on location data and vision data in accordance with the invention.
Fig. 6 is a flow chart of a method for generating a control signal (e.g., an error signal) and curvature in accordance with the invention.
Fig. 7 is a flow chart of fuzzy logic aspects of the system and method of the invention.
Fig. 8A and Fig. 8B are charts that take vision data quality and location data quality as inputs and a mixing ratio as an output, for determining the location data contribution (e.g., location data weight) and the vision data contribution (e.g., vision data weight) for vehicular guidance.
Fig. 9 is a graph of fuzzy membership functions for the vision quality data and the location quality data.
Fig. 10 is a graph of fuzzy membership functions for the curvature determined by the location-determining receiver.
Fig. 11 is a graph of the crisp values of the mixing ratios associated with the defuzzification process.
Fig. 12 is a chart that illustrates the static positioning error of location data, such as a differential Global Positioning System (GPS) signal.
Fig. 13 is a chart that illustrates the positioning error of location data, such as a differential Global Positioning System (GPS) signal, after "tuning" by another sensor, such as a vision module, in accordance with the invention.
Fig. 14 is a flow chart of selecting a guidance mode for a guidance system that comprises a vision module and a location-determining module.
Description of the Preferred Embodiment
Fig. 1 is a block diagram of a guidance system 11 for guiding a vehicle. The guidance system 11 may be mounted on or collocated with a vehicle or a mobile robot. The guidance system 11 comprises a vision module 22 and a location module 26 that communicate with a supervisor module 10.
The vision module 22 may be associated with a vision quality estimator 20. The location module 26 may be associated with a location quality estimator 24. The supervisor module 10 may communicate with a data storage device 16, with a vehicular controller 25, or with both. In turn, the vehicular controller 25 is coupled to a steering system 27.
The location module 26 comprises a location-determining receiver 28 and a curvature calculator 30. The location-determining receiver 28 may comprise a Global Positioning System (GPS) receiver with differential correction. The location-determining receiver provides location data (e.g., coordinates) of the vehicle. The curvature calculator 30 estimates the curvature or "sharpness" of a curved vehicular path or a planned vehicular path. The curvature is the rate of change of the tangent angle of the vehicular path between any two reference points along the path (e.g., adjacent points). The location module 26 may indicate one or more of the following conditions or states (e.g., via a status signal) to at least the supervisor module 10 or the location quality estimator 24: (1) where the location module 26 is disabled, (2) where location data is not available or is corrupt for one or more corresponding evaluation intervals, and (3) where the estimated accuracy or reliability of the location data falls below a minimum threshold for one or more evaluation intervals. The location module 26 or the location-determining receiver 28 provides location data for the vehicle that is well suited for global navigation or global path planning.
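The curvature definition above — the rate of change of the path's tangent angle between adjacent reference points — can be made concrete with a short numerical sketch. The three-point discretization and the function name below are illustrative assumptions, not the patent's algorithm:

```python
import math

def path_curvature(p1, p2, p3):
    """Approximate signed curvature from three consecutive path points.

    Curvature is taken as the change in tangent angle divided by the
    arc-length step between adjacent segments, per the definition in
    the text; the three-point discretization is an illustrative choice.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    theta1 = math.atan2(y2 - y1, x2 - x1)   # tangent angle, first segment
    theta2 = math.atan2(y3 - y2, x3 - x2)   # tangent angle, second segment
    ds = math.hypot(x3 - x2, y3 - y2)       # arc-length step
    # wrap the angle difference into (-pi, pi] before dividing
    dtheta = math.atan2(math.sin(theta2 - theta1), math.cos(theta2 - theta1))
    return dtheta / ds                      # 1/radius; sign gives turn direction
```

A straight segment yields zero curvature, and the sign distinguishes left from right turns, which is what makes curvature usable for selecting a guidance mode or guidance rule.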
In one illustrative embodiment, the location module 26 outputs location data in the following format:

    y_gps = [ E_off_gps, E_head_gps, ρ_gps ]^T

where E_off_gps is the off-track error estimated by the location module 26 (e.g., by the location-determining receiver 28), E_head_gps is the heading error estimated by the location module 26, and ρ_gps is the radius of curvature estimated by the location module 26. The curvature does not represent an error estimate, and no curvature quality is associated with the radius of curvature used here; rather, the curvature is, for example, a parameter that may be used to select an appropriate guidance mode or guidance rule.
The vision module 22 may comprise an image collection system and an image processing system. The image collection system may comprise one or more of the following: (1) a monocular imaging system for collecting a group of images (e.g., multiple images of the same scene with different focus settings or lens adjustments, or multiple images for different fields of view (FOV)); (2) a stereo vision system (e.g., two digital imaging units separated by a known distance and orientation) for determining depth information or three-dimensional coordinates associated with points on an object in a scene; (3) a range finder (e.g., a laser range finder) for determining range measurements or three-dimensional coordinates of points on an object in a scene; (4) a ladar system or laser radar system for detecting the speed, altitude, direction or range of an object in a scene; (5) a scanning laser system (e.g., a laser measurement system that transmits a pulse of light and estimates the distance between the laser measurement system and the object based on the propagation time between the transmission of the pulse and the reception of its reflection) for determining the distance to an object in a scene; and (6) an imaging system for collecting images through an optical micro-electromechanical system (MEMS), a free-space optical MEMS, or an integrated optical MEMS. A free-space optical MEMS uses compound semiconductors and materials with a range of refractive indices to manipulate visible, infrared or ultraviolet light, whereas an integrated optical MEMS uses polysilicon components to reflect, diffract, modulate or manipulate visible, infrared or ultraviolet light. MEMS may be structured as switching matrices, lenses, mirrors and diffraction gratings that can be fabricated by various semiconductor fabrication processes. The images collected by the image collection system may be, for example, color, monochrome, black-and-white or grayscale images.
The vision module 22 may support the collection of location data (in two- or three-dimensional coordinates) corresponding to the locations of object features in an image. The vision module 22 is well suited for using (a) features or local features of the environment around the vehicle, (b) location data or coordinates associated with such features, or (c) both, to navigate the vehicle. The local features may comprise one or more of the following: plant row location, fence location, building location, field-edge location, boundary location, boulder location, rock location (e.g., greater than a minimum threshold size or volume), soil ridge and furrow, tree location, crop-edge location, a cutting edge on other vegetation (e.g., turf), and a reference marker. The location data of local features may be used to tune (e.g., correct for drift of) the location data from the location module 26 on a regular basis (e.g., periodically). In one example, a reference marker may be associated with high-precision location coordinates, and other local features may be related to the reference marker location. The current vehicle location may then be related to the reference marker location or to the fixed location of a local feature. In one embodiment, the vision module 22 may express the vehicle location in coordinates or a data format that is similar to, or substantially equivalent to, the coordinates or data format of the location module 26. The vision module 22 may indicate, via a status or data message, one or more of the following to at least the supervisor module 10 or the vision quality estimator 20: (1) where the vision module 22 is disabled, (2) where vision data is not available during one or more evaluation intervals, (3) where vision data is unstable or corrupt, and (4) where the image data is subject to an accuracy level, a performance level or a reliability level that does not satisfy a threshold performance/reliability level.
In one example, the vision module 22 can identify plant row location with an error as small as 1 centimeter for soybeans and as small as 2.4 centimeters for corn.
In one illustrative example, the vision module 22 outputs vision data in the following format:

    y_vision = [ E_off_vision, E_head_vision ]^T

where E_off_vision is the off-track error estimated by the vision module 22 and E_head_vision is the heading error estimated by the vision module 22.
In another illustrative example, or in an alternate embodiment, the vision module 22 outputs vision data in the following format:

    y_vision = [ E_off_vision, E_head_vision, ρ_vision ]^T

where E_off_vision is the off-track error estimated by the vision module 22, E_head_vision is the heading error estimated by the vision module 22, and ρ_vision is the radius of curvature estimated by the vision module 22.
The location quality estimator 24 may comprise one or more of the following devices: a signal-strength meter associated with the location-determining receiver 28, a bit-error-rate monitor associated with the location-determining receiver 28, or another device for measuring the signal quality, error rate, signal strength or performance of the signals, channels or codes transmitted for location determination. Further, for satellite-based location determination, the location quality estimator 24 may comprise a device for determining whether a minimum number of satellite signals of sufficient signal quality (e.g., signals from four or more satellites on the L1 band of GPS) are received by the location-determining receiver 28 to provide reliable location data for the vehicle during an evaluation interval.
The location quality estimator 24 estimates the quality of the location data, or location quality data (e.g., Q_gps), output by the location module 26. The location quality estimator 24 may estimate the quality of the location data based on the signal strength indicator (or bit error rate) of each signal component received by the location-determining receiver 28. The location quality estimator 24 may also base the quality estimate on any of the following factors: (1) the number of satellite signals available in an area, (2) the number of satellites acquired or received by the location-determining receiver with sufficient signal quality (e.g., signal strength profile), and (3) whether each satellite signal has an acceptable signal level or an acceptable bit error rate (BER) or frame error rate (FER).
In one embodiment, different signal strength ranges are associated with different corresponding quality levels. For example, the lowest signal strength range is associated with low quality, a medium signal strength range is associated with fair quality, and the highest signal strength range is associated with the highest quality. Conversely, the lowest bit-error-rate range is associated with the highest quality, a medium bit-error-rate range is associated with fair quality, and the highest bit-error-rate range is associated with the lowest quality level. In other words, the location quality data (e.g., Q_gps) may be associated with linguistic input values (e.g., low, medium and high).
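The three-range classification just described can be sketched as a small classifier. The low/medium/high structure follows the text; the numeric signal-strength and bit-error-rate thresholds, and the rule of reporting the worse of the two indications, are illustrative assumptions:

```python
def classify_gps_quality(signal_dbm: float, ber: float) -> str:
    """Map receiver measurements to a linguistic quality value.

    The three-band scheme mirrors the text; the numeric thresholds
    are illustrative assumptions, not values from the patent.
    """
    # Stronger signal -> higher quality
    if signal_dbm >= -130.0:
        strength_quality = "high"
    elif signal_dbm >= -140.0:
        strength_quality = "medium"
    else:
        strength_quality = "low"

    # Lower bit error rate -> higher quality
    if ber <= 1e-6:
        ber_quality = "high"
    elif ber <= 1e-3:
        ber_quality = "medium"
    else:
        ber_quality = "low"

    # Conservatively report the worse of the two indications
    order = {"low": 0, "medium": 1, "high": 2}
    return min(strength_quality, ber_quality, key=order.get)
```

The linguistic output ("low", "medium", "high") is exactly the form a downstream fuzzy logic stage would consume.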
The vision quality estimator 20 estimates the quality of the vision data, or vision quality data (e.g., Q_vision), output by the vision module 22. The vision quality estimator 20 may consider the illumination present during a series of time intervals in which the vision module 22 operates and acquires the corresponding images. The vision quality estimator 20 may comprise a photodetector, a photodetector with a frequency-selective lens, a group of photodetectors with corresponding frequency-selective lenses, a charge-coupled device (CCD), a photometer, a cadmium sulfide cell, or the like. Further, the vision quality estimator 20 comprises a clock or timer for time-stamping image collection times and corresponding illumination measurements (e.g., illumination values of the images). If the illumination is within a low-intensity range, the vision quality is low for the time interval; if the illumination is within a medium-intensity range, the vision quality is high for the time interval; and if the illumination is within a high-intensity range, the vision quality is fair for the time interval, or is high or low depending upon sub-ranges defined within the high-intensity range. In other words, the vision quality data (e.g., Q_vision) may be associated with linguistic input values (e.g., low, medium and high). In one example, the foregoing intensity ranges versus quality may be applied on the basis of optical frequency or color. In another example, the intensity ranges versus quality may be applied to frequencies in the infrared range and frequencies in the ultraviolet range differently than to visible light.
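The illumination-to-quality mapping can be sketched similarly. Assigning the medium-intensity band the best vision quality follows the text; the lux thresholds and the function name are illustrative assumptions:

```python
def vision_quality_from_illuminance(lux: float) -> str:
    """Classify vision quality from ambient illuminance.

    The low/medium/high band structure mirrors the text; the lux
    thresholds are illustrative assumptions, not patent values.
    """
    if lux < 1_000.0:       # dim scene: features hard to extract
        return "low"
    if lux < 50_000.0:      # moderate daylight: best imaging conditions
        return "high"
    return "medium"         # very bright (possible glare/saturation): fair
```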
The vision quality estimation may also be related to a confidence measure in processing the images. If a desired feature (e.g., a plant row) is apparent in one or more images, the vision quality estimator 20 may assign a high image quality or a high confidence level to the respective images. Conversely, if the desired feature is not apparent in one or more images (e.g., because of missing crop rows), the vision quality estimator 20 may assign a low image quality or a low confidence level. In one example, the confidence level is determined based on the sum of absolute differences (SAD) of the mean intensities of the column vectors (e.g., of the velocity vector of the vision module 22) for a hypothesized yaw/pitch pair. Yaw may be defined as the orientation of the vision module 22 in an x-y plane, and pitch may be defined as the orientation of the vision module 22 in an x-z plane, which is generally perpendicular to the x-y plane.
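The SAD-based confidence measure can be sketched as follows. How the column-mean intensities and the reference profile are produced is not specified here, and converting SAD into a confidence via 1/(1+SAD) is an assumption for illustration only:

```python
def sad_confidence(column_means, template_means):
    """Sum-of-absolute-differences score for one hypothesized yaw/pitch pair.

    A sketch of the idea only: the mapping SAD -> 1/(1+SAD) is an
    assumed illustration; a lower SAD yields a higher confidence.
    """
    sad = sum(abs(a - b) for a, b in zip(column_means, template_means))
    return 1.0 / (1.0 + sad)
```

In a full matcher, this score would be evaluated for each candidate yaw/pitch pair and the best-scoring hypothesis retained, its confidence feeding the vision quality estimate.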
If the vision module 22 cannot locate or reference a landmark or reference marker in an image, or has not referenced a reference marker in an image within a maximum threshold time, the vision module 22 may alert the vision quality estimator 20, which may reduce the quality of the vision data by a quality degradation indicator.
The supervisor module 10 may comprise a data processor, a microcontroller, a microprocessor, a digital signal processor, an embedded processor, or any other programmable (e.g., field-programmable) device programmed with software instructions. In one embodiment, the supervisor module 10 comprises a rule manager 12 and a mixer 14. The rule manager 12 may apply one or more data mixing rules 18, data decision functions, relationships or conditional (if-then) statements to facilitate assigning a vision weight to vision-derived results derived from the vision data, and a location weight to location-derived results derived from the location data, for a corresponding time interval. The vision weight determines the degree to which the vision data (e.g., y_vision) from the vision module 22 contributes; the location weight determines the degree to which the location data from the location module 26 contributes. The mixer 14 determines the relative contributions of the location data (e.g., y_gps) and the vision data (e.g., y_vision) to the aggregate error control signal (e.g., y) based on the vision weight and the location weight. In one embodiment, the mixer 14 may comprise a digital filter, a digital signal processor, or another data processor arranged to apply one or more of the following: (1) the vision data weight, (2) the location data weight, and (3) a mixing ratio expressing the relative contributions of the location data and the vision data for an evaluation time interval.
The rule manager 12 may use a fuzzy logic algorithm or another algorithm (e.g., a Kalman filtering approach) to derive the levels of the vision data weight and the location data weight. Although the data mixing rules 18 may be stored in the data storage device 16, the data mixing rules 18 may instead be stored in, or resident in, the supervisor module 10. In one example, the vision data weight and the location data weight are expressed as a mixing ratio. The mixing ratio may be defined as a scalar or as a multi-dimensional matrix. For example, the mixing ratio may be defined as the following matrix:

    α = [ α_off, α_head, α_curv ]^T

where α is the aggregate mixing ratio matrix, α_off is the mixing ratio for the off-track error data, α_head is the mixing ratio for the heading error data, and α_curv is the mixing ratio for the curvature data.
The mixer 14 applies the vision weight and the location weight, or the mixing ratio (e.g., the aggregate mixing ratio α), provided by the rule manager 12 in a mixing function. The output of the mixing function, or of the mixer 14, is the aggregate error control signal (e.g., y):

    y = [ E_off, E_head, ρ ]^T

where E_off is the aggregate off-track error formed from the aggregation of the error data from the vision module 22 and the location module 26, E_head is the aggregate heading error formed from the aggregation of the error data from the vision module 22 and the location module 26, and ρ is the radius of curvature. The aggregate error control signal represents the difference (or error) between the measured location data (as measured by the vision module 22 and the location module 26) and the actual location of the vehicle. The aggregate error control signal is input to the vehicular controller 25 to derive a compensated control signal. The compensated control signal provides corrective steering through the steering system 27 based on the aggregate error control signal. The steering system 27 may comprise an electrical interface for communicating with the vehicular controller 25. In one embodiment, the electrical interface comprises a solenoid-controlled hydraulic steering system or another electromechanical device for controlling hydraulic fluid.
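The mixing function itself is not written out above as an equation; a natural reading is a per-channel convex combination of the two error estimates, weighted by the mixing ratios. The sketch below makes that assumption explicit — the blend formula is an illustration, not quoted from the patent:

```python
def mix_error_signals(y_vision, y_gps, alpha):
    """Blend vision-derived and GPS-derived error signals per channel.

    y_vision, y_gps: (E_off, E_head, rho) triples from each module.
    alpha: (a_off, a_head, a_curv) mixing ratios in [0, 1], where 1
    means the channel comes entirely from the vision data. The
    convex-combination form y = a*y_vision + (1-a)*y_gps is an
    assumed illustration of the mixing function.
    """
    return tuple(a * v + (1.0 - a) * g
                 for a, v, g in zip(alpha, y_vision, y_gps))
```

With alpha = (1.0, 1.0, 0.0), for example, the off-track and heading errors would come entirely from the vision module while the curvature comes from the location-determining receiver.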
In other embodiments, the steering system 27 comprises a steering system unit (SSU). The SSU may be associated with a heading-versus-time requirement to steer or direct the vehicle along a desired course or in accordance with a desired path plan. The heading is associated with a heading error (e.g., expressed as the difference between the actual heading angle and the desired heading angle).
The SSU may be controlled to compensate for errors in the estimated position of the vehicle by the vision module 22 or the location module 26. For example, the off-track error indicates or represents the actual position of the vehicle (e.g., in GPS coordinates) versus the desired position of the vehicle (e.g., in GPS coordinates). The off-track error may be used to modify the movement of the vehicle with a compensated heading. However, if there is no off-track error at any point in time or over a time interval, an uncompensated heading is sufficient. The heading error is the difference between the actual vehicle heading and the vehicle heading estimated by the vision module 22 and the location module 26. The curvature is the change of heading along the desired path. The curvature data may be used by the SSU to steer the vehicle along curved desired paths.
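As a concrete illustration of how an SSU might consume the aggregate error control signal, the sketch below combines a curvature feedforward term with proportional corrections for off-track and heading error. The control law, the gains, and the wheelbase value are illustrative assumptions and are not taken from the patent:

```python
import math

def steering_command(e_off, e_head, rho, wheelbase=2.5,
                     k_off=0.5, k_head=1.0):
    """Illustrative steering-angle command (radians).

    Combines a curvature feedforward derived from the radius of
    curvature rho with proportional corrections for the off-track
    and heading errors. Gains and control law are assumptions.
    """
    # A zero or non-finite radius is treated as a straight path
    if rho and math.isfinite(rho):
        feedforward = math.atan(wheelbase / rho)
    else:
        feedforward = 0.0
    return feedforward + k_off * e_off + k_head * e_head
```

With zero errors and an infinite radius of curvature (a straight path), the command is zero, matching the remark above that an uncompensated heading suffices when there is no off-track error.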
Fig. 2 is a flow chart of a method for guiding a vehicle using vision data and location data. The method of Fig. 2 begins in step S100.
In step S100, the location module 26 or the location-determining receiver 28 determines location data associated with the vehicle. For example, the location-determining receiver 28 (e.g., a GPS receiver with differential correction) may be used to determine the coordinates of the vehicle for one or more evaluation time intervals or corresponding times. Further, in step S100, the location module 26 may determine or derive from the location data a location error signal (e.g., y_gps), a location-derived curvature (e.g., ρ_gps), or both. The location error signal may represent (1) the difference between the actual vehicle location and the desired vehicle location for a desired time, (2) the difference between the actual vehicle heading and the desired vehicle heading for a desired time or location, or (3) another expression of error associated with the location data. The location error signal may be, but need not be, defined as vector data. The location-derived curvature may represent the difference between the actual curvature and the desired curvature for a given time, or another expression of the error associated with the curvature.
In step S102, the vision module 22 associated with the vehicle determines vision data for one or more of said evaluation time intervals or corresponding times. For example, the vision module 22 may collect images and process the collected images to determine the vision data. In one example, the vision data comprises vision-derived location data of the vehicle, obtained by referencing one or more visual reference markers or features with corresponding known locations. The vehicle coordinates may be determined in accordance with a global coordinate system or a local coordinate system. Further, in step S102, the vision module 22 may determine or derive from the vision data a vision error signal (e.g., y_vision), a vision-derived curvature (e.g., ρ_vision), or both. The vision error signal represents (1) the difference between the actual vehicle location and the desired vehicle location for a desired time, (2) the difference between the actual vehicle heading and the desired vehicle heading for a desired time or location, or (3) another expression of error associated with the vision data. The vision-derived curvature may represent the difference between the actual curvature and the desired curvature for a given time, or another expression of the error associated with the curvature.
In step S104, the location quality estimator 24 estimates location quality data for the location data during an evaluation time window. Step S104 may be carried out by various techniques, applied alternately or cumulatively. Under a first technique, the location quality estimator 24 may estimate or measure signal quality, an error rate (e.g., bit error rate or frame error rate), a signal strength level (e.g., in dBm), or another quality level. Under a second technique, the location quality estimator 24 first estimates or measures signal quality, an error rate (e.g., bit error rate or frame error rate), a signal strength level (e.g., in dBm), or another quality level; the location quality estimator 24 then classifies the signal quality data into ranges, linguistic descriptions, linguistic values, or the like. The second technique is useful where subsequent processing (or a subsequent method step) involves a fuzzy logic approach.
In step S106, the vision quality estimator 20 estimates vision quality data during the evaluation time window. The vision quality estimator 20 may comprise an illumination or photodetector device and a timer or clock for time-stamping the illumination measurements, so as to determine a quality level based on the ambient lighting conditions. The vision quality estimator 20 may also consider a confidence or reliability measure for processing the images to obtain the desired features. The confidence or reliability in processing the images may depend on any of the following factors, among others: the technical specifications (e.g., resolution) of the vision module 22, the reliability of recognizing an object (e.g., a landmark in an image), the reliability of estimating the location of the recognized object or a point thereon, and the reliability of converting image coordinates or local coordinates into global coordinates or vision-derived location data that is spatially and temporally consistent with the location data from the location module 26.
Step S106 may be carried out by various techniques, applied alternately or cumulatively. Under a first technique, the vision quality estimator 20 estimates a confidence or reliability in the accuracy of the vision-derived location data. Under a second technique, the vision quality estimator 20 first estimates a confidence level, a reliability level or another quality level in the accuracy of the vision-derived location data; the vision quality estimator 20 then converts the quality level into a corresponding linguistic value. The second technique is useful where subsequent processing involves a fuzzy logic approach.
In step S108, the supervisor module 10 determines or selects one or more of the following contribution factors: (1) a location data weight to be applied to the location error signal, (2) a vision data weight to be applied to the vision error signal, (3) both a location data weight and a vision data weight, (4) a mixing ratio, (5) an off-track mixing ratio, a heading mixing ratio and a curvature mixing ratio, (6) a curvature data weight, (7) a vision curvature data weight, and (8) a location curvature data weight. The location data weight may be applied to the location error signal or a derivative of the location data, and the vision data weight may be applied to the vision error signal or a derivative of the vision data. The mixing ratio defines the relative contributions of the vision data and the location data to the error control signal, to the curvature, or to both. It will be understood that a mixing ratio may be related to the vision data weight and the location data weight by one or more equations.
Step S108 can carry out by the various technology that alternate application or accumulation are used.Be used under first technology of execution in step S108, monitor module 10 is used one or more data mixing rules 18 and is obtained position data weight and vision data weight.
Be used under second technology of execution in step S108, monitor module 10 is used the mixing ratio that one or more data mixing rules 18 obtain to define.
Be used under the 3rd technology of execution in step S108, watch-dog with the input set data as position quality data and vision quality data, and will export accordingly the collection data visit data storage device 16 (for example, question blank, database, relational database, form document) as position data weight and vision data weight.For example, each input set data is associated with corresponding unique output collection data.
Be used under the 4th technology of execution in step S108, watch-dog with the input set data as position quality data and vision quality data, and will export accordingly the collection data visit data storage device 16 (for example, question blank, database, relational database, form document) as mixing ratio.
Be used under the 5th technology of execution in step S108, watch-dog with input data set as position quality data and vision quality data, and will export accordingly the collection data visit data storage device 16 as position data weight and vision data weight.And each input set data is associated with corresponding language input value, and each output collection data is associated with corresponding language output valve.The language input and output value also is called as ambiguity descriptor.
In step S110, supervisor module 10 or mixer 14 applies any of the contribution factors determined in step S108 to the error control signal, the curvature, or both, to define the relative contributions of the position data and the vision data (or the position error data and the vision error data derived therefrom) for guidance of the vehicle. For example, supervisor module 10 or mixer 14 applies the position data weight, the vision data weight, and the mixing ratio to the error control signal. The position data weight is based on the estimated position data quality for the corresponding position data, and the vision data weight is based on the estimated vision data quality for the corresponding vision data.
In one illustrative example, the position data weight and the vision data weight may be derived over an evaluation time window and applied during an application time window that lags the evaluation time window, or is substantially coextensive with it. Regardless of how the evaluation and application time windows are defined in this example, in other examples supervisor module 10 may provide predictive control data, feed-forward control data, or feedback control data to vehicle controller 25.
Fig. 3 is a flow chart of a method for determining the relative contributions of position data and vision data for guidance of a vehicle. The method of Fig. 3 may be applied to step S108 of Fig. 2 to select suitable position data and vision data weights, and to step S110 to apply those weights to guide the vehicle. The method of Fig. 3 begins in step S300.
In step S300, supervisor module 10 or rule manager 12 identifies a relationship (e.g., a quality-mixing-ratio relationship or rule) based on input values (e.g., quality levels or linguistic values) associated with one or more of the following: vision quality data, position quality data, and curvature.
Step S300 may be carried out according to various techniques that may be applied alternately or cumulatively. Under a first technique, supervisor module 10 identifies the relationship based on a first quality level of the position quality data and a second quality level of the vision quality data as input values. A quality level (e.g., the first quality level or the second quality level) may be a numerical quantity or measurement provided by position quality estimator 24, vision quality module 20, or both. For example, for the position quality, the measurement may comprise the signal strength, the bit error rate (BER), or the frame error rate (FER) of a Global Positioning System (GPS) signal, or components thereof. Each combination of the first quality level and the second quality level may be uniquely associated with a corresponding relationship or rule that applies to that combination.
Under a second technique, supervisor module 10 identifies the relationship based on a first quality level of the position quality data and a second quality level of the vision quality data as input values. A combination of the first quality level and the second quality level may be uniquely associated with a corresponding relationship that applies to that combination. A database or data storage device may contain input sets of first and second quality levels associated with output sets of position data weights and vision data weights. Alternatively, the database or data storage device may contain input sets of first and second quality levels associated with mixing ratios for the error signal, the curvature, or both.
Under a third technique, supervisor module 10 identifies the relationship based on a first quality level of the position quality data, a second quality level of the vision quality data, and a curvature value as input values. A combination of the first quality level, the second quality level, and the curvature value may be uniquely associated with a corresponding relationship that applies to that combination. A database or data storage device 16 may contain input sets of first quality levels, second quality levels, and curvature values associated with output sets of position data weights and vision data weights. Alternatively, the database or data storage device 16 may contain input sets of first quality levels, second quality levels, and curvature values associated with mixing ratios for the error signal, the curvature, or both.
Under a fourth technique, supervisor module 10 applies a fuzzy logic approach, which uses a two-phase process. In the first phase, a first quality level of the position quality data (e.g., Q_gps), a second quality level of the vision quality data (e.g., Q_vision), and a curvature value (e.g., ρ) may be converted from numerical values (e.g., raw measurements) into linguistic values. A linguistic value, or linguistic input value, represents a quality classification or overall quality level of the vision quality data and the position quality data. For example, for the vision quality data (e.g., Q_vision) and the position quality data (e.g., Q_gps), the linguistic input values may be defined as "good", "medium", "poor", "high", "average", or "low". For the radius of curvature (e.g., ρ, ρ_vision, or ρ_gps), the linguistic value may be "small", "low", "large", or "high". In the second phase of the fuzzy logic approach, the input set of linguistic values for the vision quality data, the position quality data, and the curvature is compared with a reference list or the data mixing rules 18 to identify the corresponding relationship (e.g., quality-mixing-ratio relationship or rule) associated with that input set.
In an alternative embodiment, the linguistic values for the vision quality data and the position quality data may be defined in terms of numerical grades (e.g., grades 1 through 5, with 5 the highest), percentile grades, performance ratings (e.g., one star to N stars, where N is any integer greater than 1), or otherwise.
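The first phase of the fuzzy logic approach (converting numerical measurements into linguistic values) can be sketched as below. The 0-to-1 quality scale, the classification thresholds, and the curvature threshold are illustrative assumptions; the disclosure does not fix particular numerical ranges.

```python
# Hedged sketch of phase one of the fuzzy logic technique: classify numeric
# quality measurements (assumed 0..1) and a curvature value into linguistic
# input values. All thresholds below are illustrative assumptions.

def quality_to_linguistic(q: float) -> str:
    """Classify a 0..1 quality measurement as 'poor', 'medium', or 'good'."""
    if q < 0.33:
        return "poor"
    if q < 0.66:
        return "medium"
    return "good"

def curvature_to_linguistic(rho: float, threshold: float = 0.05) -> str:
    """Classify a curvature magnitude as 'small' or 'large'."""
    return "small" if abs(rho) < threshold else "large"

# Build one input set (Q_vision, Q_gps, rho) of linguistic values.
input_set = (quality_to_linguistic(0.9),   # Q_vision
             quality_to_linguistic(0.4),   # Q_gps
             curvature_to_linguistic(0.01))
print(input_set)  # ('good', 'medium', 'small')
```

In phase two, this input set would be matched against the data mixing rules 18 to select the applicable relationship.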
In step S302, based on the relationship identified in step S300, supervisor module 10 determines the output value (e.g., a numerical output value) associated with the position data weight, the vision data weight, or another contribution factor (e.g., a curvature weight or a mixing ratio) for the error control signal, the curvature, or both. If the first through third techniques of step S300 are applied, the output value of step S302 may comprise a numerical output value that includes one or more of the following: a vision data weight (e.g., α_vision), a position data weight (e.g., α_gps), a mixing ratio for the off-track error data (e.g., α_off), a mixing ratio for the heading error data (α_head), and a mixing ratio for the curvature data (α_curv). When the fourth technique, the fuzzy logic approach, is applied in step S300, supervisor module 10 may apply defuzzification or another conversion process in step S302 (or before it) to convert the linguistic values into their numerical output values.
In step S304, supervisor module 10 or mixer 14 applies to the error control signal, the curvature, or both any of the following determined output values: the vision data weight (α_vision), the position data weight (α_gps), the mixing ratio, the mixing ratio for the off-track error data (e.g., α_off), the mixing ratio for the heading error data (α_head), the mixing ratio for the curvature data (α_curv), and numerical values for any of the foregoing. Supervisor module 10 or mixer 14 applies the vision data weight and the position data weight (or numerical values therefor) to determine the relative contributions of the vision data and the position data to the error control signal over a time interval (e.g., the application time interval).
Fig. 4 is a flow chart of a method for determining the relative contributions of position data and vision data for vehicle guidance. The method of Fig. 4 may be applied to steps S108 and S110 of Fig. 2 to select and apply suitable position data and vision data weights. The method of Fig. 4 begins in step S400.
In step S400, supervisor module 10 or rule manager 12 identifies a relationship (e.g., a quality-mixing-ratio relationship or rule) based on input values (e.g., quality levels or linguistic values) associated with the vision quality data, the position quality data, and the curvature.
Step S400 may be carried out according to various techniques that may be applied alternately or cumulatively. Under a first technique, supervisor module 10 identifies the relationship based on a first quality level of the position quality data, a second quality level of the vision quality data, and a curvature value as input values. A quality level (e.g., the first quality level or the second quality level) may be a numerical quantity or measurement provided by position quality estimator 24, vision quality module 20, or both. For example, for the position quality, the measurement may comprise the signal strength, the bit error rate (BER), or the frame error rate (FER) of a Global Positioning System (GPS) signal, or components thereof.
A combination of the first quality level, the second quality level, and the curvature value may be uniquely associated with a corresponding relationship that applies to that combination. A database or data storage device 16 may contain input sets of first quality levels, second quality levels, and curvature values associated with output sets of position data weights and vision data weights. Alternatively, the database or data storage device 16 may contain input sets of first quality levels, second quality levels, and curvature values associated with mixing ratios for the error signal, the curvature, or both.
Under a second technique, supervisor module 10 applies a fuzzy logic approach, which uses a two-phase process. In the first phase, a first quality level of the position quality data (e.g., Q_gps), a second quality level of the vision quality data (e.g., Q_vision), and a curvature value (e.g., ρ) may be converted from numerical values (e.g., raw measurements) into linguistic values. A linguistic value, or linguistic input value, represents a quality classification or overall quality level of the vision quality data and the position quality data. For example, for the vision quality data (e.g., Q_vision) and the position quality data (e.g., Q_gps), the linguistic input values may be defined as "good", "medium", "poor", "high", "average", or "low". The linguistic value for a weight (e.g., α_gps,ρ or α_vision,ρ) or mixing ratio associated with the radius of curvature (e.g., ρ, ρ_vision, or ρ_gps) may be "small", "low", "large", or "high". In the second phase of the fuzzy logic approach, the input set of linguistic values for the vision quality data, the position quality data, and the curvature is compared with a reference list or the data mixing rules 18 to identify the corresponding relationship (e.g., quality-mixing-ratio relationship or rule) associated with that input set.
In an alternative embodiment, the linguistic values for the vision quality data and the position quality data may be defined in terms of numerical grades (e.g., grades 1 through 5, with 5 the highest), percentile grades, performance ratings (e.g., one star to N stars, where N is any integer greater than 1), or otherwise.
In step S402, based on the relationship identified in step S400, supervisor module 10 determines the output value (e.g., a numerical output value) associated with the position data weight, the vision data weight, or the curvature data weight for the error control signal and the curvature. If the first technique of step S400 is applied, the output value of step S402 may comprise a numerical output value that includes one or more of the following: a vision data weight (e.g., α_vision), a position data weight (e.g., α_gps), a mixing ratio for the off-track error data (e.g., α_off), a mixing ratio for the heading error data (α_head), a mixing ratio for the curvature data (α_curv), a position curvature data weight (α_gps,ρ), and a vision curvature data weight (α_vision,ρ). When the second technique, the fuzzy logic approach, is applied in step S400, supervisor module 10 may apply defuzzification or another conversion process in step S402 (or before it) to convert the linguistic values into their numerical output values.
In step S404, supervisor module 10 or mixer 14 applies to the error control signal and the curvature any of the following determined output values: the vision data weight (α_vision), the position data weight (α_gps), the mixing ratio, the mixing ratio for the off-track error data (e.g., α_off), the mixing ratio for the heading error data (α_head), the mixing ratio for the curvature data (α_curv), the position curvature data weight (α_gps,ρ), the vision curvature data weight (α_vision,ρ), and numerical values for any of the foregoing. Supervisor module 10 or mixer 14 applies the vision data weight and the position data weight (or numerical values therefor) to determine the relative contributions of the vision data and the position data to the error control signal over a time interval (e.g., the application time interval).
Fig. 5 is a flow chart of a method for determining a control signal (e.g., an aggregate error control signal) for a vehicle. The method of Fig. 5 may be applied to steps S108 and S110 of Fig. 2 to select suitable position data and vision data weights. Fig. 5 is similar to Fig. 3, except that Fig. 5 replaces step S304 with step S500. Like steps or procedures in Fig. 3 and Fig. 5 are indicated by like reference numbers.
In step S500, supervisor module 10 or a guidance module for the vehicle generates an error control signal for steering the vehicle. For example, supervisor module 10 generates the error control signal according to the following equation: y = α_vision · y_vision + α_gps · y_gps, where y is the aggregate error control signal, α_vision is the vision data weight, y_vision is the error control signal derived from the vision data, α_gps is the position data weight, and y_gps is the error control signal derived from the position data (e.g., GPS data). The error control signal derived from the vision data may be referred to as the vision error signal, and the error control signal derived from the position data may be referred to as the position error signal. It will be understood that y, α_vision, y_vision, α_gps, and y_gps may be expressed as matrices. For example, y (the aggregate error control signal), α_vision, α_gps, y_vision (the vision error signal), and y_gps (the position error signal) may be defined as follows:
y = [E_off, E_head]^T

where E_off is the aggregate of the off-track error data (e.g., E_off_gps and E_off_vision) obtained from position module 26 and vision module 22, and E_head is the aggregate of the heading error data (e.g., E_head_gps and E_head_vision) obtained from position module 26 and vision module 22.

α_vision = diag(α_off_vision, α_head_vision)

where α_vision is the vision data weight matrix, α_off_vision is the vision data weight for the off-track error data, and α_head_vision is the vision data weight for the heading error data.

y_vision = [E_off_vision, E_head_vision]^T

where E_off_vision is the off-track error estimated by vision module 22, and E_head_vision is the heading error estimated by vision module 22.

α_gps = diag(α_off_gps, α_head_gps)

where α_gps is the position data weight matrix, α_off_gps is the position data weight for the off-track error data, and α_head_gps is the position data weight for the heading error data.

y_gps = [E_off_gps, E_head_gps]^T

where E_off_gps is the off-track error estimated by position module 26 (e.g., location-determining receiver 28), and E_head_gps is the heading error estimated by position module 26.
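Under these definitions, the aggregate error control signal of step S500 is a weighted matrix sum. The sketch below assumes diagonal weight matrices with complementary per-component weights, and uses illustrative numerical error values; it is not a calibrated implementation.

```python
# Sketch of y = alpha_vision . y_vision + alpha_gps . y_gps with 2x1 error
# vectors [off-track error, heading error]. All numbers are illustrative.
import numpy as np

y_vision = np.array([0.20, 0.05])  # vision error signal [E_off_vision, E_head_vision]
y_gps    = np.array([0.10, 0.02])  # position error signal [E_off_gps, E_head_gps]

# Diagonal weight matrices; each pair of weights sums to 1 per component.
alpha_vision = np.diag([0.7, 0.6])
alpha_gps    = np.diag([0.3, 0.4])

# Aggregate error control signal y = [E_off, E_head]
y = alpha_vision @ y_vision + alpha_gps @ y_gps
print(y)  # approximately [0.17 0.038]
```

The diagonal form keeps each error component (off-track, heading) weighted independently, which is what the separate α_off_* and α_head_* weights above express.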
Fig. 6 is a flow chart of a method for determining a control signal for a vehicle. The method of Fig. 6 may be applied to steps S108 and S110 of Fig. 2 to select and apply suitable position data and vision data weights. Fig. 6 is similar to Fig. 4, except that Fig. 6 replaces step S404 with step S502. Like steps in Fig. 4 and Fig. 6 are indicated by like reference numbers.
In step S502, supervisor module 10 or a guidance module for the vehicle generates an error control signal and a curvature signal for steering the vehicle. For example, supervisor module 10 generates the error control signal according to the following equation:
y = α_vision · y_vision + α_gps · y_gps, where y is the aggregate error control signal, α_vision is the vision data weight, y_vision is the error control signal derived from the vision data, α_gps is the position data weight, and y_gps is the error control signal derived from the position data (e.g., GPS data).
Further, supervisor module 10 generates the curvature signal for steering the vehicle according to the following equation:
ρ = α_vision,ρ · ρ_vision + α_gps,ρ · ρ_gps, where ρ is the curvature signal, α_vision,ρ is the vision data weight for curvature (the vision curvature data weight), ρ_vision is the curvature derived from the vision data, α_gps,ρ is the position data weight for curvature (the position curvature data weight), and ρ_gps is the curvature derived from the position data (e.g., GPS data). Further, α_vision,ρ + α_gps,ρ = 1.
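The curvature equation and its complementary-weight constraint can be expressed compactly. The numerical curvatures and the weight in the sketch below are illustrative assumptions only.

```python
# Sketch of rho = alpha_vision_rho * rho_vision + alpha_gps_rho * rho_gps,
# with alpha_vision_rho + alpha_gps_rho = 1. Values are illustrative.

def blend_curvature(rho_vision: float, rho_gps: float,
                    alpha_vision_rho: float) -> float:
    """Blend vision- and GPS-derived curvatures with complementary weights."""
    assert 0.0 <= alpha_vision_rho <= 1.0
    alpha_gps_rho = 1.0 - alpha_vision_rho  # enforces the sum-to-one constraint
    return alpha_vision_rho * rho_vision + alpha_gps_rho * rho_gps

print(round(blend_curvature(0.02, 0.04, 0.25), 6))  # 0.035
```

Because the two weights sum to one, the blended curvature always lies between the vision-derived and GPS-derived values.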
The error control signal derived from the vision data may be referred to as the vision error signal, and the error control signal derived from the position data may be referred to as the position error signal. It will be understood that y, α_vision, y_vision, α_gps, y_gps, α_vision,ρ, ρ_vision, α_gps,ρ, and ρ_gps may be expressed as matrices. For example, y (the aggregate error control signal), α_vision (the vision data weight), α_gps (the position data weight), y_vision (the vision error signal), and y_gps (the position error signal) may be defined as follows:
y = [E_off, E_head, ρ]^T

where E_off is the aggregate of the off-track error data (e.g., E_off_gps and E_off_vision) obtained from position module 26 and vision module 22, E_head is the aggregate of the heading error data (e.g., E_head_gps and E_head_vision) obtained from position module 26 and vision module 22, and ρ is the aggregate radius of curvature.

α_vision = diag(α_off_vision, α_head_vision, α_curv_vision)

where α_vision is the aggregate vision data weight matrix, α_off_vision is the vision data weight for the off-track error data, α_head_vision is the vision data weight for the heading error data, and α_curv_vision is the vision data weight for the curvature error data. Typically, α_curv_vision = 0.

y_vision = [E_off_vision, E_head_vision, ρ_vision]^T

where E_off_vision is the off-track error estimated by vision module 22, E_head_vision is the heading error estimated by vision module 22, and ρ_vision is the radius of curvature associated with vision module 22. If the vision module does not provide a radius of curvature, ρ_vision may be set to 0.

α_gps = diag(α_off_gps, α_head_gps, α_curv_gps)

where α_gps is the aggregate position data weight matrix, α_off_gps is the position data weight for the off-track error data, α_head_gps is the position data weight for the heading error data, and α_curv_gps is the position data weight for the curvature error data. Typically, α_curv_gps = 0.

y_gps = [E_off_gps, E_head_gps, ρ_gps]^T

where E_off_gps is the off-track error estimated by position module 26 (e.g., location-determining receiver 28), E_head_gps is the heading error estimated by position module 26, and ρ_gps is the radius of curvature associated with position module 26.
Fig. 7 is a flow chart of the fuzzy logic aspects of a method and system for guiding a vehicle with vision-aided guidance. The flow chart of Fig. 7 begins in step S200.
In step S200, vision quality estimator 20, position quality estimator 24, or both convert crisp input data into linguistic input data. The crisp input data may be received from at least one of the following components: vision module 22 and position module 26. Each of vision quality estimator 20 and position quality estimator 24 may comprise a converter or classifier for converting or classifying ranges of numerical data into linguistic data. The linguistic input data may comprise position quality data (e.g., Q_gps) and vision quality data (e.g., Q_vision). In one example, the position quality data (Q_gps) has the following states or linguistic input data: good, medium, and poor; the vision quality data (Q_vision) has the following states or linguistic input data: good, medium, and poor; and the curvature (ρ_gps) is small or large, although each of the foregoing quality indicators may have other input sets of linguistic input data that define one or more levels, ranges, or regions of performance or quality. Step S200 may be referred to as fuzzification.
In step S202, the data processor or supervisor module 10 performs inference to derive output linguistic data from the input linguistic data of step S200. For example, the data mixing rules 18 in data storage device 16 may comprise input linguistic data associated with corresponding output linguistic data. The quality-mixing-ratio relationships between the input linguistic values and the output linguistic values are based on models of the performance of vision module 22 and location-determining receiver 28. In one example, the output linguistic data may comprise states associated with α_off, α_head, and α_curv; the states of α_off, α_head, and α_curv may be, for example, "small", "medium", or "large". In another example, the output linguistic data may comprise states for any of the following: α_gps,ρ, α_vision,ρ, α_off_vision, α_head_vision, α_curv_vision, α_off_gps, α_head_gps, and α_curv_gps; the states of these weights may likewise be, for example, "small", "medium", or "large".
In step S204, a converter converts the output linguistic data into crisp output data. For example, the crisp output data may be sent to vehicle controller 25 or a control system (e.g., a steering controller or steering system unit (SSU)). Step S204 may be referred to as defuzzification. The crisp data may be expressed as the aggregate error control signal (y) or a derivative thereof, such as a compensated control signal.
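Steps S200, S202, and S204 can be strung together as a minimal fuzzification-inference-defuzzification pipeline. The rule table, the thresholds, and the crisp value assigned to each linguistic output are all illustrative assumptions; a deployed system would derive them from the performance models mentioned above.

```python
# Hedged end-to-end sketch of S200 (fuzzify) -> S202 (infer) -> S204 (defuzzify)
# for a single output, an off-track mixing ratio. All values are illustrative.

def fuzzify(q: float) -> str:  # S200: crisp 0..1 quality -> linguistic value
    return "poor" if q < 0.33 else "medium" if q < 0.66 else "good"

# S202: inference rules mapping (Q_vision, Q_gps) to a linguistic alpha_off
RULES = {
    ("good",   "poor"):   "large",
    ("poor",   "good"):   "small",
    ("good",   "good"):   "medium",
    ("medium", "medium"): "medium",
}

# S204: crisp values assigned to each linguistic output (assumed)
DEFUZZIFY = {"small": 0.1, "medium": 0.5, "large": 0.9}

def mixing_ratio(q_vision: float, q_gps: float) -> float:
    linguistic = RULES[(fuzzify(q_vision), fuzzify(q_gps))]
    return DEFUZZIFY[linguistic]

print(mixing_ratio(0.9, 0.2))  # 0.9 (good vision, poor GPS -> large alpha_off)
```

With good vision quality and poor position quality, the pipeline leans the error control signal toward the vision data, mirroring the intent of the rules described above.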
Fig. 8A is a chart that may be applied in step S202, which may be referred to as fuzzy inference. Further, Fig. 8A may be applied to steps S300 and S302 of Fig. 5, or to steps S400 and S402 of Fig. 6. The chart of Fig. 8A comprises a series of rules or relationships.
Fig. 8A pertains to path plans that are generally linear or that comprise substantially straight rows. For example, the relationships of Fig. 8A may hold where ρ_gps is small, or lies within a range indicating that the path plan or the actual path (or a segment thereof) is generally linear or straight. The vision quality (Q_vision) appears in the topmost row, and the position quality (Q_gps) (e.g., GPS quality) appears in the leftmost column. The vision quality (Q_vision) is associated with the input variables, input sets, or input linguistic data appearing in the row immediately below the topmost row. As illustrated in Fig. 8A, the input linguistic data for the vision quality comprise "good", "medium", and "poor", although other input variables or input linguistic data fall within the scope of the invention. The position quality (Q_gps) is associated with the input variables, input sets, or input linguistic data in the column immediately to the right of the leftmost column. As illustrated in Fig. 8A, the input linguistic data for the position quality comprise "good", "medium", and "poor", although other input variables or input linguistic data fall within the scope of the invention.
In Fig. 8A, a matrix (e.g., a three-by-three matrix) defines various combinations or permutations (e.g., nine possible permutations here) of output variables, output sets, or output linguistic data. Each combination of output variables corresponds to a respective pair of vision quality data and position quality data. The relationships or combinations of input variables and corresponding output variables may be defined in the chart of Fig. 8A, a lookup table, a set of rules, a database, or a data file. Where the relationships in the chart of Fig. 8A are expressed as rules, each rule may be expressed as a conditional statement.
Each relationship of Fig. 8A comprises the following: (1) input quality variables for the vision quality (e.g., Q_vision), the position quality (e.g., Q_gps), and the curvature estimation quality (e.g., ρ_gps), with associated input linguistic data (e.g., good, poor, medium, large, small); (2) output variables for the weight-coefficient mixing ratios (e.g., α_off, α_head, and α_curv), with associated output linguistic data (e.g., small, medium, large); and (3) a correlation, conditional relationship, or other logical relationship defined between the input quality variables and the output variables, or between the corresponding input linguistic data and output linguistic data.
For each input set of input linguistic data in Fig. 8A, there is a corresponding output set of output linguistic data. The output linguistic data may be associated with data weight coefficients or mixing ratios. In one example, the data weight coefficients or mixing ratios comprise α_off, α_head, and α_curv. The values of the input set determine the corresponding values of the output set. For example, if the vision quality (e.g., Q_vision) is "good" and the position quality (e.g., Q_gps) is "poor", then α_off equals 1, α_head is "large", and α_curv is "large".
The relationships between the input sets and the output sets may be determined empirically, by field testing, or analytically from models or mathematical derivation. The relationships between the input linguistic data and the output linguistic data appearing in Fig. 8A, and the selection and description of the input and output linguistic data, are merely illustrative; other relationships, selections, and descriptions fall within the scope of the invention.
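One way to encode a three-by-three rule matrix of this kind (for straight segments, where ρ_gps is small) is as a dictionary keyed by the input set. Only the ("good", "poor") cell is stated in the text above; the other entries shown are illustrative placeholders, not the actual contents of Fig. 8A.

```python
# Hypothetical encoding of part of a Fig. 8A-style rule matrix.
# Key:   (Q_vision, Q_gps) linguistic input set.
# Value: (alpha_off, alpha_head, alpha_curv) output set; outputs may be a
#        crisp number or a linguistic value awaiting defuzzification.
FIG_8A_RULES = {
    ("good",   "poor"):   (1.0, "large", "large"),            # cell stated in the text
    ("good",   "good"):   ("medium", "medium", "medium"),     # placeholder
    ("poor",   "good"):   ("small", "small", "small"),        # placeholder
    ("medium", "medium"): ("medium", "medium", "medium"),     # placeholder
}

alpha_off, alpha_head, alpha_curv = FIG_8A_RULES[("good", "poor")]
print(alpha_off, alpha_head, alpha_curv)  # 1.0 large large
```

Each rule is equivalently a conditional statement, e.g. "if Q_vision is good and Q_gps is poor, then α_off is 1, α_head is large, and α_curv is large."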
Fig. 8B is a chart that may be applied in step S202, which may be referred to as fuzzy inference. Further, Fig. 8B may be applied to steps S300 and S302 of Fig. 5, or to steps S400 and S402 of Fig. 6. The chart of Fig. 8B comprises a series of rules or relationships.
Fig. 8B pertains to path plans that are generally curved, or to curved segments of a path for a path plan. For example, the relationships of Fig. 8B may hold where ρ_gps is large, or indicates that the path plan or the actual path (or a segment thereof) is generally curved or substantially non-linear. The vision quality (Q_vision) appears in the topmost row, and the position quality (Q_gps) (e.g., GPS quality) appears in the leftmost column. The vision quality (Q_vision) is associated with the input variables, input sets, or input linguistic data appearing in the row immediately below the topmost row. As illustrated in Fig. 8B, the input linguistic data for the vision quality comprise "good", "medium", and "poor", although other input variables or input linguistic data fall within the scope of the invention. The position quality (Q_gps) is associated with the input variables, input sets, or input linguistic data in the column immediately to the right of the leftmost column. As illustrated in Fig. 8B, the input linguistic data for the position quality comprise "good", "medium", and "poor", although other input variables or input linguistic data fall within the scope of the invention.
In Fig. 8B, a matrix (e.g., a three-by-three matrix) defines various combinations or permutations (e.g., nine possible permutations here) of output variables, output sets, or output linguistic data. Each combination of output variables corresponds to a respective pair of vision quality data and position quality data. The relationships or combinations of input variables and corresponding output variables may be defined in the chart of Fig. 8B, a lookup table, a set of rules, a database, or a data file. Where the relationships in the chart of Fig. 8B are expressed as rules, each rule may be expressed as a conditional statement.
Each relationship of Fig. 8B comprises the following: (1) input quality variables for the vision quality (e.g., Q_vision), the position quality (e.g., Q_gps), and the curvature estimation quality (e.g., ρ_gps), with associated input linguistic data (e.g., good, poor, medium, large, small); (2) output variables for the weight-coefficient mixing ratios (e.g., α_off, α_head, and α_curv), with associated output linguistic data (e.g., small, medium, large); and (3) a correlation, conditional relationship, or other logical relationship defined between the input quality variables and the output variables, or between the corresponding input linguistic data and output linguistic data.
For each input set of input linguistic data in Fig. 8B, there is a corresponding output set of output linguistic data. The output linguistic data may be associated with data weight coefficients or mixing ratios. In one example, the data weight coefficients or mixing ratios comprise α_off, α_head, and α_curv. The values of the input set determine the corresponding values of the output set. For example, if the vision quality (e.g., Q_vision) is "good" and the position quality (e.g., Q_gps) is "poor", then α_off is "large", α_head is "medium", and α_curv equals zero.
The relationships between the input sets and the output sets may be determined empirically, by field testing, or analytically from models or mathematical derivation. The relationships between the input linguistic data and the output linguistic data appearing in Fig. 8B, and the selection and description of the input and output linguistic data, are merely illustrative; other relationships, selections, and descriptions fall within the scope of the invention.
Based on the output linguistic data of Fig. 8A, Fig. 8B, or both, the supervisor module 10 for the vehicle generates an error control signal for steering the vehicle according to the following equation: y = α·y_vision + (1 − α)·y_gps, where y is the aggregate error control signal, α is the mixing ratio, y_vision is the vision error signal, and y_gps is the location-data error signal. It is understood that y, α, y_vision, and y_gps may be expressed as matrices. This equation may be derived from the earlier equation presented herein (y = α_vision·y_vision + α_gps·y_gps) by substituting α_vision = α and α_gps = 1 − α. For example, y (the aggregate error control signal), α (the aggregate mixing ratio), y_vision (the vision error signal), and y_gps (the location error signal) may be defined as follows:
y = [E_off, E_head, ρ]^T
where E_off is the aggregate off-track error that combines the off-track error data obtained from the location module 26 and the vision module 22 (e.g., E_off_gps and E_off_vision), E_head is the aggregate heading error that combines the heading error data obtained from the location module 26 and the vision module 22 (e.g., E_head_gps and E_head_vision), and ρ is the curvature error data.
α = diag(α_off, α_head, α_curv)
where α is the aggregate mixing ratio or mixing-ratio matrix, α_off is the mixing ratio for the off-track error data, α_head is the mixing ratio for the heading error data, and α_curv is the mixing ratio for the curvature error data.
y_gps = [E_off_gps, E_head_gps, ρ_gps]^T
where E_off_gps is the off-track error estimated by the location module 26 (e.g., the location-determining receiver 28), E_head_gps is the heading error estimated by the location module 26, and ρ_gps is the curvature estimate error associated with the location module 26.
y_vision = [E_off_vision, E_head_vision, 0]^T
where E_off_vision is the off-track error estimated by the vision module 22 and E_head_vision is the heading error estimated by the vision module 22. In an alternative example,
y_vision = [E_off_vision, E_head_vision, ρ_vision]^T
where E_off_vision is the off-track error estimated by the vision module 22, E_head_vision is the heading error estimated by the vision module 22, and ρ_vision is the curvature estimate associated with the vision module 22.
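The mixing equation y = α·y_vision + (1 − α)·y_gps can be illustrated component-wise over the (off-track, heading, curvature) error triples. The sketch below is only an illustration: the numeric error values and mixing ratios are invented, not taken from the patent.

```python
def fuse_error_signals(y_vision, y_gps, alpha):
    """Component-wise mix: y = alpha*y_vision + (1 - alpha)*y_gps.

    y_vision, y_gps, and alpha are 3-element sequences ordered as
    (off-track error, heading error, curvature error)."""
    return [a * v + (1.0 - a) * g for a, v, g in zip(alpha, y_vision, y_gps)]

# Illustrative values: trust vision heavily for the off-track error,
# trust both equally for heading, and rely on GPS alone for curvature.
y = fuse_error_signals(
    y_vision=[0.10, 0.02, 0.0],   # E_off_vision, E_head_vision, 0
    y_gps=[0.30, 0.04, 0.05],     # E_off_gps, E_head_gps, rho_gps
    alpha=[0.8, 0.5, 0.0],        # alpha_off, alpha_head, alpha_curv
)
```

Using lists with an element-wise mix is equivalent to the diagonal-matrix formulation of α given above.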
Fig. 9 shows fuzzy membership functions for an input variable. The horizontal axis shows the value of the input variable, and the vertical axis shows the membership value. The input variable may comprise location quality data (e.g., Q_gps) or vision quality data (e.g., Q_vision). The input linguistic data "poor", "medium", and "good" apply to the input variable. The input variable is normalized to the range from 0 to 1. For the input variable, the linguistic value "medium" spans the range from A_1 to A_3; near the A_1 and A_3 boundaries of that range, the input variable is less fully "medium". If the input variable is less than A_1, it is entirely "poor". If the input variable is greater than A_3, it is entirely "good". Between A_1 and A_2, the input variable in fact has varying degrees of membership and may be linguistically characterized as both "poor" and "medium".
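A piecewise-linear membership function of the kind shown in Fig. 9 might be sketched as follows. The breakpoints A_1, A_2, A_3 are not given numeric values in the text, so the numbers used here (0.3, 0.5, 0.7) are placeholder assumptions, as is the overlap between adjacent labels.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises from a to b, is 1 from b to c, falls to d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Placeholder breakpoints (A1 = 0.3, A2 = 0.5, A3 = 0.7) for a quality
# input normalized to [0, 1]; "poor", "medium", and "good" overlap so
# that an input between A1 and A2 belongs partly to "poor" and partly
# to "medium", as in Fig. 9.
def memberships(q):
    return {
        "poor":   trapezoid(q, -1.0, 0.0, 0.3, 0.5),
        "medium": trapezoid(q, 0.3, 0.5, 0.5, 0.7),
        "good":   trapezoid(q, 0.5, 0.7, 1.0, 2.0),
    }
```

With these placeholder breakpoints, an input of 0.4 is half "poor" and half "medium", matching the overlapping region described for Fig. 9.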
Figure 10 shows an illustrative fuzzy membership function for the curvature estimate provided by the radius-of-curvature calculator 30. For the curvature estimate of Figure 10 (e.g., ρ_gps), the input linguistic data are "small" and "large". The horizontal axis shows the value of the input variable, and the vertical axis shows the membership value. The input variable may be the curvature estimate (e.g., ρ_gps). The input variable value is normalized to the range from 0 to 1.
Figure 11 shows fuzzy membership functions for the output variables. An output variable may be a mixing ratio or quality weight that determines the relative reliance on the location data versus the vision data. In one example, the output variables comprise α_off, α_head, or α_curv. A crisp mixing ratio (e.g., C1, C2, C3, C4, or an intermediate level near them) may be determined from the known output linguistic values of the mixing ratios α_off, α_head, and α_curv. Each output linguistic value has a corresponding crisp mixing ratio defined by a C value, or a range of C values, on the horizontal axis. Although the fuzzy membership functions of Fig. 9 and Fig. 11 are composed of linear segments to facilitate ready comparison of membership values, in alternative embodiments a fuzzy membership function may follow a curve or polynomial equation, such as the curved segments of the membership function of Figure 10.
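Converting the output linguistic values of Fig. 11 into a crisp mixing ratio might be sketched as a weighted-average (centroid-style) defuzzification over the C values. This is an assumption about the defuzzification method, and the crisp centers used below are placeholders, since the figure's axis values are not given in the text.

```python
# Placeholder crisp centers (C values) for the output linguistic
# values of Fig. 11; the real values would come from the figure's axis.
CRISP_CENTER = {"small": 0.2, "medium": 0.5, "large": 0.8}

def defuzzify(fired):
    """Weighted average of crisp centers, weighted by rule firing strength.

    `fired` maps an output linguistic value to its membership degree."""
    num = sum(CRISP_CENTER[label] * w for label, w in fired.items())
    den = sum(fired.values())
    return num / den if den > 0.0 else 0.0

# If "large" fires at degree 0.75 and "medium" at 0.25, the crisp
# mixing ratio lands between their centers, closer to "large".
alpha_off = defuzzify({"large": 0.75, "medium": 0.25})
```

The same function would be applied once per output variable (α_off, α_head, α_curv) to obtain the three crisp mixing ratios.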
Figure 12 is a chart illustrating the static positioning error of location data, such as a differential GPS signal. The vertical axis shows the error (e.g., in meters), and the horizontal axis shows time (e.g., in seconds).
Figure 13 is a chart illustrating the positioning error of location data (e.g., a differential GPS signal) after "tuning" at a desired update frequency or rate. The vertical axis shows the error (e.g., in meters), and the horizontal axis shows time (e.g., in seconds). The initial error of Figure 12, without "tuning", is shown as circular points; the "tuned" error is shown as triangular points. The tuning is achieved by using the vision data to adjust the location data at regular time intervals (e.g., at the 5-second interval, or 0.2 Hz, illustrated in Figure 13).
Figure 14 is a flow chart of a method for determining the operating mode of a vehicle guidance system. The method facilitates determining whether the vehicle is guided by location data only (e.g., GPS data only), by vision data only, or by neither vision data nor location data. The method of Figure 14 begins in step S400.
In step S400, the location quality estimator 24 estimates location quality data for the location data output by the location module 26 over a given time interval.
In step S402, the supervisor module 10 determines whether the location quality level of the location quality data exceeds a threshold quality (e.g., an 80% reliability or confidence level). If the location quality level exceeds the threshold quality, the method continues with step S401. However, if the location quality level does not exceed the threshold quality level, the method continues with step S404.
In step S401 and step S404, the vision quality estimator 20 estimates vision quality data for the vision data output by the vision module 22 over a defined time interval. The defined time interval may be generally coextensive with the given time interval used by the location quality estimator 24.
In step S408, the supervisor module 10 determines whether the vision quality level of the vision quality data exceeds a threshold quality (e.g., 80%). If the vision quality level exceeds the threshold quality, the method continues with step S410. However, if the vision quality level does not exceed the threshold quality level, the method continues with step S412.
In step S410, the supervisor module 10 determines whether the vision offset is less than a maximum allowable offset (e.g., 10 inches). The maximum allowable offset may be set by user data input, empirical study, testing, or by reference to environmental factors (e.g., crop selection, planting date, and guidance date of the vehicle). If the vision offset is greater than the maximum allowable offset, the method continues with step S414. However, if the vision offset is less than or equal to the maximum allowable offset, the method continues with step S412.
In step S414, the supervisor module 10 determines whether the GPS correction is less than a maximum allowable correction. The maximum allowable correction may be based on a maximum difference (e.g., over 30 seconds) between the detected vehicle position and heading (e.g., or detected coordinates) and the desired vehicle position and heading (e.g., or desired coordinates). If the GPS correction is less than the maximum allowable correction, then in step S418 the supervisor module 10 or the vehicle controller 25 applies the location data (e.g., GPS data) to guide the vehicle during a trailing time interval associated with the given time interval or the defined time interval. However, if the GPS correction is not less than the maximum allowable correction, then in step S420 the supervisor module 10 or the vehicle controller 25 applies only the vision data to guide the vehicle during the trailing time interval associated with the given time interval or the defined time interval.
Step S412 may follow step S408 or step S410, as described above. In step S412, the supervisor module 10 determines whether the GPS correction is less than the maximum allowable correction, defined in the same manner as above. If the GPS correction is less than the maximum allowable correction, then in step S422 the supervisor module 10 or the vehicle controller 25 applies the location data (e.g., GPS data) to guide the vehicle during the trailing time interval associated with the given time interval or the defined time interval. However, if the GPS correction equals or exceeds the maximum allowable correction, then in step S424 the supervisor module 10 or the vehicle controller 25 applies no guidance data from the vision module 22 or the location module 26. For example, the vehicle may revert to a manned operating mode, an alternative guidance system may be activated or applied, or the vehicle may be stopped until the vision module 22, the location module 26, or both provide more reliable output for guiding the vehicle during a subsequent time interval.
If step S404 is executed, the method may continue with step S406 after step S404. In step S406, the supervisor module 10 determines whether the vision quality level of the vision quality data exceeds a threshold quality (e.g., 80%). If the vision quality level exceeds the threshold quality, the method continues with step S416. However, if the vision quality level does not exceed the threshold quality level, the method continues with step S424, in which no guidance is applied, as previously described.
In step S416, the supervisor module 10 determines whether the vision offset is less than a maximum allowable offset (e.g., 10 inches). The maximum allowable offset may be set by user data input, empirical study, testing, or by reference to environmental factors (e.g., crop selection, planting date, and guidance date of the vehicle). If the vision offset is greater than the maximum allowable offset, the method continues with step S424, in which no guidance is applied. However, if the vision offset is less than or equal to the maximum allowable offset, the method continues with step S426.
In step S426, the supervisor module 10 or the vehicle controller 25 applies the location data or GPS guidance data to guide the path of the vehicle.
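The mode-selection flow of Fig. 14 (steps S400 through S426) can be condensed into a single decision function, sketched below. The quality threshold (80%) and maximum offset (10 inches) mirror the examples in the text; the function name, return labels, and default value for the maximum correction are invented for illustration.

```python
def select_mode(gps_quality, vision_quality, vision_offset, gps_correction,
                quality_threshold=0.80, max_offset=10.0, max_correction=30.0):
    """Condensed decision logic of Fig. 14. Returns which data source
    guides the vehicle: "gps", "vision", or "none"."""
    if gps_quality > quality_threshold:
        # S401/S408 branch: GPS quality is acceptable.
        if vision_quality > quality_threshold and vision_offset > max_offset:
            # S414: vision disagrees too much; the GPS correction decides
            # between GPS guidance (S418) and vision-only guidance (S420).
            return "gps" if gps_correction < max_correction else "vision"
        # S412: poor vision, or a small (agreeing) vision offset; use GPS
        # (S422) if the correction is acceptable, otherwise no guidance (S424).
        return "gps" if gps_correction < max_correction else "none"
    # S404/S406 branch: GPS quality is poor.
    if vision_quality > quality_threshold and vision_offset <= max_offset:
        return "gps"   # S426: applies the location/GPS guidance data per the text
    return "none"      # S424: no guidance applied
```

As a usage example, a high-quality GPS fix with a large but trustworthy vision disagreement and an oversized GPS correction would fall through to vision-only guidance (S420).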
Having described the preferred embodiments, it will become apparent that various modifications can be made without departing from the scope of the invention as defined in the accompanying claims.

Claims (18)

1. the method for a guided vehicle, this method comprises:
Determine receiver based on the position that is associated with vehicle, be vehicle assembling position data;
Based on the vision module that is associated with vehicle, for vehicle is collected vision data;
During the evaluation time window, be position data estimated position qualitative data;
During the evaluation time window, for vision data is estimated vision quality data;
Based on qualitative data, chosen position data weighting and vision data weight at least one of them;
Wherein said selection further comprises:
Based on each input value that is associated with vision quality data and position quality data, identification mass mixing ratio relation;
Based on the mass mixing ratio of being discerned, determine the output valve that is associated with position data weight and vision data weight;
Use vision data weight and position data weight, so that definite vision data and position data are at the Relative Contribution of the vehicle guide system in the time interval.
2. The method according to claim 1, wherein estimating the location quality data comprises detecting the off-track error, heading error, and curvature associated with the location-determining receiver.
3. The method according to claim 1, wherein estimating the vision quality data comprises detecting the off-track error, heading error, and curvature associated with the vision module.
4. The method according to claim 1, wherein the vision data weight and the location data weight satisfy the following equation: α_vision + α_gps = 1, where α_vision is the vision data weight and α_gps is the location data weight.
5. The method according to claim 1, wherein said selecting further comprises:
identifying a quality-mixing-ratio relationship based on respective input values associated with the vision quality data, the location quality data, and the curvature data;
determining output values associated with the location data weight, the vision data weight, and the curvature data weight based on the identified quality mixing ratio;
applying the vision data weight, the location data weight, and the curvature data weight to determine the relative contributions of the vision data and the location data to the vehicle guidance system over a time interval.
6. The method according to claim 1, wherein said selecting further comprises:
identifying a quality-mixing-ratio relationship based on respective input values associated with the vision quality data and the location quality data;
determining output values associated with the location data weight and the vision data weight based on the identified quality mixing ratio;
generating an error control signal for steering the vehicle according to the following equation: y = α_vision·y_vision + α_gps·y_gps, where y is the error control signal, α_vision is the vision data weight, y_vision is the vision error signal, α_gps is the location data weight, and y_gps is the location error signal.
7. The method according to claim 6, wherein y, y_vision, y_gps, α_gps, and α_vision are multi-dimensional vectors or matrices expressed as follows:
y = [E_off, E_head]^T
where E_off is the aggregate off-track error combining the error data obtained from the vision module and the location module, and E_head is the aggregate heading error combining the error data obtained from the vision module and the location module;
α_vision = diag(α_off_vision, α_head_vision)
where α_vision is the vision data weight matrix, α_off_vision is the vision data weight for the off-track error data, and α_head_vision is the vision data weight for the heading error data;
α_gps = diag(α_off_gps, α_head_gps)
where α_gps is the location data weight matrix, α_off_gps is the location data weight for the off-track error data, and α_head_gps is the location data weight for the heading error data;
y_gps = [E_off_gps, E_head_gps]^T
where E_off_gps is the off-track error estimated by the location module (e.g., the location-determining receiver), and E_head_gps is the heading error estimated by the location module;
y_vision = [E_off_vision, E_head_vision]^T
where E_off_vision is the off-track error estimated by the vision module, and E_head_vision is the heading error estimated by the vision module.
8. The method according to claim 1, wherein said selecting further comprises:
identifying a quality-mixing-ratio relationship based on respective input values associated with the vision quality data, the location quality data, and the curvature data;
determining output values associated with the location data weight, the vision data weight, and the curvature data weight based on the identified quality mixing ratio;
generating an error control signal for steering the vehicle according to the following equation: y = α_vision·y_vision + α_gps·y_gps, where y is the error control signal, α_vision is the vision data weight, y_vision is the vision error signal, α_gps is the location data weight, and y_gps is the location error signal.
9. The method according to claim 8, wherein y, y_vision, y_gps, α_vision, and α_gps are multi-dimensional vectors or matrices expressed as follows:
y = [E_off, E_head, ρ]^T
where E_off is the aggregate off-track error combining the error data obtained from the vision module and the location module, E_head is the aggregate heading error combining the error data obtained from the vision module and the location module, and ρ is the radius of curvature;
α_vision = diag(α_off_vision, α_head_vision, α_curv_vision)
where α_vision is the aggregate vision data weight matrix, α_off_vision is the vision data weight for the off-track error data, α_head_vision is the vision data weight for the heading error data, and α_curv_vision is the vision data weight for the curvature error data;
y_vision = [E_off_vision, E_head_vision, ρ_vision]^T
where E_off_vision is the off-track error estimated by the vision module, E_head_vision is the heading error estimated by the vision module, and ρ_vision is the curvature estimate associated with the vision module;
α_gps = diag(α_off_gps, α_head_gps, α_curv_gps)
where α_gps is the aggregate location data weight matrix, α_off_gps is the location data weight for the off-track error data, α_head_gps is the location data weight for the heading error data, and α_curv_gps is the location data weight for the curvature error data;
y_gps = [E_off_gps, E_head_gps, ρ_gps]^T
where E_off_gps is the off-track error estimated by the location module, E_head_gps is the heading error estimated by the location module, and ρ_gps is the radius of curvature associated with the location module.
10. A system for guiding a vehicle, the system comprising:
a location-determining receiver associated with the vehicle for collecting location data for the vehicle;
a vision module associated with the vehicle for collecting vision data for the vehicle;
a location quality estimator for estimating location quality data for the location data during an evaluation time window;
a vision quality estimator for estimating vision quality data for the vision data during the evaluation time window;
a supervisor module for selecting at least one of a location data weight and a vision data weight based on the quality data;
wherein said selecting further comprises:
identifying a quality-mixing-ratio relationship based on respective input values associated with the vision quality data, the location quality data, and the curvature data;
determining output values associated with the location data weight, the vision data weight, and the curvature data weight based on the identified quality mixing ratio; and
applying the vision data weight, the location data weight, and the curvature data weight to determine the relative contributions of the vision data and the location data to the vehicle guidance system over a time interval.
11. The system according to claim 10, wherein estimating the location quality data comprises detecting the off-track error, heading error, and curvature associated with the location-determining receiver.
12. The system according to claim 10, wherein estimating the vision quality data comprises detecting the off-track error, heading error, and curvature associated with the vision module.
13. The system according to claim 10, wherein the vision data weight and the location data weight satisfy the following equation: α_vision + α_gps = 1, where α_vision is the vision data weight and α_gps is the location data weight.
14. The system according to claim 10, wherein said selecting further comprises:
identifying a quality-mixing-ratio relationship based on respective input values associated with the vision quality data and the location quality data;
determining output values associated with the location data weight and the vision data weight based on the identified quality mixing ratio; and
applying the vision data weight and the location data weight to determine the relative contributions of the vision data and the location data to the vehicle guidance system over a time interval.
15. The system according to claim 10, wherein said selecting further comprises:
identifying a quality-mixing-ratio relationship based on respective input values associated with the vision quality data and the location quality data;
determining output values associated with the location data weight and the vision data weight based on the identified quality mixing ratio; and
generating an error control signal for steering the vehicle according to the following equation: y = α_vision·y_vision + α_gps·y_gps, where y is the error control signal, α_vision is the vision data weight, y_vision is the vision error signal, α_gps is the location data weight, and y_gps is the location error signal.
16. The system according to claim 15, wherein y, y_vision, y_gps, α_gps, and α_vision are multi-dimensional vectors or matrices expressed as follows:
y = [E_off, E_head]^T
where E_off is the aggregate off-track error combining the error data obtained from the vision module and the location module, and E_head is the aggregate heading error combining the error data obtained from the vision module and the location module;
α_vision = diag(α_off_vision, α_head_vision)
where α_vision is the vision data weight matrix, α_off_vision is the vision data weight for the off-track error data, and α_head_vision is the vision data weight for the heading error data;
α_gps = diag(α_off_gps, α_head_gps)
where α_gps is the location data weight matrix, α_off_gps is the location data weight for the off-track error data, and α_head_gps is the location data weight for the heading error data;
y_gps = [E_off_gps, E_head_gps]^T
where E_off_gps is the off-track error estimated by the location module (e.g., the location-determining receiver), and E_head_gps is the heading error estimated by the location module;
y_vision = [E_off_vision, E_head_vision]^T
where E_off_vision is the off-track error estimated by the vision module, and E_head_vision is the heading error estimated by the vision module.
17. The system according to claim 10, wherein said selecting further comprises:
identifying a quality-mixing-ratio relationship based on respective input values associated with the vision quality data, the location quality data, and the curvature quality data;
determining output values associated with the location data weight, the vision data weight, and the curvature data weight based on the identified quality mixing ratio; and
generating an error control signal for steering the vehicle according to the following equation: y = α_vision·y_vision + α_gps·y_gps, where y is the error control signal, α_vision is the vision data weight, y_vision is the vision error signal, α_gps is the location data weight, and y_gps is the location error signal.
18. The system according to claim 17, wherein y, y_vision, y_gps, α_vision, and α_gps are multi-dimensional vectors or matrices expressed as follows:
y = [E_off, E_head, ρ]^T
where E_off is the aggregate off-track error combining the error data obtained from the vision module and the location module, E_head is the aggregate heading error combining the error data obtained from the vision module and the location module, and ρ is the radius of curvature based on the vision module and the location module;
α_vision = diag(α_off_vision, α_head_vision, α_curv_vision)
where α_vision is the aggregate vision data weight matrix, α_off_vision is the vision data weight for the off-track error data, α_head_vision is the vision data weight for the heading error data, and α_curv_vision is the vision data weight for the curvature error data;
y_vision = [E_off_vision, E_head_vision, ρ_v]^T
where E_off_vision is the off-track error estimated by the vision module, E_head_vision is the heading error estimated by the vision module, and ρ_v is the radius of curvature associated with the vision module;
α_gps = diag(α_off_gps, α_head_gps, α_curv_gps)
where α_gps is the aggregate location data weight matrix, α_off_gps is the location data weight for the off-track error data, α_head_gps is the location data weight for the heading error data, and α_curv_gps is the location data weight for the curvature error data;
y_gps = [E_off_gps, E_head_gps, ρ_gps]^T
where E_off_gps is the off-track error estimated by the location module, E_head_gps is the heading error estimated by the location module, and ρ_gps is the radius of curvature associated with the location module.
CN2005800459562A 2005-01-04 2005-12-16 System and method for guiding a vehicle Active CN101147151B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US64124005P 2005-01-04 2005-01-04
US60/641,240 2005-01-04
US11/106,782 2005-04-15
US11/106,782 US7610123B2 (en) 2005-01-04 2005-04-15 Vision-aided system and method for guiding a vehicle
PCT/US2005/045799 WO2006073759A2 (en) 2005-01-04 2005-12-16 Vision-aided system and method for guiding a vehicle

Publications (2)

Publication Number Publication Date
CN101147151A CN101147151A (en) 2008-03-19
CN101147151B true CN101147151B (en) 2010-06-09

Family

ID=39012110

Family Applications (4)

Application Number Title Priority Date Filing Date
CN200580045966A Active CN100580690C (en) 2005-01-04 2005-12-15 Vision-aided system and method for guiding vehicle
CN200580045916A Active CN100580689C (en) 2005-01-04 2005-12-15 Vision-based system and method for adjusting and guiding vehicle
CN2005800459581A Active CN101292244B (en) 2005-01-04 2005-12-16 Vision-aided system and method for guiding a vehicle
CN2005800459562A Active CN101147151B (en) 2005-01-04 2005-12-16 System and method for guiding a vehicle

Family Applications Before (3)

Application Number Title Priority Date Filing Date
CN200580045966A Active CN100580690C (en) 2005-01-04 2005-12-15 Vision-aided system and method for guiding vehicle
CN200580045916A Active CN100580689C (en) 2005-01-04 2005-12-15 Vision-based system and method for adjusting and guiding vehicle
CN2005800459581A Active CN101292244B (en) 2005-01-04 2005-12-16 Vision-aided system and method for guiding a vehicle

Country Status (1)

Country Link
CN (4) CN100580690C (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8447519B2 (en) * 2010-11-10 2013-05-21 GM Global Technology Operations LLC Method of augmenting GPS or GPS/sensor vehicle positioning using additional in-vehicle vision sensors
CN102880798A (en) * 2012-09-20 2013-01-16 浪潮电子信息产业股份有限公司 Variable neighborhood search algorithm for solving multi depot vehicle routing problem with time windows
JP7080101B2 (en) * 2018-05-14 2022-06-03 株式会社クボタ Work vehicle
CN108957511A (en) * 2018-05-21 2018-12-07 永康威力科技股份有限公司 A kind of automobile navigation steering control system and the modification method that navigates
CN109189061B (en) * 2018-08-10 2021-08-24 合肥哈工库讯智能科技有限公司 AGV trolley running state regulation and control method with time error analysis function
TWI725611B (en) * 2019-11-12 2021-04-21 亞慶股份有限公司 Vehicle navigation switching device for golf course self-driving cars
CN111077549B (en) * 2019-12-31 2022-06-28 深圳一清创新科技有限公司 Position data correction method, apparatus and computer readable storage medium
CN111197994B (en) * 2019-12-31 2021-12-07 深圳一清创新科技有限公司 Position data correction method, position data correction device, computer device, and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
US5341142A (en) * 1987-07-24 1994-08-23 Northrop Grumman Corporation Target acquisition and tracking system
CN1370977A (en) * 2001-02-14 2002-09-25 松下电器产业株式会社 Vehiculor pilot system
CN1438138A (en) * 2003-03-12 2003-08-27 吉林大学 Vision guiding method of automatic guiding vehicle and automatic guiding electric vehicle

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US5899956A (en) * 1998-03-31 1999-05-04 Advanced Future Technologies, Inc. Vehicle mounted navigation device
JP3045713B1 (en) * 1998-12-09 2000-05-29 富士通株式会社 Vehicle-mounted vehicle guidance device, communication server system, and alternative vehicle guidance system


Also Published As

Publication number Publication date
CN101099162A (en) 2008-01-02
CN100580689C (en) 2010-01-13
CN101147151A (en) 2008-03-19
CN101292244A (en) 2008-10-22
CN100580690C (en) 2010-01-13
CN101099163A (en) 2008-01-02
CN101292244B (en) 2010-12-08

Similar Documents

Publication Publication Date Title
EP1849113B1 (en) Vision-aided system and method for guiding a vehicle
CA2592977C (en) Vision-aided system and method for guiding a vehicle
CN101147151B (en) System and method for guiding a vehicle
CN101681168B (en) Method and system for guiding a vehicle with vision-based adjustment
CN101681167B (en) Method and system for guiding a vehicle with vision-based adjustment
CA2592996C (en) Method and system for guiding a vehicle with vision-based adjustment
EP1836650A2 (en) Method and system for guiding a vehicle with vision enhancement
CN100510636C (en) Inertial augmentation for GPS navigation on ground vehicles
CN103411609A (en) Online composition based aircraft return route programming method
CN101285686A (en) Agricultural machines navigation hierarchical positioning process and system
CN116719037A (en) Environment sensing method and system for intelligent mowing robot
US20240101145A1 (en) Road marking detection
IL281791B1 (en) Method for improving orientation using a hand-held smartphone having an integral GPS device
Přeučil et al. Towards environment modeling by autonomous mobile systems
CN117193382A (en) Unmanned aerial vehicle flight path determining method and system
Rovira Más et al. Three-dimensional Perception and Localization

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant