CN114578690A - Intelligent automobile autonomous combined control method based on multiple sensors - Google Patents

Intelligent automobile autonomous combined control method based on multiple sensors

Info

Publication number
CN114578690A
Authority
CN
China
Prior art keywords
attitude
sensor
vehicle
quaternion
formula
Prior art date
Legal status
Granted
Application number
CN202210094709.7A
Other languages
Chinese (zh)
Other versions
CN114578690B (en)
Inventor
韩渭辛
王锦涛
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202210094709.7A
Publication of CN114578690A
Application granted
Publication of CN114578690B
Legal status: Active

Classifications

    • G05B13/042 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric, involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01R19/00 Arrangements for measuring currents or voltages or for indicating presence or sign thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to an intelligent automobile autonomous combined control method based on multiple sensors, and belongs to the field of electromechanical intelligent control. Lane edge features are extracted with an adaptive threshold segmentation method to preliminarily realize tracing navigation; meanwhile, the direction of the magnetic guidance route is sensed according to the Biot-Savart law, assisting the tracing under complex environmental conditions; steering and speed control of the intelligent vehicle are then completed according to the lane type and the vehicle body attitude. The method overcomes the limitation that existing single-sensor solutions based on cameras and the like are easily disturbed in complex lighting environments, and improves the running stability of the vehicle.

Description

Intelligent automobile autonomous combined control method based on multiple sensors
Technical Field
The invention relates to an intelligent automobile autonomous control system combining multiple sensors, specifically the combined application of a monocular camera, an inductance-capacitance resonance filtering sensor, a photoelectric encoder and an attitude sensor to the autonomous tracing function, and belongs to the field of electromechanical intelligent control.
Background
The existing traffic system is a complex system composed of various types of automobiles in which people participate and dominate, and human behavior plays a very important role in the performance of the whole system. Research and analysis show that most frequent traffic accidents are caused by improper driver operation and cannot be effectively avoided by human effort alone. The environment-sensing capability of vehicles therefore needs to be improved with current advanced technology to realize driver assistance or unmanned driving, which makes research and development of autonomous vehicle tracing an important way to improve traffic conditions. Most of the environmental information a driver obtains, such as lane boundary lines and obstacles, is perceived visually, but information such as vehicle attitude and speed is difficult to obtain through image observation. In addition, when the light is too bright or too dark, visual perception is seriously disturbed, and the vehicle can only run normally by combining other guidance modes such as electromagnetic navigation. Therefore, it is very important to study a combined tracing method based on multiple sensors.
The thesis "Research on navigation and positioning of a multi-sensor robot trolley" (Guangdong University of Technology, 2020) proposes a multi-sensor trolley navigation and positioning scheme that uses a lidar, a binocular vision camera and an inertial navigation unit to overcome the inability of a single lidar to adapt to complex environments. However, that solution places high demands on sensor hardware cost and on the computing cost of the control unit. The intelligent control scheme designed in this patent adapts to tracing navigation in complex environments while reducing economic and development costs, thereby expanding the application range of the intelligent automobile.
Disclosure of Invention
Technical problem to be solved
In order to overcome the defect that a vehicle relying on a single sensor is easily disturbed when automatically tracing in a complex environment, the invention provides an intelligent automobile autonomous combined control method based on multiple sensors.
Technical scheme
An intelligent automobile autonomous combination control method based on multiple sensors is disclosed, wherein the multiple sensors comprise a monocular camera, an inductance-capacitance resonance filtering sensor, a photoelectric encoder and an attitude sensor, wherein the monocular camera and the attitude sensor are arranged on a central axis of an automobile body; the method is characterized by comprising the following steps:
step 1: sensing environmental information through a monocular camera, correcting image distortion, and segmenting an image threshold value to extract lane boundary characteristics;
step 2: detecting electromotive forces at different positions on a lane by an inductance-capacitance resonance filtering sensor; the inductance-capacitance resonance filtering sensor comprises a section of conducting wire and inductors positioned at two ends of the conducting wire, and the conducting wire is laid on the middle line of the lane;
let the coordinates of the midpoint between the two inductors be (x_E, y_E); the electromotive forces of the left and right inductors can be calculated as
E1 = K·h·sinθ / [(x_E + (L/2)·sinθ)² + h²]
E2 = K·h·sinθ / [(x_E − (L/2)·sinθ)² + h²]
In the formula, K is a proportionality coefficient, L is the length of the sensor (the distance between the two inductors), h is the height of the electromagnetic sensor above the ground, and θ is the angle between the projection of the sensor on the track and the track center line;
Step 3: sensing the vehicle attitude based on a complementary filtering algorithm by combining data acquired by the attitude sensor;
Step 4: steering control and speed control combining direction and attitude information;
Firstly, the lane boundary features are linearized: in the pixel coordinate system, the boundary pixel points are fitted to straight lines by the least squares method
(a, b) = argmin Σ_{i=1}^{N} (y_ci − a·x_ci − b)²
In the formula, N is the number of boundary points to be fitted, x_ci and y_ci are the abscissa and ordinate values of the points to be fitted, and a and b are respectively the slope and intercept of the lane boundary line; with preset parameters t_1 and t_2, the direction information d acquired by the camera is calculated as:
d=t1*a+t2*b
obtaining a deviation value e representing the direction information obtained by the inductance-capacitance resonance filtering sensor by a ratio calculation method described by the following formula:
e = k_e·(E1 − E2) / (E1 + E2)
In the formula, k_e is a scale parameter;
Combining the vehicle body attitude obtained in step 3, the final direction data dir is computed by the following formula to realize steering control of the intelligent vehicle
dir=w*d+(1-w)*e
w = f(C1, C2, pitch)
where w is a weight coefficient computed from C1 and C2, which denote the numbers of pixels classified as foreground and background during threshold segmentation, and from pitch, the pitch angle of the vehicle body attitude;
the vehicle speed v decision is given by
v = max(v_min, v_max − k_v·dir²)
In the formula, v_max and v_min are the preset maximum and minimum speeds, dir is the direction data, and k_v is a scale parameter; the photoelectric encoder data are read as speed feedback, and closed-loop control of the vehicle speed is performed with a PID algorithm.
The further technical scheme of the invention is as follows: and (3) adopting a self-adaptive threshold segmentation method to segment and extract lane boundary characteristics in the step 1.
The further technical scheme of the invention is as follows: the step 3 is as follows:
a three-axis accelerometer and a three-axis gyroscope are integrated in the attitude sensor;
a^b = [a_x, a_y, a_z]^T and g^b = [g_x, g_y, g_z]^T respectively denote the linear acceleration data acquired by the accelerometer and the angular velocity data acquired by the gyroscope in the attitude sensor;
quaternion representation of rigid body attitude in three-dimensional space is as follows
q = q_w + q_x·i + q_y·j + q_z·k
q_w² + q_x² + q_y² + q_z² = 1
where q is a quaternion, q_w, q_x, q_y, q_z are its components, and i, j, k are mutually orthogonal unit direction vectors;
n represents the current sampling instant, T is the sampling period, q(n) represents the quaternion at the current instant, and the error between vectors is expressed by the vector cross product; the complementary filtering process can be expressed as
â(n) = a^b(n) / ‖a^b(n)‖
v̂(n) = [v_x(n), v_y(n), v_z(n)]^T
e(n) = â(n) × v̂(n)
e_i(n) = e_i(n−1) + e(n)·T
ĝ(n) = g^b(n) + K_p·e(n) + K_i·e_i(n)
q̇(n) = (1/2)·q(n−1) ⊗ [0, ĝ(n)]
q(n) = q(n−1) + q̇(n)·T
q(n)=qw(n)+qx(n)i+qy(n)j+qz(n)k
where e(n) is the error between the current acceleration measurement and its predicted value, e_i(n−1) is the accumulated error at the last sampling instant, e_i(n) is the accumulated error at the current instant, v̂(n) is the predicted value of the acceleration (the gravity direction), ĝ(n) is the corrected angular velocity data, and K_p and K_i are given adjustment parameters;
The predicted acceleration direction v̂(n) = [v_x(n), v_y(n), v_z(n)]^T is predicted from the quaternion of the previous instant by
v_x(n) = 2(q_x(n−1)·q_z(n−1) − q_w(n−1)·q_y(n−1))
v_y(n) = 2(q_w(n−1)·q_x(n−1) + q_y(n−1)·q_z(n−1))
v_z(n) = q_w(n−1)² − q_x(n−1)² − q_y(n−1)² + q_z(n−1)²
After the filtered attitude quaternion is obtained, the Euler angles pitch, yaw and roll of the current three-axis attitude can be obtained through the conversion between quaternions and Euler angles, as follows:
first, quaternion normalization is performed
‖q(n)‖ = √(q_w(n)² + q_x(n)² + q_y(n)² + q_z(n)²)
q(n) ← q(n) / ‖q(n)‖
Reconverting into Euler angles
roll = tan⁻¹[2(q_w(n)·q_x(n) + q_y(n)·q_z(n)) / (1 − 2(q_x(n)² + q_y(n)²))]
pitch = sin⁻¹(−2(q_x(n)·q_z(n) − q_w(n)·q_y(n)))
yaw = tan⁻¹[2(q_w(n)·q_z(n) + q_x(n)·q_y(n)) / (1 − 2(q_y(n)² + q_z(n)²))]
In the formula, yaw, pitch, and roll are yaw, pitch, and roll angles of the vehicle body attitude, respectively.
Advantageous effects
The invention designs a multi-sensor combined control system for vehicle tracing navigation: an adaptive threshold segmentation method is used to extract lane edge features and preliminarily realize tracing navigation; meanwhile, the direction of the magnetic guidance route is sensed according to the Biot-Savart law, assisting the tracing under complex environmental conditions; steering and speed control of the intelligent vehicle are completed according to the lane type and the vehicle body attitude. The method overcomes the limitation that existing single-sensor solutions based on cameras and the like are easily disturbed in complex lighting environments, and improves the running stability of the vehicle.
In addition, the invention adopts adaptive sensor weighting and quadratic speed planning to control the direction and speed of the vehicle in different lane environments, which improves the adaptability of the vehicle to different lanes and the robustness of its operation.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a schematic view of the installation of various sensor modules;
FIG. 2 shows a lane coordinate system and an inductor layout scheme;
FIG. 3 is a flow chart of steps performed by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention provides a tracing navigation control method based on a multi-sensor combination. The sensors comprise a monocular camera, an inductance-capacitance resonance filtering sensor, a photoelectric encoder and an attitude sensor; the monocular camera and the attitude sensor are mounted on the central axis of the vehicle body with the attitude sensor in front of the camera, the inductance-capacitance resonance filtering sensor is mounted in front of the main shaft of the front wheels, and the photoelectric encoder is mounted on a rear-wheel gear, as shown in FIG. 1. The method improves the stability of the vehicle in operation and solves the problem of unstable tracing. The specific steps are as follows:
the method comprises the following steps: sensing environmental information through a monocular camera, correcting image distortion, and segmenting an image threshold value to extract lane boundary characteristics;
A pinhole imaging model is established for the camera. Let a three-dimensional point in the scene be P_c = (X_c, Y_c, Z_c) and let the distance between the imaging plane and the optical center of the camera be f. The three-dimensional point must be converted into a two-dimensional pixel point on the imaging plane; according to the pinhole imaging principle, the following conversion formula is obtained
x_dis = (f/dx)·(X_c/Z_c) + c_x
y_dis = (f/dy)·(Y_c/Z_c) + c_y
In the formula, x_dis and y_dis are respectively the abscissa and ordinate in the pixel plane coordinate system (with the upper-left corner of the image as the origin), dx and dy are the width and height of a pixel, c_x and c_y denote the translation from the imaging plane coordinate system to the pixel plane coordinate system, and X_c, Y_c, Z_c are the three-dimensional coordinates of the scene point in the camera coordinate system. R and t denote the rotation and translation transformations; together they form a homogeneous transformation matrix that transforms coordinate points (X_c, Y_c, Z_c) expressed in the camera coordinate system into the world coordinate system.
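As an illustration of the projection relation above, here is a minimal Python sketch that maps a camera-frame point to pixel coordinates; the intrinsic values f, dx, dy, c_x, c_y are made-up placeholders, not parameters from the patent.

```python
def project_to_pixel(Xc, Yc, Zc, f=0.004, dx=3e-6, dy=3e-6, cx=94.0, cy=90.0):
    """Pinhole projection of a camera-frame point (Xc, Yc, Zc) to pixel coordinates."""
    x_dis = (f / dx) * (Xc / Zc) + cx
    y_dis = (f / dy) * (Yc / Zc) + cy
    return x_dis, y_dis

# A point 1 m ahead of the camera, slightly right of and below the optical axis
print(project_to_pixel(0.01, 0.005, 1.0))
```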
In practice, the radial distortion caused by the manufacturing process of the camera lens and the tangential distortion caused by the mounting of the camera element must be considered. The first few terms of the Taylor series expansion at r = 0 are used to approximately describe the radial distortion.
Radial and tangential distortion can be corrected using the following equations
x_c = x_dis·(1 + k_1·r² + k_2·r⁴ + k_3·r⁶) + 2p_1·x_dis·y_dis + p_2·(r² + 2x_dis²)
y_c = y_dis·(1 + k_1·r² + k_2·r⁴ + k_3·r⁶) + p_1·(r² + 2y_dis²) + 2p_2·x_dis·y_dis
In the formula, x_c and y_c are the coordinates after radial and tangential correction, k_1, k_2, k_3 are the radial distortion parameters, and p_1, p_2 are the tangential distortion parameters.
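A small companion sketch of the radial/tangential correction above (Brown model); the distortion coefficients used here are arbitrary illustrative values rather than calibration results from the patent.

```python
def undistort_point(x, y, k1, k2, k3, p1, p2):
    """Apply radial + tangential correction to a normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_c = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_c = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_c, y_c

# Example with small, made-up coefficients
print(undistort_point(0.3, -0.2, k1=-0.1, k2=0.01, k3=0.0, p1=1e-3, p2=1e-3))
```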
For threshold segmentation, the threshold is computed dynamically with Otsu's algorithm. A threshold k divides all pixels of the image into two classes, foreground C1 (gray level less than k) and background C2 (gray level greater than k). Let the mean gray values of the two classes be m1 and m2, the global mean of the image be mG, and the probabilities that a pixel falls into C1 and C2 be p1 and p2. Then the following can be written:
p1*m1+p2*m2=mG
p1+p2=1
according to the concept of variance, the inter-class variance expression is:
σ² = p1·(m1 − mG)² + p2·(m2 − mG)²
In the formula, σ² is the between-class variance. Simplifying gives:
σ² = p1·p2·(m1 − m2)²
p_i = n_i / n
p1 = Σ_{i=0}^{k} p_i
p2 = Σ_{i=k+1}^{L−1} p_i = 1 − p1
where i is the gray level (range 0-255), p_i is the probability that a pixel with gray level i appears in the image, n_i is the number of pixels with gray level i, n is the total number of pixels, k is the segmentation threshold, and L is the maximum number of levels representable by an 8-bit gray value, 2⁸ = 256.
The cumulative mean m of the gray levels and the global mean mG of the image can be expressed as
m = Σ_{i=0}^{k} i·p_i
mG = Σ_{i=0}^{L−1} i·p_i
Therefore, m1 and m2 can be changed to the following forms
m1=1/p1*m
m2=1/p2*(mG-m)
Obtaining a final inter-class variance formula:
σ² = (mG·p1 − m)² / [p1·(1 − p1)]
The k that maximizes the variance σ² according to this formula is the globally optimal segmentation threshold of the image; with it the image is binarized, the lane edges are extracted, and the result is used to calculate the direction control signal in step four.
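A compact NumPy sketch of the Otsu computation described above, maximizing the between-class variance over all candidate thresholds; the input image here is random data purely to show the mechanics.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level k that maximizes the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # p_i, probability of each gray level
    levels = np.arange(256)
    p1 = np.cumsum(p)                     # cumulative probability up to k
    m = np.cumsum(levels * p)             # cumulative mean up to k
    mG = m[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma2 = (mG * p1 - m) ** 2 / (p1 * (1 - p1))
    sigma2 = np.nan_to_num(sigma2)        # undefined entries contribute nothing
    return int(np.argmax(sigma2))

gray = np.random.randint(0, 256, (180, 188), dtype=np.uint8)  # stand-in 188x180 frame
print(otsu_threshold(gray))
```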
Step two: the inductance-capacitance resonance filtering sensor detects electromotive forces at different positions on a lane;
an electromagnetic circuit with a fixed-frequency alternating signal is laid and constructed in advance along the running direction of the lane, and an LC resonance circuit is used for combining with an amplifying circuit to detect the signal so as to acquire environmental information.
Firstly, in order to obtain a signal with a correct frequency, an inductor-capacitor pair with appropriate parameters needs to be selected in hardware according to the following formula.
f = 1 / (2π·√(LC))
Wherein f is the fixed frequency of the alternating signal.
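A quick numeric check of the resonance relation: given the guide-signal frequency and a chosen inductance, the matching capacitance follows directly. The 10 mH / 20 kHz values mirror Example 1 below; treat them as illustrative.

```python
import math

def resonant_capacitance(f_hz, L_henry):
    """Capacitance that makes an LC tank resonate at f_hz with inductance L_henry."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * L_henry)

C = resonant_capacitance(20e3, 10e-3)     # 10 mH inductor, 20 kHz guide signal
print(f"C = {C * 1e9:.1f} nF")            # about 6.3 nF
```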
For convenience of discussion, assuming that the advancing direction of the vehicle along the lane is a z-axis, the direction perpendicular to the ground is a y-axis, and the plane where the ground is located is a x-axis perpendicular to the center line of the lane, a three-axis coordinate system as shown in fig. 2 is established, and the length of the electromagnetic sensor is l according to the right-hand rule.
The wire is laid on the lane center line as shown in FIG. 2. Along the advancing direction of the lane, the magnetic field generated by the alternating signal is distributed as concentric circles centered on the wire; when calculating the field strength only the coordinates (x_E, y_E) need be considered. To the electromagnetic sensor, the magnetic wire on the lane is equivalent to an infinitely long straight wire, and the magnitude of the induced electromotive force E of a single inductor obtained from the Biot-Savart law can be expressed as
E = K·h·sinθ / (x_E² + h²)
where K is a proportionality coefficient related to the amplification factor of the operational amplifier used in the circuit, h is the height of the electromagnetic sensor above the ground, x_E is the lateral offset of the single inductor from the magnetic wire, and θ is the angle between the projection of the inductor on the track and the track center line.
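The sketch below evaluates this induced-EMF model for the two inductors at the ends of the sensor bar. Note that the exact placement of the angular term is an assumption consistent with the Biot-Savart expression above and with the numbers in Example 1; K, h and the geometry are example values.

```python
import math

def inductor_emf(x_offset, h, K=0.1, theta_deg=90.0):
    """EMF induced in one inductor at lateral offset x_offset (m) from the guide
    wire, at height h (m); theta is the angle between the sensor bar's projection
    and the track center line (90 deg means the bar is transverse to the lane).
    Formula is an assumption, not quoted verbatim from the patent."""
    return K * h * math.sin(math.radians(theta_deg)) / (x_offset ** 2 + h ** 2)

# Midpoint offset x_E = 0.1 m, bar length L = 0.3 m, height h = 0.05 m
x_E, L, h = 0.1, 0.3, 0.05
E_left = inductor_emf(x_E + L / 2, h)     # roughly 0.077 V
E_right = inductor_emf(x_E - L / 2, h)    # roughly 1.0 V
print(E_left, E_right)
```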
Step three: sensing the vehicle attitude based on a complementary filtering algorithm;
for a slope road surface or a bumpy road surface, it is difficult to accurately calculate the posture of the vehicle body by means of sensors such as a camera and the like and to take corresponding measures in time. Therefore, an attitude sensor is adopted to calculate the accurate value of the attitude, and a certain method is applied to inhibit the problems of zero drift and the like.
The attitude sensor integrates a three-axis accelerometer and a three-axis gyroscope.
a^b = [a_x, a_y, a_z]^T and g^b = [g_x, g_y, g_z]^T respectively denote the linear acceleration data collected by the accelerometer and the angular velocity data collected by the gyroscope in the attitude sensor. The gyroscope suffers from temperature drift and zero drift; its dynamic response is poor in the low-frequency band but good at high frequencies, while the accelerometer has good static characteristics but poor dynamic characteristics. Therefore, the angle data measured by the gyroscope are high-pass filtered, the acceleration data measured by the accelerometer are low-pass filtered, and good performance over the whole frequency domain is achieved through complementation.
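Before the quaternion form, the complementary idea can be illustrated in one dimension: integrate the gyro rate (reliable at high frequency) and blend in the accelerometer angle (reliable at low frequency). The blend factor and the sample data below are arbitrary illustrative values.

```python
def complementary_1d(angle_prev, gyro_rate, acc_angle, dt, alpha=0.98):
    """One-axis complementary filter: gyro integration pulled toward the
    accelerometer angle by the low-pass weight (1 - alpha)."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * acc_angle

angle = 0.0
for gyro, acc in [(0.2, 0.01), (0.1, 0.02), (0.0, 0.02)]:   # made-up samples
    angle = complementary_1d(angle, gyro, acc, dt=0.005)
print(angle)
```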
Quaternion representation of rigid body attitude in three-dimensional space is as follows
q = q_w + q_x·i + q_y·j + q_z·k
q_w² + q_x² + q_y² + q_z² = 1
where q is a quaternion, q_w, q_x, q_y, q_z are its components, and i, j, k are mutually orthogonal unit direction vectors.
n represents the current sampling instant, T is the sampling period, q(n) represents the quaternion at the current instant, and the error between vectors is expressed by the vector cross product. The complementary filtering process can be expressed as
â(n) = a^b(n) / ‖a^b(n)‖
v̂(n) = [v_x(n), v_y(n), v_z(n)]^T
e(n) = â(n) × v̂(n)
e_i(n) = e_i(n−1) + e(n)·T
ĝ(n) = g^b(n) + K_p·e(n) + K_i·e_i(n)
q̇(n) = (1/2)·q(n−1) ⊗ [0, ĝ(n)]
q(n) = q(n−1) + q̇(n)·T
q(n)=qw(n)+qx(n)i+qy(n)j+qz(n)k
where e(n) is the error between the current acceleration measurement and its predicted value, e_i(n−1) is the accumulated error at the last sampling instant, e_i(n) is the accumulated error at the current instant, v̂(n) is the predicted value of the acceleration (the gravity direction), ĝ(n) is the corrected angular velocity data, and K_p and K_i are given adjustment parameters.
The predicted acceleration direction v̂(n) = [v_x(n), v_y(n), v_z(n)]^T is predicted from the quaternion of the previous instant by
v_x(n) = 2(q_x(n−1)·q_z(n−1) − q_w(n−1)·q_y(n−1))
v_y(n) = 2(q_w(n−1)·q_x(n−1) + q_y(n−1)·q_z(n−1))
v_z(n) = q_w(n−1)² − q_x(n−1)² − q_y(n−1)² + q_z(n−1)²
After the filtered attitude quaternion is obtained, the Euler angles pitch, yaw and roll of the current three-axis attitude can be obtained through the conversion between quaternions and Euler angles, as follows:
first, quaternion normalization is performed
‖q(n)‖ = √(q_w(n)² + q_x(n)² + q_y(n)² + q_z(n)²)
q(n) ← q(n) / ‖q(n)‖
Reconverting into Euler angles
roll = tan⁻¹[2(q_w(n)·q_x(n) + q_y(n)·q_z(n)) / (1 − 2(q_x(n)² + q_y(n)²))]
pitch = sin⁻¹(−2(q_x(n)·q_z(n) − q_w(n)·q_y(n)))
yaw = tan⁻¹[2(q_w(n)·q_z(n) + q_x(n)·q_y(n)) / (1 − 2(q_y(n)² + q_z(n)²))]
In the formula, yaw, pitch, and roll are yaw, pitch, and roll angles of the vehicle body attitude, respectively.
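A condensed Python sketch of the complementary filter and Euler conversion described in this step; K_p, K_i, the sample period and the IMU sample are placeholder values, and the update follows the cross-product-error / PI-correction scheme given above.

```python
import math

def attitude_update(q, acc, gyro, e_int, T=0.005, Kp=2.0, Ki=0.005):
    """One complementary-filter step.
    q     : current attitude quaternion (qw, qx, qy, qz)
    acc   : accelerometer sample (ax, ay, az), any consistent units
    gyro  : gyroscope sample (gx, gy, gz) in rad/s
    e_int : accumulated error vector (3-tuple)
    Returns (q_new, e_int_new)."""
    qw, qx, qy, qz = q
    # Predicted gravity direction in the body frame from the previous quaternion
    vx = 2 * (qx * qz - qw * qy)
    vy = 2 * (qw * qx + qy * qz)
    vz = qw * qw - qx * qx - qy * qy + qz * qz
    # Normalized accelerometer measurement of gravity
    na = math.sqrt(acc[0] ** 2 + acc[1] ** 2 + acc[2] ** 2) or 1.0
    ax, ay, az = acc[0] / na, acc[1] / na, acc[2] / na
    # Error between measured and predicted gravity (vector cross product)
    ex, ey, ez = ay * vz - az * vy, az * vx - ax * vz, ax * vy - ay * vx
    e_int = (e_int[0] + ex * T, e_int[1] + ey * T, e_int[2] + ez * T)
    # PI correction of the gyroscope data
    gx = gyro[0] + Kp * ex + Ki * e_int[0]
    gy = gyro[1] + Kp * ey + Ki * e_int[1]
    gz = gyro[2] + Kp * ez + Ki * e_int[2]
    # First-order quaternion integration followed by normalization
    nqw = qw + 0.5 * (-qx * gx - qy * gy - qz * gz) * T
    nqx = qx + 0.5 * (qw * gx + qy * gz - qz * gy) * T
    nqy = qy + 0.5 * (qw * gy - qx * gz + qz * gx) * T
    nqz = qz + 0.5 * (qw * gz + qx * gy - qy * gx) * T
    norm = math.sqrt(nqw ** 2 + nqx ** 2 + nqy ** 2 + nqz ** 2)
    return (nqw / norm, nqx / norm, nqy / norm, nqz / norm), e_int

def quat_to_euler(q):
    """Convert a unit quaternion to (roll, pitch, yaw) in radians."""
    qw, qx, qy, qz = q
    roll = math.atan2(2 * (qw * qx + qy * qz), 1 - 2 * (qx * qx + qy * qy))
    pitch = math.asin(max(-1.0, min(1.0, -2 * (qx * qz - qw * qy))))
    yaw = math.atan2(2 * (qw * qz + qx * qy), 1 - 2 * (qy * qy + qz * qz))
    return roll, pitch, yaw

# One update with a made-up, nearly level and stationary IMU sample
q, e_int = (1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
q, e_int = attitude_update(q, acc=(0.0, 0.0, 1.0), gyro=(0.01, 0.0, 0.0), e_int=e_int)
print(quat_to_euler(q))
```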
Step four: steering control and speed control of combined direction and attitude information;
Direction control combines the camera sensor and the electromagnetic sensor. First, the lane boundary features are linearized: in the pixel coordinate system, the boundary pixel points are fitted to straight lines by the least squares method shown below
(a, b) = argmin Σ_{i=1}^{N} (y_ci − a·x_ci − b)²
In the formula, N is the number of boundary points to be fitted, x_ci and y_ci are the abscissa and ordinate values of the points to be fitted, and a and b are respectively the slope and intercept of the lane boundary line. With preset parameters t_1 and t_2, the direction information d acquired by the camera is calculated as
d=t1*a+t2*b
Then, according to the calculation principle of the induced electromotive force, the coordinate of the midpoint between the two inductors is (x)E,yE) The electromotive forces of the left and right inductors can be calculated as
E1 = K·h·sinθ / [(x_E + (L/2)·sinθ)² + h²]
E2 = K·h·sinθ / [(x_E − (L/2)·sinθ)² + h²]
Wherein L is the length of the sensor. The acquired deviation value e represents the direction information acquired by the electromagnetic sensor by a ratio calculation method described by the following formula.
e = k_e·(E1 − E2) / (E1 + E2)
In the formula, k_e is a scale parameter.
Combining the vehicle body attitude obtained from the IMU, the final direction data dir is computed by the following formula to realize steering control of the intelligent vehicle
dir=w*d+(1-w)*e
w = f(C1, C2, pitch)
where w is a weight coefficient computed from C1 and C2, which denote the numbers of pixels classified as foreground and background during threshold segmentation, and from pitch, the pitch angle of the vehicle body attitude.
The vehicle speed v decision is given by
v = max(v_min, v_max − k_v·dir²)
In the formula, v_max and v_min are the preset maximum and minimum speeds, dir is the direction data, and k_v is a scale parameter. The lower control layer reads the photoelectric encoder data as speed feedback and performs closed-loop control of the vehicle speed with a PID algorithm. This speed decision is simple, produces a clear speed gradient with good look-ahead, and lets the vehicle accelerate and decelerate in time.
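A sketch of the fused steering and speed decision of this step. The deviation ratio, the fusion rule and the quadratic speed profile here are one plausible reading of the step; the weighting coefficient w, the parameters t1, t2, k_e, k_v and the PID gains are placeholders (the patent gives the weighting formula only as an image).

```python
def steering_and_speed(a, b, E1, E2, w, t1=1.0, t2=0.02, k_e=100.0,
                       k_v=0.002, v_max=200.0, v_min=150.0):
    """Fuse camera direction d and electromagnetic deviation e into dir,
    then plan the target speed with a quadratic profile."""
    d = t1 * a + t2 * b                            # camera direction information
    e = k_e * (E1 - E2) / (E1 + E2)                # difference-over-sum deviation
    direction = w * d + (1 - w) * e                # weighted fusion
    v = max(v_min, v_max - k_v * direction ** 2)   # slow down for large deviation
    return direction, v

class SpeedPID:
    """Minimal PID loop for closed-loop speed control on encoder feedback."""
    def __init__(self, kp=1.2, ki=0.3, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, target, measured, dt):
        err = target - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

direction, v_target = steering_and_speed(a=0.25, b=-27.0, E1=0.077, E2=1.0, w=0.75)
pid = SpeedPID()
print(direction, v_target, pid.step(v_target, measured=140.0, dt=0.01))
```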
Example 1:
in order to improve the stability of vehicle tracking navigation, the invention designs a control system based on multi-sensor combination to enhance the robustness of vehicle autonomous operation, and the specific implementation mode of the invention is described by combining the actual vehicle model operation process:
executing the step one: visually perceiving environmental information through a monocular camera, correcting image distortion, and segmenting an image threshold value to extract lane boundary characteristics;
and a CMOS camera is used, and the shooting and coordinate transformation of the image are completed through hardware, so that the perception of a visual method to the environment is realized.
Zhang's camera calibration method is used: multiple pictures with distinct feature points are captured, and the distortion parameters k_1, k_2, k_3, p_1, p_2 are estimated with computer-aided software so that the distortion can be corrected.
The dynamic threshold segmentation algorithm takes processing of a grayscale map with a resolution of 188 × 180 as an example, and mainly performs two steps, namely histogram statistics and threshold calculation.
Histogram statistics of the image
P(S_i) = n_i / n
where P(S_i) is the probability that gray level S_i occurs in the image, n_i is the number of pixels with gray level S_i, and n = 188 × 180 is the total number of pixels. The histogram statistics are:
S = Σ_{i=0}^{255} i·P(S_i)
S1 = Σ_{i=0}^{k} i·P(S_i) / Σ_{i=0}^{k} P(S_i)
S2 = Σ_{i=k+1}^{255} i·P(S_i) / Σ_{i=k+1}^{255} P(S_i)
where S is the overall mean gray value, S1 is the gray average of the foreground (class C1) pixels, S2 is the gray average of the background (class C2) pixels, and k is the optimal threshold to be solved.
The threshold value is calculated according to the formula
σ² = p1·(1 − p1)·(m1 − m2)²
p1 = P(S_k), (k = 0, 1, 2, ..., 255)
m1=S1
m2=S2
The gray threshold k that maximizes σ² is solved iteratively. The image is then binarized according to the following formula to extract the lane boundary features.
B(x, y) = 1 if f(x, y) > k; B(x, y) = 0 if f(x, y) ≤ k
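Applying the threshold found in the previous step to the 188 × 180 gray image yields the binary lane map; random data stand in for the camera frame, and the 1 = background / 0 = foreground convention matches step four.

```python
import numpy as np

gray = np.random.randint(0, 256, (180, 188), dtype=np.uint8)   # stand-in image
k = 128                                                         # threshold from the Otsu step
binary = np.where(gray > k, 1, 0).astype(np.uint8)              # 1 = background, 0 = foreground
print(binary.shape, int(binary.sum()))
```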
And (5) executing the step two: the electromagnetic sensor detects electromotive forces at different positions on the lane;
an alternating signal with a current of 100mA and a frequency of 20kHz was passed through the magnet wire. Firstly, inductance and capacitance parameters are calculated according to a calculation formula of the resonance period of the LC oscillating circuit
T = 2π·√(LC)
f = 1/T = 1 / (2π·√(LC))
Here a 10 mH inductance and a 6.3 nF capacitance pair were chosen for detecting the 20 kHz signal.
Taking the central coordinates between the two inductors as (0.1, 0), the length L of the electromagnetic sensor as 30 cm, the height h above the ground as 5 cm, the coordinates of the left inductor as (0.3, 0.05), the coordinates of the right inductor as (−0.1, 0.05), and a given included angle α between the projection of the two inductors on the track and the track center line.
The proportionality coefficient K is taken to be 0.1. The induced electromotive forces E1 and E2 of the left and right inductors are respectively
E1 ≈ 0.077 V
E2 ≈ 1 V
And step three is executed: sensing the vehicle attitude based on a complementary filtering algorithm;
the quaternion obtained by the last calculation of the attitude sensor is converted into a body coordinate system of the vehicle body, and the unit vector of gravity under the body coordinate system of the vehicle body can be calculated to be
v̂ = [vx, vy, vz]^T
vx=2(qxqz-qwqy)
vy=2(qwqx+qyqz)
vz = qw² − qx² − qy² + qz²
The unit vector of gravity measured by the accelerometer in the current vehicle body coordinate system is
â = [ax, ay, az]^T / ‖[ax, ay, az]^T‖
The error between the currently estimated attitude and the measured attitude, expressed as a vector cross product, is
e = â × v̂
The accumulated error at the previous sampling instant is denoted e_i(n−1); by linear superposition, the accumulated error at the current instant is
e_i(n) = e_i(n−1) + e(n)·T
The current error and the gyroscope error are both expressed in the body coordinate system and their magnitudes are proportional, so the gyroscope error can be corrected. Zero-offset correction is applied with PI control as follows:
ĝx(n) = gx(n) + K_p·ex(n) + K_i·e_ix(n)
ĝy(n) = gy(n) + K_p·ey(n) + K_i·e_iy(n)
ĝz(n) = gz(n) + K_p·ez(n) + K_i·e_iz(n)
in the formula, gx, gy, gz are the gyroscope data and n is the current control step; the gyroscope data are corrected by adjusting the parameters K_p and K_i. The corrected gyroscope data are integrated through the quaternion differential equation to obtain the current quaternion, expressed as
qw(n) = qw(n−1) + 0.5·(−qx·gx − qy·gy − qz·gz)·T
qx(n) = qx(n−1) + 0.5·(qw·gx + qy·gz − qz·gy)·T
qy(n) = qy(n−1) + 0.5·(qw·gy − qx·gz + qz·gx)·T
qz(n) = qz(n−1) + 0.5·(qw·gz + qx·gy − qy·gx)·T
Wherein T is a control period. And finally, normalizing the updated quaternion to obtain the latest quaternion attitude.
q(n) ← q(n) / √(qw(n)² + qx(n)² + qy(n)² + qz(n)²)
And obtaining an intuitive Euler angle attitude form through a conversion formula between the quaternion and the Euler angle.
roll = tan⁻¹[2(qw·qx + qy·qz) / (1 − 2(qx² + qy²))]
pitch = sin⁻¹(−2(qx·qz − qw·qy))
yaw = tan⁻¹[2(qw·qz + qx·qy) / (1 − 2(qy² + qz²))]
In the formula, yaw, pitch, and roll are yaw, pitch, and roll angles of the vehicle body attitude, respectively.
Step four: steering control and speed control of combined direction and attitude information;
the threshold-segmented binary image can be represented by the following matrix
(binary image matrix shown as a figure in the original)
In the matrix, 1 represents the background after segmentation and 0 represents the foreground. The coordinates of the boundary pixel points are obtained with a 4-neighborhood seed-growing method and substituted into the least-squares equations to solve the left boundary line y_l (taking the lower-left corner of the image as the origin) and the right boundary line y_r (taking the lower-right corner as the origin), and the direction data d is calculated
yl=alx+bl
yr=arx+br
a = a_l + a_r
b = b_l + b_r
Take a_l = 4, b_l = 9, a_r = −3.75, b_r = −36, the number of foreground pixels C1 = 25380, and the number of background pixels C2 = 6345; then
d=α*a+β*b=0.25α-27β
Induced electromotive force E1=0.077V,E2=1V
e = −0.67·k_e
The intelligent vehicle is placed on a flat road surface, so the pitch angle of the vehicle body is 0 and the weight coefficient evaluates to w = 0.75; the direction data for steering control is then
dir=w*d+(1-w)*e
=0.75*(0.25α-27β)-0.25*0.67ke
Let v_max = 200 and v_min = 150; the vehicle speed data is
v = max(v_min, v_max − k_v·dir²) = max(150, 200 − k_v·dir²)
The actual vehicle speed is obtained from the feedback values of the photoelectric encoder, and coefficients such as α, β, k_e and k_v need to be tuned according to actual conditions. The steering and vehicle speed control data are fed into the bottom-level PID controller to realize multi-sensor combined autonomous tracing control of the intelligent vehicle.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present disclosure.

Claims (3)

1. An intelligent automobile autonomous combination control method based on multiple sensors is disclosed, wherein the multiple sensors comprise a monocular camera, an inductance-capacitance resonance filtering sensor, a photoelectric encoder and an attitude sensor, wherein the monocular camera and the attitude sensor are arranged on a central axis of an automobile body; the method is characterized by comprising the following steps:
step 1: sensing environmental information through a monocular camera, correcting image distortion, and segmenting an image threshold to extract lane boundary characteristics;
step 2: detecting electromotive forces at different positions on a lane by an inductance-capacitance resonance filtering sensor; the inductance-capacitance resonance filtering sensor comprises a section of conducting wire and inductors positioned at two ends of the conducting wire, and the conducting wire is laid on the middle line of the lane;
let the coordinates of the midpoint between the two inductors be (x_E, y_E); the electromotive forces of the left and right inductors can be calculated as
E1 = K·h·sinθ / [(x_E + (L/2)·sinθ)² + h²]
E2 = K·h·sinθ / [(x_E − (L/2)·sinθ)² + h²]
In the formula, K is a proportionality coefficient, L is the length of the sensor (the distance between the two inductors), h is the height of the electromagnetic sensor above the ground, and θ is the angle between the projection of the sensor on the track and the track center line;
Step 3: sensing the vehicle attitude based on a complementary filtering algorithm by combining data acquired by the attitude sensor;
Step 4: steering control and speed control combining direction and attitude information;
Firstly, the lane boundary features are linearized: in the pixel coordinate system, the boundary pixel points are fitted to straight lines by the least squares method
(a, b) = argmin Σ_{i=1}^{N} (y_ci − a·x_ci − b)²
In the formula, N is the number of boundary points to be fitted, x_ci and y_ci are the abscissa and ordinate values of the points to be fitted, and a and b are respectively the slope and intercept of the lane boundary line; with preset parameters t_1 and t_2, the direction information d acquired by the camera is calculated as:
d=t1*a+t2*b
obtaining a deviation value e representing the direction information obtained by the inductance-capacitance resonance filtering sensor by a ratio calculation method described by the following formula:
e = k_e·(E1 − E2) / (E1 + E2)
In the formula, k_e is a scale parameter;
Combining the vehicle body attitude obtained in step 3, the final direction data dir is computed by the following formula to realize steering control of the intelligent vehicle
dir=w*d+(1-w)*e
w = f(C1, C2, pitch)
where w is a weight coefficient computed from C1 and C2, which denote the numbers of pixels classified as foreground and background during threshold segmentation, and from pitch, the pitch angle of the vehicle body attitude;
the vehicle speed v decision is given by
v = max(v_min, v_max − k_v·dir²)
In the formula, v_max and v_min are the preset maximum and minimum speeds, dir is the direction data, and k_v is a scale parameter; the photoelectric encoder data are read as speed feedback, and closed-loop control of the vehicle speed is performed with a PID algorithm.
2. The intelligent automobile autonomous combined control method based on multiple sensors according to claim 1, characterized in that: and (3) adopting a self-adaptive threshold segmentation method to segment and extract lane boundary characteristics in the step 1.
3. The intelligent automobile autonomous combined control method based on multiple sensors according to claim 1, characterized in that: the step 3 is as follows:
a three-axis accelerometer and a three-axis gyroscope are integrated in the attitude sensor;
a^b = [a_x, a_y, a_z]^T and g^b = [g_x, g_y, g_z]^T respectively denote the linear acceleration data acquired by the accelerometer and the angular velocity data acquired by the gyroscope in the attitude sensor;
quaternion representation of rigid body attitude in three-dimensional space is as follows
q = q_w + q_x·i + q_y·j + q_z·k
q_w² + q_x² + q_y² + q_z² = 1
where q is a quaternion, q_w, q_x, q_y, q_z are its components, and i, j, k are mutually orthogonal unit direction vectors;
n represents the current sampling instant, T is the sampling period, q(n) represents the quaternion at the current instant, and the error between vectors is expressed by the vector cross product; the complementary filtering process can be expressed as
â(n) = a^b(n) / ‖a^b(n)‖
v̂(n) = [v_x(n), v_y(n), v_z(n)]^T
e(n) = â(n) × v̂(n)
e_i(n) = e_i(n−1) + e(n)·T
ĝ(n) = g^b(n) + K_p·e(n) + K_i·e_i(n)
q̇(n) = (1/2)·q(n−1) ⊗ [0, ĝ(n)]
q(n) = q(n−1) + q̇(n)·T
q(n)=qw(n)+qx(n)i+qy(n)j+qz(n)k
where e(n) is the error between the current acceleration measurement and its predicted value, e_i(n−1) is the accumulated error at the last sampling instant, e_i(n) is the accumulated error at the current instant, v̂(n) is the predicted value of the acceleration (the gravity direction), ĝ(n) is the corrected angular velocity data, and K_p and K_i are given adjustment parameters;
The predicted acceleration direction v̂(n) = [v_x(n), v_y(n), v_z(n)]^T is predicted from the quaternion of the previous instant by
v_x(n) = 2(q_x(n−1)·q_z(n−1) − q_w(n−1)·q_y(n−1))
v_y(n) = 2(q_w(n−1)·q_x(n−1) + q_y(n−1)·q_z(n−1))
v_z(n) = q_w(n−1)² − q_x(n−1)² − q_y(n−1)² + q_z(n−1)²
After the filtered attitude quaternion is obtained, the Euler angles pitch, yaw and roll of the current three-axis attitude can be obtained through the conversion between quaternions and Euler angles, as follows:
first, quaternion normalization is performed
‖q(n)‖ = √(q_w(n)² + q_x(n)² + q_y(n)² + q_z(n)²)
q(n) ← q(n) / ‖q(n)‖
Reconverting into Euler angles
roll = tan⁻¹[2(q_w(n)·q_x(n) + q_y(n)·q_z(n)) / (1 − 2(q_x(n)² + q_y(n)²))]
pitch = sin⁻¹(−2(q_x(n)·q_z(n) − q_w(n)·q_y(n)))
yaw = tan⁻¹[2(q_w(n)·q_z(n) + q_x(n)·q_y(n)) / (1 − 2(q_y(n)² + q_z(n)²))]
In the formula, yaw, pitch, and roll are yaw, pitch, and roll angles of the vehicle body attitude, respectively.
CN202210094709.7A 2022-01-26 2022-01-26 Intelligent automobile autonomous combination control method based on multiple sensors Active CN114578690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210094709.7A CN114578690B (en) 2022-01-26 2022-01-26 Intelligent automobile autonomous combination control method based on multiple sensors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210094709.7A CN114578690B (en) 2022-01-26 2022-01-26 Intelligent automobile autonomous combination control method based on multiple sensors

Publications (2)

Publication Number Publication Date
CN114578690A true CN114578690A (en) 2022-06-03
CN114578690B CN114578690B (en) 2023-07-21

Family

ID=81769439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210094709.7A Active CN114578690B (en) 2022-01-26 2022-01-26 Intelligent automobile autonomous combination control method based on multiple sensors

Country Status (1)

Country Link
CN (1) CN114578690B (en)


Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5400244A (en) * 1991-06-25 1995-03-21 Kabushiki Kaisha Toshiba Running control system for mobile robot provided with multiple sensor information integration system
CN106981215A (en) * 2017-03-23 2017-07-25 北京联合大学 A kind of automatic parking parking stall bootstrap technique of multi sensor combination formula
CN108573272A (en) * 2017-12-15 2018-09-25 蔚来汽车有限公司 Track approximating method
US20180162412A1 (en) * 2018-02-09 2018-06-14 GM Global Technology Operations LLC Systems and methods for low level feed forward vehicle control strategy
US20190266418A1 (en) * 2018-02-27 2019-08-29 Nvidia Corporation Real-time detection of lanes and boundaries by autonomous vehicles
WO2020048027A1 (en) * 2018-09-06 2020-03-12 惠州市德赛西威汽车电子股份有限公司 Robust lane line detection method based on dynamic region of interest
CN109204599A (en) * 2018-09-13 2019-01-15 吉林大学 Active attitude and all-wheel steering cooperative control method based on coaxial-type wheel leg structure
CN109292019A (en) * 2018-09-13 2019-02-01 吉林大学 All-terrain vehicle active body gesture control method based on coaxial-type wheel leg structure
US20200217972A1 (en) * 2019-01-07 2020-07-09 Qualcomm Incorporated Vehicle pose estimation and pose error correction
CN111551174A (en) * 2019-12-18 2020-08-18 无锡北微传感科技有限公司 High-dynamic vehicle attitude calculation method and system based on multi-sensor inertial navigation system
CN110989647A (en) * 2019-12-24 2020-04-10 北京航天飞腾装备技术有限责任公司 Multi-sensor fusion flight controller based on SoC

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEIXIN HAN et al.: "Interval Estimation for Uncertain Systems via Polynomial Chaos Expansions", vol. 66, no. 1, XP011827482, DOI: 10.1109/TAC.2020.2982907 *
李旭; 张为公: "Research on multi-sensor integrated navigation for intelligent vehicles based on federated filtering", no. 12 *

Also Published As

Publication number Publication date
CN114578690B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN106256606B (en) A kind of lane departure warning method based on vehicle-mounted binocular camera
CN109945858B (en) Multi-sensing fusion positioning method for low-speed parking driving scene
CN108749819B (en) Tire vertical force estimating system and evaluation method based on binocular vision
CN113819914A (en) Map construction method and device
CN107615201A (en) Self-position estimation unit and self-position method of estimation
CN111142091B (en) Automatic driving system laser radar online calibration method fusing vehicle-mounted information
CN104802697B (en) Micro inertial measurement unit and adaptive front lamp control method based on this measuring unit
CN106295560A (en) The track keeping method controlled based on vehicle-mounted binocular camera and stagewise PID
CN106289250A (en) A kind of course information acquisition system
CN111860322A (en) Unstructured pavement type identification method based on multi-source sensor information fusion
CN114693787B (en) Parking garage map building and positioning method, system and vehicle
CN113819905A (en) Multi-sensor fusion-based odometer method and device
CN108759823A (en) The positioning of low speed automatic driving vehicle and method for correcting error in particular link based on images match
CN112388635B (en) Method, system and device for fusing sensing and space positioning of multiple sensors of robot
CN112433531A (en) Trajectory tracking method and device for automatic driving vehicle and computer equipment
CN111829514B (en) Road surface working condition pre-aiming method suitable for vehicle chassis integrated control
CN114475581B (en) Automatic parking positioning method based on wheel speed pulse and IMU Kalman filtering fusion
CN115303265A (en) Vehicle obstacle avoidance control method and device and vehicle
CN113063416B (en) Robot posture fusion method based on self-adaptive parameter complementary filtering
CN114578690B (en) Intelligent automobile autonomous combination control method based on multiple sensors
CN108709560A (en) Carrying robot high accuracy positioning air navigation aid based on straightway feature
CN117765070A (en) Method for estimating traffic sign position and posture information in vision/inertial odometer
CN116972844A (en) Mobile robot indoor positioning system and method based on ArUco array
CN112179336A (en) Automatic luggage transportation method based on binocular vision and inertial navigation combined positioning
CN116338719A (en) Laser radar-inertia-vehicle fusion positioning method based on B spline function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant