CN110542416B - Automatic positioning system and method for underground garage - Google Patents


Info

Publication number
CN110542416B
CN110542416B (application CN201810525414.4A)
Authority
CN
China
Prior art keywords
vehicle
coordinate system
mark number
preset
looking
Prior art date
Legal status
Active
Application number
CN201810525414.4A
Other languages
Chinese (zh)
Other versions
CN110542416A (en)
Inventor
刘玉
叶卉
刘元伟
赵奇
鲍凤卿
卢远志
梁伟铭
刘奋
Current Assignee
SAIC Motor Corp Ltd
Original Assignee
SAIC Motor Corp Ltd
Priority date
Filing date
Publication date
Application filed by SAIC Motor Corp Ltd filed Critical SAIC Motor Corp Ltd
Priority to CN201810525414.4A priority Critical patent/CN110542416B/en
Publication of CN110542416A publication Critical patent/CN110542416A/en
Application granted granted Critical
Publication of CN110542416B publication Critical patent/CN110542416B/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/60 Other road transportation technologies with climate change mitigation effect
    • Y02T10/70 Energy storage systems for electromobility, e.g. batteries

Abstract

The invention provides an automatic positioning system and method for an underground garage. The system consists of a visual positioning sensor and a vehicle controller, where the visual positioning sensor includes four high-definition looking-around cameras whose combined scanning range covers 360°. Using the four high-definition looking-around images sent by the cameras, together with the preset library position marks and special marks painted in the garage, the vehicle controller determines the target preset mark number near the vehicle, and from that mark number determines the position coordinates of the vehicle in the global coordinate system. Stable and reliable positioning can therefore be achieved with low-cost high-definition looking-around cameras, improving economy and practicality.

Description

Automatic positioning system and method for underground garage
Technical Field
The invention relates to the technical field of automatic driving, in particular to an automatic positioning system and method for an underground garage.
Background
With the continuous development of automatic driving technology, ever higher degrees of driving automation are required of vehicles. To realize automatic driving in an underground garage, the problem of automatically positioning the vehicle in the underground-garage environment must first be solved.
The positioning means currently used is mainly pure inertial navigation, in which the current position of the moving body is measured continuously: the position of the next point is calculated from the position of a known point using the continuously measured heading angle and speed of the moving body. However, this requires a high-precision inertial navigation device to be mounted on the vehicle, and the time the vehicle can travel underground is limited, because the positioning error grows ever larger if the vehicle position is not corrected for a long time.
Disclosure of Invention
In order to solve the problems, the invention provides an automatic positioning system and method for an underground garage, and the technical scheme is as follows:
An underground garage automatic positioning system, comprising: a visual positioning sensor and a vehicle controller connected with the visual positioning sensor, wherein the visual positioning sensor comprises four high-definition looking-around cameras whose scanning ranges together cover 360 degrees;
the four-way high-definition looking-around camera is used for collecting four-way high-definition looking-around images and sending the four-way high-definition looking-around images to the vehicle controller;
the vehicle controller is configured to determine an interested image region ROI where a preset mark number is located from the four-way high-definition looking-around image, where the preset mark number includes a preset base mark number and/or a preset special mark number, the preset base mark number is set in a middle region of a base bit line near one side of a lane line, and the preset special mark number is set in a middle region in an independent lane; identifying a target preset mark number in the determined ROI, and determining a first position coordinate of the vehicle under a vehicle body coordinate system according to the target preset mark number; the first position coordinates are transformed into second position coordinates of the vehicle in a global coordinate system.
Preferably, the visual positioning sensor further comprises: inertial measurement unit IMU sensors and vehicle sensors;
the IMU sensor is used for measuring the angular speed and the acceleration of the vehicle under the vehicle body coordinate system;
the vehicle sensor is used for measuring the wheel speed and steering wheel rotation angle of the vehicle under the vehicle body coordinate system;
the vehicle controller is used for calculating IMU position and IMU course angle information in the global coordinate system based on the angular velocity and the acceleration; calculating vehicle position and vehicle course angle information in the global coordinate system based on the wheel speed and the steering wheel angle information; and calculating a looking-around positioning error correction according to the IMU position, the IMU course angle information, the vehicle position and the vehicle course angle information, and compensating the second position coordinate by using the looking-around positioning error correction.
Preferably, the vehicle controller is configured to determine an image region of interest ROI in which a preset mark number is located from the four-path high-definition looking-around image, and is specifically configured to:
threshold segmentation is carried out on the four paths of high-definition looking-around images; performing straight line segment detection on the four-path high-definition looking-around image subjected to threshold segmentation; and determining the ROI where the preset mark number is located according to the detected straight line segment information and the ROI identification rule of the image region of interest corresponding to the preset mark number.
Preferably, the vehicle controller for identifying the determined target preset mark number in the ROI is specifically configured to:
and identifying the determined target preset mark number in the ROI by using a mark number identification model, wherein the mark number identification model is obtained by performing deep learning training by using the ROI with the mark number calibrated in advance.
Preferably, the vehicle controller is configured to determine a first position coordinate of the vehicle in a vehicle body coordinate system according to the target preset mark number, and is specifically configured to:
obtaining a target vertex coordinate corresponding to the target preset mark number from a pre-constructed two-dimensional map, wherein the two-dimensional map is recorded with a library vertex coordinate corresponding to the preset library mark number and a special mark vertex coordinate corresponding to the preset special mark number under a vehicle body coordinate system; the target vertex coordinates are determined as first position coordinates of the vehicle in a vehicle coordinate system.
Preferably, the vehicle controller for transforming the first position coordinate into a second position coordinate of the vehicle in a global coordinate system is specifically configured to:
determining the coordinates of the origin of the vehicle body coordinate system in the global coordinate system and the included angle between the transverse axes of the vehicle body coordinate system and the global coordinate system; calculating the position coordinate difference between the vehicle body coordinate system and the global coordinate system according to the origin coordinates and the transverse-axis included angle; and compensating the first position coordinate by using the position coordinate difference to obtain the second position coordinate of the vehicle in the global coordinate system.
An automatic positioning method for an underground garage is applied to the automatic positioning system for the underground garage, which is characterized in that the system comprises a visual positioning sensor and a vehicle controller connected with the visual positioning sensor, wherein the visual positioning sensor comprises a four-way high-definition all-round camera with a scanning range covering 360 degrees; the method is applied to the vehicle controller, and includes:
receiving four paths of high-definition looking-around images sent by the four paths of high-definition looking-around cameras;
determining an interested image region ROI where a preset mark number is located from the four-path high-definition looking-around image, wherein the preset mark number comprises a preset base position mark number and/or a preset special mark number, the preset base position mark number is arranged in a middle region of a base bit line close to one side of a lane line, and the preset special mark number is arranged in a middle region in an independent lane;
identifying a target preset mark number in the determined ROI, and determining a first position coordinate of the vehicle under a vehicle body coordinate system according to the target preset mark number;
the first position coordinates are transformed into second position coordinates of the vehicle in a global coordinate system.
Preferably, determining the region of interest ROI where the preset mark number is located from the four-path high-definition looking-around image includes:
threshold segmentation is carried out on the four paths of high-definition looking-around images;
performing straight line segment detection on the four-path high-definition looking-around image subjected to threshold segmentation;
and determining the ROI where the preset mark number is located according to the detected straight line segment information and the ROI identification rule of the image region of interest corresponding to the preset mark number.
Preferably, the determining the first position coordinate of the vehicle in the vehicle body coordinate system according to the target preset mark number includes:
obtaining a target vertex coordinate corresponding to the target preset mark number from a pre-constructed two-dimensional map, wherein the two-dimensional map is recorded with a library vertex coordinate corresponding to the preset library mark number and a special mark vertex coordinate corresponding to the preset special mark number under a vehicle body coordinate system;
the target vertex coordinates are determined as first position coordinates of the vehicle in a vehicle coordinate system.
Preferably, the transforming the first position coordinate to a second position coordinate of the vehicle in a global coordinate system includes:
determining the coordinates of the origin of the vehicle body coordinate system in the global coordinate system and the included angle between the transverse axes of the vehicle body coordinate system and the global coordinate system;
calculating the position coordinate difference of the vehicle body coordinate system and the global coordinate system according to the origin coordinate and the transverse axis included angle;
and compensating the first position coordinate by using the position coordinate difference to obtain a second position coordinate of the vehicle under the global coordinate system.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides an automatic positioning system and method for an underground garage, wherein the system consists of a visual positioning sensor and a vehicle controller, and the visual positioning sensor comprises four paths of high-definition looking-around cameras with scanning ranges covering 360 degrees. The vehicle controller utilizes four-way high-definition looking-around images sent by the four-way high-definition looking-around camera, and combines the preset library position marks and the special marks to determine target preset mark numbers near the vehicle, so that the position coordinates of the vehicle under the global coordinate system are determined by utilizing the target preset mark numbers. Based on the above, the stable and reliable positioning can be realized by using the low-cost high-definition looking-around camera, and the economical efficiency and the practicability are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an automatic positioning system for an underground garage according to an embodiment of the present invention;
Fig. 2 is an example of a special mark region and library position mark numbers;
Fig. 3 is an example of the plane outline and dimensions of a special mark number;
fig. 4 is a flowchart of a method for automatically positioning an underground garage according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention provides an automatic positioning system of an underground garage, the structural schematic diagram of the system is shown in figure 1, the system comprises a visual positioning sensor 1 and a vehicle controller 2 connected with the visual positioning sensor 1, and the visual positioning sensor 1 comprises a four-way high-definition looking-around camera with a scanning range covering 360 degrees;
the four-way high-definition looking-around camera is used for collecting four-way high-definition looking-around images and sending the four-way high-definition looking-around images to the vehicle controller 2;
the vehicle controller 2 is configured to determine an interested image region ROI where a preset mark number is located from the four-way high-definition looking-around image, where the preset mark number includes a preset base mark number and/or a preset special mark number, the preset base mark number is set in a middle region of a base bit line near one side of a lane line, and the preset special mark number is set in a middle region in an independent lane; identifying a target preset mark number in the determined ROI, and determining a first position coordinate of the vehicle under a vehicle body coordinate system according to the target preset mark number; the first position coordinates are transformed to second position coordinates of the vehicle in a global coordinate system.
In this embodiment, the positions of the four high-definition looking-around cameras must be laid out so that they fully cover a 360° range; the layout requirements for the front and rear cameras on the whole vehicle are shown in Table 1, and the layout requirements for the left and right cameras on the whole vehicle are shown in Table 2. In addition, each high-definition looking-around camera must have a resolution of at least 720P (≥ 1280 × 720), a horizontal field of view H ≥ 180° and a vertical field of view V ≥ 130°.
TABLE 1
TABLE 2
In addition, the preset mark numbers are divided into two types, namely a preset base mark number and a preset special mark number, wherein the preset base mark number is preset in the middle area of a base bit line close to one side of a lane line, and the preset special mark number is preset in the middle area of a passable road area, namely an independent lane.
Assume that all the garage positions in the underground garage are divided by area into a number of regions A, B, C, …, Y, Z. A special mark must be painted at the middle position of each independent lane; the special mark region where such a mark is located and the library position mark numbers are shown in Fig. 2. The category of a special mark matches the region in which it is located, i.e. A, B, C, …, Y, Z. Taking region Z as an example, the plane outline and dimensions of the special mark number 001 are shown in Fig. 3 (the figure also shows the ordering of the mark's 4 vertices), and so on; special mark numbers up to Z999 are supported.
Assume there are N special marks in total along a closed-loop road, and denote the center point of the i-th special mark as (x_i, y_i) (i = 1, 2, 3, …, N). The layout requirement is stated in terms of the Euclidean distance dis between the center points of two adjacent special marks, calculated according to formula (1):
dis = √((x_{i+1} − x_i)² + (y_{i+1} − y_i)²) (1)
The Euclidean distance dis must satisfy the following requirements: dis ≤ 5 m away from intersection areas, and dis ≤ 3 m near intersection areas;
in addition, special marking numbers must be guaranteed at the entrances to underground garages.
In this embodiment, determining the image region of interest (ROI) where a preset mark number is located means determining the middle region containing a target preset library position mark number and/or the middle region containing a target preset special mark number. The mark-number recognition model obtained by deep-learning training is then applied to the determined middle region to determine the category and value of the target preset mark number, and the first position coordinate of the vehicle in the vehicle body coordinate system is determined with the help of a pre-constructed two-dimensional map. Finally, the first position coordinate is converted into the second position coordinate in the global coordinate system by using the position coordinate difference between the vehicle body coordinate system and the global coordinate system.
The construction process of the two-dimensional map is described as follows:
a global coordinate system W is established at any entrance of the underground garage. Accurately measuring two vertexes of a library bit line close to one side of a lane line and 4 vertexes of a special mark area by high-precision equipment, wherein the measurement precision requirement is less than or equal to 5cm, obtaining two-dimensional coordinates of all preset library bit mark numbers and vertexes corresponding to the preset special mark numbers under a global coordinate system, and constructing a two-dimensional chart according to the information, wherein the specific structure of the chart is shown in a table 3:
TABLE 3
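Since Table 3 itself is not reproduced above, the following sketch merely illustrates one plausible structure for the two-dimensional map; the field names and coordinate values are assumptions, not taken from the patent:

    # Assumed structure: mark number -> mark type and measured vertex coordinates (metres)
    two_dimensional_map = {
        "Z001": {"type": "library_position",   # two vertices of the library bit line on the lane side
                 "vertices": [(12.35, 4.02), (14.85, 4.05)]},
        "Z002": {"type": "special_mark",       # four vertices of the special mark region, in order 1-4
                 "vertices": [(20.10, 6.00), (20.60, 6.00), (20.10, 6.50), (20.60, 6.50)]},
    }

    def lookup_vertices(mark_number):
        # Return the vertex coordinates recorded for a recognized target mark number, or None
        entry = two_dimensional_map.get(mark_number)
        return None if entry is None else entry["vertices"]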
In some other embodiments, the process of determining the region of interest ROI where the preset mark number is located from the four-way high definition looking-around image by the vehicle controller includes the following steps:
threshold segmentation is carried out on the four paths of high-definition looking-around images; performing straight line segment detection on the four paths of high-definition looking-around images subjected to threshold segmentation; and determining the ROI where the preset mark number is located according to the detected straight line segment information and the ROI identification rule of the image region of interest corresponding to the preset mark number.
In this embodiment, an adaptive threshold segmentation method may be used to perform threshold segmentation on the four-path high-definition looking-around image, and the adaptive threshold segmentation method is briefly described below:
the main idea of the adaptive thresholding method is to divide the image into two parts, namely a background and a target, according to the gray scale characteristics of the image. The larger the inter-class variance between the background and the target is, the larger the difference between the background and the target area forming the image is, the intra-class variance between the background and the foreground corresponding to the different thresholds is calculated by traversing the different thresholds, and when the intra-class variance obtains the maximum value, the corresponding threshold is the required threshold.
The OTSU algorithm, also called the maximum between-class variance method, is an algorithm for selecting the threshold in image segmentation. It divides the image into a background part and a foreground part according to the gray-scale characteristics of the image. Since variance is a measure of how the gray levels are distributed, a larger between-class variance between background and foreground means a larger difference between the two parts that make up the image; misclassifying foreground as background, or background as foreground, makes this difference smaller. Therefore, the segmentation with the largest between-class variance has the smallest probability of misclassification.
In order to improve the threshold segmentation effect, distortion correction can be performed on the image before threshold segmentation is performed on the four-path high-definition looking-around image.
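As an illustration of the thresholding step only, a minimal sketch using OpenCV's built-in Otsu implementation is given below; the image file name is an assumption, and distortion correction is omitted for brevity:

    import cv2

    img = cv2.imread("looking_around_front.png", cv2.IMREAD_GRAYSCALE)   # one looking-around image (assumed file)
    # THRESH_OTSU traverses candidate thresholds and selects the one that maximizes
    # the between-class variance of background and foreground
    otsu_threshold, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print("selected threshold:", otsu_threshold)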
After the threshold segmentation is completed, the four paths of high-definition looking-around images after the threshold segmentation can be further subjected to straight line segment detection so as to obtain straight line segment information contained in the images. The basic principle of straight line segment detection is briefly described as follows:
firstly, obtaining a region where a straight line segment possibly exists by carrying out edge segmentation on an original image, and then, detecting an object with straight line characteristics by a voting algorithm. And (2) as shown in a formula (2) is a parameterized linear equation adopted by standard linear Hough transformation, wherein theta is equal to or less than or equal to 0 and less than 180 DEG in the normal direction of a straight line, rho is the distance from an origin to the straight line, and a set conforming to the characteristic of the straight line is obtained as a final result by calculating the local maximum value of the accumulated result.
x*cosθ+y*sinθ=ρ (2)
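The line-segment detection step can be sketched, for illustration only, with OpenCV's probabilistic Hough transform; all parameter values below are assumptions rather than values taken from the patent:

    import cv2
    import numpy as np

    img = cv2.imread("looking_around_front.png", cv2.IMREAD_GRAYSCALE)   # assumed file name
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)                    # edge segmentation of the thresholded image
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=50,              # minimum number of votes in the accumulator
                               minLineLength=40, maxLineGap=10)
    lengths = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            lengths.append(np.hypot(x2 - x1, y2 - y1))    # segment lengths D1, D2, ..., DN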
Further screening is then performed, based on the detected straight line segment information and the ROI identification rules corresponding to the preset mark numbers, to obtain the ROI in which a preset mark number is located. The ROI identification rules corresponding to the preset special mark number and to the preset library position mark number are described below:
assuming that N straight line segments are detected by the straight line segments, the length of each straight line segment is calculated first, and assuming that D1, D2, D3, …, DN are recorded, some straight line segments are filtered using a minimum distance constraint according to the following formula (3):
D>D0 (3)
wherein D0 is the minimum distance requirement of the straight line segment, and is generally 20cm.
Assume that M valid segments remain, with slopes k1, k2, k3, …, km and lengths d1, d2, d3, …, dm. Four segments with slopes ki, kj, kp, kq satisfying formula (4) then form the minimum outer bounding square of a preset special mark number, i.e. its ROI:
ki≈kj & di≈dj & kp≈kq & dp≈dq & di≈dp & ki*kp≈0 & di≈0.5m (4)
where i = 1, 2, 3, …, m; j = 1, 2, 3, …, m; p = 1, 2, 3, …, m; q = 1, 2, 3, …, m, and "≈" requires the two quantities to be within 10% of each other. The endpoints of these four segments are the vertices of the preset special mark, numbered 1, 2, 3 and 4 in order from top to bottom and from left to right.
Similarly, the criteria for determining library bit line endpoints 1 and 2 are as follows:
Let the equations of straight line segments r and s be y = kr*x + br and y = ks*x + bs respectively.
Taking any point (x0, y0) on line r, the vertical distance drs between line segments r and s can be calculated according to formula (5):
drs = |ks*x0 − y0 + bs| / √(1 + ks²) (5)
Assume that (X1, Y1) and (X2, Y2) are the endpoint coordinates, closest to the origin of the vehicle body coordinate system, of the two straight line segments; then straight line segments r and s satisfying formula (6) are taken as the two library bit lines:
kr = ks & drs ≥ d0 & X1 > 0 & Y1 > 0 & X2 > 0 & Y2 > 0 (6)
where d0 is a distance threshold, typically the minimum width of a perpendicular library position, here taken as 1.8 m.
Thus (X1, Y1) and (X2, Y2), ordered so that their Euclidean distances from the origin satisfy d(X1, Y1) < d(X2, Y2), correspond to library bit line endpoints 1 and 2 in Fig. 2, respectively.
Considering that the library position number lies in the middle region of the line connecting library bit line endpoints 1 and 2, the selection criterion for the library position number region is as follows:
first, the midpoint coordinates (x, y) of the library bitline endpoints 1 and 2 are calculated according to the following equation (7):
further, the upper left corner coordinates (rect. X, rect. Y) of the ROI where the preset bin marker number is located are determined according to the following formula (8):
Rect.x=x- Rect.width/2 Rect.y=y+drs/2 (8)
where rect.width represents the ROI width and rect.height represents the ROI height.
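A short sketch of the library-position-number ROI selection of formulas (7) and (8) follows; the ROI width and height are assumed values, since the text does not specify them:

    def library_number_roi(p1, p2, drs, roi_width=0.6, roi_height=0.6):
        # p1, p2: library bit line endpoints 1 and 2; drs: distance between the two library bit lines, formula (5)
        (x1, y1), (x2, y2) = p1, p2
        x = (x1 + x2) / 2.0                  # midpoint of endpoints 1 and 2, formula (7)
        y = (y1 + y2) / 2.0
        rect_x = x - roi_width / 2.0         # upper-left corner of the ROI, formula (8)
        rect_y = y + drs / 2.0
        return rect_x, rect_y, roi_width, roi_height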
In some other embodiments, the process of the vehicle controller identifying the target preset mark number in the determined ROI includes the steps of:
and identifying the target preset mark number in the determined ROI by using a mark number identification model, wherein the mark number identification model is obtained by performing deep learning training by using the ROI with the mark number calibrated in advance.
LeNet-5 is a classical convolutional neural network structure, designed mainly for handwriting recognition. Not counting the input, LeNet-5 has a total of 7 layers, including two convolution layers, two pooling layers, two fully-connected layers and one Gaussian connection layer; each layer contains trainable parameters (connection weights). Before the model is used, the LeNet network can be trained with a publicly available set of handwritten letters and digits (e.g., the MNIST database). The training process consists roughly of four steps: forward propagation, loss computation, backward propagation and weight update. After training, the network is tested with a test set; if the recognition rate meets the requirement, the network can be used for number recognition. Specifically, after certain preprocessing, the image acquired by the looking-around camera conforms to the input specification of the LeNet network and is fed into the network to obtain a recognition result. Training with a handwritten alphanumeric set improves the robustness of the number recognition to a certain extent.
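For illustration, a minimal LeNet-5-style network is sketched below; PyTorch, the 32 × 32 input size and the class count are assumptions (the patent names no framework), and the original Gaussian connection layer is replaced here by an ordinary linear output layer:

    import torch
    import torch.nn as nn

    class LeNet5(nn.Module):
        def __init__(self, num_classes=36):              # assumed: digits 0-9 plus letters A-Z
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(),   # C1 convolution
                nn.AvgPool2d(2),                             # S2 pooling
                nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(),  # C3 convolution
                nn.AvgPool2d(2),                             # S4 pooling
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 5 * 5, 120), nn.Tanh(),       # C5 / fully connected
                nn.Linear(120, 84), nn.Tanh(),               # F6 / fully connected
                nn.Linear(84, num_classes),                  # output layer
            )

        def forward(self, x):                                # x: (N, 1, 32, 32) grayscale ROI
            return self.classifier(self.features(x))

    model = LeNet5()
    logits = model(torch.randn(1, 1, 32, 32))                # dummy ROI input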
In some other embodiments, the vehicle controller determines a first position coordinate of the vehicle in a body coordinate system according to the target preset mark number, including the steps of:
obtaining a target vertex coordinate corresponding to a target preset mark number from a pre-constructed two-dimensional chart, wherein the two-dimensional chart is recorded with a library vertex coordinate corresponding to a preset library mark number under a vehicle body coordinate system and a special mark vertex coordinate corresponding to a preset special mark number; the target vertex coordinates are determined as first position coordinates of the vehicle in the vehicle coordinate system.
In some other embodiments, a vehicle controller transforms a first position coordinate to a second position coordinate of a vehicle in a global coordinate system, comprising the steps of:
determining the coordinates of the origin of the vehicle body coordinate system in the global coordinate system and the included angle between the transverse axes of the vehicle body coordinate system and the global coordinate system; calculating the position coordinate difference between the vehicle body coordinate system and the global coordinate system according to the origin coordinates and the transverse-axis included angle; and compensating the first position coordinate by using the position coordinate difference to obtain the second position coordinate of the vehicle in the global coordinate system.
Assume that 4 special mark vertices are extracted from the four high-definition looking-around images and denoted P1 (x1, y1), P2 (x2, y2), …, P4 (x4, y4) in the vehicle body coordinate system, and that 2 library position mark points are likewise denoted P5 (x5, y5) and P6 (x6, y6); through coordinate transformation, the coordinates of these 6 points in the global coordinate system are (X1, Y1), (X2, Y2), …, (X6, Y6). The determination of the position coordinate difference is described as follows:
The unknowns dx, dy and θ can be determined from formula (9), where (dx, dy) are the coordinates of the origin of the vehicle body coordinate system in the global coordinate system, and θ is the included angle between the x axis of the vehicle body (b) system and the x axis of the global (w) system.
The correspondence of the same point in the two coordinate systems is given by formula (9):
X = dx + x*cosθ − y*sinθ, Y = dy + x*sinθ + y*cosθ (9)
Initial values dx₀, dy₀ and θ₀ can be calculated from two pairs of corresponding points according to formulas (10) to (13); in particular,
θ₀ = arctg(ΔY/ΔX) − arctg(Δy/Δx) (13)
Based on the nonlinear least-squares principle, a linear expansion of equation (9) gives equation (14), and the solution equations are shown in (15) and (16):
V = B*X − L (15)
X = (BᵀB)⁻¹BᵀL (16)
where (Vx, Vy) is the difference of the position coordinates before and after the coordinate transformation.
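The body-to-global transformation of formula (9) can also be estimated, as a sketch only, with ordinary linear least squares, since (9) is linear in cosθ, sinθ, dx and dy; this differs slightly from the iterative linearization of formulas (14) to (16) but solves the same fitting problem under the stated model:

    import numpy as np

    def estimate_body_to_global(body_pts, global_pts):
        # body_pts, global_pts: (N, 2) arrays of corresponding points (N >= 2), body and global coordinates
        # Model (formula (9)):  X = dx + x*cos(t) - y*sin(t),  Y = dy + x*sin(t) + y*cos(t)
        body_pts = np.asarray(body_pts, dtype=float)
        global_pts = np.asarray(global_pts, dtype=float)
        n = len(body_pts)
        B = np.zeros((2 * n, 4))                     # unknowns: [cos t, sin t, dx, dy]
        L = global_pts.reshape(-1)                   # observations [X1, Y1, X2, Y2, ...]
        B[0::2, 0] = body_pts[:, 0]                  # rows for the X equations
        B[0::2, 1] = -body_pts[:, 1]
        B[0::2, 2] = 1.0
        B[1::2, 0] = body_pts[:, 1]                  # rows for the Y equations
        B[1::2, 1] = body_pts[:, 0]
        B[1::2, 3] = 1.0
        X, *_ = np.linalg.lstsq(B, L, rcond=None)    # X = (B^T B)^-1 B^T L, cf. formula (16)
        c, s, dx, dy = X
        theta = np.arctan2(s, c)                     # cos/sin are not constrained to unit norm in this sketch
        return dx, dy, theta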
With the automatic positioning system for an underground garage provided by this embodiment of the invention, the vehicle controller uses the four high-definition looking-around images sent by the four high-definition looking-around cameras, together with the preset library position marks and special marks, to determine the target preset mark number near the vehicle, and then determines the position coordinates of the vehicle in the global coordinate system from that mark number. Stable and reliable positioning can therefore be achieved with low-cost high-definition looking-around cameras, improving economy and practicality.
On the basis of the automatic positioning system for an underground garage provided in the above embodiment, the embodiment of the present invention provides another automatic positioning system for an underground garage, where the visual positioning sensor 1 further includes: inertial measurement unit IMU sensors and vehicle sensors;
the IMU sensor is used for measuring the angular speed and the acceleration of the vehicle under a vehicle body coordinate system;
the vehicle sensor is used for measuring the wheel speed and steering wheel rotation angle of the vehicle under a vehicle body coordinate system;
the vehicle controller is used for calculating IMU position and IMU course angle information in the global coordinate system based on the angular velocity and the acceleration; calculating vehicle position and vehicle course angle information in the global coordinate system based on the wheel speed and the steering wheel angle information; and calculating a looking-around positioning error correction according to the IMU position, the IMU course angle information, the vehicle position and the vehicle course angle information, and compensating the second position coordinate by using the looking-around positioning error correction.
After the IMU sensor directly measures the angular velocity and acceleration of the vehicle, the vehicle controller obtains the IMU position and IMU course angle information in the vehicle body coordinate system by integration, and then obtains the IMU position and course angle information in the global coordinate system by applying the fixed position-and-attitude transformation between the vehicle body coordinate system and the global coordinate system (which can be calibrated when the global coordinate system is established).
Likewise, the vehicle sensor directly measures the wheel speed and steering wheel angle of the vehicle; the vehicle controller obtains the vehicle position and vehicle course angle information in the vehicle body coordinate system by integration, and obtains the vehicle position and course angle information in the global coordinate system by applying the same fixed transformation between the two coordinate systems.
In the global coordinate system, the vehicle controller takes the vehicle position and course angle information as observations and the vehicle positioning error and course angle error as the estimated quantities, constructs a state equation, obtains an error correction for the looking-around positioning result by EKF (extended Kalman filter) estimation, and compensates the looking-around positioning result with this correction to obtain a comparatively accurate visual positioning result.
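For illustration only, a generic Kalman measurement-update step is sketched below for fusing a dead-reckoned prediction with the looking-around positioning result; the 3-state model (x, y, course angle) and the noise matrices are assumptions, and the exact roles of prediction and observation in the patent's filter may differ:

    import numpy as np

    def filter_update(x_pred, P_pred, z_vision, R):
        # x_pred: predicted state [x, y, course] from IMU / wheel-odometry dead reckoning
        # P_pred: 3x3 state covariance; z_vision: looking-around position/course measurement; R: measurement noise
        H = np.eye(3)                                # the vision result directly observes the state
        innovation = z_vision - H @ x_pred
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_corr = x_pred + K @ innovation             # compensated positioning result
        P_corr = (np.eye(3) - K @ H) @ P_pred
        return x_corr, P_corr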
This automatic positioning system for the underground garage ensures that an accurate positioning result is still obtained even when the marks are briefly occluded by traffic participants such as pedestrians or vehicles, assisting the control system in decision-making, planning and control.
Based on the automatic positioning system of the underground garage provided by the embodiment, the embodiment of the invention provides an automatic positioning method of the underground garage, which is applied to a vehicle controller, and a flow chart of the method is shown in fig. 4, and comprises the following steps:
s10, receiving four-way high-definition looking-around images sent by four-way high-definition looking-around cameras;
s20, determining an interested image region ROI where a preset mark number is located from four paths of high-definition looking-around images, wherein the preset mark number comprises a preset base position mark number and/or a preset special mark number, the preset base position mark number is arranged in a middle region of a base bit line close to one side of a lane line, and the preset special mark number is arranged in a middle region in an independent lane;
s30, identifying a target preset mark number in the determined ROI, and determining a first position coordinate of the vehicle under a vehicle body coordinate system according to the target preset mark number;
and S40, converting the first position coordinate into a second position coordinate of the vehicle under the global coordinate system.
In some other embodiments, determining the region of interest ROI where the preset mark number is located from the four-way high-definition looking-around image includes the following steps:
threshold segmentation is carried out on the four paths of high-definition looking-around images; performing straight line segment detection on the four paths of high-definition looking-around images subjected to threshold segmentation; and determining the ROI where the preset mark number is located according to the detected straight line segment information and the ROI identification rule of the image region of interest corresponding to the preset mark number.
In some other embodiments, determining a first position coordinate of the vehicle in the body coordinate system according to the target preset mark number includes the following steps:
obtaining a target vertex coordinate corresponding to a target preset mark number from a pre-constructed two-dimensional chart, wherein the two-dimensional chart is recorded with a library vertex coordinate corresponding to a preset library mark number under a vehicle body coordinate system and a special mark vertex coordinate corresponding to a preset special mark number; the target vertex coordinates are determined as first position coordinates of the vehicle in the vehicle coordinate system.
In some other embodiments, transforming the first position coordinates to second position coordinates of the vehicle in a global coordinate system includes the steps of:
determining the coordinates of the origin of the vehicle body coordinate system in the global coordinate system and the included angle between the transverse axes of the vehicle body coordinate system and the global coordinate system; calculating the position coordinate difference between the vehicle body coordinate system and the global coordinate system according to the origin coordinates and the transverse-axis included angle; and compensating the first position coordinate by using the position coordinate difference to obtain the second position coordinate of the vehicle in the global coordinate system.
With the automatic positioning method for an underground garage provided by this embodiment of the invention, the vehicle controller uses the four high-definition looking-around images sent by the four high-definition looking-around cameras, together with the preset library position marks and special marks, to determine the target preset mark number near the vehicle, and then determines the position coordinates of the vehicle in the global coordinate system from that mark number. Stable and reliable positioning can therefore be achieved with low-cost high-definition looking-around cameras, improving economy and practicality.
The above describes in detail an automatic positioning system and method for an underground garage provided by the present invention, and specific examples are applied to illustrate the principles and embodiments of the present invention, and the above examples are only used to help understand the method and core ideas of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described as different from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
It is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An automatic positioning system for an underground garage, comprising: a visual positioning sensor and a vehicle controller connected with the visual positioning sensor, wherein the visual positioning sensor comprises four high-definition looking-around cameras whose scanning ranges together cover 360 degrees;
the four-way high-definition looking-around camera is used for collecting four-way high-definition looking-around images and sending the four-way high-definition looking-around images to the vehicle controller;
the vehicle controller is configured to determine an interested image region ROI where a preset mark number is located from the four-way high-definition looking-around image, where the preset mark number includes a preset base mark number and/or a preset special mark number, the preset base mark number is set in a middle region of a base bit line near one side of a lane line, and the preset special mark number is set in a middle region in an independent lane; identifying a target preset mark number in the determined ROI, and determining a first position coordinate of the vehicle under a vehicle body coordinate system according to the target preset mark number; transforming the first position coordinates into second position coordinates of the vehicle in a global coordinate system;
the vehicle controller for determining a first position coordinate of the vehicle in a vehicle body coordinate system according to the target preset mark number is specifically configured to:
obtaining a target vertex coordinate corresponding to the target preset mark number from a pre-constructed two-dimensional map, wherein the two-dimensional map is recorded with a library vertex coordinate corresponding to the preset library mark number and a special mark vertex coordinate corresponding to the preset special mark number under a vehicle body coordinate system; determining the target vertex coordinates as first position coordinates of the vehicle in a vehicle coordinate system;
the vehicle controller for transforming the first position coordinates into second position coordinates of the vehicle in a global coordinate system, in particular for:
determining the coordinates of the origin of the vehicle body coordinate system in the global coordinate system and the included angle between the transverse axes of the vehicle body coordinate system and the global coordinate system; calculating the position coordinate difference between the vehicle body coordinate system and the global coordinate system according to the origin coordinates and the transverse-axis included angle; and compensating the first position coordinate by using the position coordinate difference to obtain the second position coordinate of the vehicle in the global coordinate system.
2. The system of claim 1, wherein the visual positioning sensor further comprises: inertial measurement unit IMU sensors and vehicle sensors;
the IMU sensor is used for measuring the angular speed and the acceleration of the vehicle under the vehicle body coordinate system;
the vehicle sensor is used for measuring the wheel speed and steering wheel rotation angle of the vehicle under the vehicle body coordinate system;
the vehicle controller is used for calculating IMU position and IMU course angle information in the global coordinate system based on the angular velocity and the acceleration; calculating vehicle position and vehicle course angle information in the global coordinate system based on the wheel speed and the steering wheel angle information; and calculating a looking-around positioning error correction according to the IMU position, the IMU course angle information, the vehicle position and the vehicle course angle information, and compensating the second position coordinate by using the looking-around positioning error correction.
3. The system according to claim 1, wherein the vehicle controller is configured to determine an image region of interest ROI in which a preset marker number is located from the four-way high definition panoramic image, specifically configured to:
threshold segmentation is carried out on the four paths of high-definition looking-around images; performing straight line segment detection on the four-path high-definition looking-around image subjected to threshold segmentation; and determining the ROI where the preset mark number is located according to the detected straight line segment information and the ROI identification rule of the image region of interest corresponding to the preset mark number.
4. The system according to claim 1, characterized in that said vehicle controller for identifying a target preset marker number in said determined ROI, in particular for:
and identifying the determined target preset mark number in the ROI by using a mark number identification model, wherein the mark number identification model is obtained by performing deep learning training by using the ROI with the mark number calibrated in advance.
5. An automatic positioning method for an underground garage is characterized by being applied to the automatic positioning system for the underground garage, which is disclosed in any one of claims 1 to 4, wherein the system comprises a visual positioning sensor and a vehicle controller connected with the visual positioning sensor, and the visual positioning sensor comprises a four-way high-definition all-round camera with a scanning range covering 360 degrees; the method is applied to the vehicle controller, and includes:
receiving four paths of high-definition looking-around images sent by the four paths of high-definition looking-around cameras;
determining an interested image region ROI where a preset mark number is located from the four-path high-definition looking-around image, wherein the preset mark number comprises a preset base position mark number and/or a preset special mark number, the preset base position mark number is arranged in a middle region of a base bit line close to one side of a lane line, and the preset special mark number is arranged in a middle region in an independent lane;
identifying a target preset mark number in the determined ROI, and determining a first position coordinate of the vehicle under a vehicle body coordinate system according to the target preset mark number;
the first position coordinates are transformed into second position coordinates of the vehicle in a global coordinate system.
6. The method of claim 5, wherein determining an image region of interest ROI in which a preset marker number is located from the four-way high definition panoramic image comprises:
threshold segmentation is carried out on the four paths of high-definition looking-around images;
performing straight line segment detection on the four-path high-definition looking-around image subjected to threshold segmentation;
and determining the ROI where the preset mark number is located according to the detected straight line segment information and the ROI identification rule of the image region of interest corresponding to the preset mark number.
7. The method of claim 5, wherein determining a first position coordinate of the vehicle in a body coordinate system according to the target preset mark number comprises:
obtaining a target vertex coordinate corresponding to the target preset mark number from a pre-constructed two-dimensional map, wherein the two-dimensional map is recorded with a library vertex coordinate corresponding to the preset library mark number and a special mark vertex coordinate corresponding to the preset special mark number under a vehicle body coordinate system;
the target vertex coordinates are determined as first position coordinates of the vehicle in a vehicle coordinate system.
8. The method of claim 5, wherein said transforming the first position coordinates to second position coordinates of the vehicle in a global coordinate system comprises:
determining the coordinates of the origin of the vehicle body coordinate system in the global coordinate system and the included angle between the transverse axes of the vehicle body coordinate system and the global coordinate system;
calculating the position coordinate difference of the vehicle body coordinate system and the global coordinate system according to the origin coordinate and the transverse axis included angle;
and compensating the first position coordinate by using the position coordinate difference to obtain a second position coordinate of the vehicle under the global coordinate system.
CN201810525414.4A 2018-05-28 2018-05-28 Automatic positioning system and method for underground garage Active CN110542416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810525414.4A CN110542416B (en) 2018-05-28 2018-05-28 Automatic positioning system and method for underground garage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810525414.4A CN110542416B (en) 2018-05-28 2018-05-28 Automatic positioning system and method for underground garage

Publications (2)

Publication Number Publication Date
CN110542416A CN110542416A (en) 2019-12-06
CN110542416B true CN110542416B (en) 2023-07-21

Family

ID=68700847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810525414.4A Active CN110542416B (en) 2018-05-28 2018-05-28 Automatic positioning system and method for underground garage

Country Status (1)

Country Link
CN (1) CN110542416B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113375656B (en) * 2020-03-09 2023-06-27 杭州海康威视数字技术股份有限公司 Positioning method and device
CN112509375B (en) * 2020-10-20 2022-03-08 东风汽车集团有限公司 Parking dynamic display method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2007340727A1 (en) * 2006-12-28 2008-07-10 Kabushiki Kaisha Toyota Jidoshokki Parking assistance device, component for parking assistance device, parking assistance method, parking assistance program, method and program for calculating vehicle travel parameter, device for calculating vehicle travel parameter, and component for device for calculating vehicle travel parameter
CN101201395B (en) * 2007-12-24 2011-05-04 北京航空航天大学 Underground garage positioning system based on RFID technique
CN107180215B (en) * 2017-05-31 2020-01-31 同济大学 Parking lot automatic mapping and high-precision positioning method based on library position and two-dimensional code
CN107246868B (en) * 2017-07-26 2021-11-02 上海舵敏智能科技有限公司 Collaborative navigation positioning system and navigation positioning method
CN107664500B (en) * 2017-09-19 2019-12-06 重庆交通大学 garage vehicle positioning and navigation method based on image feature recognition

Also Published As

Publication number Publication date
CN110542416A (en) 2019-12-06

Similar Documents

Publication Publication Date Title
CN111801711A (en) Image annotation
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN108363065A (en) Object detecting system
CN106997688B (en) Parking lot parking space detection method based on multi-sensor information fusion
US11288526B2 (en) Method of collecting road sign information using mobile mapping system
EP1780675B1 (en) Object detector
EP1783684A1 (en) Plane detector and detecting method
Wedel et al. B-spline modeling of road surfaces with an application to free-space estimation
EP1783683A1 (en) Mobile peripheral monitor
CN109791598A (en) The image processing method of land mark and land mark detection system for identification
CN104637073B (en) It is a kind of based on the banding underground structure detection method for shining upon shadow compensation
Poggenhans et al. Precise localization in high-definition road maps for urban regions
WO2018181974A1 (en) Determination device, determination method, and program
CN106774313A (en) A kind of outdoor automatic obstacle-avoiding AGV air navigation aids based on multisensor
Qu et al. Landmark based localization in urban environment
CN102037735A (en) Self calibration of extrinsic camera parameters for a vehicle camera
CN103198302A (en) Road detection method based on bimodal data fusion
CN110197173B (en) Road edge detection method based on binocular vision
TW200944830A (en) System and method for map matching with sensor detected objects
CN103487034A (en) Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target
CN106156723A (en) A kind of crossing fine positioning method of view-based access control model
CN108364466A (en) A kind of statistical method of traffic flow based on unmanned plane traffic video
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN103499337A (en) Vehicle-mounted monocular camera distance and height measuring device based on vertical target
CN110542416B (en) Automatic positioning system and method for underground garage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant