CN110645973A - Vehicle positioning method - Google Patents

Vehicle positioning method

Info

Publication number
CN110645973A
Authority
CN
China
Prior art keywords
vehicle
distance
road
simulated
radar
Prior art date
Legal status
Granted
Application number
CN201910906109.4A
Other languages
Chinese (zh)
Other versions
CN110645973B (en)
Inventor
张恒
秦屹
胡玉斌
刘皓伦
彭诚诚
Current Assignee
Whst Co Ltd
Original Assignee
Whst Co Ltd
Priority date
Filing date
Publication date
Application filed by Whst Co Ltd filed Critical Whst Co Ltd
Priority to CN201910906109.4A
Publication of CN110645973A
Application granted
Publication of CN110645973B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02 Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06 Systems determining position data of a target
    • G01S13/08 Systems for measuring distance only
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The invention provides a vehicle positioning method in the technical field of image processing, comprising the following steps: acquiring an environment image, containing a lane line and a guardrail, through a camera mounted on a vehicle; establishing a road model from the environment image; acquiring the distance from the vehicle to the guardrail through a radar mounted on the vehicle; determining the simulated position of the vehicle in the road model from the distance and the size information of the vehicle; and determining the target distance between the vehicle and the lane line from the simulated position of the vehicle in the road model. The method offers higher positioning accuracy, and the distance data it produces have greater reference value.

Description

Vehicle positioning method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a vehicle positioning method.
Background
With the rapid development of the automobile industry in recent years, automobile manufacturers have focused on research into driver-assistance systems. The most common vehicle positioning and navigation system is GPS (Global Positioning System). However, as urban traffic grows and urban roads become more complex, the accuracy of vehicle positioning becomes increasingly important. Existing vehicle positioning methods rely mostly on lane lines; the positioning data they collect fluctuates widely and has little reference value.
Disclosure of Invention
The invention aims to provide a vehicle positioning method that solves the problems of low positioning accuracy and poor data reference value.
In order to achieve this aim, the invention adopts the following technical scheme: a vehicle positioning method is provided, including:
acquiring an environment image through a camera arranged on a vehicle, wherein the environment image comprises a lane line and a guardrail;
establishing a road model according to the environment image;
acquiring the distance from the vehicle to a guardrail in an environment through a radar installed on the vehicle;
determining a simulated position of the vehicle in the road model according to the distance and the size information of the vehicle;
and determining the target distance between the vehicle and the lane line according to the simulated position of the vehicle in the road model.
As another embodiment of the present application, the building a road model according to the environment image includes:
acquiring a current road surface image and a plurality of road images covering the range of the road surface image in different target areas;
determining a mark value of the road surface image, eliminating pixel points close to the mark value on a plurality of road images, and determining a plurality of images to be processed containing the lane line and the guardrail;
and synthesizing the road model by a plurality of images to be processed.
As another embodiment of the present application, the determining the mark value of the road surface image includes:
summing the RGB values of all the pixel points in the pavement image to obtain a total RGB value;
dividing the total RGB value by the number of the pixel points to obtain an average RGB value;
and taking the RGB average value as the mark value.
As another embodiment of the present application, the removing pixel points close to the mark value from the plurality of road images includes:
setting a threshold range based on the mark value;
and when the RGB value of the pixel point in the road image is within the threshold range, the pixel point is removed.
As another embodiment of the present application, the obtaining a distance from a guardrail in an environment by a radar installed on the vehicle includes:
the method comprises the steps that a first distance and a second distance between a vehicle and one side of a guardrail are respectively obtained through a first radar and a second radar, and a third distance between the vehicle and the other side of the guardrail is obtained through a third radar.
As another embodiment of the present application, after the first and second distances from the vehicle to one side of the guardrail are respectively obtained by the first and second radars, and the third distance from the vehicle to the other side of the guardrail is obtained by the third radar, the method further includes:
picking up the simulated width of the simulated guardrail model on the road model;
determining an actual width of the guardrail from the first distance, the third distance, and actual distances of the first radar and the third radar;
and dividing the actual width by the simulated width to obtain the distance ratio required by increasing or decreasing the unit of the simulated width.
As another embodiment of the present application, the determining a simulated position of the vehicle in the road model according to the distance and the size information of the vehicle includes:
converting the first distance, the second distance, the third distance and the size information into a first analog quantity, a second analog quantity, a third analog quantity and a size analog quantity respectively through the distance ratio;
and combining the first analog quantity, the second analog quantity, the third analog quantity and the size analog quantity to simulate the simulated position on the road model.
As another embodiment of the present application, the determining a target distance from the vehicle to the lane line according to the simulated position of the vehicle in the road model includes:
picking up a simulation target quantity of the simulation position from the lane line on the road model;
and multiplying the simulated target quantity by the distance ratio to determine the target distance.
As another embodiment of the present application, the picking up the simulated width of the simulated guardrail model on the road model includes:
picking out a connecting line which is parallel to the simulated road surface and is perpendicular to the central line of the simulated road surface from the road model;
and extending the connecting line to the simulation guardrails on the road model at two ends, wherein the extending length of the connecting line is the simulation width.
As another embodiment of the present application, when the mark value exceeds a preset limit value, the road surface image from the previous time that did not exceed the limit value is selected.
Compared with the prior art, the vehicle positioning method collects an environment image, containing the lane line and the guardrail, through a camera mounted on the vehicle, and establishes a road model from that image. Because the lane line and the guardrail of the road the vehicle travels on serve as reference lines for driving, the established model provides an accurate reference for positioning the vehicle. The distance between the vehicle and the guardrail is obtained through a radar installed on the vehicle, and the simulated position of the vehicle in the road model is determined from this distance together with the size information of the vehicle. With the guardrails on both sides of the road as references, the distance between the vehicle and the guardrail can be measured easily by range-finding devices, which are small, inexpensive, and able to transmit the measured distance data in real time. Combining the distances with the size information of the vehicle, the positional relationship between the vehicle and the road, namely the simulated position, can be reproduced on the road model. The target distance between the vehicle and the lane line is then determined from the simulated position of the vehicle in the road model. Through the target distance, the position of the vehicle relative to the road can be known accurately, which provides reliable data support for fields such as unmanned driving. Compared with GPS positioning and similar methods, this vehicle positioning method provides more accurate position information, and the target distance it supplies has greater reference value.
Drawings
FIG. 1 is a flow chart of a vehicle locating method provided by an embodiment of the present invention;
fig. 2 is a process diagram for calculating a target distance according to an embodiment of the present invention.
In the figure: 1. first analog quantity; 2. second analog quantity; 3. third analog quantity; 4. size analog quantity; 5. simulated target quantity.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, a method for positioning a vehicle according to the present invention will now be described. A vehicle localization method, comprising:
s110: the method comprises the following steps of collecting an environment image through a camera arranged on a vehicle, wherein the environment image comprises lane lines and guardrails.
S120: and establishing a road model according to the environment image.
S130: and acquiring the distance from the vehicle to a guardrail in the environment through a radar installed on the vehicle.
S140: and determining the simulated position of the vehicle in the road model according to the distance and the size information of the vehicle.
S150: and determining the target distance between the vehicle and the lane line according to the simulated position of the vehicle in the road model.
Compared with the prior art, the vehicle positioning method collects an environment image, containing the lane line and the guardrail, through a camera mounted on the vehicle, and establishes a road model from that image. Because the lane line and the guardrail of the road the vehicle travels on serve as reference lines for driving, the established model provides an accurate reference for positioning the vehicle. The distance between the vehicle and the guardrail is obtained through a radar installed on the vehicle, and the simulated position of the vehicle in the road model is determined from this distance together with the size information of the vehicle. With the guardrails on both sides of the road as references, the distance between the vehicle and the guardrail can be measured easily by range-finding devices, which are small, inexpensive, and able to transmit the measured distance data in real time. Combining the distances with the size information of the vehicle, the positional relationship between the vehicle and the road, namely the simulated position, can be reproduced on the road model. The target distance between the vehicle and the lane line is then determined from the simulated position of the vehicle in the road model. Through the target distance, the position of the vehicle relative to the road can be known accurately, which provides reliable data support for fields such as unmanned driving. Compared with GPS positioning and similar methods, this vehicle positioning method provides more accurate position information, and the target distance it supplies has greater reference value.
The invention can provide an auxiliary method for positioning the vehicle, and can be used in areas with relatively fixed guardrail positions and fewer vehicles.
As a specific embodiment of the vehicle positioning method provided by the present invention, the building of a road model according to an environmental image includes:
the method comprises the steps of obtaining a current road surface image and a plurality of road images covering the range of the road surface image in different target areas.
Determining the sign values of the road surface images, eliminating pixel points close to the sign values from the road images, and determining a plurality of to-be-processed images containing lane lines and guardrails.
And synthesizing the road model by a plurality of images to be processed.
In the invention, a plurality of first camera devices are mounted at the front end and the rear end of the vehicle, each covering a different target area, so that image information in different directions is provided and the positioning accuracy is improved. The first camera devices photograph areas that include the road surface, the lane lines and the guardrails; when the vehicle travels forward, a first camera device captures the image ahead. Each first camera device is electrically connected to a memory and a controller: the memory stores the images acquired over a period of time, and the controller performs operations such as pixel elimination on the acquired images. Because a lane line is essentially a marking on a surface layer such as concrete, a second camera device is first installed at the middle of the vehicle head so that the lane lines and guardrails can be recognized accurately. The second camera device photographs the surface layer itself; its field of view is small and is mainly used to capture the color information of the road surface on which the vehicle travels, and the color mark value obtained from this surface layer provides accurate data support for the subsequent elimination of the corresponding pixel points. Since most pixel points in the image acquired by the second camera device reflect the color of the concrete or similar surface layer, a mark value of that color is determined by calculation; this mark value accurately represents the color of the photographed road surface layer. A fluctuation range can be preset around the mark value, and pixel points on the road images that fall within this range are eliminated; after the elimination is complete, images to be processed containing only the lane lines and guardrails are obtained, which provides reliable data support for establishing an accurate road model.
The road model is generated by synthesizing the plurality of images to be processed. First, a planar image captured by a first camera device is obtained and preprocessed, and an initial depth map of the preprocessed planar image is calculated. Initial farthest boundaries are assigned to the preprocessed planar image and fall into the following seven types: the upper boundary of the preprocessed planar image; the left boundary; the right boundary; the combination of the upper, left and right boundaries; the combination of the upper and left boundaries; the combination of the upper and right boundaries; and the combination of the left and right boundaries.
The initial farthest boundary is then corrected: a farthest point is obtained and, taking it as the starting point, initial depth values are calculated for all pixel points on the vertical lines of the upper, lower, left and right boundaries of the preprocessed planar image, yielding at least one initial depth boundary. The initial depth map of the planar image is calculated from this initial depth boundary: a search direction for the depth calculation is set, an initial depth value of the planar image is set, and the depth value of each point along the search direction is then computed. For each current point in the search direction, the absolute pixel-value differences between the current point and n surrounding points are calculated; the depth values of those n points are added to the corresponding absolute differences to give n new depth values, and the minimum of these n values and the current point's initial depth value is taken as the depth value of the current point. The depth values of all points in the search direction are summed and averaged to obtain an average depth value, and the depth value of each point in the search direction is kept within a predetermined interval around this average. Filtering post-processing of the initial depth map then yields the depth map of the planar image. According to this depth map, pixel shifting is applied to the planar image to obtain a virtual view; the planar image and the virtual view are synthesized stereoscopically to generate and output a stereo video. Finally, the models built from the multiple road images are integrated to produce a road model that truly reflects the relative positional relationships.
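By way of illustration only (this sketch is not part of the original disclosure), the depth-propagation step described above can be expressed in Python roughly as follows; the top-to-bottom search direction, the choice of three previous-row neighbors as the n surrounding points, and the clamping interval are all assumptions, since the description leaves them open.

    import numpy as np

    def propagate_depth(gray, initial_depth, spread=64.0):
        """Illustrative depth propagation over a preprocessed planar image.

        gray          -- 2-D array of pixel intensities (grayscale is an assumption)
        initial_depth -- 2-D array holding the initial depth values
        spread        -- assumed half-width of the predetermined interval around the mean
        """
        depth = initial_depth.astype(np.float64)
        rows, cols = gray.shape
        for r in range(1, rows):                    # assumed search direction: top to bottom
            for c in range(cols):
                candidates = [depth[r, c]]          # the point's own initial depth value
                for dc in (-1, 0, 1):               # n = 3 neighbors in the previous row
                    cc = c + dc
                    if 0 <= cc < cols:
                        diff = abs(float(gray[r, c]) - float(gray[r - 1, cc]))
                        candidates.append(depth[r - 1, cc] + diff)
                depth[r, c] = min(candidates)       # keep the smallest new depth value
        mean = depth.mean()                         # average depth value in the search direction
        return np.clip(depth, mean - spread, mean + spread)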
As a specific embodiment of the vehicle positioning method provided by the present invention, determining a mark value of a road surface image includes:
and summing the RGB values of all the pixel points in the road surface image to obtain an RGB total value.
And dividing the total RGB value by the number of the pixel points to obtain an average RGB value.
The RGB average value is taken as the mark value.
In the invention, in order to remove the information of the concrete road surface from the road image effectively, the removal is performed pixel point by pixel point. The RGB color scheme is an industry color standard in which colors are obtained by varying the three color channels red (R), green (G) and blue (B) and superimposing them on one another; the standard covers almost all colors perceivable by human vision and is one of the most widely used color systems. RGB is designed on the principle of colored light emission: informally, it behaves like three lamps, red, green and blue, whose lights mix when superimposed, the resulting brightness being the sum of the individual brightnesses. The more light is mixed, the brighter the result, which is why this is called additive mixing; where the three colors all overlap at the center, the result is white. Each of the red, green and blue channels is divided into 256 brightness levels, with 0 the weakest (the lamp off) and 255 the brightest. When the three channel values are equal, gray tones of different lightness result: all three at 0 gives the darkest black, and all three at 255 gives the brightest white. RGB colors are called additive colors because adding R, G and B together (i.e. reflecting all light back to the eye) produces white. Additive colors are used in illumination, televisions and computer displays; a display, for example, produces color by exciting red, green and blue phosphors. Most of the visible spectrum can be represented as mixtures of red, green and blue (RGB) light in different proportions and intensities, and overlapping pairs of these primaries produce cyan, magenta and yellow. An RGB value therefore expresses red, green and blue as values between 0 and 255. By comparing the RGB value of each pixel point it can be judged reliably whether the pixel exceeds a preset range, which guarantees the accuracy of the detection. After the RGB values of the pixel points are obtained, the red, green and blue values of every pixel point are summed and the average of each component is calculated; this yields pixel information that accurately represents the concrete or similar surface layer, so that the related pixel points can be removed more precisely.
To identify the color information of the road surface effectively and facilitate subsequent processing, the RGB values of all pixel points are summed to obtain total values for red, green and blue; each total is divided by the number of pixel points in the image obtained by the second camera device to give the average RGB value. This average is the mark value: it represents the red, green and blue components of the road surface, and the related pixel points can be removed by taking this triple as a reference.
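As a minimal sketch (not part of the original disclosure), the mark-value calculation reduces to a per-channel mean over the road surface image; the array layout and the use of NumPy are assumptions.

    import numpy as np

    def mark_value(surface_img):
        """Average RGB of the road surface image captured by the second camera device.

        surface_img -- H x W x 3 array of RGB values (uint8 assumed)
        Returns one average per channel; this triple is the mark value.
        """
        pixels = surface_img.reshape(-1, 3).astype(np.float64)
        total = pixels.sum(axis=0)          # total RGB value per channel
        return total / pixels.shape[0]      # divide by the number of pixel points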
As a specific implementation manner of the vehicle positioning method provided by the present invention, the method for removing pixel points close to the sign value from a plurality of road images includes:
the threshold range is set based on the flag value.
And when the RGB value of the pixel point in the road image is within the threshold range, the pixel point is rejected.
In the invention, the mark value corresponds to a single specific pixel color, whereas the RGB of the concrete or similar surface layer in the road image varies somewhat because of lighting, impurities and other factors; if pixels were removed only when they matched the mark value exactly, the lane lines and guardrails could not be picked out effectively. A threshold range is therefore set around the mark value and used as the criterion: when a pixel point in the road image falls within this threshold range, it is taken to represent the concrete or similar surface layer of the road and is removed. Once all pixel points matching the surface color captured by the second camera device have been removed, the lane lines, guardrails and similar information can be identified accurately, which provides more effective data support for the subsequent analysis.
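A minimal sketch of the rejection step (illustrative only); the symmetric per-channel band of width 2*tolerance around the mark value is an assumption, since the description does not fix the exact threshold range.

    import numpy as np

    def reject_surface_pixels(road_img, mark, tolerance=20):
        """Zero out pixels whose RGB lies inside the threshold range around the mark value.

        road_img  -- H x W x 3 road image from a first camera device
        mark      -- length-3 mark value (average R, G, B of the road surface)
        tolerance -- assumed half-width of the threshold range per channel
        Whatever survives the rejection is treated as lane line or guardrail.
        """
        diff = np.abs(road_img.astype(np.float64) - np.asarray(mark, dtype=np.float64))
        surface = np.all(diff <= tolerance, axis=-1)   # pixels matching the road surface layer
        out = road_img.copy()
        out[surface] = 0                               # eliminate the surface-layer pixel points
        return out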
Referring to fig. 2, as a specific embodiment of the vehicle positioning method provided by the present invention, obtaining a distance from a vehicle to a guardrail in an environment by using a radar installed on the vehicle includes:
the first distance and the second distance between the vehicle and one side of the guardrail are respectively obtained through the first radar and the second radar, and the third distance between the vehicle and the other side of the guardrail is obtained through the third radar.
In the invention, a first radar and a second radar are arranged at different positions on one side of the vehicle: the first radar may be mounted at the side of the vehicle head and the second radar at the side of the vehicle tail, on the same side as the first radar. Both may be ultrasonic range finders. Because ultrasound is highly directional, attenuates slowly and propagates far in a medium, it is frequently used for distance measurement, and both range finders and level gauges can be built with it. Ultrasonic detection is fast and convenient, the calculation is simple, real-time control is easy to realize, and the measurement accuracy meets industrial requirements, so it is widely used in the research and manufacture of mobile robots. The first and second radars measure the distances between the vehicle and the guardrail on one side. A third radar is mounted on the other side of the vehicle, the side facing away from the first radar, and measures the distance between that side of the vehicle and the guardrail. Together, the first, second and third distances express the position of the vehicle relative to the road model more accurately: the first and third distances determine the position of the vehicle across the width between the two guardrails, and the first and second distances determine the angle of the vehicle relative to the guardrail. Positioning in this way is simple, fast and highly precise.
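The geometric use of the three distances can be illustrated with a small sketch (not part of the original disclosure); the small-angle treatment of the lateral offset and the variable names are assumptions.

    import math

    def vehicle_pose(d1, d2, d3, radar_gap, radar_span):
        """Position and heading of the vehicle between the guardrails (illustration only).

        d1, d2     -- first and second radar distances to the guardrail on one side
        d3         -- third radar distance to the guardrail on the other side
        radar_gap  -- longitudinal spacing between the first and second radars
        radar_span -- lateral spacing between the first and third radars
        """
        yaw = math.atan2(d1 - d2, radar_gap)     # angle of the vehicle relative to the guardrail
        corridor = d1 + radar_span + d3          # overall width between the two guardrails
        lateral = d1 + radar_span / 2.0          # offset of the vehicle center line (small-angle assumption)
        return lateral, yaw, corridor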
As a specific embodiment of the vehicle positioning method provided by the present invention, please refer to fig. 2, after the first radar and the second radar respectively obtain a first distance and a second distance between the vehicle and one side of the guardrail, and the third radar obtains a third distance between the vehicle and the other side of the guardrail, the method further includes:
and picking up the simulated width of the simulated guardrail model on the road model.
The actual width of the guardrail is determined from the first distance, the third distance, and the actual distance between the first radar and the third radar.
The actual width is divided by the simulated width to obtain the distance ratio, i.e. the actual length represented by one unit of simulated width.
In the invention, the simulated width between the guardrails on the two sides, measured perpendicular to the center line of the road, is picked up on the established road model; this can be done in software, and the picked-up simulated width does not by itself represent the actual width between the guardrails. The line segments corresponding to the first distance and the third distance can be arranged coaxially, the first radar and the third radar can likewise be arranged coaxially, and their signal outlets can be set perpendicular to the side faces of the vehicle, so that the first radar and the third radar lie on the same connecting line and the projection of the first distance, the third distance and the line between the two radars onto the plane of symmetry of the vehicle is a single point. The first distance, the third distance and the connecting line between the first and third radars therefore together represent the actual width between the guardrails on the two sides. Dividing the actual width by the simulated width gives the distance ratio, i.e. the actual length represented by one unit of simulated width; this establishes the relationship between the road model and reality, so that any distance picked up on the road model can be converted into an actual length.
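A sketch of the scale computation (illustrative only), assuming the first distance, the third distance and the radar span are collinear and perpendicular to the road as described above:

    def distance_ratio(d1, d3, radar_span, simulated_width):
        """Actual length represented by one unit of simulated width.

        d1, d3          -- first and third radar distances to the two guardrails
        radar_span      -- actual distance between the first and third radars
        simulated_width -- width of the simulated guardrail model picked up on the road model
        """
        actual_width = d1 + d3 + radar_span      # actual width between the guardrails
        return actual_width / simulated_width    # e.g. meters per model unit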
Referring to fig. 2, as a specific embodiment of the vehicle positioning method provided by the present invention, determining a simulated position of a vehicle in a road model according to a distance and a size information of the vehicle includes:
and converting the first distance, the second distance, the third distance and the size information into a first analog quantity, a second analog quantity, a third analog quantity and a size analog quantity respectively through distance ratio values.
And simulating a simulation position on the road model by combining the first analog quantity, the second analog quantity, the third analog quantity and the size analog quantity.
In the invention, the positional relationship of the vehicle relative to the guardrail has to be reproduced on the road model, and the distance ratio has a single fixed value for a given road model. The first distance, the second distance, the third distance and the size information of the vehicle are therefore each divided by the distance ratio, yielding the first analog quantity 1, the second analog quantity 2, the third analog quantity 3 and the size analog quantity 4 on the road model. Taking forward driving as an example, since the first and second radars are installed on the side of the vehicle and the first camera device on the vehicle head, the first distance and the third distance are first drawn on the road model, on the same straight line and perpendicular to the center line of the road. After they are plotted, the size analog quantity 4 is drawn according to the converted size. The installation distance between the first radar and the second radar is likewise converted, via the distance ratio, into a simulated installation amount, which is drawn on the road model; at the corresponding position an endpoint of the second distance is drawn, with the second distance parallel to the first distance and of the converted length. One end of this segment is attached to the simulated position and the other end to the simulated guardrail boundary on the road model. In this way the position of the vehicle relative to the guardrail is expressed on the road model.
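A sketch of the conversion (illustrative names, not from the original disclosure); each real-world length is divided by the distance ratio to obtain the corresponding quantity on the road model:

    def to_model_units(ratio, d1, d2, d3, vehicle_size):
        """Convert measured distances and a vehicle dimension into road-model quantities.

        ratio        -- distance ratio (actual length per model unit)
        d1, d2, d3   -- first, second and third radar distances
        vehicle_size -- a vehicle dimension, e.g. the vehicle width
        """
        first_sim = d1 / ratio                # first analog quantity 1
        second_sim = d2 / ratio               # second analog quantity 2
        third_sim = d3 / ratio                # third analog quantity 3
        size_sim = vehicle_size / ratio       # size analog quantity 4
        return first_sim, second_sim, third_sim, size_sim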
Referring to fig. 2, as a specific embodiment of the vehicle positioning method provided by the present invention, determining a target distance from a vehicle to a lane line according to a simulated position of the vehicle in a road model includes:
and picking up a simulation target quantity of the simulation position from the lane line on the road model.
And multiplying the simulated target quantity by the distance ratio to determine the target distance.
In the invention, once the simulated position of the vehicle has been determined on the road model, the simulated target quantity 5 between the simulated position and the lane line can be picked up by software, and the actual distance between the vehicle and the lane line in the real scene is obtained by multiplying the simulated target quantity 5 by the distance ratio. The target distance may be the distance from the middle of the vehicle to the lane line on either side.
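Tying the steps together, a hedged end-to-end sketch reusing the distance_ratio sketch above; the numbers are made up, and in practice the simulated target quantity would be picked up from the road model by software rather than passed in directly.

    def target_distance(simulated_target, ratio):
        """Real-world distance from the vehicle to the lane line.

        simulated_target -- simulated target quantity 5 measured on the road model
        ratio            -- distance ratio (actual length per model unit)
        """
        return simulated_target * ratio

    # Example with made-up values: radar readings in meters, model width in model units.
    ratio = distance_ratio(d1=2.4, d3=5.6, radar_span=1.8, simulated_width=98.0)
    print(target_distance(simulated_target=12.5, ratio=ratio))   # distance to the lane line in meters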
Referring to fig. 2, as a specific embodiment of the vehicle positioning method provided by the present invention, picking up a simulated width of a simulated guardrail model on a road model includes:
and picking out a connecting line which is parallel to the simulated road surface and is vertical to the central line of the simulated road surface on the road model.
And extending the connecting line to the road model at two ends to simulate the guardrail, wherein the extending length of the connecting line is the simulation width.
In the invention, to pick up the simulated width between the simulated guardrails on the two sides of the road model, a connecting line perpendicular to the center line of the simulated road surface is selected in a plane parallel to the road surface in the model; the line joins the simulated guardrails on both sides, and its length is the simulated width. This can be obtained automatically in software. Alternatively, a point on the line between the first radar and the third radar can be taken as a connection point and the connecting line made to pass through it; or the measuring points of the first and third radars can be marked on the guardrails of the road model and taken as the two endpoints of the connecting line, which again yields the simulated width.
As a specific embodiment of the vehicle positioning method provided by the present invention, when the mark value exceeds the preset limit value, the road surface image from the previous time that did not exceed the limit value is selected.
In the invention, when the vehicle changes lanes the image acquired by the second camera device changes considerably, and if its mark value were used as the reference for removal, effective lane line and guardrail images could not be obtained. For this reason, when the mark value determined from the second camera device exceeds a preset limit value, i.e. when the second camera device is photographing a lane line or other marking, the previously captured image whose mark value did not exceed the limit is used as the reference for eliminating pixel points, ensuring that the vehicle positioning method keeps operating effectively.
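A minimal sketch of the fallback (illustrative only); treating the limit as a per-channel upper bound on the mark value is an assumption.

    def select_reference_mark(current_mark, previous_mark, limit=200):
        """Fall back to the previous mark value when the current one exceeds the preset limit.

        current_mark, previous_mark -- (R, G, B) mark values from consecutive frames
        limit -- preset limit value (assumed to bound each channel)
        During a lane change the lane markings dominate the second camera's view,
        so the previously accepted mark value is reused as the rejection reference.
        """
        if any(channel > limit for channel in current_mark):
            return previous_mark
        return current_mark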
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A vehicle positioning method characterized by comprising:
acquiring an environment image through a camera arranged on a vehicle, wherein the environment image comprises a lane line and a guardrail;
establishing a road model according to the environment image;
acquiring the distance from the vehicle to a guardrail in an environment through a radar installed on the vehicle;
determining a simulated position of the vehicle in the road model according to the distance and the size information of the vehicle;
and determining the target distance between the vehicle and the lane line according to the simulated position of the vehicle in the road model.
2. The vehicle positioning method of claim 1, wherein the building a road model from the environment image comprises:
acquiring a current road surface image and a plurality of road images covering the range of the road surface image in different target areas;
determining a mark value of the road surface image, eliminating pixel points close to the mark value on a plurality of road images, and determining a plurality of images to be processed containing the lane line and the guardrail;
and synthesizing the road model by a plurality of images to be processed.
3. The vehicle positioning method of claim 2, wherein said determining a mark value of the road surface image comprises:
summing the RGB values of all the pixel points in the pavement image to obtain a total RGB value;
dividing the total RGB value by the number of the pixel points to obtain an average RGB value;
and taking the RGB average value as the mark value.
4. The vehicle positioning method of claim 2, wherein said removing pixel points close to said mark value from said plurality of road images comprises:
setting a threshold range based on the mark value;
and when the RGB value of the pixel point in the road image is within the threshold range, the pixel point is eliminated.
5. The vehicle positioning method of any one of claims 1 to 4, wherein the obtaining the distance of the vehicle from a guardrail in an environment by a radar installed on the vehicle comprises:
the method comprises the steps that a first distance and a second distance between a vehicle and one side of a guardrail are respectively obtained through a first radar and a second radar, and a third distance between the vehicle and the other side of the guardrail is obtained through a third radar.
6. The vehicle positioning method according to claim 5, further comprising, after the acquiring a first distance and a second distance of the vehicle from one side of the guardrail by a first radar and a second radar, respectively, and acquiring a third distance of the vehicle from the other side of the guardrail by a third radar:
picking up the simulated width of the simulated guardrail model on the road model;
determining the actual width of the guardrail according to the first distance, the third distance and the actual distance of the first radar and the third radar;
and dividing the actual width by the simulated width to obtain the distance ratio required by increasing or decreasing the unit of the simulated width.
7. The vehicle positioning method of claim 6, wherein determining the simulated position of the vehicle in the road model based on the distance and the vehicle size information comprises:
converting the first distance, the second distance, the third distance and the size information into a first analog quantity, a second analog quantity, a third analog quantity and a size analog quantity respectively through the distance ratio;
and combining the first analog quantity, the second analog quantity, the third analog quantity and the size analog quantity to simulate the simulated position on the road model.
8. The vehicle positioning method of claim 6, wherein determining the target distance of the vehicle from the lane line based on the simulated position of the vehicle in the road model comprises:
picking up a simulation target quantity of the simulation position from the lane line on the road model;
and multiplying the simulated target quantity by the distance ratio to determine the target distance.
9. The vehicle positioning method of claim 6, wherein the picking up the simulated width of the simulated guardrail model on the road model comprises:
picking out a connecting line which is parallel to the simulated road surface and is perpendicular to the central line of the simulated road surface from the road model;
and extending the connecting line to the simulation guardrails on the road model at two ends, wherein the extending length of the connecting line is the simulation width.
10. The vehicle positioning method according to claim 2, characterized in that when the mark value exceeds a preset limit value, the road surface image from the previous time that does not exceed the limit value is selected.
CN201910906109.4A 2019-09-24 2019-09-24 Vehicle positioning method Active CN110645973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910906109.4A CN110645973B (en) 2019-09-24 2019-09-24 Vehicle positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910906109.4A CN110645973B (en) 2019-09-24 2019-09-24 Vehicle positioning method

Publications (2)

Publication Number Publication Date
CN110645973A (en) 2020-01-03
CN110645973B CN110645973B (en) 2021-06-25

Family

ID=68992490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910906109.4A Active CN110645973B (en) 2019-09-24 2019-09-24 Vehicle positioning method

Country Status (1)

Country Link
CN (1) CN110645973B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111683134A (en) * 2020-06-04 2020-09-18 勇鸿(重庆)信息科技有限公司 Distributed Internet of vehicles data transmission system and method based on block chain technology
CN112053559A (en) * 2020-08-25 2020-12-08 浙江省机电设计研究院有限公司 Expressway safety situation assessment method and system
CN112213725A (en) * 2020-09-28 2021-01-12 森思泰克河北科技有限公司 Multipath false alarm suppression method and device for vehicle-mounted radar and terminal equipment
CN112798004A (en) * 2020-12-31 2021-05-14 北京星云互联科技有限公司 Vehicle positioning method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140139673A1 (en) * 2012-11-22 2014-05-22 Fujitsu Limited Image processing device and method for processing image
CN103954275A (en) * 2014-04-01 2014-07-30 西安交通大学 Lane line detection and GIS map information development-based vision navigation method
CN104951744A (en) * 2014-03-27 2015-09-30 丰田自动车株式会社 Lane boundary marking line detection device and electronic control device
CN106096525A (en) * 2016-06-06 2016-11-09 重庆邮电大学 A kind of compound lane recognition system and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140139673A1 (en) * 2012-11-22 2014-05-22 Fujitsu Limited Image processing device and method for processing image
CN104951744A (en) * 2014-03-27 2015-09-30 丰田自动车株式会社 Lane boundary marking line detection device and electronic control device
CN103954275A (en) * 2014-04-01 2014-07-30 西安交通大学 Lane line detection and GIS map information development-based vision navigation method
CN106096525A (en) * 2016-06-06 2016-11-09 重庆邮电大学 A kind of compound lane recognition system and method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111683134A (en) * 2020-06-04 2020-09-18 勇鸿(重庆)信息科技有限公司 Distributed Internet of vehicles data transmission system and method based on block chain technology
CN112053559A (en) * 2020-08-25 2020-12-08 浙江省机电设计研究院有限公司 Expressway safety situation assessment method and system
CN112213725A (en) * 2020-09-28 2021-01-12 森思泰克河北科技有限公司 Multipath false alarm suppression method and device for vehicle-mounted radar and terminal equipment
CN112213725B (en) * 2020-09-28 2022-10-25 森思泰克河北科技有限公司 Multipath false alarm suppression method and device for vehicle-mounted radar and terminal equipment
CN112798004A (en) * 2020-12-31 2021-05-14 北京星云互联科技有限公司 Vehicle positioning method, device, equipment and storage medium
CN112798004B (en) * 2020-12-31 2023-06-13 北京星云互联科技有限公司 Positioning method, device and equipment for vehicle and storage medium

Also Published As

Publication number Publication date
CN110645973B (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN110645973B (en) Vehicle positioning method
CN110174093B (en) Positioning method, device, equipment and computer readable storage medium
CN107235044B (en) A kind of restoring method realized based on more sensing datas to road traffic scene and driver driving behavior
CN111436216B (en) Method and system for color point cloud generation
CN110501018B (en) Traffic sign information acquisition method for high-precision map production
US20110242311A1 (en) Image processing system and position measurement system
US20110242319A1 (en) Image processing system and position measurement system
CN110307791B (en) Vehicle length and speed calculation method based on three-dimensional vehicle boundary frame
CN106019264A (en) Binocular vision based UAV (Unmanned Aerial Vehicle) danger vehicle distance identifying system and method
CN112558023A (en) Calibration method and device of sensor
CN103158607A (en) Method and device for controlling a light emission of a headlight of a vehicle
CN109903574B (en) Method and device for acquiring intersection traffic information
CN110705485A (en) Traffic signal lamp identification method and device
CN105628194A (en) Road lighting quality field measurement method
CN103020613A (en) Method and device for identifying signal lamps on basis of videos
Adamshuk et al. On the applicability of inverse perspective mapping for the forward distance estimation based on the HSV colormap
CN110717438A (en) Traffic signal lamp identification method and device
CN112446915B (en) Picture construction method and device based on image group
CN111009166B (en) Road three-dimensional sight distance checking calculation method based on BIM and driving simulator
US20230177724A1 (en) Vehicle to infrastructure extrinsic calibration system and method
CN112255604B (en) Method and device for judging accuracy of radar data and computer equipment
KR20160069762A (en) Nighttime Visibility Assessment Solution for Road System Method Thereof
CN107328777A (en) A kind of method and device that atmospheric visibility is measured at night
CN112837365B (en) Image-based vehicle positioning method and device
CN115131360A (en) Road pit detection method and device based on 3D point cloud semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant