CN114841188A - Vehicle fusion positioning method and device based on two-dimensional code - Google Patents

Vehicle fusion positioning method and device based on two-dimensional code

Info

Publication number
CN114841188A
Authority
CN
China
Prior art keywords
dimensional code
vehicle
imaging
area
code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210208718.4A
Other languages
Chinese (zh)
Inventor
李学聪
陈凤和
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Critical Information Technology Co ltd
Guangdong University of Technology
Original Assignee
Guangzhou Critical Information Technology Co ltd
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Critical Information Technology Co ltd and Guangdong University of Technology
Priority to CN202210208718.4A
Publication of CN114841188A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1408Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/14172D bar codes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Toxicology (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a vehicle fusion positioning method based on two-dimensional codes. A two-dimensional code is arranged beside the road and contains a first coordinate and the two-dimensional code area. A vehicle acquires the two-dimensional code through a first camera unit and obtains the first coordinate and the code area by analysis; the vehicle is then positioned according to the first imaging position of the code in the first camera unit, the first imaged code area and the first coordinate. A vehicle fusion positioning device and a computer-readable medium are also disclosed. By exploiting two-dimensional codes surveyed with high precision along the road, a vehicle that does not rely on ordinary map positioning can obtain high-precision positioning data simply by scanning a code while driving, which also provides a more accurate and reliable coordinate basis for automatic driving.

Description

Vehicle fusion positioning method and device based on two-dimensional code
Technical Field
The invention relates to the technical field of vehicle fusion positioning, in particular to a vehicle fusion positioning method and device based on two-dimensional codes.
Background
In vehicle positioning technology, a fusion positioning system composed of inertial navigation, satellite navigation and a wheel speed meter is a very common solution and delivers satisfactory performance in many scenarios. Compared with any single navigation device, it is fully autonomous, works in all weather and is not disturbed by external information. However, the development of automatic driving places higher demands on fusion positioning: the system must provide real-time, uninterrupted, centimeter-level positioning in all scenes, which a combination of inertial navigation, satellite navigation and a wheel speed meter struggles to meet. When satellite signals are lost for a long time, especially in tunnels or under viaducts, the error of inertial/wheel-speed-meter fusion positioning accumulates as the driving mileage increases, so the positioning result gradually drifts away from the true position of the vehicle and automatic driving cannot continue.
To solve this problem, commonly adopted schemes add map matching, lidar positioning, visual navigation, or a combination of several of these means. However, continuous high-precision map matching requires the driving route to have distinctive geometric features, which real routes can rarely guarantee; the relative positioning of lidar also accumulates error, and its absolute positioning requires mapping in advance; visual navigation likewise accumulates error that can only be eliminated by loop-closure verification on a closed-loop path, which the actual driving route of a vehicle seldom follows.
Disclosure of Invention
Based on this situation, the invention provides a vehicle fusion positioning method based on two-dimensional codes: two-dimensional codes surveyed with high precision are placed along the road, so that a vehicle that does not use ordinary map positioning can obtain high-precision positioning data by scanning a code while driving. On the one hand, the vehicle obtains high-precision positioning data without carrying high-precision equipment; on the other hand, a more accurate and reliable coordinate basis is provided for automatic driving, and the inertial navigation parameters can be adjusted in time.
The invention provides a vehicle fusion positioning method based on two-dimensional codes, which comprises: arranging a two-dimensional code beside the road, the two-dimensional code containing a first coordinate and the two-dimensional code area; the vehicle acquiring the two-dimensional code through a first camera unit and obtaining the first coordinate and the code area by analysis; and positioning the vehicle according to a first imaging position of the code in the first camera unit, the first imaged code area and the first coordinate.
The step of positioning the vehicle comprises: obtaining a first distance between the vehicle and the two-dimensional code from the code area and the first imaged code area; obtaining a first angle between the vehicle and the code from the position at which the code is imaged in the first camera unit; and computing the vehicle position coordinate from the first distance, the first angle and the first coordinate of the code.
At a first moment, a first imaging position of the two-dimensional code in the first camera unit and a first imaged code area are acquired; at a second moment, a second imaging position and a second imaged code area are acquired. The offset angle of the vehicle is calculated from the first and second imaging positions, and the offset distance from the first and second imaged code areas. Using the time difference between the first and second moments, the angular acceleration and linear acceleration of the vehicle are calculated, and the IMU data of the vehicle are corrected.
The vehicle may also be provided with a second camera unit, spaced apart from the first camera unit. A first angle between the first camera unit and the two-dimensional code and a second angle between the second camera unit and the code are acquired, and the relative position of the vehicle and the code is calculated from the camera spacing, the first angle and the second angle. When the imaged area of the code exceeds a preset threshold, the first coordinate is acquired and the vehicle position coordinate is calculated from the relative position information. The vehicle position coordinate obtained in this way can likewise locate the vehicle.
A high-precision positioning coordinate may further be acquired; a second coordinate of the two-dimensional code is computed from the high-precision positioning coordinate, the code area, the first imaging position of the code in the first camera unit and the first imaged code area; it is then judged whether the difference between the first coordinate and the second coordinate exceeds a preset threshold, and if so, the high-precision positioning information is not trusted.
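The consistency test above reduces to a distance comparison. Below is a minimal Python sketch, assuming planar map coordinates in meters and treating the preset threshold as a Euclidean distance; the function name and signature are illustrative, not taken from the patent.

```python
def high_precision_trustworthy(first_coordinate, second_coordinate, threshold_m):
    """Judge whether an independent high-precision fix is credible.

    first_coordinate  : (x, y) coordinate decoded from the QR code payload [m]
    second_coordinate : (x, y) code coordinate re-computed from the high-precision
                        fix plus the imaging geometry [m]
    threshold_m       : preset threshold on the allowed difference [m]
    """
    dx = first_coordinate[0] - second_coordinate[0]
    dy = first_coordinate[1] - second_coordinate[1]
    return (dx * dx + dy * dy) ** 0.5 <= threshold_m
```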
Meanwhile, the invention also provides a vehicle fusion positioning device based on two-dimensional codes, comprising a vehicle camera module, an analysis module and a positioning module, the modules being in data connection.
The vehicle camera module comprises at least a first camera unit and is mounted on the vehicle; the first camera unit is used to acquire a two-dimensional code arranged beside the road; the code contains a first coordinate and the two-dimensional code area; the analysis module analyzes the code to obtain its coordinate and area; and the positioning module positions the vehicle according to the first imaging position of the code in the first camera unit and the first imaged code area.
The step of positioning the vehicle comprises: obtaining a first distance between the vehicle and the two-dimensional code from the ratio of the code area to the first imaged code area; obtaining a first angle between the vehicle and the code from the position at which the code is imaged in the first camera unit; and computing the vehicle position coordinate from the first distance, the first angle and the first coordinate of the code.
In addition, the present disclosure proposes a computer-readable medium in which a computer program is stored; the program is loaded and executed by a processing module to implement the vehicle fusion positioning method.
Some technical effects of this disclosure: with two-dimensional codes surveyed with high precision at the roadside, a vehicle scans a code while driving and computes its own high-precision position from the code information. On the one hand, the vehicle obtains high-precision positioning data without carrying high-precision equipment; on the other hand, the IMU data of the vehicle can be calibrated.
Drawings
For a better understanding of the technical aspects of the present disclosure, reference may be made to the following drawings, which provide additional description of the prior art or of embodiments. These drawings selectively illustrate articles or methods related to the prior art or to some embodiments of the present disclosure. Basic information about the figures is as follows:
fig. 1 is a schematic flow chart of an embodiment of a vehicle fusion positioning method based on two-dimensional codes according to the present invention.
Fig. 2 is a first angle diagram in an embodiment of the invention.
Fig. 3 is a schematic flow chart of an embodiment of a vehicle fusion positioning device based on two-dimensional codes according to the present invention.
Detailed Description
The technical means and technical effects of the present disclosure are further described below. It is apparent that the examples provided are only some embodiments of the present disclosure, not all of them. All other embodiments obtained by those skilled in the art without inventive effort, based explicitly or implicitly on the embodiments and the text of this disclosure, fall within the scope of protection of the present disclosure.
As shown in fig. 1, the method in this embodiment includes the steps of:
s1: and a two-dimensional code is arranged beside the road, and the two-dimensional code comprises a first coordinate and a two-dimensional code area.
At present, traffic signs are installed on both ordinary roads and highways, whereas two-dimensional codes generally appear only at parking-lot entrances. With the development of unmanned driving, some road sections are being fitted with sensors, but installing sensors usually requires enormous manpower and material resources, as well as a continuous power supply, which means a large cost investment. A two-dimensional code, by contrast, can be set up on an unmanned-driving section or an ordinary driving section; the code itself stores its area information and its current precise first coordinate, so only a sheet of metal or some similarly simple carrier is needed, with no subsequent maintenance. Because the code contains the first coordinate and the code area, once the vehicle scans it, analysis yields the size of the code area and the precise position of the current code's first coordinate; if the code is additionally linked to the network, more information about the current code can be obtained online.
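As a concrete illustration of what such a code might carry, the sketch below serializes the first coordinate and the code area as JSON text to be placed in the QR symbol. The patent does not fix an encoding, so the schema, field names and the optional URL are assumptions.

```python
import json

def make_payload(first_coordinate, code_area_m2, info_url=None):
    """Build the text stored in a roadside QR code (illustrative schema)."""
    payload = {"coord": list(first_coordinate), "area_m2": code_area_m2}
    if info_url is not None:
        payload["url"] = info_url  # optional link for further network-side information
    return json.dumps(payload)

def parse_payload(text):
    """Recover the first coordinate and the code area from the decoded text."""
    data = json.loads(text)
    return tuple(data["coord"]), data["area_m2"]

# Example: a code at map position (1000.0, 250.0) with a physical area of 0.25 m^2.
encoded = make_payload((1000.0, 250.0), 0.25)
coord, area = parse_payload(encoded)
```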
S2: the first vehicle obtains the two-dimensional code through the first camera unit, and the first coordinate and the area of the two-dimensional code are obtained through analysis.
The first vehicle drives along the road while the first camera unit continuously photographs the scene ahead. When a two-dimensional code appears at the roadside, its content may or may not yet be recognizable; once the imaged area of the code is larger than a preset threshold, the code can be analyzed and the information it carries, including the first coordinate and the code area, is obtained. If ordinary map navigation is available, the positions of the codes placed along the road can be obtained in advance, and the system judges whether a code lies within a first preset range of the first vehicle; only if so is the first camera unit switched on for shooting and recognition. In general, most vehicles only need ordinary map navigation, so switching the camera unit on intelligently, according to whether the vehicle is within the first preset range, saves energy. The camera unit in this embodiment may be a high-definition camera or an ordinary camera.
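The activation logic reduces to a range check against the known code positions from the ordinary map. A minimal sketch follows, assuming planar coordinates in meters; the function names and the list of known code positions are illustrative.

```python
import math

def within_first_preset_range(vehicle_xy, code_xy, preset_range_m):
    """True when a roadside code lies within the first preset range of the vehicle."""
    return math.dist(vehicle_xy, code_xy) <= preset_range_m

def should_activate_camera(vehicle_xy, known_code_positions, preset_range_m):
    """Switch the first camera unit on only when some code is near enough."""
    return any(within_first_preset_range(vehicle_xy, c, preset_range_m)
               for c in known_code_positions)

# Example: coarse map position of the vehicle vs. two surveyed code positions.
activate = should_activate_camera((980.0, 248.0), [(1000.0, 250.0), (1400.0, 260.0)], 150.0)
```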
S3: and positioning the first vehicle according to the first imaging position of the two-dimensional code in the first camera unit, the area of the first imaging two-dimensional code and the first coordinate.
After the first vehicle photographs the two-dimensional code with the first camera unit, the code has a first imaging position and a first imaged code area in that unit. In general, the first image of the code lies within the currently captured picture, and the closer the vehicle is to the code, the larger the imaged area. The change of imaged area as the vehicle drives toward the code can be used as an empirical mapping to the first distance between the first vehicle and the code. Specifically, on a test section the imaged area and the first imaging position are recorded, for example 1000 times, at distances of 1 to 200 meters from the code; when the same imaged area and the same first imaging position are later observed, the distance between the first vehicle and the code is known. The first imaged code area is the area of the code's first image on the camera unit, and its change can also be computed by an algorithm that defines area change. From the position of the code within the captured picture, which forms the first imaging position, the first angle between the code and the first vehicle is obtained. The step of positioning the first vehicle therefore comprises: obtaining the first distance between the vehicle and the code from the ratio of the code area to the first imaged code area; obtaining the first angle between the vehicle and the code from the position at which the code is imaged in the first camera unit; and computing the first vehicle's position coordinate from the first distance, the first angle and the first coordinate of the code.
As shown in fig. 2, the first angle is defined as the angle between the extension line of the plane of the first camera unit and the straight line connecting the first camera unit to the two-dimensional code. The first vehicle coordinate is then computed from the first distance, the first angle and the first coordinate. Because the first coordinate is a high-precision surveyed value and the distance between the code and the first vehicle is an empirical or computed value, the data are highly reliable. A simple coordinate system can be established from the first distance, the first angle and the first coordinate, and the corresponding high-precision coordinate of the first vehicle is obtained by coordinate transformation. There are many ways to define and compute the first distance and the first angle from the code area and the corresponding imaging ratio, and different definitions and calculations exist, but they are all variations on computing the corresponding values and coordinate relations; reference may be made to existing texts on visual SLAM and PnP, and this embodiment does not elaborate further.
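To make the geometry concrete, the sketch below uses a pinhole-camera model rather than the empirical lookup table: the range follows from the square root of the area ratio, the first angle from the horizontal pixel offset, and the vehicle position from a coordinate transformation about the first coordinate. The focal length, principal point and vehicle heading are assumed calibration and navigation inputs that the patent does not specify.

```python
import math

def locate_vehicle(code_xy, code_area_m2, imaged_area_px2,
                   u_px, cx_px, f_px, heading_rad):
    """Estimate the first vehicle's position from one roadside QR code observation.

    code_xy        : first coordinate of the code in a planar map frame (x, y) [m]
    code_area_m2   : physical code area decoded from the payload [m^2]
    imaged_area_px2: first imaged code area on the sensor [px^2]
    u_px, cx_px    : horizontal pixel of the code center and the principal point [px]
    f_px           : focal length in pixels (assumed known from calibration)
    heading_rad    : vehicle heading in the map frame (assumed, e.g. from inertial navigation)
    """
    # Pinhole model: the imaged side length scales with f/Z, so area scales with (f/Z)^2.
    side_m = math.sqrt(code_area_m2)
    side_px = math.sqrt(imaged_area_px2)
    first_distance = f_px * side_m / side_px           # range Z to the code [m]

    # First angle: bearing of the code relative to the camera's optical axis.
    first_angle = math.atan2(u_px - cx_px, f_px)

    # Vehicle position = code position minus the range along the viewing ray.
    bearing = heading_rad + first_angle
    xc, yc = code_xy
    return (xc - first_distance * math.cos(bearing),
            yc - first_distance * math.sin(bearing))
```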
While the vehicle is driving, the content of the code cannot be scanned and analyzed from far away, so the code can be regarded as a point; if the code area is large, the intersection of its diagonals (or the center of the circle, if the code region is circular) is taken as the point collected by the vehicle. If the first vehicle cannot analyze the code area, a second camera unit can be mounted on the first vehicle, spaced apart from the first camera unit. A first angle between the first camera unit and the code and a second angle between the second camera unit and the code are acquired, and the relative position of the first vehicle and the code is calculated from the camera spacing, the first angle and the second angle using the law of cosines and related methods (some simple angle conversions are needed and are not described here); the vehicle position coordinate is then calculated from the first coordinate and the relative position information. Of course, if the code can be analyzed, it may still be regarded as a point and the computation carried out with the two included angles plus the baseline segment; the final result is the same as with the first camera unit alone, so this is a supplement. In actual driving, to guarantee accurate positioning over the whole journey, a binocular camera performs somewhat better, but its hardware cost is much higher; a monocular camera, cooperating with two-dimensional codes arranged as described, offers a relatively high cost-performance ratio and its positioning effect is not bad either.
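A minimal sketch of the two-camera case follows. The text mentions a law-of-cosines derivation; under the illustrative convention that each angle is measured at its camera between the baseline and the line of sight to the code, the equivalent law-of-sines form gives the range directly. The frame axes and function name are assumptions.

```python
import math

def relative_position_two_cameras(baseline_m, first_angle_rad, second_angle_rad):
    """Triangulate the code relative to the first camera unit.

    baseline_m       : spacing between the first and second camera units [m]
    first_angle_rad  : angle at the first camera between the baseline and the code [rad]
    second_angle_rad : angle at the second camera between the baseline and the code [rad]
    Returns (x, y) of the code in a frame whose x axis runs along the baseline
    from the first camera unit towards the second camera unit.
    """
    # Triangle camera1-camera2-code: the angle at the code is pi - a1 - a2,
    # and the law of sines yields the range from the first camera unit.
    range_c1 = baseline_m * math.sin(second_angle_rad) / math.sin(first_angle_rad + second_angle_rad)
    return range_c1 * math.cos(first_angle_rad), range_c1 * math.sin(first_angle_rad)
```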
The positioning error of inertial/wheel-speed-meter fusion accumulates as the driving mileage of the first vehicle increases, so the positioning result gradually drifts from the true position of the vehicle; the fused position therefore needs periodic correction, and the method can further supply more accurate correction data to the vehicle's IMU. At a first moment, the first vehicle acquires the first imaging position of the code in the first camera unit and the first imaged code area; at a second moment, it acquires the second imaging position in the first camera unit and the second imaged code area. The offset angle of the vehicle is calculated from the first and second imaging positions, and the offset distance from the first and second imaged code areas; from the time difference between the first and second moments, the angular acceleration and linear acceleration of the first vehicle are calculated and the IMU data of the first vehicle are corrected. As a further embodiment, the images are processed and combined with scene map data to obtain visual positioning data, i.e. the position of the first or second camera unit, while the data recorded by the vehicle's odometer give the current angular velocity and linear velocity. In the field of fusion positioning, technicians can fuse these three kinds of position information (inertial, satellite and visual) with existing fusion techniques such as particle filtering or Kalman filtering, perform big-data statistics and correction through data collected from many vehicles (i.e. take the value that most vehicles agree on), and finally output the corrected position as the positioning result.
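The two-observation correction step can be sketched as below, reusing the range and bearing recovered from each observation. Two samples directly give average rates over the interval; obtaining the acceleration terms mentioned above would need the rates at both moments (for example from two successive observation pairs), so the sketch stops at the rate level and the correction is simply the difference against the IMU's own prediction. All names are illustrative.

```python
def observation_rates(first_angle_t1, range_t1, first_angle_t2, range_t2, dt_s):
    """Motion quantities from two QR-code observations taken dt_s seconds apart.

    first_angle_t1/t2 : bearing of the code at the first and second moments [rad]
    range_t1/t2       : distance to the code at the first and second moments [m]
    """
    offset_angle = first_angle_t2 - first_angle_t1   # change of bearing over the interval
    offset_distance = range_t1 - range_t2             # distance covered towards the code
    angular_rate = offset_angle / dt_s                # average [rad/s]
    linear_rate = offset_distance / dt_s              # average [m/s]
    return offset_angle, offset_distance, angular_rate, linear_rate

def imu_correction(observed_rate, imu_predicted_rate):
    """Correction term fed back to the inertial/wheel-speed fusion (sketch only)."""
    return observed_rate - imu_predicted_rate
```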
As another implementation example, the two-dimensional code may additionally carry a code characteristic line and the characteristic line's length; when the vehicle acquires the code through the first camera unit, the characteristic line and its length are obtained by analysis. After the camera unit photographs and recognizes the code, feature points of the code can be extracted, for instance the four corner points of the square code or the midpoints of its sides. Two feature points are connected to form the characteristic line segment (ideally the segment joining the midpoints of two parallel sides; other segments introduce a slight angular deviation in the substitution calculation, which can be ignored), giving the corresponding characteristic line length. Once the first camera unit photographs the characteristic line, a first imaged characteristic line with a first imaged characteristic line length is formed in the camera unit. The first distance between the vehicle and the code is obtained from the characteristic line length and the first imaged characteristic line length; the first angle between the vehicle and the code is obtained from the imaging position of the characteristic line in the first camera unit; finally, the vehicle position coordinate is computed from the first distance, the first angle and the first coordinate of the code. The calculation can rely on the camera imaging principle, or on empirical values recorded repeatedly, and is not elaborated here.
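Under the same pinhole assumption as before, the characteristic-line variant needs only a one-line range formula; the focal length is again an assumed calibration input.

```python
def distance_from_characteristic_line(line_length_m, imaged_line_length_px, f_px):
    """Range to the code from its characteristic line (pinhole model sketch).

    line_length_m         : physical characteristic line length from the payload [m]
    imaged_line_length_px : first imaged characteristic line length [px]
    f_px                  : camera focal length in pixels (assumed calibration value)
    """
    return f_px * line_length_m / imaged_line_length_px
```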
In one embodiment, the present disclosure provides a vehicle fusion positioning device comprising a vehicle camera module, an analysis module and a positioning module, the modules being in data connection. The vehicle camera module comprises at least a first camera unit and is mounted on the vehicle; the first camera unit acquires a two-dimensional code arranged beside the road; the code contains a first coordinate and the two-dimensional code area; the analysis module analyzes the code to obtain its coordinate and area; and the positioning module positions the vehicle according to the first imaging position of the code in the first camera unit and the first imaged code area. The step of positioning the vehicle comprises: obtaining a first distance between the vehicle and the code from the ratio of the code area to the first imaged code area; obtaining a first angle between the vehicle and the code from the position at which the code is imaged in the first camera unit; and computing the vehicle position coordinate from the first distance, the first angle and the first coordinate of the code.
It will be understood by those skilled in the art that all or part of the steps in the embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable medium, which may include various media capable of storing program code, such as a flash memory, a removable hard disk, a read-only memory, a random-access memory, a magnetic disk or an optical disk. In one embodiment, the present disclosure provides a computer-readable medium in which a computer program is stored; the program is loaded and executed by a processing module to implement the two-dimensional-code-based vehicle fusion positioning method.
The embodiments and features mentioned herein may, where they do not conflict, be combined with each other as additional alternative embodiments within the knowledge and ability of those skilled in the art. The limited number of alternative embodiments formed by such combinations of features, even if not listed above, still belongs to the technology of this disclosure, as those skilled in the art will understand or infer from the figures and the text above.
Moreover, each embodiment is described with a different emphasis; for parts not detailed in one place, reference may be made to the prior art or to the related descriptions elsewhere herein.
It is emphasized that the above embodiments, which are typical and preferred embodiments of the present disclosure, are only intended to explain the technical solutions of the present disclosure in detail for the convenience of the reader and do not limit the scope of protection or the application of the present disclosure. Any modifications, equivalents, improvements and the like made within the spirit and principle of the disclosure are intended to be covered by its scope of protection.

Claims (10)

1. A vehicle fusion positioning method based on two-dimensional codes is characterized by comprising the following steps:
setting a two-dimensional code beside a road, wherein the two-dimensional code comprises a first coordinate and a two-dimensional code area; the vehicle acquiring the two-dimensional code through a first camera unit, and obtaining the first coordinate and the two-dimensional code area by analysis; and positioning the vehicle according to a first imaging position of the two-dimensional code in the first camera unit, a first imaged two-dimensional code area and the first coordinate.
2. The vehicle fusion positioning method according to claim 1, wherein the step of positioning the vehicle comprises: obtaining a first distance between the vehicle and the two-dimensional code according to the two-dimensional code area and the first imaged two-dimensional code area; obtaining a first angle between the vehicle and the two-dimensional code according to the position at which the two-dimensional code is imaged in the first camera unit; and calculating the vehicle position coordinate from the first distance, the first angle and the first coordinate of the two-dimensional code.
3. The vehicle fusion positioning method according to claim 2, wherein: a first imaging position of the two-dimensional code in the first camera unit and a first imaged two-dimensional code area are acquired at a first moment; a second imaging position of the two-dimensional code in the first camera unit and a second imaged two-dimensional code area are acquired at a second moment; the offset angle of the vehicle is calculated from the first imaging position and the second imaging position, and the offset distance of the vehicle is calculated from the first imaged two-dimensional code area and the second imaged two-dimensional code area; and, according to the time difference between the first moment and the second moment, the angular acceleration and linear acceleration of the vehicle are calculated and the IMU data of the vehicle are corrected.
4. The vehicle fusion positioning method according to claim 1, wherein: the vehicle is provided with a second camera unit; the first camera unit and the second camera unit are arranged at an interval; a first angle of the included angle formed by the first camera unit and the two-dimensional code and a second angle of the included angle formed by the second camera unit and the two-dimensional code are acquired, and the relative position information of the vehicle and the two-dimensional code is calculated according to the interval, the first angle and the second angle; and the position coordinate of the vehicle is calculated according to the first coordinate and the relative position information.
5. The vehicle fusion positioning method according to claim 1, wherein: the two-dimensional code further comprises a two-dimensional code characteristic line and a two-dimensional code characteristic line length; when the vehicle acquires the two-dimensional code through the first camera unit, the two-dimensional code characteristic line and the two-dimensional code characteristic line length are obtained by analysis; and the two-dimensional code characteristic line replaces the two-dimensional code, and the two-dimensional code characteristic line length replaces the two-dimensional code area.
6. A vehicle fusion positioning device based on two-dimensional codes, characterized by comprising: a vehicle camera module, an analysis module and a positioning module, the modules being in data connection; the vehicle camera module comprises at least a first camera unit and is mounted on a vehicle; the first camera unit is used for acquiring a two-dimensional code arranged beside a road; the two-dimensional code comprises a first coordinate and a two-dimensional code area; the analysis module is used for obtaining the coordinate and the area of the two-dimensional code by analysis; and the positioning module is used for positioning the vehicle according to a first imaging position of the two-dimensional code in the first camera unit and the first imaged two-dimensional code area.
7. The vehicle fusion positioning device according to claim 6, wherein the step of positioning the vehicle comprises: obtaining a first distance between the vehicle and the two-dimensional code according to the two-dimensional code area and the first imaged two-dimensional code area; obtaining a first angle between the vehicle and the two-dimensional code according to the position at which the two-dimensional code is imaged in the first camera unit; and calculating the vehicle position coordinate from the first distance, the first angle and the first coordinate of the two-dimensional code.
8. The vehicle fusion positioning device according to claim 7, wherein: a first imaging position of the two-dimensional code in the first camera unit and a first imaged two-dimensional code area are acquired at a first moment; a second imaging position of the two-dimensional code in the first camera unit and a second imaged two-dimensional code area are acquired at a second moment; the offset angle of the vehicle is calculated from the first imaging position and the second imaging position, and the offset distance of the vehicle is calculated from the first imaged two-dimensional code area and the second imaged two-dimensional code area; according to the time difference between the first moment and the second moment, the angular acceleration and linear acceleration of the vehicle are calculated; and the IMU data of the vehicle are corrected.
9. The vehicle fusion positioning device according to claim 6, wherein: the two-dimensional code further comprises a two-dimensional code characteristic line and a two-dimensional code characteristic line length; when the vehicle acquires the two-dimensional code through the first camera unit, the two-dimensional code characteristic line and the two-dimensional code characteristic line length are obtained by analysis; and the two-dimensional code characteristic line replaces the two-dimensional code, and the two-dimensional code characteristic line length replaces the two-dimensional code area.
10. A computer-readable medium characterized by: the computer readable medium has stored therein a computer program which is loaded and executed by a processing module to implement the vehicle fusion positioning method according to any one of claims 1 to 5.
CN202210208718.4A 2022-03-05 2022-03-05 Vehicle fusion positioning method and device based on two-dimensional code Pending CN114841188A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210208718.4A CN114841188A (en) 2022-03-05 2022-03-05 Vehicle fusion positioning method and device based on two-dimensional code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210208718.4A CN114841188A (en) 2022-03-05 2022-03-05 Vehicle fusion positioning method and device based on two-dimensional code

Publications (1)

Publication Number Publication Date
CN114841188A true CN114841188A (en) 2022-08-02

Family

ID=82561740

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210208718.4A Pending CN114841188A (en) 2022-03-05 2022-03-05 Vehicle fusion positioning method and device based on two-dimensional code

Country Status (1)

Country Link
CN (1) CN114841188A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116386373A (en) * 2023-06-05 2023-07-04 好停车(北京)信息技术有限公司天津分公司 Vehicle positioning method and device, storage medium and electronic equipment
CN117109599A (en) * 2023-10-24 2023-11-24 交通运输部公路科学研究所 Vehicle auxiliary positioning method, device and medium based on road side two-dimension code
CN117109599B (en) * 2023-10-24 2024-01-02 交通运输部公路科学研究所 Vehicle auxiliary positioning method, device and medium based on road side two-dimension code

Similar Documents

Publication Publication Date Title
JP6821712B2 (en) Calibration of integrated sensor in natural scene
CN109931939B (en) Vehicle positioning method, device, equipment and computer readable storage medium
CN109084782B (en) Lane line map construction method and construction system based on camera sensor
CN108303103A (en) The determination method and apparatus in target track
CN108896994A (en) A kind of automatic driving vehicle localization method and equipment
CN111507130B (en) Lane-level positioning method and system, computer equipment, vehicle and storage medium
JP6975513B2 (en) Camera-based automated high-precision road map generation system and method
CN110766760B (en) Method, device, equipment and storage medium for camera calibration
CN114841188A (en) Vehicle fusion positioning method and device based on two-dimensional code
CN111830953A (en) Vehicle self-positioning method, device and system
CN110751693B (en) Method, apparatus, device and storage medium for camera calibration
WO2018149539A1 (en) A method and apparatus for estimating a range of a moving object
CN113252051A (en) Map construction method and device
CN110780287A (en) Distance measurement method and distance measurement system based on monocular camera
CN112446915B (en) Picture construction method and device based on image group
CN113405555B (en) Automatic driving positioning sensing method, system and device
CN114910085A (en) Vehicle fusion positioning method and device based on road administration facility identification
CN114791282A (en) Road facility coordinate calibration method and device based on vehicle high-precision positioning
CN113513984B (en) Parking space recognition precision detection method and device, electronic equipment and storage medium
CN114358038B (en) Two-dimensional code coordinate calibration method and device based on vehicle high-precision positioning
CN113566834A (en) Positioning method, positioning device, vehicle, and storage medium
CN114581509A (en) Target positioning method and device
JP2012118029A (en) Exit determination device, exit determination program and exit determination method
JP3081788B2 (en) Local positioning device
CN115235526A (en) Method and system for automatic calibration of sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination