CN114061611A - Target object positioning method, apparatus, storage medium and computer program product

Target object positioning method, apparatus, storage medium and computer program product

Info

Publication number
CN114061611A
Authority
CN
China
Prior art keywords
target
lane
target object
data
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111363932.9A
Other languages
Chinese (zh)
Inventor
苏景岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111363932.9A
Publication of CN114061611A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00, specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3453: Special cost functions, i.e. other than distance or default speed limit of road segments
    • G01C21/3492: Special cost functions, i.e. other than distance or default speed limit of road segments, employing speed data or traffic data, e.g. real-time or historical

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The present application relates to a target object positioning method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: acquiring positioning data of a target object; performing road network matching according to the positioning data to determine the target road on which the target object is located; when the target object changes lanes on the target road, determining the lane areas corresponding to the candidate lanes of the target road in a target plane; and determining a target lane from the candidate lanes such that the projection point of the target object in the target plane is located in the lane area corresponding to the target lane, thereby positioning the target object to the target lane and achieving accurate lane-level positioning of the vehicle.

Description

Target object positioning method, apparatus, storage medium and computer program product
Technical Field
The present application relates to the field of vehicle navigation positioning technologies, and in particular, to a target object positioning method, apparatus, computer device, storage medium, and computer program product.
Background
With the rapid development of sensing technology, navigation and positioning technologies are widely applied to vehicle positioning. As requirements on navigation precision rise, lane-level positioning and navigation become increasingly important.
In existing lane-level positioning and navigation technology, high-precision positioning is generally performed by measuring the distance between the lane center line and the current vehicle position using visual sensors, lidar and a high-precision map. However, lidar is expensive, visual sensors are strongly affected by weather and the road environment, and the accuracy of lane estimation and detection is low, so low-cost, high-precision positioning of a target object cannot be achieved.
Disclosure of Invention
In view of the above, it is necessary to provide a target object positioning method, apparatus, computer device, computer-readable storage medium and computer program product that achieve low cost and high precision.
In a first aspect, the present application provides a target object positioning method. The method comprises the following steps:
acquiring positioning data of a target object;
performing road network matching according to the positioning data, and determining a target road where the target object is located;
when the target object changes lanes on the target road, determining lane areas corresponding to candidate lanes in the target road in a target plane;
determining a target lane from the candidate lanes so that a projection point of the target object in the target plane is located in a lane area corresponding to the target lane;
positioning the target object to the target lane.
In a second aspect, the present application further provides a target object positioning apparatus. The device comprises:
the positioning data acquisition module is used for acquiring positioning data of the target object;
the road network matching module is used for performing road network matching according to the positioning data and determining a target road where the target object is located;
the lane change detection module is used for determining the lane areas corresponding to the candidate lanes of the target road in a target plane when the target object changes lanes on the target road;
the lane positioning module is used for determining a target lane from the candidate lanes so that a projection point of the target object in the target plane is located in the lane area corresponding to the target lane, and for positioning the target object to the target lane.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
acquiring positioning data of a target object;
performing road network matching according to the positioning data, and determining a target road where the target object is located;
when the target object changes lanes on the target road, determining lane areas corresponding to candidate lanes in the target road in a target plane;
determining a target lane from the candidate lanes so that a projection point of the target object in the target plane is located in a lane area corresponding to the target lane;
positioning the target object to the target lane.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring positioning data of a target object;
performing road network matching according to the positioning data, and determining a target road where the target object is located;
when the target object changes lanes on the target road, determining lane areas corresponding to candidate lanes in the target road in a target plane;
determining a target lane from the candidate lanes so that a projection point of the target object in the target plane is located in a lane area corresponding to the target lane;
positioning the target object to the target lane.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring positioning data of a target object;
performing road network matching according to the positioning data, and determining a target road where the target object is located;
when the target object changes lanes on the target road, determining lane areas corresponding to candidate lanes in the target road in a target plane;
determining a target lane from the candidate lanes so that a projection point of the target object in the target plane is located in a lane area corresponding to the target lane;
positioning the target object to the target lane.
With the target object positioning method, apparatus, computer device, storage medium and computer program product, positioning data of the target object is acquired and road network matching is performed according to the positioning data to determine the target road on which the target object is located; determining the target road places only a low accuracy requirement on the positioning data. When the target object changes lanes on the target road, the lane areas corresponding to the candidate lanes of the target road in a target plane are determined, and a target lane is determined from the candidate lanes such that the projection point of the target object in the target plane is located in the lane area corresponding to the target lane, so the target object is positioned to the target lane. The lane positioning problem is thereby converted into a problem of the positional relationship between a projection point and lane areas in a plane, and the positional relationship between the vehicle and the actual lane lines is located without lidar or visual detection. The method can reduce the dependence on hardware conditions in lane-level positioning and realize low-cost, high-precision lane-level positioning.
Drawings
FIG. 1 is a diagram of an exemplary implementation of a target object location method;
FIG. 2 is a flowchart illustrating a method for locating a target object according to one embodiment;
FIG. 3 is a schematic diagram illustrating the relative positions of candidate lanes and a target vehicle in a target roadway in one embodiment;
FIG. 4 is a schematic illustration of a change in vehicle heading in one embodiment;
FIG. 5 is a graphical illustration of changes in angular velocity measurements in one embodiment;
FIG. 6 is a diagram of a vehicle heading change matching template in one embodiment;
FIG. 7 is a diagram of a matching template for changes in the measurement of the angular velocity of the gyroscope, according to an embodiment;
FIG. 8 is a flow diagram illustrating normalized cross-correlation matching in one embodiment;
FIG. 9 is a flowchart illustrating a target object locating method according to another embodiment;
FIG. 10 is a flowchart illustrating a target object locating method according to still another embodiment;
FIG. 11 is a schematic flow chart of data update in one embodiment;
FIG. 12 is a block diagram of a target object locating device in one embodiment;
FIG. 13 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Embodiments of the present application relate to Artificial Intelligence (AI) and machine learning techniques, and are designed based on computer vision and Machine Learning (ML) within AI. Artificial intelligence is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive discipline of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason and make decisions.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics and the like; AI software technologies mainly include computer vision, natural language processing and machine learning/deep learning. With the development and progress of artificial intelligence, it is being researched and applied in many fields, such as smart homes, smart customer service, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, robots, smart healthcare and the like.
Autonomous driving technology generally comprises high-precision maps, environment perception, behavior decision-making, path planning, motion control and other technologies, and has broad application prospects. Accurate positioning is the basis of autonomous driving and an important operation for behavior decision-making, path planning, motion control and high-precision map construction. It should be noted that the target object positioning method provided in the present application can be applied to scenarios including, but not limited to, maps, navigation, autonomous driving, the internet of vehicles and vehicle-road coordination.
The target object positioning method provided by the embodiments of the present application can be applied in the application environment shown in fig. 1. Fig. 1 is a schematic structural diagram of a positioning system according to an embodiment of the present application; the positioning system includes a terminal 102 and a network-side device 104. The terminal 102 includes a vehicle-mounted terminal, a user terminal or the like; these are shown here only schematically and do not limit the embodiments of the present application. The vehicle-mounted terminal may include a driving computer, an on-board unit or the like. The user terminal may be a wireless or wired terminal device, a wireless terminal device being a device with wireless transceiving capability; the user terminal may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device, and the portable wearable device may be a smart watch, smart bracelet, head-mounted device or the like.
The terminal 102 communicates with the network-side device 104 through a network. The network-side device 104 may include a data processing center of a global navigation satellite system, a road network information database, and the like. For example, the vehicle-mounted device obtains satellite observation data of the global navigation satellite system from the data processing center, or obtains road network information from the road network information database, and performs the target object positioning method of the embodiments of the present application based on at least one of the satellite observation data and the road network information to position the lane of the target vehicle in the map. Specifically, taking the terminal 102 as a vehicle-mounted terminal as an example, the vehicle-mounted terminal obtains positioning data of the target vehicle, performs road network matching according to the positioning data, and determines the target road on which the target vehicle is located. When the target vehicle changes lanes on the target road, it determines the lane areas corresponding to the candidate lanes of the target road in the target plane; when a target lane exists among the candidate lanes such that the projection point of the target vehicle in the target plane is located in the lane area corresponding to that lane, the positioning result of the target vehicle in the target lane is displayed on the navigation display interface of the vehicle-mounted terminal.
The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, internet of things device or portable wearable device; the internet of things device may be a smart vehicle-mounted device or the like, and the portable wearable device may be a smart watch, smart bracelet, head-mounted device or the like. The network-side device 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In an embodiment, as shown in fig. 2, a method for positioning a target object is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
step 202, positioning data of the target object is acquired.
The target object is an object requiring positioning analysis; it can change its position in the road through its own movement or that of its carrier. For example, the target object may be a vehicle carrying a positioning terminal, or the vehicle-mounted positioning terminal itself.
The positioning data represents the geographical position information of the target object. The terminal acquires the positioning data in response to a positioning request, and the specific acquisition mode matches the trigger scenario of the request. The trigger scenario of a positioning request falls into two cases: initial positioning of the target object and non-initial positioning of the target object.
The initial positioning corresponds to the trigger scenario in which the user starts the positioning function. When the user triggers the positioning function, for example by tapping the application icon of the positioning program displayed on the terminal interface, the positioning program generates a first positioning request for the target object and sends it to the processor to request positioning of the target object. Because the target object moves, its position differs at different moments, so after sending the first positioning request the positioning program sends the 2nd to Nth (N > 2) positioning requests to the processor at certain time intervals; the 2nd to Nth requests are positioning requests for non-initial positioning of the target object. The processor performs the corresponding positioning processing for each positioning request initiated after the positioning program starts and stores the corrected positioning results. When the terminal responds to a positioning-end operation triggered by the user and feeds back end-of-positioning information to the processor, the processor marks or releases the stored positioning results. Whether the trigger scenario of a positioning request is the initial positioning of the target object can therefore be accurately judged from whether historical data for the target object, or identification content based on that historical data, is stored.
Specifically, when receiving a positioning request, the processor searches the stored historical positioning request data and judges whether the trigger scenario of the request is initial positioning according to whether historical positioning request data matching the request exists. Matching may specifically mean that historical positioning request data exists at all, or that the identification information carried by the existing historical positioning request data matches the positioning request.
If there is no historical positioning request data matching the positioning request, the processor determines that the request is the initial positioning of the target object, and the positioning data is obtained as follows: the positioning data of the target object at the current moment is obtained in real time by fusing road network data, inertial sensing data and satellite observation data, and this real-time result is used as the positioning data, so that the positioning data for the next moment can subsequently be predicted from the positioning data of the target object at the previous moment.
If historical positioning request data matching the positioning request exists, the processor judges that the request is non-initial positioning of the target object, and the positioning data is obtained as follows: the historical positioning data obtained when the target object was positioned at the previous moment is acquired and taken as the positioning data of the target object. Specifically, after determining that the target object has been positioned before, the processor acquires, based on the current positioning request, the historical positioning data of the moment preceding the current moment and uses it as the current positioning data of the target object.
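The dispatch just described (fuse data in real time on initial positioning, reuse the previous corrected fix otherwise) can be sketched as follows in Python; all class, method and field names are illustrative assumptions, not taken from the patent:

```python
class PositioningDataSource:
    """Illustrative sketch of the initial / non-initial positioning dispatch."""

    def __init__(self):
        self._history = {}  # target_id -> corrected fix stored after each epoch

    def get_positioning_data(self, target_id):
        last_fix = self._history.get(target_id)
        if last_fix is None:
            # Initial positioning: fuse road network data, inertial sensing
            # data and satellite observation data in real time.
            return self._fuse_realtime(target_id)
        # Non-initial positioning: reuse the corrected fix of the previous moment.
        return last_fix

    def store_corrected_result(self, target_id, corrected_fix):
        # Called after each positioning epoch so the next request finds history.
        self._history[target_id] = corrected_fix

    def _fuse_realtime(self, target_id):
        raise NotImplementedError("INS/GNSS/road-network fusion goes here")
```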
In a specific application, real-time positioning data can be acquired through the combined action of an Inertial Navigation System (INS) and a Global Navigation Satellite System (GNSS). Specifically, the terminal obtains inertial sensing data of the target object through the inertial navigation system, obtains satellite observation data from the global navigation satellite system, and obtains the positioning data of the target object through road network matching based on the combination of the inertial sensing data matched to the target object and the satellite observation data.
Inertial sensing data refers to data collected by an inertial sensor. The inertial sensor is mounted on the target object, and the vehicle terminal is in communication connection with it. A MEMS (Micro-Electro-Mechanical System) inertial sensor is a complete system integrating mechanical elements, micro sensors, signal processing and control circuits, interface circuits, communication and power supply; MEMS inertial sensors can form low-cost INS/GPS integrated navigation systems and are well suited to building miniature strapdown inertial navigation systems. Inertial sensors mainly detect and measure acceleration, tilt, shock, vibration, rotation and multi-degree-of-freedom motion, and are a key component in navigation, orientation and motion carrier control. Specifically, the inertial sensing data includes angular velocity measurements, acceleration measurements, and the like, of the target object.
Satellite observation data refers to data observed by a global navigation satellite system, a space-based radio navigation and positioning system that can provide users with all-weather three-dimensional coordinates, velocity and time information at any location on the earth's surface or in near-earth space. Common systems include four satellite navigation systems: the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS), GLONASS, and GALILEO.
And step 204, performing road network matching according to the positioning data, and determining a target road where the target object is located.
Road network matching matches target positioning points to road nodes using a road network database and a matching algorithm. It is a positioning correction method based on software technology: the basic idea is to link the motion trajectory of the vehicle with the road information in a digital map and obtain an accurate position relative to the map.
Road network matching can be realized based on road network data, which refers to the detailed road information in a navigation map and is mainly used for road network matching, road condition information processing and the like; the road network data needed by the terminal for matching can be provided by a road network map database. The road network matching algorithm fuses the curve matching principle with geospatial proximity analysis. Road matching algorithms mainly include network topology algorithms, curve fitting algorithms, similarity algorithms, fuzzy logic algorithms, road network matching algorithms based on hidden Markov models, and the like.
Specifically, the terminal obtains road network data, performs road network matching on the target object through a road matching algorithm according to the road network data and the positioning data, and determines the target road of the target object in the map based on the matching result of the road matching algorithm, so that road-level positioning for the target object is completed.
In one embodiment, performing road network matching according to the positioning data, and determining a target road where a target object is located includes: and inputting the positioning data and the satellite observation data into a hidden Markov model, so that the hidden Markov model performs road network matching on the target object based on the input data and the road network data, and determining the target road of the target object in the map.
The hidden Markov model is a statistical model used to describe a Markov process with hidden, unknown parameters. It is a Markov chain whose states are not directly observable but can be inferred from a sequence of observation vectors; each observation vector is generated from a state according to some probability density distribution.
Road network matching based on a hidden Markov model finds, given the observations, the hidden sequence that generated the observation sequence. The hidden sequence represents the actual positions of the person or object carrying the positioning device, and the observation sequence is the series of track point coordinates produced by the device; that is, the model estimates the true positions (the hidden sequence) from the track points (the observation sequence). In a map matching algorithm based on a hidden Markov model, one or more candidate road segments exist within a certain distance of a track point; the projections of the track point onto the candidate segments are taken as candidate points, and the track point is a vertex in the Markov chain. The closer the track point is to a position on a candidate segment, the higher the probability of the point on that segment. The closer two successive true position points are, or the closer the distance between two successive points along the true road segment is to the distance between the two corresponding observed track points, the higher the state transition probability.
Specifically, the terminal inputs historical positioning data corresponding to the positioning data and satellite observation data into the hidden Markov model, so that the hidden Markov model performs road network matching on the target object based on the road network data and the satellite observation data according to the historical positioning data, and determines the target road of the target object in the map based on a road network matching result.
In a specific application, the terminal can acquire the historical positioning data of the target object within the 30 s before the current moment and the matching satellite observation data over the same 30 s, and input them into the hidden Markov model for road network matching, thereby determining the target road of the target object in the map.
In this embodiment, the road network matching is performed through the hidden markov model based on the positioning data and the satellite observation data of the target object, so that on one hand, the target road where the target object is located in the map can be quickly determined by utilizing the quick data processing effect of the hidden markov model in the road network matching, and on the other hand, the road network matching can be performed from both the positioning data and the satellite observation data by taking the positioning data and the satellite observation data of the target object as the input of the hidden markov model, thereby avoiding the influence caused by the error of single data and improving the accuracy of the matching result.
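The matching described above can be sketched as a standard Viterbi search over the hidden Markov model, with emission and transition probabilities behaving as described (closer projections and more consistent distances score higher). This is an illustrative sketch, not the patent's implementation; the three helper callables are assumed interfaces, and all probabilities are assumed strictly positive:

```python
import math

def viterbi_map_match(points, candidates, emission, transition):
    """points: GNSS track points; candidates(p): candidate road projections of p;
    emission(p, c): likelihood of observing p given the vehicle is at c;
    transition(a, b): likelihood of moving from candidate a to candidate b."""
    scores = {c: math.log(emission(points[0], c)) for c in candidates(points[0])}
    back = [{c: None for c in scores}]
    for p in points[1:]:
        new_scores, new_back = {}, {}
        for c in candidates(p):
            # Best predecessor maximises prior score plus log transition.
            prev_c, s = max(
                ((pc, ps + math.log(transition(pc, c))) for pc, ps in scores.items()),
                key=lambda t: t[1],
            )
            new_scores[c] = s + math.log(emission(p, c))
            new_back[c] = prev_c
        scores = new_scores
        back.append(new_back)
    # Trace the most likely hidden sequence (matched road positions) backwards.
    last = max(scores, key=scores.get)
    path = [last]
    for bp in reversed(back[1:]):
        path.append(bp[path[-1]])
    return list(reversed(path))
```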
And step 206, when the target object has lane change on the target road, determining the lane areas corresponding to the candidate lanes in the target road in the target plane.
A road may be divided into multiple lanes by lane lines. A lane change is the process by which the target object moves from one lane of a road to another. The change may be to an adjacent lane or across several consecutive lanes: for example, a target vehicle changes from the first lane of a four-lane road to the second lane and continues traveling in the second lane; or it changes from the first lane to the second lane and then continues from the second lane to the third lane.
The target plane is a virtual plane, and the virtual plane may be established directly on the basis of the target road in an electronic map containing the target road, or may be established in any other space, and the target road is projected onto the virtual plane.
In the lane positioning process, the terminal needs to screen out the target lanes from the lanes of the target road for positioning, and the screened lanes are candidate lanes in the target road.
The lane area is the area formed on the target plane by the projection lines corresponding to the lane lines on both sides of a lane. In the target plane, the relative positional relationships between the lane lines corresponding to the candidate lanes can be determined from the lane width and the number of lanes, where the lane width can be determined from the road width and the number of lanes acquired from the road network data.
Specifically, the terminal obtains road boundary data of a road position where the target object is located currently through a road level positioning result for the target object, and determines a relative position relationship between each lane line in the target road and a relative position relationship between the lane line and the road boundary line based on road network data. The terminal determines lane areas of the candidate lanes respectively corresponding to the target planes based on the relative positional relationship between the lane lines and the road boundary lines.
Further, the terminal may determine, in the target virtual plane where the target road is located in the electronic map, the lane areas formed by the lane lines corresponding to the candidate lanes of the target road, based on the determined relative positional relationships between the lane lines and between the lane lines and the road boundary lines. Alternatively, the terminal may project the road boundary lines of the target road onto a pre-constructed virtual plane, project the lane lines of the target road onto that plane based on the determined relative positional relationships between the lane lines and the road boundary lines, obtain a virtual plane containing the road boundary lines and lane lines, and take each region between adjacent projected lane lines or road boundary lines as a lane area.
And step 208, determining a target lane from the candidate lanes, so that the projection point of the target object in the target plane is located in the lane area corresponding to the target lane.
The projection point is the result of projecting the target object onto the target plane. When the target object is small, the position data of any point of the target object can be projected directly onto the target plane; when the target object is large, its central point can be determined first and the position data of the central point projected onto the target plane.
Specifically, for each candidate lane, the terminal determines the positional relationship between the lane area corresponding to the candidate lane in the target plane and the projection point of the target object in the target plane. When the projection point is located in the lane area, the candidate lane is determined to be the target lane into which the lane change was made; when the projection point is located outside the lane area, it is determined that the target object has not changed lane into that candidate lane.
Step 210, positioning the target object to the target lane.
Specifically, the terminal updates the position of the target object in the electronic map to the position corresponding to the target lane in the displayed navigation page, so that the navigation object can accurately learn the updated lane positioning result, improving the driver's judgment of road conditions during driving and reducing driving risk.
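In the simplest one-dimensional reading, steps 206 to 210 reduce to an interval test on the projection point's abscissa against the projection-line coordinates. A minimal sketch, assuming the lane areas are consecutive intervals along one coordinate axis (the C/E/F/G naming follows the three-lane example discussed later):

```python
def locate_target_lane(x_proj, boundaries):
    """Return the index of the lane whose region contains the projection point.

    boundaries: ascending abscissas of the projection lines in the target
    plane, e.g. [x_C, x_E, x_F, x_G] for a three-lane road; x_proj: abscissa
    of the target object's projection point. Returns None when the point
    lies outside every lane area.
    """
    for i in range(len(boundaries) - 1):
        if boundaries[i] <= x_proj <= boundaries[i + 1]:
            return i
    return None
```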
With this target object positioning method, positioning data of the target object is acquired, road network matching is performed according to the positioning data, and the target road on which the target object is located is determined; this step places only a low accuracy requirement on the positioning data. When the target object changes lanes on the target road, the lane areas corresponding to the candidate lanes in the target plane are determined, and a target lane is determined from the candidate lanes such that the projection point of the target object in the target plane is located in the lane area corresponding to the target lane; the target object is then positioned to the target lane. The lane positioning problem is thereby converted into a problem of the positional relationship between a projection point and lane areas in a plane, and the positional relationship between the vehicle and the lane lines is determined without lidar or visual detection, which reduces the dependence on hardware conditions in lane-level positioning and realizes low-cost, high-precision lane-level positioning.
In one embodiment, determining a target lane from the candidate lanes such that a projection point of the target object in the target plane is located in a lane area corresponding to the target lane, and positioning the target object to the target lane includes:
respectively constructing lane boundary functions for each candidate lane in the target road; when a target lane exists among the candidate lanes whose lane boundary function solution result satisfies the result screening condition, positioning the target object to the target lane. The result screening condition is that the projection point of the target object in the target plane is located in the lane area corresponding to the target lane.
When an optimization problem is handled with a gate function, barrier terms are added to the objective function to ensure that the search for extreme points always stays within the feasible region: for points attempting to cross the boundary from inside the feasible region, the barrier grows larger the closer the boundary is approached and tends to infinity at the boundary, ensuring that the optimal solution does not leave the feasible region. The functions representing these barrier terms are the gate functions.
When the projection point of the target object in the target plane is located in the lane area corresponding to a candidate lane, the solution result of that candidate lane's boundary function is small; when the projection point is located outside the lane area, the solution result is large. The result screening condition may therefore be to select, from the solution results of the lane boundary functions of the candidate lanes, the target solution result with the smallest value, and take the candidate lane corresponding to it as the target lane.
In this embodiment, by constructing a lane boundary function and judging from its solution result whether the projection point of the target object in the target plane is located in the lane area corresponding to a candidate lane, the distinction between the projection point lying inside and outside the lane area is made pronounced, which improves the accuracy of the judgment of the relative positional relationship between the projection point and the lane area.
In one embodiment, a lane boundary function is constructed separately for each candidate lane in the target roadway, including:
determining a matching relation between a candidate lane represented by the lane area and a projection line based on the projection line forming the lane area in the target plane; and constructing a segmented lane boundary function according to the position relation between the projection point of the target object in the target plane and the target projection line based on the target projection line matched with each candidate lane.
The projection lines are the results of projecting the lane lines and road boundary lines onto the target plane; the area between two adjacent projection lines is the lane area of the corresponding lane in the target road, and each lane in the target road corresponds to two parallel projection lines.
The target projection lines are the two parallel projection lines matched to a given candidate lane. The projection point is the result of projecting the target object onto the target plane. The positional relationship between the projection point and the target projection lines falls into two main cases: in the first case, both target projection lines lie on the same side of the projection point, specifically both to its right or both to its left; in the second case, the projection point lies between the two target projection lines. A piecewise lane boundary function is constructed based on the category of the positional relationship between the projection point and the target projection lines.
Specifically, based on the projection lines forming the lane areas in the target plane, the terminal determines the matching relationship between the candidate lane represented by each lane area and each projection line, so that each candidate lane has its matched target projection lines. Based on the target projection lines matched to each candidate lane, the terminal constructs a three-segment lane boundary function according to the three positional relationships between the projection point of the target object in the target plane and the target projection lines: the argument of the lane boundary function is the position data of the projection point, and the range of this position data is divided into three segments according to the positional relationship between the projection point's position data and the target projection lines.
Further, the two target projection lines corresponding to the same candidate lane are denoted the first projection line and the second projection line. Based on them, the terminal constructs a planar coordinate system in the target plane, with mutually perpendicular abscissa and ordinate axes; the first and second projection lines are parallel to a target coordinate axis, which is either the abscissa axis or the ordinate axis. To describe the positional relationship between the projection lines and the projection point precisely, the projection line closer to the target coordinate axis is taken as the first projection line, its distance to the target coordinate axis being the first distance, and the projection line farther from the target coordinate axis is taken as the second projection line, its distance being the second distance. It will be appreciated that in a particular application the abscissa and ordinate axes of the planar coordinate system may be interchanged, as may the first and second projection lines.
Specifically, when the target distance between the projection point and the target coordinate axis is smaller than the first distance, the value of the lane boundary function is positively correlated with the distance between the projection point and the first projection line. And when the target distance between the projection point and the target coordinate axis is greater than the second distance, the value of the lane boundary function is positively correlated with the distance between the projection point and the second projection line. And when the target distance between the projection point and the target coordinate axis is greater than or equal to the first distance and less than or equal to the second distance, the value of the lane boundary function is zero.
The lane boundary function may be a quadratic gate function. For example, when the target distance between the projection point and the target coordinate axis is smaller than the first distance, the value of the lane boundary function is positively correlated with the square of the distance between the projection point and the first projection line. Likewise, when the target distance between the projection point and the target coordinate axis is greater than the second distance, the value of the lane boundary function is positively correlated with the square of the distance between the projection point and the second projection line.
In a specific application, as shown in fig. 3, assume the target road of the target object has three lanes 1, 2 and 3. The terminal projects the central position D of the target object and the road network data onto the same plane coordinate system Oxy; the straight line through D perpendicular to the road direction intersects the lane lines and road boundary lines at points C, E, F and G. The lane boundary function is a quadratic gate function with the expression:

$$f_{(x_{\min},\,x_{\max})}(x)=\begin{cases}(x-x_{\min})^2, & x<x_{\min}\\ 0, & x_{\min}\le x\le x_{\max}\\ (x-x_{\max})^2, & x>x_{\max}\end{cases}$$

where $(x_{\min},x_{\max})\in\{(x_C,x_E),(x_E,x_F),(x_F,x_G)\}$, and $x_C$, $x_E$, $x_F$, $x_G$ are the abscissas of points C, E, F and G in the xy coordinate system.
In this embodiment, effective limitation on a candidate lane region can be realized through a segmented lane boundary function, and based on different position relationships between a candidate lane and a target object, values of the lane boundary function adopt different calculation methods, so that accurate estimation on whether a projection point corresponding to the target object is located in the lane region corresponding to the candidate lane is realized, and accuracy of a positioning result is ensured.
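The piecewise function above translates directly into code. A minimal sketch of the quadratic gate for one candidate lane (Python, illustrative):

```python
def quadratic_gate(x, x_min, x_max):
    """Quadratic gate term of one candidate lane: zero inside the lane region
    [x_min, x_max], growing with the squared distance to the violated
    boundary outside it."""
    if x < x_min:
        return (x - x_min) ** 2
    if x > x_max:
        return (x - x_max) ** 2
    return 0.0
```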
In one embodiment, determining respective lane regions in the target plane for respective candidate lanes in the target road comprises:
acquiring the road width and the number of lanes of the target road from road network data; and determining, based on the road width and the number of lanes, the lane area corresponding to each candidate lane of the target road in the target plane.
The road network data refers to detailed road information in the navigation map, mainly used for road network matching, road condition information processing and the like. The lane area is an area formed by projection lines on the target plane, the projection lines representing the lane lines and road boundary lines on both sides of a lane.
Specifically, based on the positioning data of the target object, the terminal may obtain from the road network data the road information of the target road where the target object is currently located, including the road width and the number of lanes of the target road; determine the relative positional relationships between the lane lines corresponding to the candidate lanes based on the lane width and the number of lanes; and determine the relative positional relationships between the lane lines and the road boundary lines. According to these relative positional relationships, lane line projection is performed for the target road in the target plane to obtain a projection result containing the road boundary lines and lane lines; each projected area between adjacent lane lines, or between a lane line and an adjacent road boundary line, is a lane area.
In a specific application, if the target road of the target object has three lanes 1, 2 and 3, the target plane contains two lane lines and two road boundary lines, i.e. four projection lines. If the width of the target road is 9 m and the lanes have equal widths, the distance between adjacent projection lines in the target plane is 3 m; that is, the lane area of each lane of the target road has a width of 3 m in the target plane.
In this embodiment, based on the positioning data of the target object, the road width and the number of lanes of the road are obtained from the road network data, and the projection result of each lane in the target road in the target plane can be accurately obtained only by obtaining the road-level positioning information on the premise of not performing visual detection or radar detection on the lane line, thereby reducing the dependency of the positioning process on hardware equipment.
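Under the equal-lane-width assumption used in the 9 m example above, the projection-line coordinates follow from the road width and lane count alone. A sketch (real road network data may instead carry explicit per-lane widths):

```python
def lane_boundaries(road_width, lane_count, x_left=0.0):
    """Abscissas of the projection lines for a road whose lanes share one
    width, as in the 9 m / three-lane example above."""
    lane_width = road_width / lane_count
    return [x_left + i * lane_width for i in range(lane_count + 1)]

print(lane_boundaries(9.0, 3))  # [0.0, 3.0, 6.0, 9.0]
```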
In one embodiment, the method further comprises: determining filter parameters for the target object based on the positioning data and observation data of the target object; acquiring the covariance matrix corresponding to the filter parameters and the estimation parameters corresponding to the covariance matrix; taking the covariance matrix and the estimation parameters as prior information and performing nonlinear optimization in combination with the lane boundary functions of the candidate lanes to obtain a target cost function value for each candidate lane; and selecting the candidate lane with the smallest target cost function value as the target lane.
The filter parameters refer to the parameters of the fusion positioning filter. The terminal contains a fusion positioning filter whose specific type is not limited; optionally, it may be a Kalman fusion positioning filter. Taking lane-level positioning of a vehicle as an example, in the initial stage the terminal first establishes a MEMS IMU (inertial sensing)/GNSS (global navigation satellite system) fusion positioning algorithm, specifically: initializing the vehicle attitude, acquiring satellite observations, and calculating the initial position and velocity of the vehicle, thereby establishing the fusion positioning filter.
The construction process of the fusion positioning filter comprises the following steps: acquiring initial positioning data, receiver clock error data and zero offset data of a target object; and constructing a fusion positioning filter based on the initial positioning data, the receiver clock error data and the zero-offset data to obtain filter parameters, and respectively using the fusion positioning filter parameters as matrix elements to obtain a covariance matrix.
The initial positioning data comprises an initial speed, an initial position and an initial attitude of the target object, the receiver clock error data comprises a receiver clock error change rate and a receiver clock error, and the zero offset data comprises zero offsets of the inertial sensing gyroscope and the accelerometer.
The fusion positioning filter parameters may be set to:

$$x=\left(\phi,\; v,\; p,\; b_a,\; b_g,\; \dot{dt_r},\; dt_r\right)$$

where $\phi$ is the vehicle attitude parameter; $v$ and $p$ are the initial velocity and initial position of the vehicle in the ECEF coordinate system; $b_a$ and $b_g$ are the zero offsets of the accelerometer and the inertial sensing gyroscope, used to correct the measured deviations in acceleration and angular velocity of the MEMS IMU; $\dot{dt_r}$ denotes the receiver clock error change rate; and $dt_r$ is the receiver clock error.
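One possible in-code layout of this state vector, with illustrative names (the patent does not prescribe a data structure):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FusionFilterState:
    """Illustrative layout of the fusion positioning filter state vector."""
    phi: np.ndarray   # vehicle attitude, shape (3,)
    v: np.ndarray     # velocity in ECEF, shape (3,)
    p: np.ndarray     # position in ECEF, shape (3,)
    b_a: np.ndarray   # accelerometer zero offset, shape (3,)
    b_g: np.ndarray   # gyroscope zero offset, shape (3,)
    dtr_rate: float   # receiver clock error change rate
    dtr: float        # receiver clock error
```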
In a specific application, the terminal can acquire the road direction β of the vehicle from map matching and use it to initialize the motion attitude of the vehicle, i.e.

$$\psi_k=\beta,\qquad p_k=0,\qquad r_k=0$$

where $\psi_k$, $p_k$ and $r_k$ are the yaw, pitch and roll of the vehicle, β is the road direction, λ is the longitude coordinate of the vehicle, and φ is the latitude coordinate of the vehicle (λ and φ determine the rotation from the local navigation frame to the ECEF frame). The initial position and velocity of the vehicle, the receiver clock error and its change rate, i.e. the parameters $p$, $v$, $dt_r$ and $\dot{dt_r}$, are then calculated from the satellite observations.
In another specific application, the vehicle attitude $\phi$ represents the Euler angles between the inertial frame of the MEMS IMU and the ECEF (earth-centered, earth-fixed) coordinate system, i.e. the Euler angles rotating the coordinate frame of the inertial sensor's three axes into ECEF about the z, y and x axes. The transformation between the inertial sensor frame and ECEF can be expressed in matrix form as $C$. Therefore, when determining the vehicle attitude $\phi$ from the Euler angles between the inertial sensor and ECEF, the following formula is used:

$$\phi^{\wedge}=\mathrm{Log}(C)$$

where Log is the logarithm operation on the Lie group SO(3) and $(\cdot)^{\wedge}$ is the skew-symmetric matrix of a vector.
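A sketch of the SO(3) logarithm used above, i.e. recovering the rotation vector φ from the rotation matrix C; this is the standard Rodrigues-based construction, not code from the patent:

```python
import numpy as np

def so3_log(C):
    """Rotation vector phi such that phi^ = Log(C), for C in SO(3).

    The near-identity case is handled; angles near pi would need extra
    care in production code.
    """
    cos_theta = np.clip((np.trace(C) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:
        return np.zeros(3)
    # The skew-symmetric part of C carries the rotation axis.
    axis = np.array([C[2, 1] - C[1, 2],
                     C[0, 2] - C[2, 0],
                     C[1, 0] - C[0, 1]])
    return theta / (2.0 * np.sin(theta)) * axis
```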
The covariance matrix corresponds to the filter parameters of the fusion positioning filter and is a matrix expression of those parameters. The terminal performs matrix conversion on the filter parameters to obtain the corresponding covariance matrix. The Mahalanobis distance represents the distance between a point and a distribution; it computes the distance between two points through the covariance and is an effective way to measure the similarity of two unknown sample sets.
In a specific application, the target road of the target object has three lanes 1, 2 and 3. First, the terminal projects the central position D of the target object and the road network data onto the same plane coordinate system Oxy; the straight line through D perpendicular to the road direction intersects the lane lines and road boundary lines at points C, E, F and G. The expression of the quadratic gate function is:

$$f_{(x_{\min},\,x_{\max})}(x)=\begin{cases}(x-x_{\min})^2, & x<x_{\min}\\ 0, & x_{\min}\le x\le x_{\max}\\ (x-x_{\max})^2, & x>x_{\max}\end{cases}$$

where $(x_{\min},x_{\max})\in\{(x_C,x_E),(x_E,x_F),(x_F,x_G)\}$, and $x_C$, $x_E$, $x_F$, $x_G$ are the abscissas of points C, E, F and G in the xy coordinate system.
Assuming the target object is in lane $i$ ($i=1,2,3$), the following nonlinear optimization equation is constructed:

$$M_i=\min_{x}\left[\big(x-\hat{x}^{+}(t_k)\big)^{\top}P^{+}(t_k)^{-1}\big(x-\hat{x}^{+}(t_k)\big)+f_{(x_{\min},\,x_{\max})_i}(x)\right]$$

where $P^{+}(t_k)$ is the covariance matrix corresponding to the target object at time $t_k$, $\hat{x}^{+}(t_k)$ is the corresponding estimation parameter, and the first term is the Mahalanobis-distance prior that takes the covariance matrix and the estimation parameters as prior information. Performing the nonlinear optimization with this prior and the lane boundary function of lane 1 yields the target cost function value $M_1$ for lane 1; in the same way, the target cost function values $M_2$ for lane 2 and $M_3$ for lane 3 are obtained. The candidate lane with the smallest target cost function value is selected as the target lane: when $M_1$ is the smallest, the target object is judged to be in lane 1; when $M_2$ is the smallest, in lane 2; and when $M_3$ is the smallest, in lane 3.
In this embodiment, the lane where the target object is currently located is judged through nonlinear optimization combining the fusion positioning filter with the lane boundary functions: the lane boundary functions effectively constrain the candidate lane areas, and fusing them with the filter's prior enables accurate estimation of the target lane where the target object is located, ensuring the accuracy of the positioning result.
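Putting the pieces together, lane selection can be sketched as minimizing, per candidate lane, the Mahalanobis prior plus that lane's quadratic gate, reusing the quadratic_gate helper sketched earlier. The use of scipy for the inner optimisation and the proj_coord mapping are assumptions, not specified by the patent:

```python
import numpy as np
from scipy.optimize import minimize

def select_lane(x_hat, P, lane_intervals, proj_coord):
    """Return the index of the lane with the smallest cost M_i, plus all costs.

    x_hat: filter estimate; P: its covariance matrix; lane_intervals:
    [(x_min, x_max), ...] per candidate lane; proj_coord(x): abscissa of
    the state x projected into the target plane (an assumed mapping).
    """
    P_inv = np.linalg.inv(P)
    costs = []
    for x_min, x_max in lane_intervals:
        def cost(x, lo=x_min, hi=x_max):
            d = x - x_hat
            # Mahalanobis prior keeping x near the estimate, plus lane gate.
            return d @ P_inv @ d + quadratic_gate(proj_coord(x), lo, hi)
        costs.append(minimize(cost, x_hat).fun)
    return int(np.argmin(costs)), costs
```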
In one embodiment, the method further comprises: monitoring course change data and angular speed change data of a target object; determining a first matching result between the course change data and the course change template and a second matching result between the angular speed change data and the angular speed change template; and when the first matching result accords with the first matching condition and the second matching result accords with the second matching condition, determining that the target object has lane change on the target road.
The course change data refers to the change in the vehicle's direction of movement over a period of time, and the angular velocity change data refers to the change in the vehicle's angular velocity of movement over a period of time. For example, the target vehicle initially moves due north, changes to a heading of 5° east of north after 2 seconds, and returns to due north after 4 seconds. As shown in fig. 4 and fig. 5, the measured values of the heading and the angular velocity of the vehicle show regular changes when the vehicle makes a lane change. Referring to fig. 4, when the vehicle changes lane, the heading change angle Heading of the vehicle first increases, then decreases, and finally stabilizes back to its original value. Referring to fig. 5, when the vehicle changes lane, the measured angular velocity of the vehicle increases, decreases, increases again, and then stabilizes back to its original value.
Specifically, the course change template is shown in fig. 6, where dt is the time elapsed during the heading change and is related to the lane width of the target lane, dT is the template matching time span, and dHeading is the heading change angle. The angular velocity change template is shown in fig. 7, where dt1 and dt2 are the times elapsed during the change of the angular velocity measurement, with dt1 + dt2 = dt; dw1 and dw2 are the amplitudes of the angular velocity changes.
The lane change refers to a state in which the target object moves from the current lane to another lane. And when the first matching result meets the first matching condition and the second matching result meets the second matching condition, judging that the current lane of the target vehicle changes, namely the target vehicle changes lanes. And when the first matching result does not accord with the first matching condition or the second matching result does not accord with the second matching condition, judging that the lane where the target vehicle is located is not changed, namely the target vehicle does not change the lane.
Specifically, the terminal obtains course change data and inertial gyro angular speed measurement values obtained after fusion positioning filtering processing, matches the course change data with a course change template, matches the angular speed change data with the angular speed change template, and performs lane change analysis on the target vehicle based on the currently positioned lane of the target vehicle when matching results all meet corresponding matching conditions.
In this embodiment, on one hand, lane change analysis is performed on the target vehicle through the changes in heading and angular velocity; analyzing whether the vehicle changes lanes from the combination of heading and angular velocity ensures the accuracy of the lane change analysis result. On the other hand, implementing lane change analysis through matching against the heading change template and the angular velocity change template simplifies the data to be processed and improves the data processing efficiency of the lane change analysis.
In one embodiment, determining a first match between the heading change data and the heading change template and a second match between the angular velocity change data and the angular velocity change template includes:
acquiring a course change template and an angular speed change template which are matched with the lane width based on the lane width of the current lane of the target object; carrying out normalization cross-correlation matching on the course change data and the course change template to obtain a first matching result between the course change data and the course change template; and carrying out normalized cross-correlation matching on the angular velocity change data and the angular velocity change template to obtain a second matching result between the angular velocity change data and the angular velocity change template.
The normalized cross-correlation matching algorithm is a statistical matching algorithm, and the algorithm determines the matching degree by calculating the cross-correlation value of the template and the image to be matched. The position of the search window at which the cross-correlation value is maximum determines the position of the template image in the image to be matched.
Specifically, the lane width of the target lane may be obtained from the road network data, and the duration of the heading change and angular velocity change processes during a lane change can be determined based on the lane width: the wider the lane, the longer these processes last; the narrower the lane, the shorter they last. Furthermore, the normalized cross-correlation matching applied to the course change data and the course change template and that applied to the angular velocity change data and the angular velocity change template can use the same algorithm; only the input data combination needs to be changed to obtain the matching result for each combination.
In a specific embodiment, as shown in fig. 8, the same normalized cross-correlation matching model may be used to process both the matching between the course change data and the course change template and the matching between the angular velocity change data and the angular velocity change template, reducing the number of required models. Alternatively, two normalized cross-correlation matching models may be used to process the two matchings respectively, so that they can be processed in parallel to improve processing efficiency.
In the embodiment, the acquired course change template and the acquired angular speed change template can be accurately matched with the lane change data of the target lane according to the lane width of the target lane, and the course change data and the course change template are subjected to normalized cross-correlation matching and the angular speed change data and the angular speed change template are subjected to normalized cross-correlation matching, so that the lane change judgment processing process is simplified, and the accuracy of the judgment result is improved.
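The following sketch illustrates the normalized cross-correlation matching described above, assuming the monitored data and the templates are one-dimensional NumPy arrays; the sliding-window search and the epsilon guard against constant windows are illustrative choices.

```python
import numpy as np

def ncc_match(signal, template):
    # slide the template over the signal and return the best normalized
    # cross-correlation score and the offset at which it occurs
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(template)
    best_score, best_offset = -1.0, 0
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        score = float(w @ t) / n          # score lies in [-1, 1]
        if score > best_score:
            best_score, best_offset = score, i
    return best_score, best_offset

# first matching result: heading change data vs. heading change template
# second matching result: angular velocity data vs. angular velocity template
```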
In one embodiment, when the first matching result meets the first matching condition and the second matching result meets the second matching condition, determining that the target object has a lane change on the target road includes:
and when the course variation in the course variation data reaches the curve peak value in the course variation template and the angular speed maximum value in the angular speed variation data reaches the curve peak value in the angular speed variation template, judging that the target object has lane change on the target road.
The curve in the course change template represents the course change amount before and after the lane change, the curve peak value in the course change template represents the maximum course change when the lane change occurs, the curve in the angular speed change template represents the angular speed value before and after the lane change occurs, and the curve peak value in the angular speed change template represents the maximum value of the angular speed when the lane change occurs.
The terminal obtains the maximum heading change amount from the monitored heading change data; when this maximum reaches the curve peak in the heading change template, the first matching result is determined to meet the first matching condition. The terminal likewise obtains the maximum angular velocity from the monitored angular velocity change data; when this maximum reaches the curve peak in the angular velocity change template, the second matching result is determined to meet the second matching condition. The determination that the target object has changed lanes on the target road is thereby obtained.
In this embodiment, the course variation is matched with the curve peak in the course variation template, and the angular velocity maximum is matched with the curve peak in the angular velocity variation template, so that the matching process can be simplified, and the determination result of whether the target object has lane change on the target road can be quickly obtained.
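A sketch of this peak-based decision is given below; reading "reaches the curve peak" as a threshold comparison on the absolute maxima is an assumption made for illustration.

```python
import numpy as np

def lane_change_detected(heading_data, heading_template, rate_data, rate_template):
    # first condition: heading change amount reaches the template's peak value
    first_ok = np.max(np.abs(heading_data)) >= np.max(np.abs(heading_template))
    # second condition: angular velocity maximum reaches the template's peak value
    second_ok = np.max(np.abs(rate_data)) >= np.max(np.abs(rate_template))
    return first_ok and second_ok
```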
In one embodiment, the method further comprises: updating the initial positioning data of the target object according to the inertial sensing data matched with the target object to obtain intermediate data; determining an error correction amount of the intermediate data based on the satellite observation data matched with the target object; and according to the error correction amount, carrying out error correction on the intermediate data to obtain updated positioning data.
The initial positioning data may be a position data acquisition result, a speed data acquisition result, and an attitude data acquisition result acquired in real time in response to a positioning request triggered by a user. The initial positioning data may also be historical position data, historical speed data, and historical attitude data obtained from historical positioning data at a previous time in response to a positioning request automatically triggered at certain time intervals. The initial positioning data may specifically comprise one or more of position data, velocity data and pose data of the target object.
Updating the initial positioning data of the target object refers to a process of converting the positioning data of the target object at the previous time into the positioning data of the target object at the next time. The updated positioning data is the result of the updating process based on the initial positioning data. For example, the positioning data of the target object at the current time is recorded as initial positioning data, and since the target object is in a motion state, the positioning data corresponding to the target object at the next time needs to be determined, and a result obtained by performing data updating processing on the initial positioning data by the terminal is the updated positioning data corresponding to the next time.
Specifically, the updates to the initial positioning data may include data updates and error corrections. The data updating refers to a processing procedure of determining ideal positioning data of the next moment based on the positioning data of the previous moment, and the error correction refers to a processing procedure of performing data correction on the ideal positioning data after the data updating so as to reduce errors with actual data. In a specific application, the terminal responds to the positioning request and obtains initial positioning data of the target object. If the positioning request is the initial positioning for the target object, the initial positioning data is data acquired in real time, and updated positioning data corresponding to the next moment is obtained based on the data acquired in real time; if the positioning request is not the initial positioning for the target object, the initial positioning data can be obtained from the historical positioning result, and the intermediate data can be obtained through data updating.
Further, the terminal updates the initial positioning data of the target object based on the inertial sensing data and the satellite observation data matched with the target object to obtain intermediate data.
In a specific application, the terminal updates the positioning data of the previous moment based on the inertial sensing data matched with the target object to obtain the ideal positioning data of the next moment as intermediate data, and then performs error correction on the updated ideal positioning data based on the satellite observation data to obtain the updated positioning data corresponding to the next moment. Through the fusion of the inertial sensing data matched with the target object and the satellite observation data, the initial positioning data of the target object is subjected to data updating and error correction, so that the accuracy of the obtained updated positioning data corresponding to the next moment is higher.
In this embodiment, based on the filtering parameters and the covariance matrix of the fusion positioning filter, inertial sensing data and satellite observation data can be fused, and error correction is performed on initial positioning data, so that the accuracy of updated positioning data is improved, and accurate positioning of a target object is realized.
In one embodiment, the updated positioning data includes updated position data of the target object. The terminal extracts the updated position data from the updated positioning data. When the coordinate system of the updated position data is the same as the coordinate system of the road network data, the updated position data can be used directly as the coordinate position of the target object on the target road; when the two coordinate systems differ, the coordinate position of the target object on the target road is obtained through coordinate conversion.
Specifically, the terminal determines the position data of the center point of the target object based on the updated positioning data, performs coordinate conversion on the position data according to the longitude and latitude data corresponding to the target object, projects the converted center position data into the plane coordinate system corresponding to the target plane, and determines the coordinate position of the target object on the target road. The position data are the coordinates of the center of the target object in the terrestrial coordinate system, and coordinate conversion is the process of unifying the coordinate expressions of different coordinate systems. In a specific application, let the coordinates of the origin O of the plane coordinate system Oxy in the terrestrial coordinate system be p_o. The coordinates (x_D, y_D) of the center position D of the target object in the plane coordinate system Oxy are then:

x_D = −sin λ · (p_x − p_ox) + cos λ · (p_y − p_oy)
y_D = −sin φ · cos λ · (p_x − p_ox) − sin φ · sin λ · (p_y − p_oy) + cos φ · (p_z − p_oz)

wherein λ represents the longitude of the current position of the target object and φ represents the latitude of the current position of the target object; p_x, p_y and p_z respectively represent the coordinate data of the target object in the terrestrial coordinate system, and p_ox, p_oy and p_oz respectively represent the coordinate data of the origin O of the plane coordinate system in the terrestrial coordinate system.
In this embodiment, coordinate conversion is performed on the position data of the central point of the target object according to the longitude and latitude data, and the central position data of the target object is projected to the plane coordinate system corresponding to the target plane, so that unification of data of different coordinate systems can be realized, and the accuracy of the data is improved.
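The projection just described can be sketched as follows, assuming the plane coordinate system is an east-north frame anchored at the origin O; a different axis convention would change the two formulas accordingly.

```python
import numpy as np

def ecef_to_plane(p, p_o, lon, lat):
    """Project an ECEF position p into the local plane frame Oxy at origin p_o."""
    d = np.asarray(p) - np.asarray(p_o)
    x = -np.sin(lon) * d[0] + np.cos(lon) * d[1]          # east component
    y = (-np.sin(lat) * np.cos(lon) * d[0]
         - np.sin(lat) * np.sin(lon) * d[1]
         + np.cos(lat) * d[2])                            # north component
    return x, y
```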
Further, the error correction amount of the filter parameter x can be represented by δx:

δx = [δψ, δv, δp, δb_a, δb_g, δḋt_r, δdt_r]^T
specifically, the terminal updates the initial posture of the target object based on the angular velocity measurement value of the target object to obtain an updated posture; updating the initial speed of the target object based on the updated attitude and the acceleration measured value of the target object to obtain an updated speed; updating the initial position of the target object based on the initial speed and the updating speed to obtain an updated position; and updating the covariance matrix based on the updated attitude and the acceleration measured value to obtain an updated covariance matrix.
In one particular application, the update process for the vehicle position, speed and attitude and covariance matrix is as follows:
At time tk, the angular velocity measurement ω(tk) and the acceleration measurement a(tk) are:

ω(tk) = [ωx(tk), ωy(tk), ωz(tk)]^T,  a(tk) = [ax(tk), ay(tk), az(tk)]^T

wherein ωx(tk), ωy(tk) and ωz(tk) are the decompositions of the angular velocity measurement ω(tk) on the x, y and z axes of the inertial sensor, and ax(tk), ay(tk) and az(tk) are the decompositions of the acceleration measurement a(tk) on the x, y and z axes of the inertial sensor.
Further, the terminal may calculate the updated position, updated speed and updated attitude of the target vehicle at time tk through the following steps:
The vehicle attitude update process is as follows:

C_b^e(tk) = (I − [ωe]× · Δt) · C_b^e(tk−1) · (I + [ω(tk)]× · Δt)

wherein ωe is the angular velocity of the earth's rotation; Δt = tk − tk−1 is the update time interval; [·]× denotes the skew-symmetric matrix of a vector; C_b^e(tk−1) is the coordinate system transformation matrix between the inertial coordinate system and the earth coordinate system at time tk−1, which at the moment of initial positioning is determined from the fused filter attitude parameter ψ; and C_b^e(tk) is the coordinate system transformation matrix between the inertial coordinate system and the earth coordinate system at time tk, i.e., the updated vehicle attitude.
The vehicle speed update process is as follows: the speed at the current time is determined from the acceleration measurement obtained at the current time, the speed at the previous time, the gravity value corresponding to the current time, and the positioning time interval. The specific process is:

v(tk) = v(tk−1) + (C_b^e(tk) · a(tk) + g(tk)) · Δt

wherein g(tk) is the gravity value in the terrestrial coordinate system at time tk, v(tk−1) is the vehicle speed at time tk−1, and v(tk) is the vehicle speed at time tk, i.e., the updated vehicle speed.
The vehicle position update process is as follows:

p(tk) = p(tk−1) + ((v(tk−1) + v(tk)) / 2) · Δt

wherein p(tk−1) is the vehicle position at time tk−1 and p(tk) is the vehicle position at time tk, i.e., the updated vehicle position.
The process of covariance matrix update is as follows:
After the terminal obtains the updated attitude, updated speed and updated position at the current time based on the measurement data obtained at the current time, it updates the corresponding parameters in the filter parameter matrix based on the updated attitude, speed and position to obtain an intermediate parameter matrix. Further, the terminal constructs the state transition matrix of the Kalman filtering algorithm based on the acceleration measurement data and the attitude at the current time, and updates the initial covariance matrix according to the state transition matrix and an error matrix determined by the attribute information of the inertial sensor to obtain an intermediate covariance matrix, where the initial covariance matrix is the covariance matrix obtained after the last positioning was completed.
The covariance matrix P of the filter is updated by
P(tk)=Φ(tk)·P(tk-1)·Φ(tk)T+Q(tk)
wherein Φ(tk) represents the system state transition matrix in the filtering algorithm, I3×3 is the 3×3 identity matrix, and Ωe, F21 and F23 are intermediate variables used in calculating Φ(tk); Ωe = [ωe]× is the skew-symmetric matrix of the earth rotation vector. (The explicit expressions for Φ(tk), F21 and F23 appear only as equation images in the original filing.)
wherein rS(tk) represents the distance of the current vehicle from the center of the earth, and Q(tk) is the system noise, which can be obtained directly from the inertial sensor product specification of the mobile terminal, i.e.,

Q(tk) = diag(σa² · I3×3, σg² · I3×3) · Δt

wherein σa² and σg² are the system noise spectral densities of the accelerometer and the gyroscope, which can be obtained directly from the on-board MEMS IMU product specifications.
By analogy, when the inertial acceleration and angular velocity measurements at the next time tk+1 are received, the above steps are repeated, thereby updating the vehicle position, velocity, attitude and covariance matrix.
In the present embodiment, the initial positioning data and the covariance matrix are updated based on the angular velocity measurement ω(tk) and the acceleration measurement a(tk) acquired by the inertial sensor, which is equivalent to updating the position information of the target object to the position determined from the data collected by the inertial sensor, so that positioning can proceed even before the subsequent correction processing is carried out, and positioning accuracy is ensured.
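The propagation step described in this embodiment can be sketched as follows. This is a first-order sketch under the equations as reconstructed above (earth-frame mechanization with a trapezoidal position update); the state transition matrix Phi and the noise matrix Q are passed in rather than derived, since their explicit forms are not reproduced here.

```python
import numpy as np

OMEGA_E = np.array([0.0, 0.0, 7.292115e-5])  # earth rotation rate (rad/s)

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def ins_propagate(C, v, p, P, omega, acc, g, dt, Phi, Q):
    """One MEMS-IMU propagation step: attitude, velocity, position, covariance."""
    # attitude update (first-order, earth frame)
    C_new = (np.eye(3) - skew(OMEGA_E) * dt) @ C @ (np.eye(3) + skew(omega) * dt)
    # velocity update with gravity
    v_new = v + (C_new @ acc + g) * dt
    # position update (trapezoidal integration of velocity)
    p_new = p + 0.5 * (v + v_new) * dt
    # covariance time update: P(tk) = Phi P(tk-1) Phi^T + Q
    P_new = Phi @ P @ Phi.T + Q
    return C_new, v_new, p_new, P_new
```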
In one embodiment, determining an error correction for the intermediate data based on satellite observations matching the target object comprises:
acquiring a pseudo-range observation value and a Doppler observation value from satellite observation data corresponding to a target object; constructing a filtering parameter error correction equation based on the pseudo-range observed value and the Doppler observed value; and solving the filtering parameter error correction equation to obtain the error correction.
The pseudo-range observation represents the difference between the local reception time of the signal and the transmission time carried by the signal, expressed as a range. The Doppler observation represents the average range rate over two adjacent observation epochs.
Specifically, the terminal obtains the pseudo-range observations and Doppler observations of a plurality of satellites from the satellite observation data corresponding to the target object, constructs a first equation set relating the pseudo-range observations to the vehicle position based on the pseudo-range observations of the plurality of satellites, constructs a second equation set relating the Doppler observations to the vehicle velocity based on the Doppler observations of the plurality of satellites, constructs the filtering parameter error correction equation from the first and second equation sets, and solves the filtering parameter error correction equation to obtain the error correction amount.
In a specific application, based on the received pseudo-range observations of n satellites, the following equations can be constructed:

ρi = ‖p − ri‖ + c · (dtr − dti) + εi,  i = 1, 2, …, n

wherein ρi (i = 1, 2, …, n) are the satellite pseudo-range observations received by the terminal, p is the vehicle position, ri is the position of satellite i, dtr is the receiver clock error, dti is the clock error of satellite i, c is the speed of light in vacuum, and εi is the error correction value (including ionosphere, troposphere and earth rotation corrections, which can be calculated by empirical models);
Based on the received Doppler observations of the n satellites, the following equations can be constructed:

λ · Di = (v − vi) · ei + c · (ḋtr − ḋti),  i = 1, 2, …, n

wherein λ is the wavelength of the satellite broadcast signal, Di (i = 1, 2, …, n) are the Doppler observations, vi (i = 1, 2, …, n) are the satellite running velocities, ei is the unit observation vector from the terminal to satellite i, ḋtr is the terminal receiver clock drift, and ḋti (i = 1, 2, …, n) are the satellite clock drifts;
Based on the above equations, the following filtering parameter error correction equation is constructed:

z(tk) = H(tk) · δx + η(tk)

wherein z(tk) is the measurement residual vector formed from the pseudo-range and Doppler observations, H(tk) is the observation matrix obtained by linearizing the above equations, and η(tk) is the measurement noise.
The correction quantity of the fusion filter parameters is calculated from the filtering parameter error correction equation:

K(tk) = P−(tk) · H(tk)^T · (H(tk) · P−(tk) · H(tk)^T + RM(tk))^(−1)
δx = K(tk) · z(tk)
P+(tk) = (I − K(tk) · H(tk)) · P−(tk)

wherein P−(tk) is the prediction of the vehicle covariance matrix at time tk, RM(tk) is the measurement error matrix, λ is the carrier wavelength, ei (i = 1, 2, …, n) is the unit observation vector from the vehicle-mounted terminal to satellite i, and P+(tk) is the corrected covariance matrix of the fusion filter. Using the obtained error correction δx, each parameter in the fusion filter parameter matrix is modified, i.e.,
ψ ← ψ + δψ;  v ← v + δv;  p ← p + δp;  b_a ← b_a + δb_a;  b_g ← b_g + δb_g;  ḋt_r ← ḋt_r + δḋt_r;  dt_r ← dt_r + δdt_r
in this embodiment, corresponding equation sets are respectively constructed through pseudo-range observed values and doppler observed values of a plurality of satellites, and then a filtering parameter error correction equation is constructed to obtain an error correction amount, so that a reasonable constraint is constructed to improve the positioning accuracy of a vehicle.
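A sketch of the measurement-update step follows, in the standard Kalman-filter form used in the reconstruction above; assembling the residual vector z and the observation matrix H from the pseudo-range and Doppler equations is left abstract here.

```python
import numpy as np

def gnss_correct(x, P_minus, z, H, R_M):
    """Fuse pseudo-range/Doppler residuals z into the filter state x."""
    S = H @ P_minus @ H.T + R_M               # innovation covariance
    K = P_minus @ H.T @ np.linalg.inv(S)      # Kalman gain
    dx = K @ z                                # error correction amount
    P_plus = (np.eye(len(x)) - K @ H) @ P_minus
    return x + dx, P_plus                     # corrected state and covariance
```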
In a specific embodiment, as shown in fig. 9, the target object positioning method includes:
and step 902, updating the initial positioning data of the target object according to the inertial sensing data matched with the target object to obtain intermediate data.
And 904, acquiring a pseudo-range observation value and a Doppler observation value from satellite observation data corresponding to the target object.
And step 906, constructing a filter parameter error correction equation based on the pseudo-range observed value and the Doppler observed value.
And 908, solving a filtering parameter error correction equation to obtain an error correction quantity.
And step 910, performing error correction on the intermediate data according to the error correction amount to obtain updated positioning data.
Step 912, performing road network matching according to the positioning data, determining a target road where the target object is located, and acquiring the road width and the number of lanes of the target road from the road network data.
Step 914, the heading change data and angular velocity change data of the target object are monitored.
Step 916, a heading change template and an angular speed change template matched with the lane width are obtained based on the lane width of the lane where the target object is located currently.
Step 918, performing normalized cross-correlation matching on the course change data and the course change template to obtain a first matching result between the course change data and the course change template.
And 920, performing normalized cross-correlation matching on the angular velocity change data and the angular velocity change template to obtain a second matching result between the angular velocity change data and the angular velocity change template.
And step 922, when the first matching result meets the first matching condition and the second matching result meets the second matching condition, determining that the target object has a lane change on the target road.
Step 924, determining lane areas corresponding to the candidate lanes of the target road in the target plane based on the road width and the number of the lanes.
In step 926, based on the projection lines constituting the lane area in the target plane, a matching relationship between the candidate lane characterized by the lane area and the projection lines is determined.
Step 928, constructing a segmented lane boundary function according to the position relation between the projection point of the target object in the target plane and the target projection line based on the target projection line matched with the candidate lane for each candidate lane.
Wherein the object projection lines include a first projection line and a second projection line parallel to the object coordinate axis in the planar coordinate system. A first distance between the first projection line and the target coordinate axis is less than a second distance between the second projection line and the target coordinate axis. When the target distance between the projection point and the target coordinate axis is smaller than the first distance, the value of the lane boundary function is positively correlated with the distance between the projection point and the first projection line. And when the target distance between the projection point and the target coordinate axis is greater than the second distance, the value of the lane boundary function is positively correlated with the distance between the projection point and the second projection line. And when the target distance between the projection point and the target coordinate axis is greater than or equal to the first distance and less than or equal to the second distance, the value of the lane boundary function is zero.
Step 930, determining filter parameters for the target object based on the positioning data and the observation data of the target object.
Step 932, obtain the covariance matrix corresponding to the filter parameters and the estimation parameters corresponding to the covariance matrix.
And 934, taking the covariance matrix and the estimation parameters as prior information, performing nonlinear optimization by combining lane boundary functions corresponding to the candidate lanes to obtain target cost function values corresponding to each candidate lane, and selecting the candidate lane with the minimum target cost function value as the target lane.
In a specific application, as shown in fig. 10 and fig. 11, the above target object positioning method is applied to an application scenario, taking lane-level positioning of a target vehicle by a vehicle-mounted terminal as an example. Specifically, the method is applied to lane-level positioning of the target vehicle in this scenario as follows:
Lane-level positioning technology is widely applied in the fields of ADAS (Advanced Driving Assistance System), lane departure systems and automatic driving. Traditional methods using vision sensors, laser radar and the like have high cost and a large calculation load; the vision sensor is greatly affected by weather, road environment and the like, so the accuracy of lane estimation and lane-change detection is low; and effective data such as the vehicle-mounted MEMS IMU, satellite observation information and road network information are not fully utilized, so a reliable lane-change detection and lane tracking method has been lacking.
Aiming at the defects of the current lane-level positioning method, the lane-level target object positioning method fusing MEMS IMU, multi-system multi-frequency satellite observation information and map matching is provided, and the method mainly comprises the following steps:
(1) and establishing an MEMS IMU/GNSS fusion positioning algorithm at the initial stage of lane-level positioning.
Specifically, the fusion positioning algorithm is established as follows: the vehicle-mounted terminal initializes the vehicle attitude by using the road direction or GNSS observation information, calculates the initial position and the initial speed of the vehicle by the satellite observation value obtained by the vehicle satellite positioning equipment, and establishes the Kalman fusion positioning filter based on the initial attitude, the initial position and the initial speed.
In a specific application, the direction of the road where the vehicle is located is acquired as beta through map matching, and the moving posture of the vehicle is initialized by using the beta
Figure BDA0003359910240000301
Namely, it is
Figure BDA0003359910240000302
Wherein,
Figure BDA0003359910240000303
Figure BDA0003359910240000304
wherein psik=β,pk=0,rk=0
Wherein, beta is the road direction, lambda is the longitude coordinate of the vehicle, phi is the latitude coordinate of the vehicle.
The initial position and speed of the vehicle, the receiver clock error, and the rate of change of the receiver clock error (i.e., the parameters p, v, dt_r and ḋt_r) are calculated from the satellite observations obtained by the vehicle's satellite positioning equipment.
The filter parameters of the fusion positioning filter can be set to

x = [ψ, v, p, b_a, b_g, ḋt_r, dt_r]^T

wherein ψ is the vehicle attitude parameter; v and p are the initial speed and initial position of the vehicle in the ECEF coordinate system; b_a and b_g are the zero offsets of the inertial-sensing accelerometer and gyroscope, used to correct the measured deviations in the angular velocity and acceleration of the MEMS IMU; ḋt_r denotes the rate of change of the receiver clock error; and dt_r denotes the receiver clock error.
The error correction quantity of the fusion positioning filter parameter x is represented by δx:

δx = [δψ, δv, δp, δb_a, δb_g, δḋt_r, δdt_r]^T
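The initialization described in step (1) can be sketched as follows; the flat 17-dimensional state layout and the loose initial covariance are illustrative assumptions, not details from the filing.

```python
import numpy as np

# assumed layout: [attitude(3), velocity(3), position(3),
#                  accel bias(3), gyro bias(3), clock drift(1), clock bias(1)]
def init_fusion_filter(beta, p0, v0):
    x = np.zeros(17)
    x[0:3] = [0.0, 0.0, beta]     # roll = pitch = 0, heading = road direction
    x[3:6] = v0                   # initial velocity from satellite observations
    x[6:9] = p0                   # initial position from satellite observations
    P = np.eye(17) * 10.0         # loose initial covariance (assumed)
    return x, P
```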
(2) Updating of the vehicle motion state and the covariance matrix is performed based on the MEMS IMU inertial sensor. The update process of the vehicle position, speed, attitude and covariance matrix is as follows:
At time tk, the angular velocity measurement ω(tk) and the acceleration measurement a(tk) are:

ω(tk) = [ωx(tk), ωy(tk), ωz(tk)]^T,  a(tk) = [ax(tk), ay(tk), az(tk)]^T

wherein ωx(tk), ωy(tk) and ωz(tk) are the decompositions of the angular velocity measurement ω(tk) on the x, y and z axes of the inertial sensor, and ax(tk), ay(tk) and az(tk) are the decompositions of the acceleration measurement a(tk) on the x, y and z axes of the inertial sensor.
Further, the terminal may calculate the updated position, updated speed and updated attitude of the target vehicle at time tk through the following steps:
The vehicle attitude update process is as follows:

C_b^e(tk) = (I − [ωe]× · Δt) · C_b^e(tk−1) · (I + [ω(tk)]× · Δt)

wherein ωe is the angular velocity of the earth's rotation; Δt = tk − tk−1 is the update time interval; [·]× denotes the skew-symmetric matrix of a vector; C_b^e(tk−1) is the coordinate system transformation matrix between the inertial coordinate system and the earth coordinate system at time tk−1, which at the moment of initial positioning is determined from the fused filter attitude parameter ψ; and C_b^e(tk) is the coordinate system transformation matrix between the inertial coordinate system and the earth coordinate system at time tk, i.e., the updated vehicle attitude.
The vehicle speed update process is as follows: the speed at the current time is determined from the acceleration measurement obtained at the current time, the speed at the previous time, the gravity value corresponding to the current time, and the positioning time interval. The specific process is:

v(tk) = v(tk−1) + (C_b^e(tk) · a(tk) + g(tk)) · Δt

wherein g(tk) is the gravity value in the terrestrial coordinate system at time tk, v(tk−1) is the vehicle speed at time tk−1, and v(tk) is the vehicle speed at time tk, i.e., the updated vehicle speed.
The vehicle position update process is as follows:

p(tk) = p(tk−1) + ((v(tk−1) + v(tk)) / 2) · Δt

wherein p(tk−1) is the vehicle position at time tk−1 and p(tk) is the vehicle position at time tk, i.e., the updated vehicle position.
The process of covariance matrix update is as follows:
After the terminal obtains the updated attitude, updated speed and updated position at the current time based on the measurement data obtained at the current time, it updates the corresponding parameters in the filter parameter matrix based on the updated attitude, speed and position to obtain an intermediate parameter matrix. Further, the terminal constructs the state transition matrix of the Kalman filtering algorithm based on the acceleration measurement data and the attitude data at the current time, and updates the initial covariance matrix according to the state transition matrix and an error matrix determined by the attribute information of the inertial sensor to obtain an intermediate covariance matrix, where the initial covariance matrix is the covariance matrix obtained after the last positioning was completed.
The covariance matrix P of the filter is updated by
P(tk)=Φ(tk)·P(tk-1)·Φ(tk)T+Q(tk)
Wherein, phi (t)k) Representing a system state transition matrix in a filtering algorithm;
Figure BDA0003359910240000323
wherein, I3×3Is a 3 × 3 identity matrix, Ωe、F21、F23For calculating phi (t)k) The intermediate variable of (1).
Figure BDA0003359910240000331
Figure BDA0003359910240000332
Figure BDA0003359910240000333
wherein rS(tk) represents the distance of the current user's vehicle from the center of the earth, and Q(tk) is the system noise, which can be obtained directly from the inertial sensor product specification of the mobile terminal, i.e.,

Q(tk) = diag(σa² · I3×3, σg² · I3×3) · Δt

wherein σa² and σg² are the system noise spectral densities of the accelerometer and the gyroscope, which can be obtained directly from the vehicle-mounted MEMS IMU product specifications;
By analogy, when the inertial acceleration and angular velocity measurements at the next time tk+1 are received, the above steps are repeated, thereby updating the vehicle position, velocity, attitude and covariance matrix.
(3) The motion state of the vehicle is corrected using the GNSS pseudo-range and Doppler measurement data based on the fusion positioning filter.
The vehicle-mounted terminal constructs the following first equation set based on the received pseudo-range observations of n satellites:

ρi = ‖p − ri‖ + c · (dtr − dti) + εi,  i = 1, 2, …, n

wherein ρi (i = 1, 2, …, n) are the satellite pseudo-range observations received by the terminal, p is the vehicle position, ri is the position of satellite i, dtr is the receiver clock error, dti is the clock error of satellite i, c is the speed of light in vacuum, and εi is the error correction value (including ionosphere, troposphere and earth rotation corrections, which can be calculated by empirical models);
Based on the received Doppler observations of the n satellites, the following second equation set is constructed:

λ · Di = (v − vi) · ei + c · (ḋtr − ḋti),  i = 1, 2, …, n

wherein λ is the wavelength of the satellite broadcast signal, Di (i = 1, 2, …, n) are the Doppler observations, vi (i = 1, 2, …, n) are the satellite running velocities, ei is the unit observation vector from the terminal to satellite i, ḋtr is the terminal receiver clock drift, and ḋti (i = 1, 2, …, n) are the satellite clock drifts;
Based on the first equation set and the second equation set, the following filtering parameter error correction equation is constructed:

z(tk) = H(tk) · δx + η(tk)

wherein z(tk) is the measurement residual vector formed from the pseudo-range and Doppler observations, H(tk) is the observation matrix obtained by linearizing the two equation sets, and η(tk) is the measurement noise.
The correction quantity of the fusion filter parameters is calculated from the filtering parameter error correction equation:

K(tk) = P−(tk) · H(tk)^T · (H(tk) · P−(tk) · H(tk)^T + RM(tk))^(−1)
δx = K(tk) · z(tk)
P+(tk) = (I − K(tk) · H(tk)) · P−(tk)

wherein P−(tk) is the prediction of the vehicle covariance matrix at time tk, RM(tk) is the measurement error matrix, λ is the carrier wavelength, ei (i = 1, 2, …, n) is the unit observation vector from the vehicle-mounted terminal to satellite i, and P+(tk) is the corrected covariance matrix of the fusion filter. Using the obtained error correction δx, each parameter in the fusion filter parameter matrix is modified, i.e.,
ψ ← ψ + δψ;  v ← v + δv;  p ← p + δp;  b_a ← b_a + δb_a;  b_g ← b_g + δb_g;  ḋt_r ← ḋt_r + δḋt_r;  dt_r ← dt_r + δdt_r
corresponding equation sets are respectively constructed through pseudo-range observed values and Doppler observed values of a plurality of satellites, then a filtering parameter error correction equation is constructed to obtain error correction, and reasonable constraint is constructed to improve the positioning accuracy of the vehicle.
(4) Lane change tracking. The vehicle-mounted terminal projects the current position of the vehicle onto the center line of the lane where the vehicle is located and tracks that projection point. Lane changes are detected by fusing the MEMS IMU observation data and the vehicle heading information with the normalized cross-correlation algorithm. When a lane change is detected, step (5) is used to estimate the lane where the vehicle is currently located, and the position of the vehicle's projection point on the center line of that lane is output.
When the vehicle changes lanes, the heading and the gyro angular velocity measurements of the MEMS IMU show regular changes: the heading change angle Heading first increases, then decreases, and finally stabilizes back to its original value, while the angular velocity measurement increases, decreases, increases again, and eventually stabilizes back to its original value. Specifically, the heading change template is shown in fig. 6, where dt is the time elapsed during the heading change and is related to the lane width of the target lane; the lane width of the target lane can be obtained from the road network data, and the duration of the heading change and angular velocity change processes can be determined based on the lane width (the wider the lane, the longer these processes last; the narrower the lane, the shorter they last). dT is the template matching time span and dHeading is the heading change angle; a first matching result can be obtained by matching the heading change data with the heading change template. In the angular velocity change template shown in fig. 7, dt1 and dt2 are the times elapsed during the change of the angular velocity measurement, with dt1 + dt2 = dt, and dw1 and dw2 are the amplitudes of the angular velocity changes. Performing normalized cross-correlation matching between the heading change data and the heading change template and between the angular velocity change data and the angular velocity change template takes full account of the correlation among different data, avoids the overall error caused by matching a single data stream, and improves the accuracy of the matching result.
(5) Lane estimation. Combining the position, speed and attitude output by the fusion filter, information such as the road where the vehicle is currently located, the number of lanes and the lane width is obtained based on hidden-Markov-model map matching, and the lane where the vehicle is currently located is estimated based on the quadratic gate function and nonlinear optimization.
Let the coordinates of the origin O of the plane coordinate system Oxy in the terrestrial coordinate system be p_o. The coordinates (x_D, y_D) of the center position D of the target object in the plane coordinate system Oxy are then:

x_D = −sin λ · (p_x − p_ox) + cos λ · (p_y − p_oy)
y_D = −sin φ · cos λ · (p_x − p_ox) − sin φ · sin λ · (p_y − p_oy) + cos φ · (p_z − p_oz)

wherein λ represents the longitude of the current position of the target object and φ represents the latitude of the current position of the target object; p_x, p_y and p_z respectively represent the coordinate data of the target object in the terrestrial coordinate system, and p_ox, p_oy and p_oz respectively represent the coordinate data of the origin O of the plane coordinate system in the terrestrial coordinate system.
Assuming that the target road where the target object is located has three lanes 1, 2 and 3, the terminal first projects the center position D of the target object and the road network data into the same plane coordinate system Oxy; the straight line passing through the center position D and perpendicular to the road direction intersects the lane lines at points C, E, F and G. The expression of the quadratic gate function is as follows:
g(x) = (x − x_min)^2, if x < x_min;  g(x) = 0, if x_min ≤ x ≤ x_max;  g(x) = (x − x_max)^2, if x > x_max

wherein (x_min, x_max) ∈ {(x_C, x_E), (x_E, x_F), (x_F, x_G)};

wherein x_C, x_E, x_F and x_G respectively represent the abscissae of the C point, the E point, the F point and the G point in the Oxy coordinate system.
Assuming that the target object is in lane 1, the following nonlinear optimization equation is constructed:

M1 = min_x [ (x − x̂)^T · (P+(tk))^(−1) · (x − x̂) + g1(x) ]

Assuming that the target object is in lane 2, the following nonlinear optimization equation is constructed:

M2 = min_x [ (x − x̂)^T · (P+(tk))^(−1) · (x − x̂) + g2(x) ]

Assuming that the target object is in lane 3, the following nonlinear optimization equation is constructed:

M3 = min_x [ (x − x̂)^T · (P+(tk))^(−1) · (x − x̂) + g3(x) ]

wherein P+(tk) is the covariance matrix corresponding to the target object at time tk, x̂ is the estimation parameter output by the fusion positioning filter, gi(·) is the lane boundary (gate) function corresponding to lane i, and the minimization represents nonlinear optimization with the covariance matrix and the estimation parameters as prior information.
The covariance matrix and the estimation parameters are used as prior information, and nonlinear optimization is performed in combination with the lane boundary function corresponding to candidate lane 1 to obtain the target cost function value M1 for lane 1. In the same way, the target cost function value M2 for lane 2 and the target cost function value M3 for lane 3 can be obtained, and the candidate lane with the minimum target cost function value is selected as the target lane. That is, when M1 is the smallest, the target object is determined to be in lane 1; when M2 is the smallest, the target object is determined to be in lane 2; when M3 is the smallest, the target object is determined to be in lane 3.
Based on the above target object positioning method, the positioning problem addressed by traditional visual methods is converted into determining the lane area in which the projection point of the vehicle on the target plane falls, achieving lane-level positioning of the vehicle: the specific lane is determined from the acquired positioning data and road network data through the fusion algorithm and the gate function. In other words, low-cost hardware supplemented by algorithms effectively improves vehicle position accuracy, assists lane-level positioning and navigation, and optimizes user experience, while reducing the cost and computational load of lane-level positioning. This can solve the problem of inaccurate vehicle positioning in complex scenes such as high-rise forests and urban canyons, as well as the failure of lane-level positioning caused by using vision sensors in severe environments such as night and foggy weather.
It should be understood that, although the steps in the flowcharts of the embodiments described above are displayed sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict order restriction on their execution, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a target object positioning apparatus for implementing the above-mentioned target object positioning method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the target object positioning device provided below can be referred to the limitations of the target object positioning method in the foregoing, and details are not described herein again.
In one embodiment, as shown in fig. 12, there is provided a target object positioning apparatus, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, and specifically includes: location data acquisition module 1202, road network matching module 1204, lane change detection module 1206 and lane location module 1208, wherein:
a positioning data obtaining module 1202, configured to obtain positioning data of a target object;
a road network matching module 1204, configured to perform road network matching according to the positioning data, and determine a target road where the target object is located;
a lane change detection module 1206, configured to determine, when the target object has a lane change on the target road, lane areas corresponding to candidate lanes in the target road in a target plane;
a lane positioning module 1208, configured to determine a target lane from the candidate lanes, so that a projection point of the target object in the target plane is located in a lane area corresponding to the target lane, and position the target object to the target lane.
In one embodiment, the lane positioning module further comprises a lane boundary function construction module and a positioning module; wherein:
the lane boundary function building module is used for respectively building lane boundary functions for each candidate lane in the target road;
the positioning module is used for positioning the target object to the target lane when a target lane is determined from the candidate lanes and the solving result of the lane boundary function corresponding to the target lane meets the result screening condition;
and the result screening condition is used for representing that the projection point of the target object in the target plane is positioned in the lane area corresponding to the target lane.
In one embodiment, the lane boundary function building module is further configured to determine, based on a projection line constituting a lane region in the target plane, a matching relationship between a candidate lane characterized by the lane region and the projection line; and for each candidate lane, constructing a segmented lane boundary function according to the position relation between the projection point of the target object in the target plane and the target projection line based on the target projection line matched with the candidate lane.
In one embodiment, the object projection lines include a first projection line and a second projection line parallel to an object coordinate axis in a planar coordinate system; a first distance between the first projection line and the target coordinate axis is less than a second distance between the second projection line and the target coordinate axis;
when the target distance between the projection point and the target coordinate axis is smaller than the first distance, the value of the lane boundary function is positively correlated with the distance between the projection point and the first projection line;
and when the target distance between the projection point and the target coordinate axis is greater than the second distance, the value of the lane boundary function is positively correlated with the distance between the projection point and the second projection line.
And when the target distance between the projection point and the target coordinate axis is greater than or equal to the first distance and less than or equal to the second distance, the value of the lane boundary function is zero.
In one embodiment, the lane change detection module is further configured to obtain a road width and a number of lanes of the target road from road network data; determining a lane area of the target road corresponding to each candidate lane in the target plane based on the road width and the number of lanes.
In one embodiment, the apparatus further comprises a mahalanobis distance calculation module to determine filter parameters for a target object based on positioning data and observation data of the target object; acquiring a covariance matrix corresponding to the filter parameters, and calculating the Mahalanobis distance between the positioning data of the target object and each element in the covariance matrix;
the lane positioning module is further configured to position the target object to the target lane when a target lane is determined from the candidate lanes, so that a solution result of the lane boundary function and an accumulation result of the mahalanobis distance meet a result screening condition.
In one embodiment, the device lane change detection module is further configured to monitor heading change data and angular velocity change data of the target object; determining a first matching result between the course change data and a course change template and a second matching result between the angular speed change data and the angular speed change template; and when the first matching result accords with a first matching condition and the second matching result accords with a second matching condition, determining that the target object has lane change on the target road.
In one embodiment, the device lane change detection module is further configured to obtain a heading change template and an angular speed change template that match the lane width of the lane where the target object is currently located, based on the lane width of the lane where the target object is currently located; carrying out normalized cross-correlation matching on the course change data and the course change template to obtain a first matching result between the course change data and the course change template; and carrying out normalized cross-correlation matching on the angular velocity change data and the angular velocity change template to obtain a second matching result between the angular velocity change data and the angular velocity change template.
In one embodiment, the device lane change detection module is further configured to determine that the target object has a lane change on the target road when the heading change amount in the heading change data reaches the peak of the curve in the heading change template and the maximum value of the angular velocity in the angular velocity change data reaches the peak of the curve in the angular velocity change template.
In one embodiment, the positioning data acquiring module is further configured to update the initial positioning data of the target object according to the inertial sensing data matched with the target object, so as to obtain intermediate data; determining an error correction amount of the intermediate data based on the satellite observation data matched with the target object; and according to the error correction amount, carrying out error correction on the intermediate data to obtain updated positioning data.
In one embodiment, the positioning data acquisition module is further configured to acquire a pseudo-range observation value and a Doppler observation value from the satellite observation data corresponding to the target object; to construct a filter-parameter error correction equation based on the pseudo-range observation value and the Doppler observation value; and to solve the error correction equation to obtain the error correction amount.
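As a rough illustration of this predict-correct flow, the sketch below propagates the positioning state with an inertial increment and then removes an error correction amount obtained by solving a linearized observation equation H @ dx = z built from pseudo-range and Doppler residuals; H, z, and the state layout are placeholder assumptions rather than the patent's actual filter equations.

```python
import numpy as np

def update_positioning(state, ins_delta, H, z):
    """One simplified update cycle.

    state:     current positioning state vector
    ins_delta: state increment integrated from inertial sensing data
    H, z:      design matrix and residual vector of the linearized
               pseudo-range/Doppler observation equation H @ dx = z
    """
    intermediate = state + ins_delta              # inertial prediction
    dx, *_ = np.linalg.lstsq(H, z, rcond=None)    # error correction amount
    return intermediate - dx                      # corrected positioning data
```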
All or part of the modules in the above target object positioning apparatus can be implemented by software, hardware, or a combination thereof. The modules can be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure is shown in fig. 13. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication can be implemented through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a target object positioning method. The display screen of the computer device can be a liquid crystal display or an electronic ink display, and the input device can be a touch layer covering the display screen; a button, trackball, or touchpad on the housing of the computer device; or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 13 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, comprising a memory and a processor. The memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the above method embodiments.
In one embodiment, a computer program product or computer program is provided, which includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps of the above method embodiments.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or fully authorized by all parties.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations are described, but any combination of these technical features should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application and are described in relatively specific detail, but they are not therefore to be construed as limiting the scope of the present application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (15)

1. A method for locating a target object, the method comprising:
acquiring positioning data of a target object;
performing road network matching according to the positioning data, and determining a target road where the target object is located;
when the target object changes lanes on the target road, determining lane areas corresponding to candidate lanes in the target road in a target plane;
determining a target lane from the candidate lanes so that a projection point of the target object in the target plane is located in a lane area corresponding to the target lane;
positioning the target object to the target lane.
2. The method of claim 1, wherein the determining a target lane from the candidate lanes such that a projected point of the target object in the target plane is within a lane area corresponding to the target lane comprises:
respectively constructing lane boundary functions for each candidate lane in the target road;
determining a target lane from the candidate lanes, so that a solving result of a lane boundary function corresponding to the target lane meets a result screening condition;
and the result screening condition represents that the projection point of the target object in the target plane is located in the lane area corresponding to the target lane.
3. The method of claim 2, wherein the respectively constructing a lane boundary function for each candidate lane in the target road comprises:
determining, based on the projection lines forming the lane areas in the target plane, a matching relationship between each candidate lane characterized by a lane area and the projection lines;
and, based on the target projection lines matched with each candidate lane, constructing a piecewise lane boundary function according to the positional relationship between the target projection lines and the projection point of the target object in the target plane.
4. The method of claim 3, wherein the target projection lines include a first projection line and a second projection line parallel to a target coordinate axis in a plane coordinate system;
a first distance between the first projection line and the target coordinate axis is less than a second distance between the second projection line and the target coordinate axis;
when the target distance between the projection point and the target coordinate axis is smaller than the first distance, the value of the lane boundary function is positively correlated with the distance between the projection point and the first projection line;
when the target distance between the projection point and the target coordinate axis is greater than the second distance, the value of the lane boundary function is positively correlated with the distance between the projection point and the second projection line;
and when the target distance between the projection point and the target coordinate axis is greater than or equal to the first distance and less than or equal to the second distance, the value of the lane boundary function is zero.
5. The method of claim 1, wherein the determining respective lane areas in a target plane for respective candidate lanes in the target road comprises:
acquiring the road width and the number of lanes of the target road from road network data;
determining a lane area of the target road corresponding to each candidate lane in the target plane based on the road width and the number of lanes.
6. The method of claim 2, further comprising:
determining filter parameters for a target object based on positioning data and observation data of the target object;
acquiring a covariance matrix corresponding to the filtering parameters and estimation parameters corresponding to the covariance matrix;
taking the covariance matrix and the estimation parameters as prior information, and performing nonlinear optimization in combination with the lane boundary functions corresponding to the candidate lanes to obtain a target cost function value for each candidate lane;
wherein the determining a target lane from the candidate lanes so that the solution result of the lane boundary function meets the result screening condition comprises:
and screening out the candidate lane with the minimum target cost function value as a target lane.
7. The method of claim 1, further comprising:
monitoring heading change data and angular velocity change data of the target object;
determining a first matching result between the heading change data and a heading change template and a second matching result between the angular velocity change data and an angular velocity change template;
and when the first matching result meets a first matching condition and the second matching result meets a second matching condition, determining that the target object has changed lanes on the target road.
8. The method of claim 7, wherein the determining a first matching result between the heading change data and a heading change template and a second matching result between the angular velocity change data and an angular velocity change template comprises:
acquiring, based on the lane width of the lane in which the target object is currently located, a heading change template and an angular velocity change template matching that lane width;
performing normalized cross-correlation matching between the heading change data and the heading change template to obtain the first matching result;
and performing normalized cross-correlation matching between the angular velocity change data and the angular velocity change template to obtain the second matching result.
9. The method according to claim 7, wherein the determining that the target object has changed lanes on the target road when the first matching result meets a first matching condition and the second matching result meets a second matching condition comprises:
determining that the target object has changed lanes on the target road when the heading change amount in the heading change data reaches the curve peak of the heading change template and the maximum angular velocity in the angular velocity change data reaches the curve peak of the angular velocity change template.
10. The method according to any one of claims 1 to 9, further comprising:
updating the initial positioning data of the target object according to the inertial sensing data matched with the target object to obtain intermediate data;
determining an error correction amount of the intermediate data based on the satellite observation data matched with the target object;
and according to the error correction amount, carrying out error correction on the intermediate data to obtain updated positioning data.
11. The method of claim 10, wherein the determining an error correction amount for the intermediate data based on the satellite observation data matched with the target object comprises:
acquiring a pseudo-range observation value and a Doppler observation value from satellite observation data corresponding to the target object;
constructing a filtering parameter error correction equation based on the pseudo-range observation value and the Doppler observation value;
and solving the filtering parameter error correction equation to obtain the error correction amount.
12. An apparatus for locating a target object, the apparatus comprising:
the positioning data acquisition module is used for acquiring positioning data of the target object;
the road network matching module is used for performing road network matching according to the positioning data and determining a target road where the target object is located;
the lane change detection module is used for determining lane areas, corresponding to candidate lanes in a target plane, of the target road when the target object changes lanes on the target road;
and the lane positioning module is used for determining a target lane from the candidate lanes, so that the projection point of the target object in the target plane is positioned in a lane area corresponding to the target lane, and positioning the target object to the target lane.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 11 when executing the computer program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 11 when executed by a processor.
CN202111363932.9A 2021-11-17 2021-11-17 Target object positioning method, apparatus, storage medium and computer program product Pending CN114061611A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111363932.9A CN114061611A (en) 2021-11-17 2021-11-17 Target object positioning method, apparatus, storage medium and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111363932.9A CN114061611A (en) 2021-11-17 2021-11-17 Target object positioning method, apparatus, storage medium and computer program product

Publications (1)

Publication Number Publication Date
CN114061611A (en) 2022-02-18

Family

ID=80277826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111363932.9A Pending CN114061611A (en) 2021-11-17 2021-11-17 Target object positioning method, apparatus, storage medium and computer program product

Country Status (1)

Country Link
CN (1) CN114061611A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115127547A (en) * 2022-06-27 2022-09-30 长安大学 Tunnel detection vehicle positioning method based on strapdown inertial navigation system and image positioning
CN115127547B (en) * 2022-06-27 2024-04-19 长安大学 Tunnel detection vehicle positioning method based on strapdown inertial navigation system and image positioning
CN116242410A (en) * 2022-09-05 2023-06-09 浙江智马达智能科技有限公司 Calibration method, terminal and computer storage medium
CN116242410B (en) * 2022-09-05 2023-12-19 浙江智马达智能科技有限公司 Calibration method, terminal and computer storage medium
CN117330097A (en) * 2023-12-01 2024-01-02 深圳元戎启行科技有限公司 Vehicle positioning optimization method, device, equipment and storage medium
CN117330097B (en) * 2023-12-01 2024-05-10 深圳元戎启行科技有限公司 Vehicle positioning optimization method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112639502B (en) Robot pose estimation
Lee et al. Intermittent gps-aided vio: Online initialization and calibration
CN111551186B (en) Real-time vehicle positioning method and system and vehicle
Zhang et al. Increasing GPS localization accuracy with reinforcement learning
Georgy et al. Modeling the stochastic drift of a MEMS-based gyroscope in gyro/odometer/GPS integrated navigation
US8718932B1 (en) Snapping GPS tracks to road segments
CN110686686B (en) System and method for map matching
CN114061611A (en) Target object positioning method, apparatus, storage medium and computer program product
Eling et al. Development of an instantaneous GNSS/MEMS attitude determination system
US20170285181A1 (en) Measuring traffic speed in a road network
CN114646992B (en) Positioning method, apparatus, computer device, storage medium and computer program product
Niu et al. Development and evaluation of GNSS/INS data processing software for position and orientation systems
CN113063425B (en) Vehicle positioning method and device, electronic equipment and storage medium
US20240255305A1 (en) Vehicle positioning method and apparatus, computer device, and storage medium
Bai et al. A sensor fusion framework using multiple particle filters for video-based navigation
Lyu et al. Optimal time difference-based TDCP-GPS/IMU navigation using graph optimization
Nagui et al. Improved GPS/IMU loosely coupled integration scheme using two kalman filter-based cascaded stages
Georgy Advanced nonlinear techniques for low cost land vehicle navigation
CN114061570A (en) Vehicle positioning method and device, computer equipment and storage medium
Chiang et al. Multifusion schemes of INS/GNSS/GCPs/V-SLAM applied using data from smartphone sensors for land vehicular navigation applications
Zhang et al. RANSAC-Based Fault Detection and Exclusion Algorithm for Single-Difference Tightly Coupled GNSS/INS Integration
Forno et al. Techniques for improving localization applications running on low-cost IoT devices
Hussain et al. Comparison of FGO and KF for PDR-GNSS Fusion on Smartphone for Diverse Pedestrian Movements
Bonnifait et al. Real-time implementation of a GIS-based localization system for intelligent vehicles
Song et al. Enhanced Map‐Aided GPS/3D RISS Combined Positioning Strategy in Urban Canyons

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40065612; Country of ref document: HK)
SE01 Entry into force of request for substantive examination