CN115147792A - Vision-based positioning method and device, computer equipment and storage medium

Info

Publication number: CN115147792A
Authority: CN (China)
Prior art keywords: type, target, sampling, determining, points
Legal status: Pending
Application number: CN202210750190.3A
Original language: Chinese (zh)
Inventor: Wang Ruofu (王若夫)
Assignee: Xiaoma Yiyi Technology Shanghai Co ltd
Application filed by Xiaoma Yiyi Technology Shanghai Co ltd
Priority application: CN202210750190.3A
Publication: CN115147792A


Classifications

    • G06V20/582 Recognition of traffic signs
    • G06V10/757 Matching configurations of points or features
    • G06V10/764 Image or video recognition or understanding using classification, e.g. of video objects
    • G06V20/588 Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road

Abstract

The application relates to a vision-based positioning method and device, a computer device, and a storage medium. The method comprises the following steps: identifying an image of a road environment, and determining a target marker from the identified markers; determining sampling points of map elements in the electronic map by adopting a corresponding feature point matching mode according to the type of the target marker; determining, among a plurality of preset observation models, a target observation model corresponding to the type of the target marker; inputting the observed values of the feature points of the target marker and the position data of the sampling points into the target observation model, and outputting the measurement value of the Kalman filter; and inputting the measurement value into the corresponding Kalman filter to obtain a positioning result corresponding to the target marker. By adopting the method, the efficiency of feature point matching can be improved, the consumption of computing resources can be reduced, and the noise introduced in the feature point matching step can be reduced, thereby reducing the positioning error.

Description

Vision-based positioning method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of positioning technologies, and in particular, to a positioning method and apparatus based on vision, a computer device, and a storage medium.
Background
While a vehicle is driving, its position is commonly determined by using visual positioning technology alone or by combining it with other positioning technologies (such as satellite positioning or laser radar positioning).
In the conventional visual positioning technology, the main steps include: obtaining an image collected by a vision camera, recognizing the image, selecting a plurality of feature points in the image, extracting the sampling points of matched map elements from a high-precision map through feature point matching, placing the feature points in the image and the sampling points of the map elements in the same coordinate system, calculating the distances between the feature points and the sampling points, and, combining the known position information of the sampling points, calculating the position information of the current vehicle.
However, the number of map elements is large and different map elements have different forms; since the conventional visual positioning technology adopts the same feature point matching mode for all of them, it is difficult to accurately extract sampling points matched with the elements in the image, and the calculated position information carries a certain error.
Disclosure of Invention
In view of the above, there is a need to provide a vision-based positioning method, apparatus, computer device and storage medium that are beneficial for reducing positioning errors.
The application discloses a positioning method based on vision, which comprises the following steps:
identifying an image of a road environment, and determining a target marker from the identified markers;
determining a sampling point of a map element in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker;
determining a target observation model corresponding to the type of the target marker in a plurality of preset observation models; the observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element, and different observation models and different characteristic point matching modes correspond to the markers with different boundary constraint conditions;
inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into a target observation model, and outputting the measured value of the Kalman filter;
and inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
In some embodiments, the types of markers include a first type for indicating a lack of a boundary constraint in a long direction; the plurality of observation models comprise a first type of observation model corresponding to a first type, and the first type of observation model is used for representing the position relation between the characteristic point of the marker and the sampling line segment in the corresponding map element;
when the target marker comprises a first type of target marker, determining a sampling point of a map element in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker, wherein the method comprises the following steps: determining a first type of map element corresponding to a first type of target marker; determining a first type of sampling point on a first type of map element according to a preset first type of matching mode;
inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into a target observation model, and outputting the measured value of the Kalman filter, wherein the method comprises the following steps: determining the corresponding relation between the characteristic point of the first type of target marker and the sampling line segment as a first type of corresponding relation; determining the position data of the sampling line segment according to the position data of the first type of sampling points; inputting the observed values of the characteristic points of the first type of target markers and the position data of the sampling line segments into a first type of observation model according to the first type of corresponding relation, and outputting the distances between the characteristic points of the first type of target markers and the corresponding sampling line segments as the measured values of the Kalman filter; the sampling line segment takes two adjacent first type sampling points as end points.
In some embodiments, determining a first type of sample point on a first type of map element according to a preset first type matching manner includes:
determining a center line of the first type of map elements extending along the long direction;
and selecting a plurality of sampling points from the center line as first type sampling points according to a preset sampling interval.
In some embodiments, the sampling interval varies according to a change in curvature of the centerline.
In some embodiments, the types of markers further include a second type for indicating having a closed boundary; the plurality of observation models further comprise a second type of observation model corresponding to a second type, the second type of observation model being used for representing a positional relationship between the feature points of the markers and the sampling points in the respective map elements;
when the target marker comprises a second type of target marker, determining a sampling point of a map element in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker, wherein the method comprises the following steps: determining a second type of map element corresponding to a second type of target marker; determining a second type of sampling point on a second type of map element according to a preset second type of matching mode;
inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into a target observation model, and outputting the measured value of the Kalman filter, and the method further comprises the following steps: determining the corresponding relation between the characteristic point of the second type of target marker and the second type of sampling point as a second type of corresponding relation; and according to the second type of corresponding relation, inputting the observed value of the characteristic point of the second type of target marker and the position data of the second type of sampling point into a second type of observation model, and outputting the distance between the characteristic point of the second type of target marker and the corresponding second type of sampling point as the measured value of the Kalman filter.
In some embodiments, determining a second type of sample point on a second type of map element according to a preset second type of matching manner includes:
and determining corner points, mass points or central points of the second type map elements as second type sampling points according to the attributes of the second type map elements.
In some embodiments, before inputting the measurement values into the corresponding Kalman filter to obtain the positioning result corresponding to the target marker, the vision-based positioning method further includes:
determining the current Kalman innovation according to the measured value;
determining a current innovation test threshold value according to the type of the target marker and a chi-square value of the past Kalman innovation;
determining a chi-square value of the current Kalman innovation;
the measurement values are discarded when the current innovation check threshold is exceeded by the current kalman innovation chi-squared value.
In some embodiments, when the marker in the image includes a plurality of different types of target markers, inputting the observed values of the feature points of the target markers and the position data of the sampling points into the target observation model, and outputting the measured values of the Kalman filter, includes: inputting the observed values of the feature points of the plurality of target markers and the position data of the sampling points into the corresponding target observation models, and outputting a measurement value matrix, wherein the measurement value matrix comprises the measurement values corresponding to the plurality of target markers;
inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker, wherein the positioning result comprises the following steps: and inputting the measurement value matrix into a Kalman filter to obtain a positioning result corresponding to the image.
The application also discloses a vision-based positioning device, which includes:
the image acquisition module is used for acquiring an image of a road environment;
the identification module is used for identifying the marker in the image;
the observation model determining module is used for determining a corresponding target observation model according to the type of a target marker in a plurality of preset observation models; the observation models are used for expressing the position relation between the characteristic points of the markers and the sampling points in the corresponding map elements, and different observation models correspond to the markers with different boundary constraint conditions;
the measured value determining module is used for inputting the observed value of the characteristic point of the target marker into the target observation model and outputting the measured value of the Kalman filter;
and the filtering module is used for inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
The application also discloses a computer device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the following steps:
identifying an image of a road environment, and determining a target marker from the identified markers;
determining sampling points of map elements in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker;
determining a target observation model corresponding to the type of the target marker in a plurality of preset observation models; the observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element, and different observation models and different characteristic point matching modes correspond to the markers with different boundary constraint conditions;
inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into a target observation model, and outputting the measured value of the Kalman filter;
and inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
The present application also discloses a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
identifying an image of a road environment, and determining a target marker from the identified markers;
determining a sampling point of a map element in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker;
determining a target observation model corresponding to the type of a target marker in a plurality of preset observation models; the observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element, and different observation models and different characteristic point matching modes correspond to the markers with different boundary constraint conditions;
inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into a target observation model, and outputting the measured value of the Kalman filter;
and inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
According to the vision-based positioning method and device, the computer device and the storage medium, a plurality of observation models are preset, and the target observation models corresponding to different types of target markers are determined according to the different boundary constraint conditions of the target markers in the image. A corresponding feature point matching mode is thereby provided for markers of different forms, which can improve the efficiency of feature point matching, reduce the consumption of computing resources, and reduce the noise introduced in the feature point matching step, thereby reducing the positioning error.
Drawings
FIG. 1 is a diagram of an application environment of a vision-based localization method in some embodiments;
FIG. 2 is a flow diagram of a vision-based localization method in some embodiments;
FIG. 3 is a schematic data processing flow diagram relating to a first class of observation models in some embodiments;
FIG. 4 is a data processing flow diagram relating to a second type of observation model in some embodiments;
FIG. 5 is a schematic flow diagram of steps involved in discarding measurements in some embodiments;
FIG. 6 is a schematic diagram illustrating the position relationship between the feature points and the first type of sampling points when the target marker is a lane line in some embodiments;
FIG. 7 is a schematic diagram of the position of a feature point of a second type of target marker in relation to a second type of sample point in some embodiments;
FIG. 8 is a block diagram of a vision-based positioning device in one embodiment;
FIG. 9 is a diagram of an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The vision-based positioning method provided by the application can be applied to the application environment shown in FIG. 1. The processor 101 may communicate with the vision camera 102 via a network to obtain an image of the road environment captured by the vision camera 102. The processor 101 may be implemented in hardware using at least one of a Programmable Logic Array (PLA), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a general-purpose processor, or other programmable logic devices. The processor 101 and the vision camera 102 may be part of an on-board positioning device or part of other types of positioning devices.
In one embodiment, as shown in FIG. 2, the present application provides a vision-based positioning method. The method is described by taking its application to the processor 101 in FIG. 1 as an example, and includes steps S201 to S205 executable by the processor 101, which are described below.
In step S201, the image of the road environment is recognized, and the target marker is specified from the recognized markers.
An image of the road environment is an image containing a projection of a road or of the scene around a road. The processor 101 typically needs to acquire the image of the road environment before recognition. In general, the image of the road environment may refer to an original image captured by the vision camera 102, or to an image obtained by preprocessing the original image; the preprocessing method is not particularly limited, and, for example, an image preprocessing method from existing visual positioning technology may be used. Acquiring the image of the road environment may mean directly acquiring the image collected by the vision camera 102, indirectly acquiring it through other data relay or data processing equipment, or the processor 101 acquiring an image of the road environment that it has already preprocessed.
The processor 101 may identify the marker in the image using image recognition techniques. The markers may include traffic signs, traffic markings, or other visual markers that may be used for visual positioning.
The target marker is intended for visual positioning, and therefore, all of the markers recognized in the image may be used as the target marker, or only a part or one of the recognized markers may be used as the target marker. It follows that the number of target markers may be one or more.
Step S202, determining sampling points of map elements in the electronic map by adopting a corresponding feature point matching mode according to the type of the target marker.
The boundary constraint condition is used to represent the boundary form of a marker. Different markers may have different boundary constraint conditions. For example, when a marker is a lane line or a road guardrail, it may lack a boundary constraint in the long direction; that is, since such a marker extends continuously in the long direction in the image, it is difficult to determine its extension length. In the image, such a marker does have a boundary constraint in the short direction, i.e. a boundary in the short direction, so its width can be determined. By contrast, traffic markings such as straight-ahead or turn arrows on the road typically have a closed boundary in one image or in a succession of multiple images. Markers whose shapes have many corners in some images and markers with few corners and more arcs in other images can likewise be understood as having different boundary constraint conditions.
The feature point matching mode may be understood as a mode of determining a map element corresponding to a marker from an electronic map, and selecting a sampling point matching the feature point of the marker from the map element. In some embodiments, different markers may be classified according to boundary constraints, and different feature point matching modes may be configured for different types of markers. For example, for a marker such as a lane line which lacks a boundary constraint condition in the longitudinal direction, a type marker may be set for the marker, and the feature point matching manner is set to extract sampling points for corresponding map elements at preset sampling intervals in the longitudinal direction, including but not limited to extracting points on a center line in the longitudinal direction as the sampling points, or extracting points on a boundary line in the longitudinal direction as the sampling points; at this time, the number of extracted sampling points does not need to coincide with the number of feature points of the marker. For traffic prompting facilities such as traffic signs or ground traffic signs (without lane lines), another type mark can be set for the traffic prompting facilities or the ground traffic signs, and the feature point matching mode is set to extract the corner points, the central points, the mass points or the points with other meanings of the corresponding map elements as sampling points. Of course, different types of markers can be set in different shapes of traffic prompting facilities, ground traffic signs or other signs in a subdivision manner, and corresponding feature point matching modes are configured, for example, for some square or circular markers, the center point of the marker can be used as a sampling point; for some markers formed by combining various shapes, mass points corresponding to the whole shape of the marker can be used as sampling points, or angular points corresponding to the shapes of all parts of the marker can be used as sampling points; for some real-world markers with elongated shapes, such as rod-shaped markers, the corresponding type markers can be set according to the way of the lane lines, and the feature point matching way is set to extract sampling points at preset sampling intervals along the long direction of the markers. In short, the flexible arrangement can be carried out according to specific situations and requirements. Of course, it is also possible to obtain different sampling points for a certain type of marker by using multiple feature point matching modes through an experimental mode, obtain corresponding positioning results according to the feature points of the marker and the position data of the corresponding sampling points, and select the feature point matching mode with the highest precision as the feature point matching mode finally adopted by the marker of the type according to the precision of the positioning results corresponding to the different feature point matching modes.
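As a minimal illustration of this per-type dispatch of matching modes, the following Python sketch shows one way such a configuration could be organized; MarkerType, the sampler functions, and the interval value are all hypothetical names and values for illustration, not taken from the application:

```python
from enum import Enum, auto
import numpy as np

class MarkerType(Enum):
    # Hypothetical tags mirroring the two marker classes described above.
    FIRST = auto()   # lacks a boundary constraint in the long direction (e.g. lane line)
    SECOND = auto()  # has a closed boundary (e.g. traffic sign)

def sample_centerline(centerline, interval=2.0):
    """First-type matching: keep center-line points at least `interval` apart
    (crude placeholder logic)."""
    out, last = [centerline[0]], centerline[0]
    for p in centerline[1:]:
        if np.linalg.norm(p - last) >= interval:
            out.append(p)
            last = p
    return np.array(out)

def sample_center_point(corners):
    """Second-type matching: use the centroid of a closed boundary."""
    return np.mean(corners, axis=0, keepdims=True)

# One feature-point matching mode configured per marker type.
MATCHERS = {MarkerType.FIRST: sample_centerline,
            MarkerType.SECOND: sample_center_point}

def sampling_points_for(marker_type, element_points):
    return MATCHERS[marker_type](element_points)
```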
Step S203, determining a target observation model corresponding to the type of the target marker from a plurality of preset observation models.
The observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element. Different observation models and different feature point matching modes correspond to markers with different boundary constraint conditions, for example, for a marker lacking boundary constraint conditions in the longitudinal direction, the corresponding observation model may be an observation equation about the distance between the feature point of the marker and a sampling line segment, and the sampling line segment refers to a line segment with two adjacent sampling points as end points; for a marker with a closed boundary, the corresponding observation model may be an observation equation relating to the distance between the feature point of the marker and the corresponding sampling point. It has been described above that the feature point matching method may be various, that is, the number and the position of the selected sampling points may be various, and the observation model may be flexibly adjusted according to the feature point matching method. The target observation model refers to an observation model corresponding to the type of the target marker.
Since the observation model is used to represent a positional relationship, the output quantity of the observation model may be a value used to measure the distance between the marker and the corresponding map element in the same coordinate system, and this value may generally be represented by the distance between the feature points of the marker and the sampling points of the map element. In general, the sampling points of the map element may be projected into the image coordinate system, so that the feature points of the marker and the sampling points of the map element are placed in the same coordinate system, and the distance values between them in that coordinate system may then be calculated as the output quantity of the observation model. Elsewhere herein, unless otherwise specified, any mention or description of the distance between a feature point and a sampling point, or expressions of similar meaning, refers to the distance between the feature point and the sampling point after projection onto the same coordinate system.
It should be noted that the execution sequence between step S203 and step S202 is not particularly limited, and fig. 2 shows the execution sequence in only one case.
Step S204, inputting the observed values of the feature points of the target marker and the position data of the sampling points into the target observation model, and outputting the measurement value of the Kalman filter.
In general, the value output by the target observation model may be used as a measurement value of the kalman filter.
Step S205, inputting the measurement value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
Step S204 and step S205 show that the vision-based positioning method adopts a Kalman filtering algorithm to obtain the positioning result. The Kalman filtering algorithm performs optimal estimation of the system state from input and output measurement data through a system state equation; since the measurement data include the influence of noise and interference in the system, the optimal estimation can also be regarded as a filtering process. For the purposes of this application, the aforementioned system may refer to a vehicle having a positioning device. The core of the Kalman filtering algorithm lies in the design of the Kalman filter. The basic equations involved in the Kalman filter are given below, including the equations of the prediction phase and the equations of the update phase; these equations are well documented, and their principles are not described in detail herein.
The equations for the prediction phase include:
$$\hat{x}_{k|k-1} = F_k \hat{x}_{k-1|k-1} + B_k u_k$$

$$P_{k|k-1} = F_k P_{k-1|k-1} F_k^T + Q_k$$

The equations in the update phase include:

$$\tilde{y}_k = z_k - H_k \hat{x}_{k|k-1}$$

$$S_k = H_k P_{k|k-1} H_k^T + R_k$$

$$K_k = P_{k|k-1} H_k^T S_k^{-1}$$

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \tilde{y}_k$$

$$P_{k|k} = (I - K_k H_k) P_{k|k-1}$$

wherein $\hat{x}_{k|k-1}$ represents the a priori state estimate at time k; $\hat{x}_{k-1|k-1}$ represents the posterior state estimate at time (k-1), which in the embodiments of the present application may be regarded as the optimal estimate of the positioning result obtained at time (k-1); $\hat{x}_{k|k}$ represents the posterior state estimate at time k, which in the embodiments of the present application may represent the positioning result corresponding to the target marker in step S205; $F_k$ represents the state transition matrix, and $F_k^T$ is its transpose; $B_k$ represents the control matrix and $u_k$ the control vector (in some cases the Kalman filter may omit the control matrix and control vector terms); $P_{k|k-1}$ represents the prior state covariance at time k, $P_{k-1|k-1}$ represents the posterior state covariance at time (k-1), and $P_{k|k}$ represents the posterior state covariance at time k; $Q_k$ represents the prediction noise covariance; $\tilde{y}_k$ represents the Kalman innovation at time k; $z_k$ represents the measurement value of the Kalman filter, and in the embodiments of the present application the output of the target observation model may be taken as $z_k$; $H_k$ is the transition matrix representing the transition relationship between the state and the measurement, and $H_k^T$ is its transpose; $S_k$ represents the covariance of the Kalman innovation; $R_k$ represents the measurement noise covariance; $K_k$ represents the Kalman gain; and $I$ denotes the identity matrix.
In some embodiments, the observation models corresponding to $z_k$ may be non-linear; they may employ the observation models mentioned below for representing point-to-line relationships or point-to-point relationships. The Kalman filtering algorithm in the embodiments of the present application may be a basic Kalman filtering algorithm, an extended Kalman filtering algorithm, or an error-state Kalman filtering algorithm; the underlying equations of the different Kalman filters are similar, differing primarily in $H_k$. For the basic Kalman filtering algorithm, $H_k$ is often fixed or has a definite expression; for non-linear observation models, $H_k$ is obtained by linearization around the predicted value according to the current state of the system. For an introduction to the different Kalman filtering algorithms, reference may be made to many existing documents, and details are not repeated herein.
In step S205, the measurement value of the Kalman filter obtained in step S204 may be regarded as $z_k$, and the positioning result corresponding to the target marker at time k, namely $\hat{x}_{k|k}$, is obtained according to the equations of the prediction phase and the update phase. Of course, those skilled in the art may also make other transformations on the basis of the above equations and use the transformed equations as the equations of the Kalman filter, as long as the basic principle of the Kalman filtering algorithm is met.
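To make the predict/update cycle above concrete, here is a minimal numpy sketch of one step of the basic (linear) Kalman filter; the matrices F, H, Q, R are assumed inputs and do not correspond to any specific model in this application:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R, B=None, u=None):
    """One predict/update cycle of the basic (linear) Kalman filter,
    following the equations above. x, P: posterior state and covariance
    at time k-1; z: measurement z_k."""
    # Prediction phase
    x_prior = F @ x + (B @ u if B is not None else 0.0)
    P_prior = F @ P @ F.T + Q
    # Update phase
    y = z - H @ x_prior                       # Kalman innovation
    S = H @ P_prior @ H.T + R                 # innovation covariance S_k
    K = P_prior @ H.T @ np.linalg.inv(S)      # Kalman gain K_k
    x_post = x_prior + K @ y                  # posterior state estimate
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post
```

An extended or error-state variant would differ mainly in how $H_k$ is obtained (by linearization around the predicted state), as noted above.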
According to this vision-based positioning method, a plurality of observation models are preset, and the target observation models corresponding to different types of target markers are determined according to the different boundary constraint conditions of the target markers in the image, so that a corresponding feature point matching mode is provided for markers of different forms. This can improve the efficiency of feature point matching, reduce the consumption of computing resources, and reduce the noise introduced in the feature point matching step, thereby reducing the positioning error. In addition, the vision-based positioning method is a tightly-coupled Kalman fusion positioning approach; for target markers lacking boundary constraint conditions, the tightly-coupled approach does not need to solve the absolute position and attitude in advance during feature point matching, and observations with constrained directions can be fused in a simple and efficient manner.
In some embodiments, the types of markers include a first type for indicating a lack of a boundary constraint in a long direction; the plurality of observation models comprise a first type of observation model corresponding to a first type, and the first type of observation model is used for representing the position relation between the characteristic point of the marker and the sampling line segment in the corresponding map element; when the target marker includes a first type of target marker, as shown in fig. 3, step S202 includes: step S301, determining a first type of map element corresponding to a first type of target marker; step S302, determining a first type sampling point on a first type map element according to a preset first type matching mode. Accordingly, step S204 includes: step S303, determining the corresponding relation between the characteristic point of the first type of target marker and the sampling line segment as a first type corresponding relation; step S304, determining the position data of the sampling line segment according to the position data of the first type of sampling point; step S305, inputting the observed value of the characteristic point of the first type of target marker and the position data of the sampling line segment into a first type of observation model according to the first type of corresponding relation, and outputting the distance between the characteristic point of the first type of target marker and the corresponding sampling line segment as the measurement value of the Kalman filter. The sampling line segment takes two adjacent first-type sampling points as endpoints.
In some application scenarios, the first type of marker comprises a lane line; in still other application scenarios, the first type of marker may further comprise an elongated shaft, a guardrail extending along the roadway, or other marker or object having an elongated shape.
The first-type observation model is used to represent a positional relationship between a feature point of a marker and a sampling line segment in a corresponding map element, and in general, an equation for calculating a point-to-line distance may be employed as the first-type observation model. For example, a first type of observation model may employ the following equation:
$$z_1 = {}^i p_{detect}^{T} \, c_l$$

wherein $z_1$ represents an output value of the first type of observation model, which can be directly used as the measurement value $z_k$ of a Kalman filter and input into the Kalman filter for filtering, or used as one element in the $z_k$ matrix; ${}^i p_{detect}$ represents the position data of a feature point of the marker in the image coordinate system, which can usually be represented by a coordinate matrix, the superscript i denoting the serial number of the feature point and the superscript T denoting the transpose; $c_l = (c_1, c_2, c_3)^T$ represents the equation of the line onto which a sampling line segment of the first type of map element is projected in the image coordinate system, which can generally be expressed as the line equation $c_1 x + c_2 y + c_3 = 0$, where x and y are the coordinate values of points on the line, and $c_1$ and $c_2$ can be understood as the elements that determine the slope of the line equation.
In some embodiments, $c_l$ can be obtained from the following projection equations:

$$c_l = \mathcal{K} \, n_c$$

$$L_c = \begin{bmatrix} n_c \\ d_c \end{bmatrix} = \begin{bmatrix} R_{cw} & [p_{cw}]_\times R_{cw} \\ 0 & R_{cw} \end{bmatrix} L_w$$

wherein $\mathcal{K}$ represents the camera intrinsic parameters (in the form of a line projection matrix); $n_c$ represents the n-component of the Plücker coordinates of the line in the camera coordinate system; $d_c$ represents the d-component of the Plücker coordinates of the line in the camera coordinate system; $L_c$ represents the Plücker coordinates of the line in the camera coordinate system; $L_w$ represents the Plücker coordinates of the line in the world coordinate system; $R_{cw}$ represents the rotation matrix from the world coordinate system to the camera coordinate system; and $p_{cw}$ represents the translation from the world coordinate system to the camera coordinate system. Further description of these projection equations can be found in the prior art literature and is not repeated here.
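The following numpy sketch illustrates these two projection equations; it uses the standard identity that the image line coefficients are proportional to $K^{-T} n_c$ for a pinhole intrinsic matrix K, and all function names are illustrative only:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix: skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def plucker_world_to_camera(L_w, R_cw, p_cw):
    """L_c = [[R_cw, [p_cw]x R_cw], [0, R_cw]] @ L_w, with L = (n, d)."""
    T = np.zeros((6, 6))
    T[:3, :3] = R_cw
    T[:3, 3:] = skew(p_cw) @ R_cw
    T[3:, 3:] = R_cw
    return T @ L_w

def image_line_coefficients(L_c, K):
    """c_l is proportional to K^{-T} n_c, where n_c is the normal (moment)
    component of the camera-frame Pluecker coordinates."""
    n_c = L_c[:3]
    c_l = np.linalg.inv(K).T @ n_c
    return c_l / np.linalg.norm(c_l[:2])      # scale so c_l . p is a distance
```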
The first type map elements are map elements matched with the first type target markers, and can be determined from the electronic map according to the recently obtained position information. For example, when the first type of target marker is a lane line, then a corresponding lane line element may be determined from an electronic map (typically a high-precision map) as a first type of map element based on recently obtained position information (e.g., satellite positioning information or other sensor-collected position information) of the current positioning device. For other types of markers, the same principle and mode can be adopted when corresponding map elements matched from the electronic map are needed, and the description is omitted later.
The preset first-type matching mode refers to a feature point matching mode adopted for the first-type target marker. Generally, the sampling points can be extracted from the corresponding map elements along the long direction at preset sampling intervals, including but not limited to extracting points on the center line along the long direction as the sampling points, or extracting points on the boundary line along the long direction as the sampling points, so that the number of the sampling points is controlled by the sampling intervals, which is beneficial to flexibly controlling the calculated amount and saving the calculation resources.
In some embodiments, the number of first-type sampling points is less than the number of feature points of the target marker. In this case, for a feature point of a certain target marker, the two first-type sampling points closest to the feature point may be determined, and these two first-type sampling points may serve as the end points of the sampling line segment corresponding to that feature point. For ease of understanding, take the target marker to be a lane line, as shown in FIG. 6: the feature point 621, the feature point 622, the feature point 623 and the feature point 624 are four feature points extracted from a lane line in an image; correspondingly, the first-type sampling point 611, the first-type sampling point 612 and the first-type sampling point 613 are three first-type sampling points, projected into the image coordinate system, that were extracted from a lane line element in the electronic map when matching the feature points. The drawing also shows a sampling line segment 601 and a sampling line segment 602 projected into the image coordinate system, where the sampling line segment 601 takes the first-type sampling point 611 and the first-type sampling point 612 as end points, and the sampling line segment 602 takes the first-type sampling point 612 and the first-type sampling point 613 as end points. Here the number of first-type sampling points extracted from the lane line element is less than the number of feature points, and the distance between a feature point and its corresponding sampling line segment may be used as the measurement value of the Kalman filter for that feature point; for example, for the feature point 621, the distance between the feature point 621 and the sampling line segment 601 may be used as the measurement value of the Kalman filter corresponding to that feature point. In some embodiments, step S204 and step S205 may be performed on each feature point in sequence, with the distance between each feature point and its corresponding sampling line segment input to the Kalman filter in turn as the measurement value, so that Kalman filtering is performed multiple times; in other embodiments, a measurement value matrix may be generated from the distances between a plurality of feature points and their respective sampling line segments, and this measurement value matrix may be input to the Kalman filter as the measurement value at one time.
In most cases, there are a plurality of feature points and a plurality of sampling line segments. In this case, the first type of corresponding relation may be determined according to the distances between the first-type sampling points and the feature points; specifically, the sampling line segment formed by the two first-type sampling points closest to a feature point may be selected, so that that sampling line segment and the feature point have the first type of corresponding relation. For example, in FIG. 6, the feature point 621 has the first type of corresponding relation with the sampling line segment 601, and the feature point 624 has the first type of corresponding relation with the sampling line segment 602. After the first type of corresponding relation is determined, the observed values of the feature points and the position data of the sampling line segments having this relation may be input into the first type of observation model. The position data of a sampling line segment, which mainly refers to the position of the sampling line segment in the image coordinate system, can generally be expressed by a straight-line equation.
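The following numpy sketch shows how the first-type correspondence and measurement described above might be computed: each feature point is paired with the sampling line segment through its two nearest projected first-type sampling points, and the signed point-to-line distance is returned as the measurement value. The pinhole projection and all names are assumptions for illustration:

```python
import numpy as np

def project_points(K, R_cw, p_cw, pts_w):
    """Project Nx3 world points to pixels with an assumed pinhole model:
    u ~ K (R_cw x_w + p_cw)."""
    pts_c = pts_w @ R_cw.T + p_cw             # world -> camera frame
    uv = pts_c @ K.T
    return uv[:, :2] / uv[:, 2:3]             # perspective division

def first_type_measurements(feat_uv, samples_w, K, R_cw, p_cw):
    """For each detected feature point, form the line through its two
    nearest projected first-type sampling points and return the signed
    point-to-line distance as the measurement value."""
    s_uv = project_points(K, R_cw, p_cw, samples_w)
    z = []
    for f in feat_uv:
        i, j = np.argsort(np.linalg.norm(s_uv - f, axis=1))[:2]
        a = np.append(s_uv[i], 1.0)           # homogeneous endpoints
        b = np.append(s_uv[j], 1.0)
        c_l = np.cross(a, b)                  # image line through a and b
        c_l /= np.linalg.norm(c_l[:2])        # normalize: c_l . p = distance
        z.append(float(np.append(f, 1.0) @ c_l))
    return np.array(z)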
In some embodiments, step S302 includes: determining a center line of the first type of map elements extending along the long direction; and selecting a plurality of sampling points from the center line as first type sampling points according to a preset sampling interval. When the first type map element is a lane line element, the aforementioned center line is a center line of the lane line element, and the longitudinal direction may be understood as a direction in which the lane line element extends forward along the road. The sampling interval is used for representing the distance between two adjacent sampling points, the sampling interval is set, flexible sampling can be carried out according to the extension characteristics of the first type map elements, for example, for the lane line elements, the sampling interval can be relatively smaller at the curve of the road, relatively more first type sampling points can be obtained at the moment, the positioning precision is favorably improved, and the sampling interval can be relatively larger at the straight part of the road, so that the calculation resources are favorably saved while the positioning precision is kept. The sampling interval may be variable or constant, and may be specifically set according to actual needs.
In some embodiments, the sampling interval varies according to a change in curvature of the centerline. Specifically, when the curvature of a center line of a certain segment is greater than a preset threshold, a relatively small sampling interval is selected for the center line of the segment, and when the curvature of the center line of the certain segment is less than the preset threshold, a relatively large sampling interval is selected for the center line of the segment.
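A possible realization of this curvature-dependent sampling is sketched below; the threshold and the two interval values are assumed, not taken from the application:

```python
import numpy as np

def adaptive_centerline_samples(centerline, kappa, kappa_th=0.01,
                                coarse=5.0, fine=1.0):
    """Walk along a polyline center line (Nx2, in meters) and emit
    sampling points, using the small interval `fine` where the per-point
    curvature estimate exceeds `kappa_th` and `coarse` elsewhere."""
    out = [centerline[0]]
    travelled = 0.0
    for p_prev, p, k in zip(centerline[:-1], centerline[1:], kappa[1:]):
        travelled += float(np.linalg.norm(p - p_prev))
        interval = fine if abs(k) > kappa_th else coarse
        if travelled >= interval:             # local interval reached
            out.append(p)
            travelled = 0.0
    return np.array(out)
```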
In some embodiments, the types of markers further include a second type for indicating having a closed boundary; the plurality of observation models further includes a second type of observation model corresponding to the second type, the second type of observation model being for representing a positional relationship between the feature point of the marker and the sampling point in the corresponding map element. When the target marker includes a second type of target marker, as shown in fig. 4, step S202 includes: step S401, determining a second type of map element corresponding to a second type of target marker; step S402, determining a second type of sampling points on a second type of map elements according to a preset second type of matching mode. Accordingly, step S204 further includes: step S403, determining the corresponding relation between the characteristic point of the target marker of the second type and the second type sampling point as a second type corresponding relation; step S404, inputting the observed value of the characteristic point of the target marker of the second type and the position data of the second type sampling point into a second type observation model according to the second type corresponding relation, and outputting the distance between the characteristic point of the target marker of the second type and the corresponding second type sampling point as the measurement value of the Kalman filter.
In some application scenarios, the second type of marker comprises a traffic sign; in still other application scenarios, the second type of marker may further include a ground traffic marking or another marker or object. The second type of observation model is used to represent the positional relationship between a feature point of the marker and a second-type sampling point in the second-type map element; in general, an equation for calculating a point-to-point distance may be used as the second type of observation model. It should be noted that the point-to-point distance may be represented in various ways: for example, as the distance between two two-dimensional points, or in other forms that measure the distance between two two-dimensional points. One such form is to calculate the difference between the two two-dimensional points in the image coordinate system (along the perpendicular x and y directions) to obtain the distance in the x direction and the distance in the y direction respectively; in this case the representation is (x1, y1) - (x2, y2) = (dx, dy), that is, the values dx and dy may also be used to represent the point-to-point distance.
In some embodiments, the second type of observation model may employ the following equation:
$$z_2 = {}^i p_{detect} - \pi\!\left(T_{cw} \, {}^w p_{map}\right)$$

wherein $z_2$ represents the output value of the second type of observation model, which can be directly used as the measurement value $z_k$ of the Kalman filter and input into the Kalman filter for filtering, or used as one element in the $z_k$ matrix; $\pi(\cdot)$ represents the projection equation, i.e. the relationship by which a three-dimensional point in the camera coordinate system is projected onto the image coordinate system; $T_{cw}$ represents the transformation matrix from the world coordinate system to the camera coordinate system; ${}^w p_{map}$ represents the position data of the second-type sampling point in the world coordinate system; and ${}^i p_{detect}$ represents the position data of a feature point of the marker in the image coordinate system.
The second type of map elements are map elements matched with the second type of target markers, and can be determined from the electronic map according to the recently obtained position information.
The preset second-type matching mode is a feature point matching mode adopted for the second type of target marker. Usually, the corner points, the center points, the mass points, or other points beneficial for visual positioning of the second type map elements may be extracted as sampling points to be matched with the feature points of the target markers, for example, in the case that the target markers are traffic signs, the center points of the target markers in the image may be used as the feature points, and the center points of the corresponding traffic sign elements (i.e., the second type map elements) in the electronic map may be extracted as the second type sampling points.
Typically, the number of feature points is identical to the number of second-type sampling points. In this case, the distance between each feature point and the corresponding second-type sampling point can be calculated as the output quantity of the second type of observation model. FIG. 7 shows a matching relationship between feature points and second-type sampling points, where the feature point 711 and the feature point 712 are two corner points of a target marker, and the second-type sampling point 721 and the second-type sampling point 722 are the sampling points, projected into the image coordinate system, of the two corresponding corner points of the second-type map element. According to the image recognition and target matching technique, it may be determined that the feature point 711 and the second-type sampling point 721 have the second type of corresponding relation, and that the feature point 712 and the second-type sampling point 722 have the second type of corresponding relation; then, according to the second type of corresponding relation, the measurement values of the Kalman filter corresponding to the feature point 711 and the feature point 712 may be calculated separately or together.
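A minimal numpy sketch of the second-type measurement for one matched pair, under the same assumed pinhole model as earlier, could be:

```python
import numpy as np

def second_type_measurement(feat_uv, p_map_w, K, R_cw, p_cw):
    """z_2 for one matched pair: detected pixel position minus the
    projection pi(T_cw * p_map) of the second-type sampling point."""
    p_c = R_cw @ p_map_w + p_cw               # apply T_cw to the world point
    uv = K @ p_c
    return feat_uv - uv[:2] / uv[2]           # (dx, dy) residual
```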
In some embodiments, step S402 includes: and determining the corner points, the mass points or the central points of the second type map elements as second type sampling points according to the attributes of the second type map elements. The attribute of the second type map element can be set according to the boundary constraint condition of the second type map element.
In some embodiments, as shown in fig. 5, before step S205, the vision-based positioning method further includes:
step S501, determining the current Kalman innovation according to the measurement value of the Kalman filter;
step S502, determining a current innovation test threshold value according to the type of a target marker and a chi-square value of the past Kalman innovation;
step S503, determining a chi-square value of the current Kalman innovation;
step S504, when the chi-squared value of the current Kalman innovation exceeds the current innovation check threshold, the measured value is abandoned.
Specifically, the chi-squared value may be determined by the following equation:
$$C_k = \tilde{y}_k^T S_k^{-1} \tilde{y}_k$$

wherein $\tilde{y}_k$ represents the Kalman innovation at time k, $S_k$ represents the covariance of the Kalman innovation, and $C_k$ represents the chi-square value of the Kalman innovation, which is the square of the normalized Kalman innovation and conforms to a chi-square distribution.
In the visual positioning process, a marker may occasionally be misidentified due to false detection, and the physical world may occasionally be inconsistent with the map elements in the electronic map; both can be regarded as abnormal situations. Executing steps S501-S504 can effectively identify such abnormal situations and remove the measurement values corresponding to the abnormal markers, thereby improving the robustness of the positioning system. In step S502, since the recognition accuracy or the feature point matching mode corresponding to different types of target markers may differ, a corresponding innovation check threshold may be set according to the type of the target marker. Meanwhile, past Kalman innovations can be taken into account, for example by selecting a certain number of recent Kalman innovations in a sliding-window manner and dynamically adjusting the innovation check threshold for future times by calculating the mean, median or mode of the chi-square values of the past Kalman innovations.
If the chi-square value of the current Kalman innovation does not exceed the current innovation check threshold, step S205 is executed.
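Steps S501-S504 might be organized as in the following sketch; the base threshold, window length, and scale factor are assumed values, and the dynamic threshold here uses the median variant mentioned above:

```python
import numpy as np
from collections import deque

class InnovationGate:
    """Chi-square check on the Kalman innovation (steps S501-S504),
    one instance per marker type, with a history-adjusted threshold."""
    def __init__(self, base_threshold, window=50, scale=3.0):
        self.base = base_threshold
        self.scale = scale
        self.history = deque(maxlen=window)   # recent chi-square values

    def accept(self, y, S):
        c = float(y @ np.linalg.solve(S, y))  # C_k = y^T S^-1 y
        threshold = self.base
        if self.history:                      # dynamic adjustment (median)
            threshold = max(self.base,
                            self.scale * float(np.median(self.history)))
        if c <= threshold:
            self.history.append(c)            # keep only accepted innovations
            return True
        return False                          # caller discards the measurement
```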
In some embodiments, when the marker in the image includes a plurality of different types of target markers, step S204 includes: and inputting the observed values of the characteristic points of the plurality of target markers and the position data of the sampling points into the corresponding target observation model, and outputting a measurement value matrix, wherein the measurement value matrix comprises the measurement values corresponding to the plurality of target markers. Accordingly, step S205 includes: and inputting the measurement value matrix into a Kalman filter to obtain a positioning result corresponding to the image.
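Assembling the measurement value matrix from several target markers can be as simple as concatenation, as in this sketch; note that the corresponding rows of $H_k$ and blocks of $R_k$ must be stacked in the same order:

```python
import numpy as np

def stack_measurements(per_marker_z):
    """Concatenate per-marker measurement values (scalars or small
    vectors) into one measurement vector for a single Kalman update."""
    return np.concatenate([np.atleast_1d(np.asarray(z, dtype=float))
                           for z in per_marker_z])
```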
In the vision-based positioning method, the Kalman filter can further fuse positioning information obtained by other positioning modes, such as satellite positioning, so as to output a more accurate positioning result. The manner of fusion positioning by the Kalman filter is not particularly limited, and much prior art is available for reference.
It should be understood that although the various steps in the flowcharts of figs. 2-5 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In some embodiments, as shown in fig. 8, the present application provides a vision-based positioning device 800 comprising:
an identification module 801, configured to identify an image of a road environment, and determine a target marker from the identified markers;
the matching module 802 is used for determining sampling points of map elements in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker;
an observation model determining module 803, configured to determine, in a plurality of preset observation models, a target observation model corresponding to the type of the target marker; the observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element, and different observation models and different characteristic point matching modes correspond to the markers with different boundary constraint conditions;
the measurement value determining module 804 is used for inputting the observed value of the feature point of the target marker and the position data of the sampling point into the target observation model and outputting the measurement value of the Kalman filter;
the filtering module 805 is configured to input the measurement value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
In some embodiments, the types of markers include a first type for indicating a lack of a boundary constraint in the long direction; the plurality of observation models include a first type of observation model corresponding to the first type, and the first type of observation model is used for representing the position relation between the feature point of a marker and the sampling line segment in the corresponding map element. The matching module 802 includes: a first type element determining unit, configured to determine, when the target marker includes a first type of target marker, the first type of map element corresponding to the first type of target marker; and a first type sampling point determining unit, configured to determine the first type sampling points on the first type map element according to a preset first type matching mode. Accordingly, the measurement value determining module 804 includes: a first correspondence determining unit, configured to determine the correspondence between the feature points of the first type of target marker and the sampling line segments as a first type of correspondence; a sampling line segment data determining unit, configured to determine the position data of the sampling line segments according to the position data of the first type sampling points; and a first measurement value determining unit, configured to input, according to the first type of correspondence, the observed values of the feature points of the first type of target marker and the position data of the sampling line segments into the first type observation model, and to output the distances between the feature points of the first type of target marker and the corresponding sampling line segments as the measurement values of the Kalman filter. The sampling line segment takes two adjacent first type sampling points as endpoints.
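A Python sketch of this first type of measurement, the distance from a feature point to a sampling line segment, might look as follows (2D points assumed):

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """Distance from feature point p to the sampling segment [a, b],
    where a and b are two adjacent first-type sampling points."""
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    # Project p onto the segment and clamp the projection to its endpoints
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))
```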
In some embodiments, the first type sampling point determining unit includes: a center line determining subunit, configured to determine the center line along which the first type map element extends in the long direction; and an interval sampling subunit, configured to select a plurality of sampling points from the center line as the first type sampling points according to a preset sampling interval.
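A minimal sketch of such interval sampling on a polyline center line (arc-length resampling by linear interpolation; the fixed interval is an assumption of the example, and a curvature-dependent interval, as claim 3 contemplates, could be substituted):

```python
import numpy as np

def sample_center_line(center_line, interval):
    """Resample a polyline center line at a fixed arc-length spacing to
    obtain first-type sampling points. center_line: (N, 2) vertices."""
    pts = np.asarray(center_line, dtype=float)
    seg_len = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_len)])    # arc length at vertices
    stations = np.arange(0.0, s[-1] + 1e-9, interval)  # sampling positions
    x = np.interp(stations, s, pts[:, 0])
    y = np.interp(stations, s, pts[:, 1])
    return np.stack([x, y], axis=1)
```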
In some embodiments, the types of markers further include a second type for indicating a closed boundary; the plurality of observation models further include a second type of observation model corresponding to the second type, and the second type of observation model is used for representing the position relation between the feature point of a marker and the sampling point in the corresponding map element. The matching module 802 further includes: a second type element determining unit, configured to determine, when the target marker includes a second type of target marker, the second type of map element corresponding to the second type of target marker; and a second type sampling point determining unit, configured to determine the second type sampling points on the second type map element according to a preset second type matching mode. Accordingly, the measurement value determining module 804 further includes: a second correspondence determining unit, configured to determine the correspondence between the feature points of the second type of target marker and the second type sampling points as a second type of correspondence; and a second measurement value determining unit, configured to input, according to the second type of correspondence, the observed values of the feature points of the second type of target marker and the position data of the second type sampling points into the second type observation model, and to output the distances between the feature points of the second type of target marker and the corresponding second type sampling points as the measurement values of the Kalman filter.
In some embodiments, the second type sampling point determining unit includes a sampling subunit, configured to determine a corner point, centroid, or center point of the second type map element as the second type sampling point according to the attribute of the second type map element.
In some embodiments, the vision-based positioning device 800 further comprises:
the Kalman innovation determination module is used for determining the current Kalman innovation according to the measurement value, before the measurement value is input into the corresponding Kalman filter to obtain the positioning result corresponding to the target marker;
the threshold value determining module is used for determining a current innovation check threshold according to the type of the target marker and the chi-square values of past Kalman innovations;
the chi-square value determination module is used for determining the chi-square value of the current Kalman innovation;
and the abnormal observation value processing module is used for discarding the measurement value when the chi-square value of the current Kalman innovation exceeds the current innovation check threshold.
In some embodiments, the measurement value determining module 804 includes a measurement value processing unit, configured to, when the markers in the image include a plurality of different types of target markers, input the observed values of the feature points of the plurality of target markers and the position data of the sampling points into the corresponding target observation models and output a measurement value matrix, where the measurement value matrix includes the measurement values corresponding to the plurality of target markers. Correspondingly, the filtering module 805 includes a filtering unit, configured to input the measurement value matrix into the Kalman filter to obtain a positioning result corresponding to the image.
For specific limitations of the vision-based positioning device 800, reference may be made to the above limitations of the vision-based positioning method, which are not repeated here. Each module in the vision-based positioning device 800 described above may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In some embodiments, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement the aforementioned vision-based positioning method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is a block diagram of only a portion of the structure associated with the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In some embodiments, the present application provides a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
identifying an image of a road environment, and determining a target marker from the identified markers;
determining a sampling point of a map element in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker;
determining a target observation model corresponding to the type of the target marker in a plurality of preset observation models; the observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element, and different observation models and different characteristic point matching modes correspond to the markers with different boundary constraint conditions;
inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into a target observation model, and outputting the measured value of the Kalman filter;
and inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
In some embodiments, the processor, when executing the computer program, may also perform the other steps of the vision-based localization method in any of the foregoing embodiments.
In one embodiment, the present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
identifying an image of a road environment, and determining a target marker from the identified markers;
determining a sampling point of a map element in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker;
determining a target observation model corresponding to the type of the target marker in a plurality of preset observation models; the observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element, and different observation models and different characteristic point matching modes correspond to the markers with different boundary constraint conditions;
inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into a target observation model, and outputting the measured value of the Kalman filter;
and inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
In some embodiments, the computer program, when executed by the processor, may also implement the other steps of the vision-based localization method of any of the preceding embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A vision-based positioning method, characterized in that the method comprises:
identifying an image of a road environment, and determining a target marker from the identified markers;
determining sampling points of map elements in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker;
determining a target observation model corresponding to the type of the target marker in a plurality of preset observation models; the observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element, and different observation models and different characteristic point matching modes correspond to the markers with different boundary constraint conditions;
inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into the target observation model, and outputting the measured value of the Kalman filter;
and inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
2. The method of claim 1, wherein the types of markers include a first type for indicating a lack of a boundary constraint in a long direction; the plurality of observation models comprise a first type of observation model corresponding to the first type, and the first type of observation model is used for representing the position relation between the feature point of the marker and the sampling line segment in the corresponding map element;
when the target marker comprises a first type of target marker, determining sampling points of map elements in an electronic map by adopting a corresponding feature point matching mode according to the type of the target marker, wherein the method comprises the following steps: determining a first type of map element corresponding to the first type of target marker; determining a first type of sampling point on the first type of map element according to a preset first type matching mode;
the inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into the target observation model and outputting the measured value of the Kalman filter comprises: determining the corresponding relation between the characteristic points of the first type of target markers and the sampling line segments as a first type of corresponding relation; determining the position data of the sampling line segment according to the position data of the first type of sampling points; inputting the observed values of the feature points of the first type of target markers and the position data of the sampling line segments into the first type of observation models according to the first type of corresponding relations, and outputting the distances between the feature points of the first type of target markers and the corresponding sampling line segments as the measured values of the Kalman filter; the sampling line segment takes two adjacent first-type sampling points as end points.
3. The method according to claim 2, wherein the determining the first type of sample points on the first type of map elements according to a preset first type matching manner comprises:
determining a center line of the first type of map elements extending in the long direction;
selecting a plurality of sampling points from the center line as first type sampling points according to a preset sampling interval;
preferably, the sampling interval varies according to a curvature variation of the center line.
4. The method of any of claims 1-3, wherein the types of markers further comprise a second type for indicating having a closed boundary; the plurality of observation models further comprises a second type of observation model corresponding to the second type, the second type of observation model being used for representing a positional relationship between a feature point of a marker and a sampling point in a corresponding map element;
when the target marker comprises a second type of target marker, determining a sampling point of a map element in the electronic map by adopting a corresponding feature point matching mode according to the type of the target marker, wherein the method comprises the following steps: determining a second type of map element corresponding to the second type of target marker; determining a second type of sampling point on the second type of map element according to a preset second type of matching mode;
the method includes the steps of inputting the observed values of the feature points of the target marker and the position data of the sampling points into the target observation model, and outputting the measured values of a Kalman filter, and further includes: determining the corresponding relation between the characteristic points of the second type of target marker and the second type of sampling points as a second type of corresponding relation; and inputting the observed value of the characteristic point of the second type of target marker and the position data of the second type of sampling point into the second type observation model according to the second type corresponding relation, and outputting the distance between the characteristic point of the second type of target marker and the corresponding second type of sampling point as the measured value of the Kalman filter.
5. The method according to claim 4, wherein the determining a second type of sample point on the second type of map element according to a preset second type of matching manner comprises:
and determining corner points, centroids, or central points of the second type map elements as the second type sampling points according to the attributes of the second type map elements.
6. The method according to claim 1, wherein before the inputting the measured value into the corresponding Kalman filter to obtain the positioning result corresponding to the target marker, the method further comprises:
determining the current Kalman innovation according to the measured value;
determining a current innovation check threshold according to the type of the target marker and the chi-square values of past Kalman innovations;
determining a chi-square value of the current Kalman innovation;
discarding the measurement value when the chi-square value of the current Kalman innovation exceeds the current innovation check threshold.
7. The method according to claim 1, wherein when the markers in the image include a plurality of different types of target markers, the inputting the observed value of the characteristic point of the target marker and the position data of the sampling point into the target observation model and outputting the measured value of the Kalman filter comprises: inputting observed values of characteristic points of a plurality of target markers and position data of sampling points into the corresponding target observation models, and outputting a measurement value matrix, wherein the measurement value matrix comprises measurement values corresponding to the plurality of target markers;
the inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker comprises: inputting the measurement value matrix into the Kalman filter to obtain a positioning result corresponding to the image.
8. A vision-based positioning device, the device comprising:
the identification module is used for identifying the image of the road environment and determining a target marker from the identified markers;
the matching module is used for determining the sampling points of map elements in the electronic map by adopting a corresponding characteristic point matching mode according to the type of the target marker;
the observation model determining module is used for determining a target observation model corresponding to the type of the target marker in a plurality of preset observation models; the observation model is used for representing the position relation between the characteristic point of the marker and the sampling point in the corresponding map element, and different observation models and different characteristic point matching modes correspond to the markers with different boundary constraint conditions;
the measurement value determining module is used for inputting the observed values of the characteristic points of the target marker and the position data of the sampling points into the target observation model and outputting the measurement value of the Kalman filter;
and the filtering module is used for inputting the measured value into a corresponding Kalman filter to obtain a positioning result corresponding to the target marker.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210750190.3A 2022-06-28 2022-06-28 Vision-based positioning method and device, computer equipment and storage medium Pending CN115147792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210750190.3A CN115147792A (en) 2022-06-28 2022-06-28 Vision-based positioning method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210750190.3A CN115147792A (en) 2022-06-28 2022-06-28 Vision-based positioning method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115147792A true CN115147792A (en) 2022-10-04

Family

ID=83409537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210750190.3A Pending CN115147792A (en) 2022-06-28 2022-06-28 Vision-based positioning method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115147792A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965756A (en) * 2023-03-13 2023-04-14 安徽蔚来智驾科技有限公司 Map construction method, map construction apparatus, driving apparatus, and medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination