CN113790761A - Driving end positioning method and device, computer equipment and storage medium

Info

Publication number
CN113790761A
CN113790761A (application CN202111105490.8A)
Authority
CN
China
Prior art keywords
target
positioning
object positioning
precision
sensing data
Prior art date
Legal status
Granted
Application number
CN202111105490.8A
Other languages
Chinese (zh)
Other versions
CN113790761B (en)
Inventor
陈共龙
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111105490.8A priority Critical patent/CN113790761B/en
Publication of CN113790761A publication Critical patent/CN113790761A/en
Application granted granted Critical
Publication of CN113790761B publication Critical patent/CN113790761B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01D - MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00 - Measuring or testing not otherwise provided for
    • G01D21/02 - Measuring two or more variables by means not covered by a single other subclass
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 - Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 - Systems determining position data of a target
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/02 - Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application relates to a driving end positioning method and device, computer equipment and a storage medium. The method can be applied to the map field, the automatic driving field, the traffic field and vehicle-mounted scenarios, and comprises the following steps: acquiring a target sensing data set collected from the target environment where a target driving end is located; performing positioning analysis on each item of target sensing data to obtain the candidate object positioning result corresponding to that data; acquiring the target environment influence data corresponding to each sensor at the time the sensor collected its target sensing data; obtaining, based on the target environment influence data, the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data; and selecting, based on the object positioning accuracies corresponding to the candidate object positioning results, the candidate object positioning result that satisfies an accuracy condition as the target object positioning result corresponding to the target driving end. This method improves the positioning accuracy of the driving end.

Description

Driving end positioning method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for locating a driving end, a computer device, and a storage medium.
Background
With the development of computer technology, automatic driving technology has emerged. An automatic driving end, for example an automatic driving vehicle, can detect the objects around it and control the distance between itself and those objects, thereby reducing collisions between the automatic driving end and surrounding objects.
At present there are many methods for detecting objects around an automatic driving end. However, the existing methods often fail to detect surrounding objects accurately, so the accuracy of driving end positioning is low.
Disclosure of Invention
In view of the above, it is necessary to provide a driving end positioning method, apparatus, computer device and storage medium that can improve the accuracy of driving end positioning.
A driving end positioning method, the method comprising: acquiring a target sensing data set collected from the target environment where a target driving end is located, the target sensing data set comprising target sensing data respectively acquired by a plurality of sensors; performing positioning analysis on each item of target sensing data to obtain the candidate object positioning result corresponding to that data; acquiring the target environment influence data corresponding to the sensor at the time the sensor corresponding to the target sensing data collected that data; obtaining, based on the target environment influence data, the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data; and selecting, based on the object positioning accuracies corresponding to the candidate object positioning results, the candidate object positioning result that satisfies an accuracy condition, and taking it as the target object positioning result corresponding to the target driving end.
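For orientation only, the following Python sketch mirrors the claimed steps under stated assumptions: the names (SensorReading, locate, accuracy_of) and the 0.8 threshold are hypothetical illustrations, not part of the claim.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SensorReading:
    sensor_id: str        # which sensor produced the data
    data: Any             # raw sensing data (image, point cloud, ...)
    env_influence: float  # environment influence value at collection time,
                          # e.g. a normalized brightness in [0, 1]

def position_driving_end(
    readings: list[SensorReading],
    locate: Callable[[Any], dict],          # per-sensor positioning analysis
    accuracy_of: Callable[[float], float],  # environment influence -> accuracy
    accuracy_threshold: float = 0.8,        # assumed accuracy condition
) -> list[dict]:
    """Select the candidate results whose object positioning accuracy
    satisfies the accuracy condition."""
    candidates = []
    for reading in readings:
        result = locate(reading.data)                  # candidate object positioning result
        accuracy = accuracy_of(reading.env_influence)  # accuracy from environment influence data
        candidates.append((result, accuracy))
    return [res for res, acc in candidates if acc > accuracy_threshold]
```

Passing the positioning analysis and the accuracy estimator in as callables reflects the claim's separation between positioning analysis and accuracy determination.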
A driving end positioning device, the device comprising: a target sensing data set acquisition module, configured to acquire a target sensing data set collected from the target environment where a target driving end is located, the target sensing data set comprising target sensing data respectively acquired by a plurality of sensors; a candidate object positioning result obtaining module, configured to perform positioning analysis on each item of target sensing data to obtain the candidate object positioning result corresponding to that data; a target environment influence data acquisition module, configured to acquire the target environment influence data corresponding to the sensor at the time the sensor corresponding to the target sensing data collected that data; an object positioning accuracy obtaining module, configured to obtain, based on the target environment influence data, the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data; and a target object positioning result obtaining module, configured to select, based on the object positioning accuracies corresponding to the candidate object positioning results, the candidate object positioning result satisfying the accuracy condition, and take it as the target object positioning result corresponding to the target driving end.
In some embodiments, the target sensing data includes a target image captured by a capture sensor, the target environment influence data includes the image brightness of the target image, and the object positioning accuracy obtaining module includes: a target positioning accuracy obtaining unit, configured to calculate the target positioning accuracy based on the image brightness of the target image, where the target positioning accuracy is positively correlated with the brightness; and a target space acquisition unit, configured to acquire the target space corresponding to the target image and take the target positioning accuracy as the object positioning accuracy of the candidate object positioning result corresponding to the target space.
In some embodiments, the target positioning accuracy includes area positioning accuracy corresponding to each sub-image area of the target image; the target positioning precision obtaining unit is further configured to perform region division on the target image to obtain a plurality of sub-image regions corresponding to the target image; respectively carrying out brightness statistics on the pixel point brightness of each sub-image region to obtain the region brightness corresponding to each sub-image region; obtaining the area positioning precision corresponding to the sub-image area based on the area brightness corresponding to the sub-image area; the target space obtaining unit is further configured to determine a subspace corresponding to the sub-image region from a target space corresponding to the target image, and use the region positioning accuracy corresponding to the sub-image region as the object positioning accuracy of the candidate object positioning result corresponding to the subspace.
In some embodiments, the target sensing data includes object detection data detected by a detection sensor, the target environment influence data includes an environment attribute data set of the target environment that influences the detection substance emitted by the detection sensor, the environment attribute data set includes a plurality of environment attribute data, and the object positioning accuracy obtaining module includes: a target environment state information obtaining unit, configured to synthesize the environment attribute data in the environment attribute data set to obtain the target environment state information corresponding to the target environment; and an object positioning accuracy obtaining unit, configured to obtain, based on the correspondence between environment state information and positioning accuracy, the positioning accuracy corresponding to the target environment state information as the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data.
In some embodiments, the environment attribute data is an environment attribute numerical value, and the target environment state information obtaining unit is further configured to obtain an attribute weight corresponding to the environment attribute numerical value, and perform weighted calculation on the environment attribute numerical value based on the attribute weight to obtain a weighted environment attribute numerical value; and counting the weighted environment attribute values corresponding to the environment attribute data set, and taking the counted environment attribute statistical values as the target environment state information corresponding to the target environment.
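A minimal sketch of such a weighted statistic, assuming summation as the statistical operation; the attribute names, values and weights below are invented for illustration and are not values from the application.

```python
def weighted_environment_state(attributes: dict[str, float],
                               weights: dict[str, float]) -> float:
    """Weight each environment attribute value by its attribute weight and
    sum the weighted values to get the target environment state information."""
    return sum(weights[name] * value for name, value in attributes.items())

# Illustrative call; attribute names, values and weights are assumed.
state = weighted_environment_state(
    attributes={"temperature": 18.0, "humidity": 0.65},
    weights={"temperature": 0.3, "humidity": 0.7},
)
```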
In some embodiments, the object location accuracy obtaining module comprises: an environment positioning accuracy obtaining unit, configured to obtain, based on the target environment influence data, environment positioning accuracy of a candidate object positioning result corresponding to the target sensing data; a historical feedback positioning accuracy obtaining unit, configured to obtain a target position point corresponding to the candidate object positioning result, and obtain a historical feedback positioning accuracy corresponding to the target position point; and the object positioning precision obtaining unit is used for obtaining the object positioning precision of the candidate object positioning result corresponding to the target sensing data based on the environment positioning precision and the historical feedback positioning precision.
In some embodiments, the historical feedback positioning accuracy obtaining unit is further configured to obtain a historical object positioning result corresponding to the target location point, and display result presentation information corresponding to the historical object positioning result on an information display device at the target driving end; and responding to the precision feedback operation of the result presentation information, and adjusting the original positioning precision corresponding to the historical object positioning result based on the precision feedback operation to obtain the historical feedback positioning precision corresponding to the target position point.
In some embodiments, the historical feedback positioning accuracy obtaining unit is further configured to determine a feedback accuracy adjustment direction based on the accuracy feedback operation, and obtain a feedback accuracy adjustment parameter corresponding to the feedback accuracy adjustment direction; and adjusting the original positioning accuracy corresponding to the historical object positioning result by using the feedback accuracy adjustment parameter to obtain the historical feedback positioning accuracy corresponding to the target position point.
In some embodiments, the target object positioning result obtaining module includes: an accuracy comparison result obtaining unit, configured to compare the object positioning accuracy corresponding to the candidate object positioning result with a positioning accuracy threshold to obtain an accuracy comparison result; and a target object positioning result obtaining unit, configured to take, when the accuracy comparison result indicates that the object positioning accuracy is greater than the positioning accuracy threshold, the candidate object positioning result as the target object positioning result corresponding to the target driving end.
In some embodiments, the target sensing data is sensing data collected by a sensor whose environmental influence degree is greater than an influence degree threshold, and the apparatus further comprises: an additional sensing data obtaining module, configured to obtain sensing data collected from the target environment where the target driving end is located by a sensor whose environmental influence degree is smaller than the influence degree threshold, as additional sensing data; an additional object positioning result obtaining module, configured to perform positioning analysis on the additional sensing data to obtain an additional object positioning result corresponding to the additional sensing data; a sensor positioning accuracy obtaining module, configured to obtain the sensor positioning accuracy corresponding to the sensor whose environmental influence degree is smaller than the influence degree threshold; an accuracy comparison module, configured to compare the sensor positioning accuracy with the positioning accuracy threshold when the accuracy comparison result indicates that the object positioning accuracy is smaller than the positioning accuracy threshold; and a target object positioning result determining module, configured to take the additional object positioning result corresponding to the additional sensing data as the target object positioning result corresponding to the target driving end when the sensor positioning accuracy is greater than the positioning accuracy threshold.
In some embodiments, the target object positioning result is an object positioning result corresponding to a position point in a position point set corresponding to the target environment; the device further comprises: a current positioning result set composing module, configured to compose a current positioning result set corresponding to current time from target object positioning results corresponding to each position point in the position point set; a corresponding relation establishing module, configured to determine a sensor corresponding to a target object positioning result corresponding to each position point in the position point set, and establish a corresponding relation between the target object positioning result and the corresponding sensor; and the display module is used for displaying the current positioning result set corresponding to the current time at the target driving end and correspondingly displaying the indication information of the sensor corresponding to the target object positioning result on the basis of the corresponding relation.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the above driving end positioning method when executing the computer program.
A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the above driving end positioning method.
A computer program product comprising a computer program which, when executed by a processor, implements the steps of the above driving end positioning method.
With the above driving end positioning method, apparatus, computer device and storage medium, a target sensing data set collected from the target environment where the target driving end is located is acquired, the set comprising target sensing data respectively acquired by a plurality of sensors. Positioning analysis is performed on each item of target sensing data to obtain the corresponding candidate object positioning results. The target environment influence data corresponding to each sensor at the time it collected its target sensing data is acquired, and the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data is obtained from that data. The candidate object positioning results satisfying the accuracy condition are then selected, based on their object positioning accuracies, as the target object positioning result corresponding to the target driving end. Because the object positioning accuracy is determined from environmental influence data, it accurately reflects the true positioning accuracy; screening the target object positioning result by this accuracy therefore yields an accurate result and improves the accuracy of driving end positioning.
Drawings
FIG. 1 is an application environment diagram of a driving end positioning method in some embodiments;
FIG. 2 is a flow chart of a driving end positioning method in some embodiments;
FIG. 3 is a schematic diagram of result presentation information in some embodiments;
FIG. 4 is a schematic diagram of information visualization in some embodiments;
FIG. 5 is an application environment diagram of a driving end positioning method in some embodiments;
FIG. 6 is a schematic diagram of a driving end positioning method in some embodiments;
FIG. 7 is a flow chart of a driving end positioning method in some embodiments;
FIG. 8 is a block diagram of a driving end positioning device in some embodiments;
FIG. 9 is a diagram of the internal structure of a computer device in some embodiments;
FIG. 10 is a diagram of the internal structure of a computer device in some embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The driving end positioning method provided by the application can be applied to the application environment shown in fig. 1. The application environment includes a terminal 102 and a sensor set 104, and the terminal 102 and the sensor set 104 can communicate in a wired manner or a wireless manner. The set of sensors 104 and the terminal 102 may be devices in a driving end, for example, may be devices in an autonomous vehicle. The sensor set 104 includes a plurality of sensors, which may include at least one of a camera, a lidar, a temperature and humidity sensor, or a UWB (Ultra Wide Band) millimeter wave sensor, for example.
The terminal 102 may obtain the sensing data acquired by each sensor in the sensor set 104, perform positioning analysis based on the sensing data acquired by each sensor to obtain a positioning result, determine the objects included in the surrounding environment of the driving end based on the positioning result, and position those objects. The terminal 102 may transmit the positioning result to a driving strategy determination module in the driving end, and the driving strategy determination module may adjust the driving strategy based on the positioning result so that the driving end keeps its distance from surrounding objects, thereby reducing collisions between the driving end and surrounding objects.
Specifically, the terminal 102 may obtain a target sensing data set collected from the target environment where the target driving end is located, the set comprising target sensing data respectively acquired by a plurality of sensors. The terminal 102 may then perform positioning analysis on each item of target sensing data to obtain the candidate object positioning result corresponding to that data, obtain the target environment influence data corresponding to the sensor at the time the sensor corresponding to the target sensing data collected that data, obtain the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data based on the target environment influence data, select the candidate object positioning result satisfying the accuracy condition based on the object positioning accuracies corresponding to the candidate object positioning results, and take it as the target object positioning result corresponding to the target driving end.
The terminal 102 includes, but is not limited to, a mobile phone, a computer, an intelligent voice interaction device, an intelligent appliance, a vehicle-mounted terminal, and the like.
The driving end positioning method can be applied to the automatic driving field, the map field, the traffic field and vehicle-mounted scenarios. An Intelligent Traffic System (ITS), also called an Intelligent Transportation System, effectively and comprehensively applies advanced science and technology (information technology, computer technology, data communication technology, sensor technology, electronic control technology, automatic control theory, operations research, artificial intelligence and the like) to transportation, service control and vehicle manufacturing, and strengthens the relationship among vehicles, roads and users, thereby forming a comprehensive transportation system that guarantees safety, improves efficiency, improves the environment and saves energy. An Intelligent Vehicle Infrastructure Cooperative System (IVICS), referred to as a vehicle-infrastructure cooperative system for short, is a development direction of the Intelligent Transportation System. The vehicle-infrastructure cooperative system adopts advanced wireless communication, new-generation internet and other technologies, implements dynamic real-time vehicle-vehicle and vehicle-infrastructure information interaction in all directions, and develops active vehicle safety control and cooperative road management on the basis of full-time dynamic traffic information acquisition and fusion, fully realizing effective cooperation of people, vehicles and roads, ensuring traffic safety and improving traffic efficiency, thereby forming a safe, efficient and environmentally friendly road traffic system.
It can be understood that the above application scenario is only an example and does not limit the driving end positioning method provided in the embodiments of the present application. The method provided in the embodiments may also be applied in other scenarios; for example, the driving end positioning may be executed by a server: the terminal 102 may upload the obtained sensing data to the server, and the server may obtain the target object positioning result corresponding to the target driving end from the sensing data and return it to the terminal 102.
In some embodiments, as shown in fig. 2, a driving end positioning method is provided. The method is described by taking its application to the terminal 102 in fig. 1 as an example, and includes the following steps:
s202, acquiring a target sensing data set acquired by collecting a target environment where a target driving end is located; the target sensing data set comprises target sensing data acquired by a plurality of sensors respectively.
The target driving end may be any driving end, such as a vehicle, an aircraft or a train. Vehicles include but are not limited to autonomous vehicles and non-autonomous vehicles, such as autonomous automobiles; aircraft include but are not limited to drones and piloted aircraft. The specific type of the target driving end is not limited in this application. The target driving end is provided with a plurality of sensors; for example, at least one of a camera, a laser radar, a temperature and humidity sensor or a UWB millimeter wave sensor may be arranged on it. The target environment refers to the surrounding environment where the target driving end is located.
The sensor is a device for acquiring data, and may be at least one of a camera, a laser radar, a temperature and humidity sensor, or a UWB millimeter wave sensor, and the sensing data refers to data acquired by the sensor, and may be at least one of point cloud data, images, or video data, without limitation on the type of the sensor and the type of the sensing data, for example, the sensing data may be video data acquired by the camera. The sensor for acquiring the target sensing data may be a sensor arranged on the target driving end, or a sensor arranged around the target driving end, for example, a sensor arranged on a road where the target driving end runs.
The types of target sensing data in the target sensing data set may be the same or different, for example, one type of target sensing data may be point cloud data, and one type of target sensing data may be image data. The acquisition time corresponding to each target sensing data in the target sensing data set may be the same, and the acquisition time refers to the time for acquiring the target sensing data, for example, each target sensing data is data acquired in a time period from 8 points 30 to 8 points 35.
Specifically, the terminal may acquire the target sensing data set periodically, for example every 5 minutes; of course, the target sensing data set acquired each time is different. The terminal may also obtain the target sensing data set upon receiving an object positioning instruction, which instructs it to position objects in the target environment of the target driving end. An object may be animate, such as a pedestrian, or inanimate, such as at least one of a street lamp, a building or a plant; it may be, for example, an obstacle encountered while the target driving end is driving.
And S204, respectively carrying out positioning analysis on the target sensing data to obtain candidate object positioning results corresponding to the target sensing data.
The positioning analysis is to determine an object included in the target environment and a position of the determined object based on the target sensing data. The object location result may include at least one of an object location, an object existence possibility, or an object distance, where the object existence possibility refers to a possibility that an object exists, for example, a probability that the object exists, the object location may be represented by a coordinate, for example, may be indicated by a two-dimensional coordinate or a three-dimensional coordinate, and the object distance refers to a distance between the object and the target driving end. The candidate object positioning result refers to an object positioning result obtained by performing positioning analysis on each target sensing data. There may be a plurality of candidate object localization results.
Specifically, the candidate object positioning result may be the object positioning result corresponding to a position point in a target space in the target environment. For example, when the target space is the environment inside a cube centered on the target driving end, the position points may be the points contained in that cube: if the size of the cube is K × P × Q, where K is the length, P the width and Q the height of the cube, then the cube contains L = K × P × Q position points. Each position point in the target space may correspond to a candidate object positioning result, and the candidate object positioning result corresponding to a position point may reflect the probability that an object, for example an obstacle, exists at that point. The object position in the candidate object positioning result may be represented by the coordinates of the corresponding position point.
In some embodiments, each position point in the target space may correspond to a candidate object positioning result set, where each candidate object positioning result in the set is obtained by performing positioning analysis on a different item of target sensing data. Since there are multiple items of target sensing data, each can be analyzed separately to obtain a candidate object positioning result for the position point, and these together form the candidate object positioning result set for that point. The terminal may select one candidate object positioning result from the set as the target object positioning result for the position point; for example, it may take any one of the candidate results, or obtain the target object positioning result by further screening the set.
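As an illustration of this per-point selection, the sketch below keeps the highest-accuracy candidate for a position point if it meets an assumed accuracy condition; the dictionary key "accuracy" and the 0.8 threshold are assumptions, not values from the application.

```python
def target_result_for_point(candidates: list[dict],
                            threshold: float = 0.8) -> dict | None:
    """From one position point's candidate object positioning result set,
    keep the candidate with the highest object positioning accuracy,
    provided it satisfies the (assumed) accuracy condition."""
    best = max(candidates, key=lambda c: c["accuracy"], default=None)
    if best is not None and best["accuracy"] > threshold:
        return best
    return None  # no candidate met the accuracy condition
```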
In some embodiments, the terminal may perform positioning analysis on the target sensing data by using a positioning model, which may be deployed in the terminal or in the target driving end; the terminal may input the target sensing data into the positioning model for positioning analysis to obtain an object positioning result. The positioning model may be a neural network model based on artificial intelligence. Different types of target sensing data may use different positioning models: for example, when the target sensing data is image data, the positioning model may be an image recognition model that recognizes the objects included in the image, and when the target sensing data is point cloud data, the positioning model may be a model capable of recognizing objects from point cloud data.
In some embodiments, the positioning model may be located at a position outside the terminal, for example, the positioning model may be located in a server that can communicate with the terminal, the terminal may send the target sensing data to the server, and the server may perform positioning analysis on the target sensing data by using the positioning model and return the obtained positioning result to the terminal.
And S206, acquiring target environment influence data corresponding to the sensor when the sensor corresponding to the target sensing data acquires the target sensing data.
The environmental impact data refers to environmental data affecting the sensor or the detecting substance emitted by the sensor, and includes but is not limited to at least one of light, brightness, temperature or humidity. The detection substance is an object emitted when the sensor is used to detect a surrounding object, and when the sensor is a laser radar, the detection substance may be an electromagnetic wave. The environmental impact data for different sensors may be the same or different. The target environmental impact data refers to environmental impact data corresponding to a sensor which acquires target sensing data.
Specifically, the terminal may determine the environmental impact data corresponding to the sensor based on the type of the sensing data acquired by the sensor, for example, when the sensing data is a video or an image, the environmental impact data corresponding to the sensor may include brightness, and when the sensing data is lidar data, that is, data acquired by a lidar, the environmental impact data corresponding to the sensor may include at least one of temperature or humidity.
And S208, obtaining the object positioning precision of the candidate object positioning result corresponding to the target sensing data based on the target environment influence data.
The object positioning accuracy is used to reflect the accuracy of the candidate object positioning result, for example, the candidate object positioning result includes an object existence likelihood, the object positioning accuracy may reflect the accuracy of the object existence likelihood, for example, the object existence likelihood at the position a is 0.9, the object positioning accuracy may reflect the accuracy that "the object existence likelihood at the position a is 0.9", and the object positioning accuracy may also be referred to as a confidence of the object positioning result.
Specifically, the terminal may obtain the environmental positioning accuracy of the candidate object positioning result corresponding to the target sensing data based on the target environment influence data, and obtain the object positioning accuracy of the candidate object positioning result from the environmental positioning accuracy. The object positioning accuracy and the environmental positioning accuracy are positively correlated. A positive correlation means that, other conditions unchanged, the two variables change in the same direction: when one variable decreases, the other also decreases. It should be understood that a positive correlation here means the directions of change are consistent, but does not require that whenever one variable changes the other must also change. For example, it may be set that the variable b is 100 when the variable a is between 10 and 20, and b is 120 when a is between 20 and 30. Then a and b change in the same direction: when a becomes larger, b also becomes larger. But b may remain unchanged while a varies within the range 10 to 20.
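The a-to-b example above can be written as a piecewise-constant function; the interval boundaries below are one plausible choice, since the text leaves them open.

```python
def b_of(a: float) -> int:
    """Piecewise-constant positive correlation from the example above:
    b is 100 for a in (10, 20] and 120 for a in (20, 30]; the exact
    boundary placement is an assumption the text does not fix."""
    if 10 < a <= 20:
        return 100
    if 20 < a <= 30:
        return 120
    raise ValueError("a is outside the range covered by the example")
```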
In some embodiments, when the target sensing data is an image, the terminal may analyze brightness of the image, and determine the environmental positioning accuracy based on the analyzed image brightness.
In some embodiments, a temperature and humidity sensor may be disposed in the target driving end, and the terminal may acquire at least one of temperature data and humidity data acquired by the temperature and humidity sensor, and determine the environmental positioning accuracy based on the at least one of the temperature data and the humidity data.
In some embodiments, the terminal may obtain historical feedback positioning accuracy of a candidate object positioning result corresponding to the target sensing data, and obtain object positioning accuracy of the candidate object positioning result corresponding to the target sensing data based on the historical feedback positioning accuracy of the candidate object positioning result corresponding to the target sensing data and the environmental positioning accuracy, where the object positioning accuracy and the historical feedback positioning accuracy have a positive correlation. The historical feedback positioning precision is the positioning precision determined by the feedback operation on the precision of the historical object positioning result and is used for reflecting the precision of the historical object positioning result, and the higher the historical feedback positioning precision is, the higher the precision of the historical object positioning result is. The historical object positioning result refers to a target object positioning result corresponding to the position point obtained in the historical time period.
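The application does not fix how the environmental positioning accuracy and the historical feedback positioning accuracy are combined; a weighted average is one combination that preserves the required positive correlation with both inputs, shown here as an assumption-laden sketch (the 0.5 weight is invented).

```python
def object_positioning_accuracy(env_accuracy: float,
                                feedback_accuracy: float,
                                env_weight: float = 0.5) -> float:
    """Combine environmental positioning accuracy with historical feedback
    positioning accuracy. A weighted average keeps the positive correlation
    with both inputs; the 0.5 default weight is an invented assumption."""
    return env_weight * env_accuracy + (1.0 - env_weight) * feedback_accuracy
```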
S210, selecting a candidate object positioning result meeting the accuracy condition based on the object positioning accuracy corresponding to the candidate object positioning result, and taking the candidate object positioning result meeting the accuracy condition as a target object positioning result corresponding to the target driving end.
The target object positioning result corresponding to the target driving end may include a target object positioning result for each position point in the target space, with each position point corresponding to one target object positioning result. The target object positioning results of all position points may form a target object positioning result set, and each result in the set serves as a target object positioning result corresponding to the target driving end.
The accuracy condition may include at least one of: the object positioning accuracy is greater than a positioning accuracy threshold, or the rank of the positioning result is ahead of a rank threshold. The positioning accuracy threshold and the rank threshold may be preset or set as needed. The rank of a positioning result refers to its position in the candidate object positioning result sequence, which is obtained by arranging the candidate object positioning results in the candidate set by object positioning accuracy; the higher the object positioning accuracy, the earlier the candidate object positioning result appears in the sequence.
Specifically, the position point in the target space corresponds to a candidate object positioning result set, each candidate object positioning result in the candidate object positioning result set has an object positioning accuracy, and the terminal may select, based on the object positioning accuracy, a candidate object positioning result that satisfies an accuracy condition from the candidate object positioning result set as the target object positioning result.
In some embodiments, the terminal may determine the target object positioning result from the candidate object positioning result set based on the type of the target sensing data corresponding to each candidate. For example, the terminal may arrange the candidate object positioning results in the set by sensing data type to obtain a candidate object positioning result sequence: when the set contains candidates corresponding to image data and candidates corresponding to point cloud data, the image-data candidates may be placed before the point-cloud candidates, or the point-cloud candidates before the image-data candidates. The terminal then takes the current object positioning result from the sequence in order and compares its object positioning accuracy with the positioning accuracy threshold; when the accuracy is greater than the threshold, the current result is taken as the target object positioning result, and otherwise the terminal returns to the step of taking the next current object positioning result from the sequence. In this way the target object positioning result for the position point is obtained; each position point corresponds to one target object positioning result.
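A sketch of this ordered walk, assuming each candidate carries "sensor_type" and "accuracy" fields (assumed names) and an assumed threshold of 0.8.

```python
def pick_by_sensor_order(candidates: list[dict],
                         sensor_priority: list[str],
                         threshold: float = 0.8) -> dict | None:
    """Walk the candidates in the chosen sensor-type order (e.g. image data
    before point cloud data) and return the first whose object positioning
    accuracy exceeds the positioning accuracy threshold."""
    rank = {sensor: i for i, sensor in enumerate(sensor_priority)}
    for cand in sorted(candidates, key=lambda c: rank[c["sensor_type"]]):
        if cand["accuracy"] > threshold:
            return cand
    return None

# Usage with assumed sensor-type labels:
# pick_by_sensor_order(cands, sensor_priority=["image", "point_cloud"])
```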
In some embodiments, the target sensing data is sensing data collected by a sensor whose environmental influence degree is greater than an influence degree threshold. The environmental influence degree reflects how strongly environmental factors affect the sensing data collected by the sensor: the greater the degree, the larger the difference between sensing data collected under different environments. A sensor may be influenced by environmental factors, such as at least one of light, temperature or humidity, when collecting data, so that the collected sensing data differs from the real data. The influence degree threshold may be set as needed.
In some embodiments, when no candidate object positioning result in the candidate set of a position point in the target space satisfies the accuracy condition, the terminal may obtain additional sensing data and perform positioning analysis on it to obtain the object positioning result corresponding to the additional sensing data, called the first positioning result, and then determine the target object positioning result of the position point based on it. For example, the first positioning result may be taken directly as the target object positioning result of the position point. Alternatively, the terminal may obtain the object positioning accuracy corresponding to the first positioning result, called the first positioning accuracy, and compare it with the positioning accuracy threshold: when the first positioning accuracy is greater than the threshold, the first positioning result is taken as the target object positioning result of the position point; when it is smaller, the target object positioning result of the position point is determined to be a preset object positioning result, which is used to indicate that the object positioning result of the position point cannot be determined. The additional sensing data is collected from the target environment where the target driving end is located by a sensor whose environmental influence degree is smaller than the influence degree threshold; such sensing data is less affected by environmental factors.
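A sketch of the fallback logic in this paragraph; PRESET_RESULT, the field names and the threshold are illustrative assumptions.

```python
PRESET_RESULT = {"status": "undetermined"}  # assumed preset object positioning result

def result_with_fallback(primary: dict | None,
                         additional_result: dict,
                         additional_accuracy: float,
                         threshold: float = 0.8) -> dict:
    """If no primary candidate satisfied the accuracy condition, fall back
    to the first positioning result from a weakly environment-affected
    sensor; if even its accuracy fails the threshold, return the preset
    result meaning the position point cannot be determined."""
    if primary is not None:
        return primary
    if additional_accuracy > threshold:
        return additional_result
    return PRESET_RESULT
```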
In the above driving end positioning method, a target sensing data set collected from the target environment where the target driving end is located is acquired, the set comprising target sensing data respectively acquired by a plurality of sensors. Positioning analysis is performed on each item of target sensing data to obtain the corresponding candidate object positioning results. The target environment influence data corresponding to each sensor at the time it collected its target sensing data is acquired, and the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data is obtained from that data. The candidate object positioning results satisfying the accuracy condition are then selected, based on their object positioning accuracies, as the target object positioning result corresponding to the target driving end. Because the object positioning accuracy is determined from environmental influence data, it accurately reflects the true positioning accuracy; screening the target object positioning result by this accuracy therefore yields an accurate result and improves the accuracy of driving end positioning.
High-precision positioning is important for the navigation and obstacle avoidance of autonomous vehicles. Solutions for high-precision positioning include positioning algorithms that use a single sensor and algorithms that fuse data from multiple sensors. Single-sensor positioning methods include UWB millimeter-wave positioning, video positioning and lidar positioning. Lidar and millimeter-wave sensors scatter radar signals into the surrounding three-dimensional space and measure the time difference between sending a signal and receiving its return, from which the distance to an obstacle, and hence the positions of surrounding objects, are calculated. Video positioning schemes judge the position of an obstacle mainly by analyzing the depth of field of surrounding obstacles in the video. Multi-sensor fusion positioning combines the positioning results of several sensors, mainly to overcome the shortcomings of individual sensors; for example, in a low-light scene a video positioning algorithm cannot analyze obstacles, but positioning can still be performed with a lidar. For an area or scene where several sensors can all provide positioning results, one sensor may simply be selected at random to output the result, which wastes the cooperative capacity of the multiple sensors.
A single-sensor positioning scheme cannot achieve high-precision positioning in some scenes: for example, lidar cannot achieve high-precision positioning in heavy rain or heavy fog, and a video positioning algorithm cannot provide high-precision positioning in a dark environment. A multi-sensor scheme that outputs a randomly selected result yields low-precision positioning, which creates a risk of collision with obstacles, increases the probability of damage to the autonomous vehicle, reduces driver safety and raises the potential maintenance cost of the vehicle.
The driving end positioning method provided by this application comprehensively considers the positioning confidence of different sensors under different sensing conditions in different environments and, combined with driver feedback, provides comprehensive high-precision positioning. It improves the positioning accuracy of autonomous vehicles across rich and complex scenes, reduces collision risk and improves safety during driving.
In some embodiments, the target sensing data includes a target image captured by a capture sensor, the target environment influence data includes the image brightness of the target image, and obtaining the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data based on the target environment influence data includes: calculating the target positioning accuracy based on the image brightness of the target image, where the target positioning accuracy is positively correlated with the brightness; and obtaining the target space corresponding to the target image and taking the target positioning accuracy as the object positioning accuracy of the candidate object positioning result corresponding to the target space.
The capture sensor is a device with an image capture function, for example a camera or a video camera. The image brightness of the target image is obtained by analyzing the brightness of the pixel points in the target image. The target positioning accuracy is positively correlated with brightness; for example, it may be positively correlated with the image brightness.
The target space is a space in the target environment, that is, the target environment contains the target space, and the target space refers to the space where the target driving end is located. The position of the target driving end within the target space may be arbitrary; for example, it may be located at the center. The size of the target space may be preset or set as needed; for example, the target space may be a cube of size K × P × Q, or another shape. The target space is a three-dimensional space and can be represented by a three-dimensional coordinate system. A subspace is a spatial region in the target space, which may be divided into a plurality of subspaces.
Specifically, the terminal may calculate luminances corresponding to a plurality of pixel points in the target image, and perform a statistical operation based on the luminances corresponding to the plurality of pixel points to obtain the image luminance of the target image, where the statistical operation may include at least one of an addition operation, a mean operation, or a weighting operation, for example, the luminance corresponding to the plurality of pixel points may be subjected to the mean operation, and a result of the operation is taken as the image luminance. Alternatively, the terminal may select the representative luminance from the luminances corresponding to the plurality of pixel points as the image luminance. The representative luminance may be any one of the maximum luminance and the minimum luminance among the luminances corresponding to the plurality of pixel points.
In some embodiments, the terminal may obtain pixel information corresponding to the pixel point, where the pixel information may include a pixel value or an RGB value, and the terminal may calculate the luminance corresponding to the pixel point according to the pixel information, for example, the luminance corresponding to the pixel point may be calculated based on the RGB value corresponding to the pixel point.
In some embodiments, the terminal may obtain a correspondence between the luminance range and the positioning accuracy, and determine the luminance range to which the image luminance belongs, and use the positioning accuracy corresponding to the luminance range to which the image luminance belongs as the target positioning accuracy, where the positioning accuracy corresponding to different luminance ranges is different.
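Putting the two preceding paragraphs together: per-pixel luminance from RGB using the common Rec. 601 weighting (the application does not fix a formula), plus a brightness-range-to-accuracy lookup whose ranges and accuracy values are invented for illustration.

```python
def pixel_luminance(r: int, g: int, b: int) -> float:
    """Per-pixel luminance from RGB using the Rec. 601 weights; a common
    choice, not one the application prescribes."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Assumed brightness-range -> positioning-accuracy table (invented values).
BRIGHTNESS_ACCURACY = [
    ((0.0, 60.0), 0.3),     # very dark image: low accuracy
    ((60.0, 150.0), 0.7),
    ((150.0, 256.0), 0.9),  # bright image: high accuracy
]

def accuracy_from_brightness(image_brightness: float) -> float:
    """Target positioning accuracy from the brightness range the image
    brightness falls into; different ranges map to different accuracies."""
    for (low, high), accuracy in BRIGHTNESS_ACCURACY:
        if low <= image_brightness < high:
            return accuracy
    return 0.0  # out-of-range brightness: treat as unusable
```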
In some embodiments, the number of the target images is multiple, and the terminal may perform statistical calculation, for example, mean calculation, on the image brightness corresponding to each of the multiple target images to obtain a brightness statistical value, and determine the target positioning accuracy based on the brightness statistical value.
In some embodiments, the image brightness of the target image may include area brightness corresponding to a plurality of sub-image areas, respectively, that is, the target environment influence data may include area brightness corresponding to each sub-image area of the target image, which is obtained by area division of the target image. The terminal can perform statistical calculation on the brightness of a plurality of pixel points in the sub-image region to obtain the region brightness of the sub-image region, or the terminal can select representative brightness from the brightness corresponding to the plurality of pixel points to be used as the image brightness. The representative luminance may be any one of the maximum luminance and the minimum luminance among the luminances corresponding to the plurality of pixel points. The target positioning accuracy may include positioning accuracies corresponding to the respective sub-image regions. The area positioning accuracy corresponding to the sub-image area is determined based on the area brightness corresponding to the sub-image area.
In this embodiment, the rationale is that the quality of a captured image depends strongly on the light intensity at capture time: under different light intensities the image brightness differs, and the sharpness of the image may differ, so the positioning accuracy achievable from the image also differs. Determining the target positioning accuracy from the image brightness therefore makes the accuracy estimate more reliable.
In some embodiments, the target positioning accuracy comprises the region positioning accuracy corresponding to each sub-image region of the target image. Calculating the target positioning accuracy based on the image brightness of the target image comprises: performing region division on the target image to obtain a plurality of sub-image regions; performing brightness statistics on the pixel brightness of each sub-image region to obtain the region brightness corresponding to each sub-image region; and obtaining the region positioning accuracy corresponding to each sub-image region based on its region brightness. Obtaining the target space corresponding to the target image and taking the target positioning accuracy as the object positioning accuracy of the candidate object positioning result corresponding to the target space comprises: determining the subspace corresponding to each sub-image region from the target space corresponding to the target image, and taking the region positioning accuracy corresponding to the sub-image region as the object positioning accuracy of the candidate object positioning result corresponding to that subspace.
The sizes of the sub-image regions may be the same or different. The region brightness refers to the brightness corresponding to a sub-image region, and the region positioning accuracy refers to the positioning accuracy corresponding to that sub-image region. The pixel brightness refers to the brightness calculated from the pixel information of a pixel point; each pixel point may have a corresponding pixel brightness. The target space refers to the space in which the target driving end is located. The position of the target driving end in the target space may be arbitrary, for example at the center of the target space. The size of the target space may be preset or set as needed; for example, the target space may be a cuboid of size K × P × Q, or it may take another shape. The target space is a three-dimensional space and can be represented by a three-dimensional coordinate system. A subspace is a spatial region in the target space; the target space may be divided into a plurality of subspaces.
Specifically, the terminal may divide the target image into a target number of sub-image regions, where the target number may be preset or set as needed, for example I × J, that is, the image is divided into I × J regions. The way the target image is divided is not limited: it may be divided uniformly, so that all sub-image regions have the same size, or non-uniformly, so that the sub-image regions may differ in size. For example, the target image may be represented by a two-dimensional matrix G[M][N], where M is the height of the image and N is the width, and each pixel of the target image is stored in G[M][N]. G[M][N] may then be divided into a plurality of sub-image regions; assuming each sub-image region has height m and width n, the (i, j)-th sub-image region may be represented by G[(i-1)·m : i·m][(j-1)·n : j·n], where 0 < i ≤ I and 0 < j ≤ J.
In some embodiments, the terminal may perform a statistical operation on the pixel brightnesses of the pixels in a sub-image region and use the statistical result as the region brightness of that sub-image region, or select a representative brightness from the pixel brightnesses as the region brightness. For example, the region brightness may be computed by formula (1): A[i][j] = Bright(G[(i-1)·m : i·m][(j-1)·n : j·n]) (1), where Bright denotes the brightness calculation.
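A minimal sketch of this region division and brightness statistic, assuming the target image is a grayscale NumPy array and taking the mean pixel brightness as the Bright operation of formula (1):

```python
import numpy as np

def region_brightness(G: np.ndarray, I: int, J: int) -> np.ndarray:
    """Divide a grayscale image G[M][N] into I x J sub-image regions and
    compute A[i][j] = Bright(region) as the mean pixel brightness.
    The mean is one choice for Bright(); the maximum or minimum pixel
    brightness could be used instead, as the text notes."""
    M, N = G.shape
    m, n = M // I, N // J            # uniform division, as one option
    A = np.empty((I, J))
    for i in range(I):
        for j in range(J):
            A[i, j] = G[i * m:(i + 1) * m, j * n:(j + 1) * n].mean()
    return A

# Example: a synthetic 480x640 image split into 4 x 8 regions.
A = region_brightness(np.random.randint(0, 256, (480, 640)), 4, 8)
print(A.shape)  # (4, 8)
```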
In some embodiments, the region positioning accuracy may be positively correlated with the region brightness; for example, the region brightness may be used directly as the region positioning accuracy, or a brightness adjustment coefficient may be determined, the region brightness adjusted with that coefficient, and the adjusted result used as the region positioning accuracy. The brightness adjustment coefficient may be preset or set as needed.
In some embodiments, the terminal may determine, from the target space, the subspace corresponding to a sub-image region by using a mapping relationship between the two-dimensional image space and the three-dimensional space, and determine the object positioning accuracy of the candidate object positioning results at the position points in that subspace based on the region positioning accuracy of the sub-image region. For example, the region positioning accuracy may be used directly as the object positioning accuracy of the candidate object positioning result at each position point in the corresponding subspace, or the region positioning accuracy may first be adjusted and the adjusted value used as the object positioning accuracy.
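One way such a projection could look; the 2D-to-3D mapping used here (scaling a position point's coordinates into a region index) is purely an illustrative assumption, since the actual mapping depends on the camera geometry of the vehicle:

```python
import numpy as np

def project_accuracy(A: np.ndarray, K: int, P: int, Q: int) -> np.ndarray:
    """Project the per-region accuracy weights A[i][j] onto a K x P x Q
    grid of position points, giving one accuracy per point. The region
    index for a point is derived here by simply scaling its (p, q)
    coordinates -- an assumed stand-in for the real 2D-to-3D mapping."""
    I, J = A.shape
    conf = np.empty((K, P, Q))
    for k in range(K):
        for p in range(P):
            for q in range(Q):
                i = min(p * I // P, I - 1)
                j = min(q * J // Q, J - 1)
                conf[k, p, q] = A[i, j]   # every point in the subspace
    return conf                           # inherits its region's accuracy
```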
In this embodiment, the target image is divided into regions to obtain sub-image regions, the region brightness of each sub-image region is determined so as to obtain each region's positioning accuracy, the subspace corresponding to each sub-image region is determined from the target space corresponding to the target image, and the region positioning accuracy of the sub-image region is used as the object positioning accuracy of the candidate object positioning result corresponding to that subspace. The object positioning accuracy thus corresponds to subspaces within the target space, which refines the object positioning accuracy and improves its accuracy.
In some embodiments, the target sensing data includes object detection data detected by a detection sensor, and the target environment influence data includes an environment attribute data set of the target environment that affects the propagation of the detection sensor's detection medium, the environment attribute data set including a plurality of environment attribute data. Obtaining the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data based on the target environment influence data includes: synthesizing the environment attribute data in the environment attribute data set to obtain target environment state information corresponding to the target environment; and acquiring the positioning accuracy corresponding to the target environment state information based on the correspondence between environment state information and positioning accuracy, and taking that positioning accuracy as the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data.
The detection sensor is a device having a detection function, and may be a laser radar sensor, for example. The object detection data refers to data detected by a detection sensor.
The environment attribute data set may include a plurality of environment attribute data, and the environment attribute data may include at least one of temperature data, humidity data, or magnetic field strength. The environment state information is used to reflect the environment state, which may include at least one of sunny weather, foggy weather, rainy weather, or sandstorm weather, and may also include other weather types. The environment attribute data may include environment data acquired by an environment sensor at a plurality of moments within an acquisition time period; for example, the terminal may acquire the environment data collected by the environment sensor at those moments, form an environment data sequence, and use that sequence as the environment attribute data, where the environment data may include at least one of temperature data or humidity data. The acquisition time period refers to the time period over which environment data are collected, for example the period over which a temperature and humidity sensor collects temperature or humidity data. An environment attribute datum may also be a single numerical value.
Different environment state information can correspond to different positioning accuracy, and the positioning accuracy corresponding to the environment state information can be preset or set according to requirements.
Specifically, the terminal may perform statistical calculation on each environmental attribute data in the environmental attribute data set to obtain statistical attribute data, and determine target environmental state information corresponding to the target environment based on the statistical attribute data. The statistical calculation may include at least one of a weighted calculation, an average calculation, or a sum calculation, for example, the weighted calculation may be performed on each environment attribute data, and a result of the weighted calculation may be used as the statistical attribute data.
In some embodiments, the terminal may determine the weight corresponding to each environment attribute datum and perform a weighted calculation with those weights to obtain the statistical attribute data. For example, when the environment attribute data set includes a temperature data sequence and a humidity data sequence, the terminal may determine the attribute weight of each sequence, perform a weighted calculation on the two sequences, and determine the target environment state information from the result. For example, the target environment state information may be expressed by formula (2): F[L] = α·Temp[L] + β·Mosi[L] (2), where the array Temp[L] is the temperature data sequence, the array Mosi[L] is the humidity data sequence, and the array F[L] is the target environment state information; Temp[L] stores temperature data for a plurality of moments and Mosi[L] stores humidity data for those moments; α is the attribute weight of Temp[L] and β is the attribute weight of Mosi[L]. The temperature data and humidity data may be collected by a temperature and humidity sensor. L is the length of the arrays, i.e., the number of temperature data in Temp[L]. The attribute weights may be preset or set as needed.
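Formula (2) as a short sketch; the weight values α = 0.6 and β = 0.4 below are assumed for illustration:

```python
import numpy as np

def environment_state(temp: np.ndarray, mosi: np.ndarray,
                      alpha: float = 0.6, beta: float = 0.4) -> np.ndarray:
    """F[L] = alpha * Temp[L] + beta * Mosi[L]  (formula (2)).
    alpha and beta are the attribute weights; the defaults here are
    placeholders, since the application leaves them to be preset or tuned."""
    return alpha * temp + beta * mosi

F = environment_state(np.array([21.0, 21.5, 22.0]),
                      np.array([0.55, 0.60, 0.58]))
print(F)
```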
In this embodiment, the positioning accuracy is determined based on the correspondence between environment state information and positioning accuracy, which improves the accuracy of the resulting positioning accuracy.
In some embodiments, the obtaining of the target environment state information corresponding to the target environment by synthesizing the environment attribute data in the environment attribute data set includes: acquiring attribute weights corresponding to the environment attribute values, and performing weighted calculation on the environment attribute values based on the attribute weights to obtain weighted environment attribute values; and counting the weighted environment attribute values corresponding to the environment attribute data set, and taking the counted environment attribute statistical values as target environment state information corresponding to the target environment.
Each environment attribute value may correspond to an attribute weight, and the attribute weight may be preset or determined as needed. The weighted environment attribute value is obtained by performing a weighted calculation, such as a calculation result obtained by performing a product operation, based on the attribute weight and the environment attribute value.
Specifically, the terminal may acquire environmental data acquired by the environmental sensor at multiple moments in an acquisition time period to form an environmental data sequence, perform statistical operation on the environmental data in the environmental data sequence, for example, perform at least one of mean calculation or sum calculation, and use a value obtained by the calculation as an environmental attribute value. Of course, the terminal may use any one of the environmental data in the environmental data sequence as the environmental attribute value, or use the maximum value or the minimum value in the environmental data sequence as the environmental attribute value.
In some embodiments, the terminal may obtain an attribute weight corresponding to the environment attribute value, perform a product operation on the attribute weight and the environment attribute value, and use a result of the product operation as the weighted environment attribute value.
In some embodiments, the terminal may obtain the weighted environment attribute value corresponding to each environment attribute value and perform a statistical operation on the weighted values, for example at least one of a mean operation or a summation operation, to obtain the environment attribute statistic, which is then used as the target environment state information corresponding to the target environment. For example, the terminal may sum the weighted environment attribute values to obtain the environment attribute statistic.
In this embodiment, the attribute weight corresponding to each environment attribute value is obtained, the environment attribute values are weighted with those weights to obtain the weighted environment attribute values, the weighted values corresponding to the environment attribute data set are aggregated, and the resulting environment attribute statistic is used as the target environment state information corresponding to the target environment, which improves the accuracy of the determined environment state information.
In some embodiments, obtaining the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data based on the target environment influence data includes: obtaining the environment positioning precision of a candidate object positioning result corresponding to the target sensing data based on the target environment influence data; acquiring a target position point corresponding to a candidate object positioning result, and acquiring historical feedback positioning accuracy corresponding to the target position point; and obtaining the object positioning precision of the candidate object positioning result corresponding to the target sensing data based on the environment positioning precision and the historical feedback positioning precision.
The environment positioning accuracy is a positioning accuracy calculated from the target environment influence data and may include at least one of the target positioning accuracy or the positioning accuracy corresponding to the target environment state information. For example, when the target sensing data includes the target image, the environment positioning accuracy may include the target positioning accuracy corresponding to the target image; when the target sensing data includes the object detection data, the environment positioning accuracy may include the positioning accuracy corresponding to the target environment state information. Each candidate object positioning result may correspond to a position point in the target space, and a position point in the target space may correspond to candidate object positioning results obtained from a plurality of sensors. The target position point refers to the position point corresponding to the candidate object positioning result.
Each position point in the target space may correspond to a historical feedback positioning accuracy, and the historical feedback positioning accuracies of different position points may be the same or different. The object positioning accuracy is positively correlated with both the environment positioning accuracy and the historical feedback positioning accuracy. The historical feedback positioning accuracy corresponding to a candidate object positioning result refers to the historical feedback positioning accuracy of the position point corresponding to that result.
Specifically, the terminal may obtain the object positioning accuracy of the candidate object positioning result according to the historical feedback positioning accuracy and the environmental positioning accuracy corresponding to the candidate object positioning result, for example, the terminal may perform a product operation on the historical feedback positioning accuracy and the environmental positioning accuracy, and use a result of the product operation as the object positioning accuracy, or may perform an addition operation on the historical feedback positioning accuracy and the environmental positioning accuracy, and use a result of the addition operation as the object positioning accuracy.
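A one-line sketch of the two combinations mentioned above:

```python
def object_positioning_accuracy(env_acc: float, feedback_acc: float,
                                combine: str = "product") -> float:
    """Combine the environment positioning accuracy with the historical
    feedback positioning accuracy; the text allows a product or a sum."""
    return env_acc * feedback_acc if combine == "product" else env_acc + feedback_acc

print(object_positioning_accuracy(0.8, 0.9))  # 0.72
```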
In this embodiment, the environment positioning accuracy reflects the accuracy of the positioning result under the influence of the environment, and the historical feedback positioning accuracy reflects the accuracy of the positioning results over a historical time period. Determining the object positioning accuracy of the candidate object positioning result from both therefore takes into account the influence of the environment as well as the accuracy of historical positioning results, improving the accuracy of the object positioning accuracy.
In some embodiments, obtaining the historical feedback positioning accuracy corresponding to the target location point comprises: obtaining a historical object positioning result corresponding to the target position point, and displaying result presentation information corresponding to the historical object positioning result on information display equipment of the target driving end; and responding to the precision feedback operation on the result presentation information, and adjusting the original positioning precision corresponding to the historical object positioning result based on the precision feedback operation to obtain the historical feedback positioning precision corresponding to the target position point.
The terminal may periodically acquire the target sensing data set, and the acquisition times of the data in each acquired set may differ. The terminal may obtain the target object positioning results corresponding to the position points in the target space, forming a target positioning result set; thus in each period the terminal obtains one target positioning result set, and the target positioning result sets of different periods depend on different target sensing data sets. For example, in the first period, the data collected in that period form that period's target sensing data set, and that period's target positioning result set is obtained from it.
The historical object positioning result refers to the target object positioning result obtained in the period before the current period, where the current period is the period containing the current time. For example, if the period length is 5 minutes and the current period runs from 2:15 to 2:20, the previous period runs from 2:10 to 2:15, and the historical object positioning result is the target object positioning result obtained between 2:10 and 2:15.
The result presentation information is visualized information and may be, for example, a two-dimensional image or a three-dimensional stereogram. It may include the object located by the historical object positioning result and the object's position, and may also include the distance between the object and the target driving end. For example, when the historical object positioning result locates a pedestrian at position A, the result presentation information may present the pedestrian at position A. As shown in fig. 3, the displayed result presentation information shows an autonomous vehicle 301 and a pedestrian 302, together with the text "a pedestrian is present 50 m ahead".
The original positioning accuracy can be set as required, each position point in the target space can correspond to the original positioning accuracy, the original positioning accuracy corresponding to different position points can be the same, for example, all the original positioning accuracy are 1, and of course, the original positioning accuracy corresponding to different position points can also be different.
The accuracy feedback operation is used to trigger an adjustment of the original positioning accuracy to generate the historical feedback positioning accuracy. The accuracy feedback operation may include at least one of a positive feedback operation, indicating that the original positioning accuracy is maintained, or a negative feedback operation, indicating a downward adjustment from the original positioning accuracy. The information display equipment is used to display information at the target driving end.
Specifically, when the terminal acquires an accuracy feedback operation on the displayed result presentation information: if the operation is determined to be a positive feedback operation, the original positioning accuracy may be used as the historical feedback positioning accuracy; if it is determined to be a negative feedback operation, the accuracy may be adjusted downward from the original positioning accuracy, and the adjusted accuracy used as the historical feedback positioning accuracy.
In some embodiments, the terminal may obtain the historical object positioning results corresponding to the position points in the target space, form a historical object positioning result set, generate the visual display information corresponding to the historical object positioning result set, and display each object positioned by the historical object positioning result set and the position where the object is located in the visual display information. The visualization display information may include result presentation information corresponding to each location point.
In some embodiments, when the terminal obtains an accuracy feedback operation on the displayed visual display information, it may determine the feedback region corresponding to the operation and treat each position point in the target space corresponding to that region as a feedback position point. When the accuracy feedback operation is a positive feedback operation, the original positioning accuracy of the feedback position point is used as its historical feedback positioning accuracy; when it is a negative feedback operation, the accuracy is adjusted downward from the original positioning accuracy of the feedback position point, and the adjusted accuracy is used as its historical feedback positioning accuracy.
In some embodiments, accuracy feedback controls may be displayed along with the visual display information; an accuracy feedback control may be a positive feedback control corresponding to the positive feedback operation or a negative feedback control corresponding to the negative feedback operation, and each object may have its own accuracy feedback controls. As shown in fig. 4, objects are displayed on the information display apparatus 400, namely a pedestrian at position A and a tree at position B; the pedestrian at position A corresponds to positive feedback control 403 and negative feedback control 404, and the tree at position B corresponds to positive feedback control 401 and negative feedback control 402. When the information display apparatus 400 receives a trigger operation on 401, it determines that a positive feedback operation on the positioning result at position B has been obtained; when it receives a trigger operation on 402, it determines that a negative feedback operation on the positioning result at position B has been obtained.
In this embodiment, the result presentation information corresponding to the historical object positioning result is displayed, the original positioning accuracy of the historical object positioning result is adjusted based on the accuracy feedback operation on the result presentation information, and the historical feedback positioning accuracy corresponding to the target position point is obtained, so that the feedback of the driver on the accuracy of the object positioning result can be obtained, and thus the historical feedback positioning accuracy can be obtained by using the feedback of the driver on the positioning result.
In some embodiments, in response to an accuracy feedback operation on the result presentation information, adjusting an original positioning accuracy corresponding to the historical object positioning result based on the accuracy feedback operation, and obtaining the historical feedback positioning accuracy corresponding to the target location point includes: determining a feedback precision adjusting direction based on precision feedback operation, and acquiring a feedback precision adjusting parameter corresponding to the feedback precision adjusting direction; and adjusting the original positioning accuracy corresponding to the historical object positioning result by using the feedback accuracy adjustment parameter to obtain the historical feedback positioning accuracy corresponding to the target position point.
Wherein the feedback precision adjustment direction may include at least one of a positive precision adjustment direction or a negative precision adjustment direction. The precision feedback operation corresponding to the positive precision adjustment direction is a positive feedback operation, and the precision feedback operation corresponding to the negative precision adjustment direction is a negative feedback operation.
The feedback precision adjustment parameters can be preset or set according to needs, the feedback precision adjustment directions are different, and the corresponding feedback precision adjustment parameters are also different.
Specifically, the terminal may perform an addition operation on the feedback accuracy adjustment parameter and the original positioning accuracy corresponding to the historical object positioning result, and use a result of the addition operation as the historical feedback positioning accuracy corresponding to the target position point. The feedback precision adjustment parameter corresponding to the positive precision adjustment direction may be a value greater than or equal to 0, for example, 0, and the feedback precision adjustment parameter corresponding to the negative precision adjustment direction may be a value less than 0, for example, minus 0.2.
In some embodiments, the terminal may perform a product operation on the feedback accuracy adjustment parameter and the original positioning accuracy corresponding to the historical object positioning result, and use a result of the product operation as the historical feedback positioning accuracy corresponding to the target location point. The feedback precision adjustment parameter corresponding to the positive precision adjustment direction may be a value greater than or equal to 1, for example, may be 1, and the feedback precision adjustment parameter corresponding to the negative precision adjustment direction may be a value less than 1, for example, may be 0.8.
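A sketch of the multiplicative adjustment, using the example parameter values from the text (1 for the positive direction, 0.8 for the negative):

```python
def historical_feedback_accuracy(original: float, positive: bool,
                                 pos_factor: float = 1.0,
                                 neg_factor: float = 0.8) -> float:
    """Adjust the original positioning accuracy according to the feedback
    direction, using the multiplicative variant described in the text;
    the factors match the examples given (1 and 0.8)."""
    return original * (pos_factor if positive else neg_factor)

print(historical_feedback_accuracy(1.0, positive=False))  # 0.8
```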
In this embodiment, the original positioning accuracy of the historical object positioning result is adjusted by using the feedback accuracy adjustment parameter, and the positioning accuracy can be flexibly adjusted according to the feedback of the driver to the positioning result, so that the accuracy of the obtained historical feedback positioning accuracy is improved.
In some embodiments, selecting the candidate object positioning result satisfying the accuracy condition based on the object positioning accuracy corresponding to the candidate object positioning result, and taking the candidate object positioning result satisfying the accuracy condition as the target object positioning result corresponding to the target driving end, includes: comparing the object positioning accuracy corresponding to the candidate object positioning result with a positioning accuracy threshold to obtain an accuracy comparison result; and taking the candidate object positioning result whose object positioning accuracy is, according to the accuracy comparison result, greater than the positioning accuracy threshold as the target object positioning result corresponding to the target driving end.
The accuracy comparison result is obtained by comparing the object positioning accuracy with a positioning accuracy threshold, and the positioning accuracy threshold can be preset or set as required.
Specifically, the terminal may compare the object positioning accuracy of each candidate object positioning result with the positioning accuracy threshold, obtain the candidate object positioning results whose object positioning accuracy is greater than the positioning accuracy threshold, and use each such result as a selected object positioning result; the terminal then determines the position point corresponding to each selected object positioning result and uses the selected object positioning result as the target object positioning result of that position point. The target object positioning result corresponding to the target driving end includes the target object positioning results of the position points in the target space.
In some embodiments, the terminal may obtain the candidate object positioning result set corresponding to a position point in the target space, where each candidate object positioning result in the set is obtained by positioning analysis of target sensing data collected by a different sensor. The terminal may rank the candidate object positioning results to obtain a candidate object positioning result sequence. The ranking may be based on the sensor type, for example placing the result positioned from camera data before the result positioned from lidar data, or based on the object existence probability in the candidate object positioning results, placing results with larger probabilities first. When determining the target object positioning result for the position point, the terminal takes the current object positioning result from the sequence in ranked order; if its object positioning accuracy is greater than the positioning accuracy threshold, it is used as the target object positioning result and the candidates ranked after it are not considered; if its object positioning accuracy is less than the positioning accuracy threshold, the next candidate in the sequence becomes the new current object positioning result, and the target object positioning result is determined based on the object positioning accuracy of that new current object positioning result.
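A sketch of that ordered selection over a hypothetical ranked candidate list:

```python
from typing import Optional

def select_result(candidates: list[tuple[str, float, object]],
                  threshold: float) -> Optional[object]:
    """Walk the ranked candidate sequence and return the first positioning
    result whose object positioning accuracy exceeds the threshold; later
    candidates are not considered once one qualifies. `candidates` is an
    assumed representation: a ranked list of (sensor, accuracy, result)."""
    for _sensor, accuracy, result in candidates:
        if accuracy > threshold:
            return result
    return None  # no candidate met the accuracy condition

ranked = [("camera", 0.45, "cam_result"), ("lidar", 0.82, "lidar_result")]
print(select_result(ranked, threshold=0.6))  # lidar_result
```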
In this embodiment, the candidate object positioning result with the object positioning accuracy greater than the positioning accuracy threshold is used as the target object positioning result, so that the object positioning result with the higher positioning accuracy can be used as the final positioning result, and the positioning accuracy is improved.
In some embodiments, the target sensory data is sensory data collected by a sensor having an environmental influence level greater than an influence level threshold, the method further comprising: acquiring sensing data acquired by a sensor with the environmental influence degree smaller than the influence degree threshold value for acquiring a target environment where a target driving end is located, and taking the sensing data as additional sensing data; positioning analysis is carried out on the additional sensing data to obtain an additional object positioning result corresponding to the additional sensing data; acquiring sensor positioning accuracy corresponding to a sensor with the environment influence degree smaller than the influence degree threshold; when the precision comparison result shows that the positioning precision of the objects is smaller than the positioning precision threshold, comparing the positioning precision of the sensor with the positioning precision threshold; and when the positioning accuracy of the sensor is greater than the positioning accuracy threshold, taking an additional object positioning result corresponding to the additional sensing data as a target object positioning result corresponding to the target driving end.
The additional sensing data is data collected by a sensor with an environmental influence degree smaller than an influence degree threshold, for example, data collected by a UWB millimeter wave sensor. The additional object localization result is a result of localization based on the additional sensing data. At least one of an object position, an object distance, or an object presence probability may be included in the additional object localization result. Each position point in the target space may correspond to an additional object positioning result, respectively. The object position in the additional object localization result may be represented by the coordinates of the corresponding location point. The sensor positioning accuracy can be preset or determined according to the type of the sensor. The acquisition time of the additional sensing data is consistent with the acquisition time of the target sensing data.
Specifically, the terminal may determine, by using the additional sensing data, the probability that an object exists at each position point in the target space, and thereby obtain the additional object positioning result.
In some embodiments, the terminal may obtain historical feedback positioning accuracy, and use the historical feedback positioning accuracy as the sensor positioning accuracy.
In some embodiments, when the accuracy comparison result is that the object positioning accuracy is less than the positioning accuracy threshold, that is, the object positioning accuracy of each candidate object positioning result in the candidate object positioning result set corresponding to the location point is less than the positioning accuracy threshold, the terminal may determine the target object positioning result based on the additional object positioning result, for example, the additional object positioning result may be used as the target object positioning result, or the sensor positioning accuracy may be compared with the positioning accuracy threshold, and when the sensor positioning accuracy is greater than the positioning accuracy threshold, the additional object positioning result may be used as the target object positioning result.
In this embodiment, when the object positioning accuracy is less than the positioning accuracy threshold, the positioning results from the sensing data of sensors whose environmental influence degree is greater than the influence degree threshold are poor; for example, in a harsh environment, environmental factors introduce heavy interference into the sensing data and degrade the positioning results. In that case, positioning with a sensor whose environmental influence degree is less than the influence degree threshold improves the flexibility and success rate of positioning, so that accurate positioning remains possible even in harsh environments.
In some embodiments, the target object positioning result is an object positioning result corresponding to a position point in a position point set corresponding to the target environment; the method further comprises the following steps: respectively corresponding target object positioning results of each position point in the position point set to form a current positioning result set corresponding to the current time; determining sensors corresponding to target object positioning results corresponding to all position points in the position point set respectively, and establishing a corresponding relation between the target object positioning results and the corresponding sensors; and displaying a current positioning result set corresponding to the current time at the target driving end, and correspondingly displaying the indication information of the sensor corresponding to the target object positioning result on the basis of the corresponding relation.
The current time refers to the time period containing the current moment. The indication information of a sensor identifies that sensor and may, for example, be its name or model. A position point may correspond to only one target object positioning result. Since the target object positioning result of a position point is selected from its candidate object positioning result set, and the candidates in that set come from different sensors, the sensors behind the target object positioning results of different position points may be the same or different. For example, with three sensors (sensor 1, sensor 2, and sensor 3), the target object positioning result of position point A may come from sensor 1 while that of position point B comes from sensor 2. The target object positioning result set is therefore a result obtained by fusing the positioning of multiple sensors, which makes full use of the positioning strengths of those sensors and improves positioning accuracy; the set includes the target object positioning results of all position points in the position point set. Hence the driving end positioning method provided in this application may also be called an automatic driving positioning technology based on multi-data fusion.
Specifically, the terminal may periodically obtain a target sensing data set and use it for positioning, obtaining the target object positioning results of the position points in the position point set, which may be combined into a target positioning result set; thus in each period the terminal obtains one target positioning result set, and the current positioning result set is the one obtained in the period containing the current time. After obtaining the target positioning result set in each period, the terminal may generate visual display information from it, for example a two-dimensional image or a three-dimensional stereogram. The visual display information may include the positioned objects and may also include the indication information of the sensors; each piece of indication information has a corresponding position in the visual display information, called its indication position, and the positioning result at an indication position was positioned from data collected by the sensor identified by the indication information at that position. The sensor on which the positioning at the current time mainly depends can thus be determined intuitively from the indication information.
In some embodiments, the terminal may obtain a sensor corresponding to the target object positioning result corresponding to the location point, and use the sensor as a target sensor to establish a corresponding relationship between the target sensor and the target object positioning result, and may display, based on the corresponding relationship, indication information of the corresponding target sensor at a location corresponding to the target object positioning result when displaying the visual display information.
In this embodiment, the correspondence between each target object positioning result and its sensor is established, and the indication information of that sensor is displayed with the result, so that the driver can see which sensor produced each positioning result, intuitively identify the sensors on which the positioning at the current time mainly depends, and spot sensors with lower current positioning accuracy, thereby visually reflecting the positioning accuracy of each sensor's data.
The application scenario comprises an autonomous vehicle and one or more sensing modules, where the sensing modules include a UWB millimeter wave sensor, a camera, a lidar, and a temperature and humidity sensor. The UWB millimeter wave sensor, the camera, and the lidar collect driving positioning data, and the different sensing data are later fused to achieve high-precision positioning across multiple scenes. The temperature and humidity sensor judges the environment and weather so as to evaluate the positioning accuracy of the different sensors. Specifically, as shown in fig. 7, the driving end positioning method is applied to this application scenario as follows:
s702, acquiring monitoring video data acquired by a camera in the automatic driving automobile, acquiring radar reflection data acquired by a laser radar sensor in the automatic driving automobile, acquiring UWB reflection data acquired by a UWB millimeter wave sensor in the automatic driving automobile, and acquiring temperature and humidity sensing data acquired by a temperature and humidity sensor in the automatic driving automobile.
As shown in fig. 6, the terminal may include a joint confidence data analysis system, a positioning area accuracy evaluation module, and a positioning result output module, where the joint confidence data analysis system acquires temperature and humidity sensing data, monitoring video data, radar reflection data, and UWB reflection data. The temperature and humidity sensing data comprises temperature data and humidity data. Various sensors on the autonomous vehicle simultaneously begin collecting data about the vehicle surroundings and continue to send the collected data to the joint confidence analysis system.
The sensors simultaneously acquire sensing data around the autonomous vehicle and send them to the joint confidence data analysis system, which evaluates the confidence of the output accuracy after the data are input into a positioning algorithm. For surveillance video, accuracy is high when the illumination intensity is high, so the positioning accuracy of different positions can be evaluated from the brightness of the pixels in the video stream. For lidar reflection data, positioning accuracy is low in heavy fog, heavy rain, sandstorms, and similar weather, so the current weather, judged from the temperature and humidity sensor together with the video data, is used to estimate the positioning accuracy of different areas. UWB reflection data has low angular positioning accuracy and serves as a supplement to higher-precision sensors such as the camera and the lidar sensor. After the positioning accuracy of each sensor in each area is evaluated, the positioning output is selected according to the positioning accuracy of the different sensing data. In addition, the driver's manual feedback can occasionally assist in evaluating the positioning accuracy of the sensors in different areas, and the weight given to the driver's feedback can be adjusted as needed. The output positioning result is fed back to the automatic driving system to adjust the driving strategy, and positioning is performed in real time. The driver can evaluate the output positioning result; for example, when the positioning result deviates in judging an obstacle, a turn, or a pedestrian, the driver can feed back and thereby train the system.
The joint confidence data analysis system performs different data processing depending on the sensor. For the temperature and humidity sensor, temperature data of the environment are continuously acquired and stored in the array Temp[L], humidity data are continuously acquired and stored in the array Mosi[L], the weights α and β of temperature and humidity are given, and the environment state at the time of positioning is predicted, for example by computing the environment state information with formula (2) and determining the environment state from that information.
For the surveillance camera, the surveillance video content is continuously collected; the pixel array of each image is stored in an M × N two-dimensional matrix G[M][N], the image is further divided into I × J areas, brightness analysis is performed on each area, and a video positioning-accuracy weight is assigned according to each area's brightness, giving the per-area accuracy-weight array A[I][J], computed according to formula (1). The video stream is input to a positioning algorithm, which outputs, for the three-dimensional space B_V[k][p][q] centered on the car, whether an obstacle exists and its distance; each element of the three-dimensional array B_V contains 2 components, the probability P that an obstacle exists and the positioning distance D of the obstacle. The two-dimensional accuracy weights are then reconstructed in three dimensions and projected to the three-dimensional positioning-accuracy weights Conf_V[k][p][q] by formula (3): Conf_V[k][p][q] = hassoure(B_V[k][p][q], A[i][j]) (3), where hassoure denotes the three-dimensional reconstruction of the two-dimensional accuracy weights, the mapping follows the two-dimensional input area on which each three-dimensional positioning area depends, and 0 < k < K, 0 < p < P, 0 < q < Q. The three-dimensional positioning-accuracy weight Conf_V[k][p][q] contains the positioning accuracy of each position point.
If the sensor is a lidar sensor, lidar data are continuously collected and stored in the array R[L], and the lidar positioning-accuracy confidence Ra[I][J] is set according to the temperature-and-humidity detection result F[L]. The lidar data R[L] are input to a positioning algorithm, which outputs, for the three-dimensional space B_R[k][p][q] centered on the car, whether an obstacle exists and its distance. Each element of the three-dimensional array B_R contains 2 components, the probability P of an obstacle and the positioning distance D of the obstacle. The two-dimensional accuracy weights are reconstructed in three dimensions and projected to the three-dimensional positioning-accuracy weights Conf_R[k][p][q], again by mapping according to the two-dimensional input area on which each three-dimensional positioning area depends.
If the sensor is a UWB millimeter wave sensor, UWB data U[L] are continuously acquired and input directly to the final positioning algorithm, which outputs, for the three-dimensional space B_U[k][p][q] centered on the car, whether an obstacle exists and its distance. Each element of the three-dimensional array B_U contains 2 components, the probability P of an obstacle and the positioning distance D of the obstacle.
The reliability of the positioning results of the different sensors in the different areas is judged jointly according to each sensor's positioning-accuracy confidence and the area positioning accuracy D[k][p][q] fed back by the driver. For the final positioning result L[k][p][q], a reliability threshold T is given, where T is the positioning accuracy threshold and L[k][p][q] is the target object positioning result set described above. The indices are initialized as k = 0, p = 0, q = 0, i.e., the target space is traversed from coordinate (0, 0, 0), and the following steps are performed (a code sketch of the traversal is given after the list):
a) determining whether Conf_V[k][p][q] · D[k][p][q] is greater than T; if so, letting L[k][p][q] = B_V[k][p][q] and jumping to step d); otherwise proceeding to step b);
b) determining whether Conf_R[k][p][q] · D[k][p][q] is greater than T; if so, letting L[k][p][q] = B_R[k][p][q] and jumping to step d); otherwise proceeding to step c);
c) determining whether D[k][p][q] is greater than T; if so, letting L[k][p][q] = B_U[k][p][q] and jumping to step d); otherwise setting L[k][p][q] to "cannot be judged" and jumping to step d);
d) incrementing k by 1 and proceeding to step e);
e) if k > K, setting k to 0, incrementing p by 1, and jumping to step f); otherwise jumping to step a);
f) if p > P, setting p to 0, incrementing q by 1, and jumping to step g); otherwise jumping to step a);
g) if q > Q, outputting the current positioning result and displaying it for the driver to check, allowing the driver to assist in judging whether the positioning is accurate, storing the driver's feedback on the positioning result in the feedback array D[k][p][q], and jumping back to the data-acquisition step to repeat collection and positioning; otherwise jumping to step a).
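A compact sketch of steps a) to g), assuming the accuracy weights and positioning outputs are held in NumPy arrays; the explicit index stepping of steps d) to f) is folded into nested loops, which keeps the per-point decision order of steps a) to c) unchanged:

```python
import numpy as np

UNDECIDED = None  # stands in for the "cannot be judged" marker

def fuse(conf_v, conf_r, d, b_v, b_r, b_u, t):
    """Traverse the K x P x Q target space and pick, per position point,
    the first sensor result whose (confidence x driver-feedback) weight
    exceeds the reliability threshold t: camera first, then lidar, then
    UWB as the fallback, as in steps a) to c)."""
    K, P, Q = d.shape
    L = np.empty((K, P, Q), dtype=object)
    for k in range(K):
        for p in range(P):
            for q in range(Q):
                if conf_v[k, p, q] * d[k, p, q] > t:      # step a)
                    L[k, p, q] = b_v[k, p, q]
                elif conf_r[k, p, q] * d[k, p, q] > t:    # step b)
                    L[k, p, q] = b_r[k, p, q]
                elif d[k, p, q] > t:                      # step c)
                    L[k, p, q] = b_u[k, p, q]
                else:
                    L[k, p, q] = UNDECIDED
    return L
```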
S704, determining a target space where the automatic driving automobile is located.
For example, the size of the target space may be K × P × Q, in which case the target space includes K × P × Q position points.
S706, positioning and analyzing the monitoring video data to obtain first obstacle positioning results corresponding to all position points in the target space, positioning and analyzing radar reflection data to obtain second obstacle positioning results corresponding to all position points in the target space, positioning and analyzing UWB reflection data to obtain third obstacle positioning results corresponding to all position points in the target space.
S708, dividing a target image in the monitoring video data to obtain a plurality of sub-image regions corresponding to the target image, performing brightness statistics on pixel point brightness of each sub-image region to obtain region brightness corresponding to each sub-image region, taking the region brightness corresponding to the sub-image region as region positioning precision corresponding to the sub-image region, determining a subspace corresponding to the sub-image region from a target space corresponding to the target image, and taking the region positioning precision corresponding to the sub-image region as first environment positioning precision of a first obstacle positioning result corresponding to each position point in the subspace.
S710, obtaining a weighted value of the temperature data and a weighted value of the humidity data, performing weighted calculation on the temperature data and the humidity data to obtain target environment state information, obtaining a positioning accuracy corresponding to the target environment state information based on a corresponding relationship between the environment state information and the positioning accuracy, and using the positioning accuracy corresponding to the target environment state information as a second environment positioning accuracy corresponding to a second obstacle positioning result.
And S712, acquiring the precision of manual feedback of the driver, and determining the historical feedback positioning precision corresponding to each position point.
As shown in fig. 6, the positioning region accuracy evaluation module acquires the accuracy of the manual feedback of the driver.
S714, sequentially acquiring a current position point from the target space.
The current position point is any one position point in the target space.
And S716, multiplying the historical feedback positioning accuracy at the current position point by the first environment positioning accuracy corresponding to the first obstacle positioning result at the current position point, and taking the result of the multiplication as the obstacle positioning accuracy of the first obstacle positioning result at the current position point.
And S718, multiplying the historical feedback positioning accuracy corresponding to the current position point by the second environment positioning accuracy corresponding to the second obstacle positioning result at the current position point, and taking the result of the multiplication as the obstacle positioning accuracy of the second obstacle positioning result at the current position point.
S720, comparing the obstacle positioning accuracy of the first obstacle positioning result at the current position point with a positioning accuracy threshold.
S722, determining whether the obstacle positioning accuracy of the first obstacle positioning result at the current position point is greater than the positioning accuracy threshold, if so, performing step S724, and if not, performing step S726.
And S724, taking the first obstacle positioning result corresponding to the current position point as a target obstacle positioning result corresponding to the current position point.
And S726, comparing the obstacle positioning accuracy of the second obstacle positioning result at the current position point with the positioning accuracy threshold.
And S728, determining whether the obstacle positioning accuracy of the second obstacle positioning result at the current position point is greater than the positioning accuracy threshold, if so, executing step S730, and if not, executing step S732.
And S730, taking the second obstacle positioning result corresponding to the current position point as a target obstacle positioning result corresponding to the current position point.
And S732, comparing the historical feedback positioning accuracy corresponding to the current position point with a positioning accuracy threshold.
S734, determine whether the historical feedback positioning accuracy corresponding to the current location point is greater than the positioning accuracy threshold, if yes, execute step S736, otherwise execute step S738.
And S736, taking the third obstacle positioning result corresponding to the current position point as the target obstacle positioning result of the current position point.
S738, when the historical feedback positioning accuracy corresponding to the current position point is less than the positioning accuracy threshold, it may be determined that the positioning result of the current position point is "unable to be determined".
S740 determines whether each position point in the target space has been traversed, if not, returns to step S714, and if so, executes step S742.
And obtaining target obstacle positioning results corresponding to the position points respectively once the position points in the target space have been traversed.
And S742, outputting the target obstacle positioning results corresponding to the position points respectively.
As shown in fig. 6, the positioning result output module outputs the target obstacle positioning results, and the driver can provide feedback on the output results.
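Putting steps S714 to S740 together, a minimal sketch of the per-point selection cascade might look as follows; the threshold value, function names, and result placeholders are illustrative assumptions, not values fixed by this embodiment.

```python
# Hypothetical sketch of the per-point cascade in steps S714-S740.
UNABLE_TO_DETERMINE = "unable to be determined"

def select_target_result(first, second, third,
                         first_acc, second_acc, feedback_acc,
                         threshold=0.7):
    """Pick the target obstacle positioning result for one position point."""
    if first_acc > threshold:      # S720/S722 -> S724
        return first
    if second_acc > threshold:     # S726/S728 -> S730
        return second
    if feedback_acc > threshold:   # S732/S734 -> S736
        return third
    return UNABLE_TO_DETERMINE     # S738

def traverse_target_space(points, per_point_inputs):
    # S714/S740: visit every position point in the target space in turn
    # and collect the target obstacle positioning result for each.
    # per_point_inputs[p] holds the keyword arguments for one point.
    return {p: select_target_result(**per_point_inputs[p]) for p in points}
```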
In this embodiment, data from multiple sensors are fused and the positioning strengths of the individual sensors are combined, which improves the overall positioning accuracy of automatic driving and gives the automatic driving vehicle adaptability to a variety of scenes and environments. High-precision positioning can thus be obtained even in severe weather, which reduces the probability of the vehicle being struck and of the driver being injured, improves safety, lowers the potential maintenance cost of the vehicle, and extends its service life.
It should be understood that although the steps in the flowcharts of figs. 2-7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2-7 may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In some embodiments, as shown in fig. 8, there is provided a driving end positioning apparatus, which may be implemented as part of a computer device using software modules, hardware modules, or a combination of the two, and which specifically includes: a target sensing data set obtaining module 802, a candidate object positioning result obtaining module 804, a target environment influence data obtaining module 806, an object positioning accuracy obtaining module 808, and a target object positioning result obtaining module 810, wherein:
a target sensing data set obtaining module 802, configured to acquire a target sensing data set obtained by collecting data from the target environment where the target driving end is located; the target sensing data set comprises target sensing data acquired by a plurality of sensors respectively;
a candidate object positioning result obtaining module 804, configured to perform positioning analysis on each piece of target sensing data, so as to obtain the candidate object positioning result corresponding to each piece of target sensing data;
a target environment influence data obtaining module 806, configured to obtain target environment influence data corresponding to the sensor when the sensor corresponding to the target sensing data collects the target sensing data;
an object positioning accuracy obtaining module 808, configured to obtain, based on the target environment influence data, object positioning accuracy of a candidate object positioning result corresponding to the target sensing data;
and a target object positioning result obtaining module 810, configured to select a candidate object positioning result meeting the accuracy condition based on the object positioning accuracy corresponding to each candidate object positioning result, and to use the candidate object positioning result meeting the accuracy condition as the target object positioning result corresponding to the target driving end.
In some embodiments, the target sensing data includes a target image captured by a capturing sensor, the target environment influence data includes the image brightness of the target image, and the object positioning accuracy obtaining module includes: a target positioning precision obtaining unit, configured to calculate the target positioning precision based on the image brightness of the target image, the target positioning precision forming a positive correlation with the brightness; and a target space acquisition unit, configured to acquire a target space corresponding to the target image and take the target positioning precision as the object positioning precision of the candidate object positioning result corresponding to the target space.
In some embodiments, the target positioning accuracy comprises a region positioning accuracy corresponding to each sub-image region of the target image; the target positioning precision obtaining unit is also used for carrying out region division on the target image to obtain a plurality of sub-image regions corresponding to the target image; respectively carrying out brightness statistics on the pixel point brightness of each sub-image area to obtain the area brightness corresponding to each sub-image area; obtaining the area positioning precision corresponding to the sub-image area based on the area brightness corresponding to the sub-image area; the target space obtaining unit is further configured to determine a subspace corresponding to the sub-image region from a target space corresponding to the target image, and use the region positioning accuracy corresponding to the sub-image region as the object positioning accuracy of the candidate object positioning result corresponding to the subspace.
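As a hedged illustration of the region-based variant above, the sketch below divides a grayscale target image into a grid of sub-image regions, averages the pixel brightness per region, and maps brightness to a region positioning precision that grows with brightness. The 4x4 grid, the normalization by 255, and the identity mapping are assumptions made for this example; any monotonically increasing mapping would satisfy the stated positive correlation.

```python
import numpy as np

# Hypothetical sketch: per-region brightness statistics mapped to a
# region positioning precision that increases with brightness.
def region_positioning_precision(image: np.ndarray,
                                 rows: int = 4, cols: int = 4) -> np.ndarray:
    h, w = image.shape[:2]
    precision = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            region = image[i * h // rows:(i + 1) * h // rows,
                           j * w // cols:(j + 1) * w // cols]
            region_brightness = region.mean() / 255.0  # region brightness
            precision[i, j] = region_brightness        # positive correlation
    return precision

# Example with a random 8-bit grayscale image:
acc = region_positioning_precision(
    np.random.randint(0, 256, (480, 640), dtype=np.uint8))
```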
In some embodiments, the target sensing data includes object detection data detected by a detection sensor, and the target environment influence data includes an environment attribute data set describing how the target environment influences the detection substance transmitted by the detection sensor; the environment attribute data set includes a plurality of pieces of environment attribute data, and the object positioning accuracy obtaining module includes: a target environment state information obtaining unit, configured to synthesize the environment attribute data in the environment attribute data set to obtain target environment state information corresponding to the target environment; and an object positioning precision obtaining unit, configured to obtain, based on the correspondence between environment state information and positioning precision, the positioning precision corresponding to the target environment state information as the object positioning precision of the candidate object positioning result corresponding to the target sensing data.
In some embodiments, the environment attribute data is an environment attribute numerical value, and the target environment state information obtaining unit is further configured to obtain an attribute weight corresponding to each environment attribute numerical value and perform a weighted calculation on the environment attribute numerical values based on the attribute weights to obtain weighted environment attribute numerical values; and to aggregate the weighted environment attribute numerical values corresponding to the environment attribute data set and take the resulting environment attribute statistic as the target environment state information corresponding to the target environment.
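The weighted computation and the state-to-precision lookup might be sketched as follows; the attribute names (rain, fog), the weights, and the lookup table entries are illustrative assumptions and do not appear in this application.

```python
# Hypothetical sketch: weight each environment attribute value, sum the
# weighted values into one environment state statistic, then look the
# statistic up in a state -> positioning precision table.
def environment_positioning_precision(attributes, weights, precision_table):
    state = sum(weights[name] * value for name, value in attributes.items())
    # precision_table holds (state upper bound, precision) pairs in
    # ascending order of the upper bound.
    for upper_bound, precision in precision_table:
        if state <= upper_bound:
            return precision
    return precision_table[-1][1]

acc = environment_positioning_precision(
    {"rain": 0.3, "fog": 0.1},     # environment attribute numerical values
    {"rain": 0.6, "fog": 0.4},     # attribute weights
    [(0.2, 0.9), (0.5, 0.6), (1.0, 0.3)],
)
```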
In some embodiments, the object location accuracy deriving module comprises: the environment positioning precision obtaining unit is used for obtaining the environment positioning precision of a candidate object positioning result corresponding to the target sensing data based on the target environment influence data; a historical feedback positioning accuracy obtaining unit, configured to obtain a target position point corresponding to the candidate object positioning result, and obtain a historical feedback positioning accuracy corresponding to the target position point; and the object positioning precision obtaining unit is used for obtaining the object positioning precision of the candidate object positioning result corresponding to the target sensing data based on the environment positioning precision and the historical feedback positioning precision.
In some embodiments, the historical feedback positioning accuracy obtaining unit is further configured to obtain a historical object positioning result corresponding to the target location point, and display result presentation information corresponding to the historical object positioning result on information display equipment at the target driving end; and responding to the precision feedback operation on the result presentation information, and adjusting the original positioning precision corresponding to the historical object positioning result based on the precision feedback operation to obtain the historical feedback positioning precision corresponding to the target position point.
In some embodiments, the historical feedback positioning accuracy obtaining unit is further configured to determine a feedback accuracy adjustment direction based on the accuracy feedback operation, and obtain a feedback accuracy adjustment parameter corresponding to the feedback accuracy adjustment direction; and adjusting the original positioning accuracy corresponding to the historical object positioning result by using the feedback accuracy adjustment parameter to obtain the historical feedback positioning accuracy corresponding to the target position point.
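A minimal sketch of this feedback-based adjustment, under the assumption that the feedback operation reduces to a direction (raise or lower) and a fixed adjustment parameter, might be:

```python
# Hypothetical sketch: the driver's feedback fixes an adjustment
# direction, and the corresponding adjustment parameter is applied to
# the original positioning precision. The step size is illustrative.
def adjust_historical_precision(original: float, feedback_positive: bool,
                                step: float = 0.05) -> float:
    delta = step if feedback_positive else -step
    # Keep the historical feedback positioning precision within [0, 1].
    return min(1.0, max(0.0, original + delta))
```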
In some embodiments, the target object positioning result obtaining module includes: an accuracy comparison result obtaining unit, configured to compare the object positioning accuracy corresponding to the candidate object positioning result with the positioning accuracy threshold to obtain an accuracy comparison result; and a target object positioning result obtaining unit, configured to take the candidate object positioning result whose object positioning accuracy, according to the accuracy comparison result, is greater than the positioning accuracy threshold as the target object positioning result corresponding to the target driving end.
In some embodiments, the target sensing data is sensing data collected by a sensor whose environmental influence degree is greater than an influence degree threshold, and the apparatus further comprises: an additional sensing data obtaining module, configured to obtain sensing data collected from the target environment where the target driving end is located by a sensor whose environmental influence degree is less than the influence degree threshold, the collected sensing data serving as additional sensing data; an additional object positioning result obtaining module, configured to perform positioning analysis on the additional sensing data to obtain an additional object positioning result corresponding to the additional sensing data; a sensor positioning accuracy obtaining module, configured to obtain the sensor positioning accuracy corresponding to the sensor whose environmental influence degree is less than the influence degree threshold; an accuracy comparison module, configured to compare the sensor positioning accuracy with the positioning accuracy threshold when the accuracy comparison result indicates that the object positioning accuracy is less than the positioning accuracy threshold; and a target object positioning result determining module, configured to take the additional object positioning result corresponding to the additional sensing data as the target object positioning result corresponding to the target driving end when the sensor positioning accuracy is greater than the positioning accuracy threshold.
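The fallback path described above might be sketched as follows; the function and parameter names and the threshold value are illustrative assumptions.

```python
# Hypothetical sketch: when the candidate from an environment-sensitive
# sensor falls below the positioning accuracy threshold, consult a
# sensor whose environmental influence degree is below the influence
# degree threshold and use its result if its own accuracy suffices.
def resolve_with_fallback(candidate, candidate_accuracy,
                          additional_result, additional_sensor_accuracy,
                          threshold=0.7):
    if candidate_accuracy > threshold:
        return candidate
    if additional_sensor_accuracy > threshold:
        return additional_result
    return None  # no result satisfies the accuracy condition
```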
In some embodiments, the target object positioning result is the object positioning result corresponding to a position point in the position point set corresponding to the target environment, and the apparatus further includes: a current positioning result set forming module, configured to form the target object positioning results corresponding to the respective position points in the position point set into a current positioning result set corresponding to the current time; a correspondence establishing module, configured to determine the sensor corresponding to the target object positioning result of each position point in the position point set and establish a correspondence between each target object positioning result and the corresponding sensor; and a display module, configured to display, at the target driving end, the current positioning result set corresponding to the current time and, based on the correspondence, display the indication information of the sensor corresponding to each target object positioning result.
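One possible shape for the result set and its result-to-sensor correspondence is sketched below; the field names and sample values are illustrative assumptions, not data from this application.

```python
from dataclasses import dataclass

# Hypothetical sketch of one entry of the current positioning result
# set together with the sensor that produced the result.
@dataclass
class PointResult:
    position_point: tuple      # a position point in the target environment
    target_object_result: str  # e.g. "obstacle" / "no obstacle"
    sensor: str                # the sensor that produced this result

current_result_set = [
    PointResult((12.0, 3.5), "obstacle", "lidar"),
    PointResult((14.0, 3.5), "no obstacle", "camera"),
]
# The target driving end displays each result together with an
# indication of the sensor it came from, based on this correspondence.
```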
For the specific definition of the driving end positioning apparatus, reference may be made to the definition of the driving end positioning method above, and details are not repeated here. All or some of the modules in the driving end positioning apparatus may be implemented by software, by hardware, or by a combination of the two. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In some embodiments, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 9. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be implemented through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a driving end positioning method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad provided on the housing of the computer device, or an external keyboard, touch pad, or mouse.
In some embodiments, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing the data involved in the driving end positioning method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a driving end positioning method.
Those skilled in the art will appreciate that the structures shown in figs. 9 and 10 are merely block diagrams of partial structures related to the solution of the present application and do not limit the computer devices to which the solution of the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In some embodiments, there is further provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above method embodiments when executing the computer program.
In some embodiments, a computer-readable storage medium is provided, in which a computer program is stored; the computer program, when executed by a processor, implements the steps of the above method embodiments. There is also provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the above driving end positioning method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described, but all such combinations should be considered to fall within the scope of this specification as long as they are not contradictory.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A driving end positioning method, the method comprising:
acquiring a target sensing data set acquired by collecting a target environment where a target driving end is located; the target sensing data set comprises target sensing data acquired by a plurality of sensors respectively;
respectively carrying out positioning analysis on the target sensing data to obtain candidate object positioning results corresponding to the target sensing data;
acquiring target environment influence data corresponding to a sensor when the sensor corresponding to the target sensing data acquires the target sensing data;
obtaining the object positioning precision of a candidate object positioning result corresponding to the target sensing data based on the target environment influence data;
and selecting the candidate object positioning result meeting the precision condition based on the object positioning precision corresponding to the candidate object positioning result, and taking the candidate object positioning result meeting the precision condition as the target object positioning result corresponding to the target driving end.
2. The method of claim 1, wherein the target sensing data comprises a target image captured by a capturing sensor, the target environment influence data comprises image brightness of the target image, and the obtaining of the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data based on the target environment influence data comprises:
calculating to obtain target positioning precision based on the image brightness of the target image, wherein the target positioning precision and the brightness form a positive correlation;
and acquiring a target space corresponding to the target image, and taking the target positioning precision as the object positioning precision of the candidate object positioning result corresponding to the target space.
3. The method according to claim 2, wherein the target positioning precision comprises the region positioning precision corresponding to each sub-image region of the target image, and the calculating of the target positioning precision based on the image brightness of the target image comprises the following steps:
performing region division on the target image to obtain a plurality of sub-image regions corresponding to the target image;
performing brightness statistics on the pixel brightness of each sub-image region to obtain the region brightness corresponding to each sub-image region;
and obtaining the region positioning precision corresponding to each sub-image region based on the region brightness corresponding to that sub-image region;
and the acquiring of the target space corresponding to the target image and the taking of the target positioning precision as the object positioning precision of the candidate object positioning result corresponding to the target space comprises:
determining a subspace corresponding to each sub-image region from the target space corresponding to the target image, and taking the region positioning precision corresponding to the sub-image region as the object positioning precision of the candidate object positioning result corresponding to the subspace.
4. The method of claim 1, wherein the target sensing data comprises object detection data detected by a detection sensor, the target environment influence data comprises an environment attribute data set describing how the target environment influences the detection substance transmitted by the detection sensor, and the environment attribute data set comprises a plurality of pieces of environment attribute data; the obtaining of the object positioning accuracy of the candidate object positioning result corresponding to the target sensing data based on the target environment influence data comprises:
synthesizing the environment attribute data in the environment attribute data set to obtain target environment state information corresponding to the target environment;
and acquiring the positioning precision corresponding to the target environmental state information based on the corresponding relation between the environmental state information and the positioning precision, wherein the positioning precision is used as the object positioning precision of the candidate object positioning result corresponding to the target sensing data.
5. The method according to claim 4, wherein the environment attribute data is an environment attribute numerical value, and the obtaining the target environment state information corresponding to the target environment by integrating the environment attribute data in the environment attribute data set comprises:
acquiring an attribute weight corresponding to the environment attribute numerical value, and performing weighted calculation on the environment attribute numerical value based on the attribute weight to obtain a weighted environment attribute numerical value;
and aggregating the weighted environment attribute values corresponding to the environment attribute data set, and taking the resulting environment attribute statistic as the target environment state information corresponding to the target environment.
6. The method of claim 1, wherein obtaining the object location accuracy of the candidate object location result corresponding to the target sensing data based on the target environment influence data comprises:
obtaining the environment positioning precision of a candidate object positioning result corresponding to the target sensing data based on the target environment influence data;
acquiring a target position point corresponding to the candidate object positioning result, and acquiring historical feedback positioning accuracy corresponding to the target position point;
and obtaining the object positioning precision of the candidate object positioning result corresponding to the target sensing data based on the environment positioning precision and the historical feedback positioning precision.
7. The method of claim 6, wherein the obtaining the historical feedback positioning accuracy corresponding to the target location point comprises:
obtaining a historical object positioning result corresponding to the target position point, and displaying result presentation information corresponding to the historical object positioning result on information display equipment of the target driving end;
and responding to the precision feedback operation of the result presentation information, and adjusting the original positioning precision corresponding to the historical object positioning result based on the precision feedback operation to obtain the historical feedback positioning precision corresponding to the target position point.
8. The method of claim 7, wherein the adjusting the original positioning accuracy corresponding to the historical object positioning result based on the accuracy feedback operation in response to the accuracy feedback operation on the result presentation information to obtain the historical feedback positioning accuracy corresponding to the target location point comprises:
determining a feedback precision adjusting direction based on the precision feedback operation, and acquiring a feedback precision adjusting parameter corresponding to the feedback precision adjusting direction;
and adjusting the original positioning accuracy corresponding to the historical object positioning result by using the feedback accuracy adjustment parameter to obtain the historical feedback positioning accuracy corresponding to the target position point.
9. The method according to claim 1, wherein the selecting, based on the object positioning precision corresponding to the candidate object positioning result, of a candidate object positioning result satisfying the precision condition, and the taking of the candidate object positioning result satisfying the precision condition as the target object positioning result corresponding to the target driving end, comprises:
comparing the object positioning precision corresponding to the candidate object positioning result with a positioning precision threshold to obtain a precision comparison result;
and taking the candidate object positioning result whose object positioning precision, according to the precision comparison result, is greater than the positioning precision threshold as the target object positioning result corresponding to the target driving end.
10. The method of claim 9, wherein the target sensing data is sensing data collected by a sensor whose environmental influence degree is greater than an influence degree threshold, the method further comprising:
acquiring sensing data collected from the target environment where the target driving end is located by a sensor whose environmental influence degree is less than the influence degree threshold, the sensing data serving as additional sensing data;
performing positioning analysis on the additional sensing data to obtain an additional object positioning result corresponding to the additional sensing data;
acquiring the sensor positioning precision corresponding to the sensor whose environmental influence degree is less than the influence degree threshold;
when the precision comparison result indicates that the object positioning precision is less than the positioning precision threshold, comparing the sensor positioning precision with the positioning precision threshold;
and when the sensor positioning precision is greater than the positioning precision threshold, taking the additional object positioning result corresponding to the additional sensing data as the target object positioning result corresponding to the target driving end.
11. The method according to claim 1, wherein the target object positioning result is an object positioning result corresponding to a position point in a position point set corresponding to the target environment; the method further comprises the following steps:
forming the target object positioning results respectively corresponding to the position points in the position point set into a current positioning result set corresponding to the current time;
determining sensors corresponding to target object positioning results respectively corresponding to each position point in the position point set, and establishing a corresponding relation between the target object positioning results and the corresponding sensors;
and displaying the current positioning result set corresponding to the current time at the target driving end, and correspondingly displaying the indication information of the sensor corresponding to the target object positioning result on the basis of the corresponding relation.
12. A driving end positioning apparatus, the apparatus comprising:
the target sensing data set acquisition module is used for acquiring a target sensing data set acquired by acquiring a target environment where a target driving end is located; the target sensing data set comprises target sensing data acquired by a plurality of sensors respectively;
a candidate object positioning result obtaining module, configured to perform positioning analysis on the target sensing data respectively to obtain candidate object positioning results corresponding to the target sensing data respectively;
the target environment influence data acquisition module is used for acquiring target environment influence data corresponding to the sensor when the sensor corresponding to the target sensing data acquires the target sensing data;
an object positioning accuracy obtaining module, configured to obtain, based on the target environment influence data, object positioning accuracy of a candidate object positioning result corresponding to the target sensing data;
and the target object positioning result obtaining module is used for selecting and obtaining a candidate object positioning result meeting the precision condition based on the object positioning precision corresponding to the candidate object positioning result, and taking the candidate object positioning result meeting the precision condition as a target object positioning result corresponding to the target driving end.
13. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor realizes the steps of the method of any one of claims 1 to 11 when executing the computer program.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 11 when executed by a processor.
CN202111105490.8A 2021-09-22 2021-09-22 Driving end positioning method, device, computer equipment and storage medium Active CN113790761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111105490.8A CN113790761B (en) 2021-09-22 2021-09-22 Driving end positioning method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113790761A true CN113790761A (en) 2021-12-14
CN113790761B CN113790761B (en) 2023-08-04

Family

ID=78878999


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007057373A (en) * 2005-08-24 2007-03-08 Denso Corp Vehicle-mounted navigation system
CN104199023A (en) * 2014-09-15 2014-12-10 南京大学 RFID indoor positioning system based on depth perception and operating method thereof
CN108254199A (en) * 2017-12-08 2018-07-06 泰康保险集团股份有限公司 Vehicle health Forecasting Methodology, device and equipment
CN108828645A (en) * 2018-06-28 2018-11-16 郑州云海信息技术有限公司 A kind of navigation locating method, system, equipment and computer readable storage medium
US20190084618A1 (en) * 2016-03-18 2019-03-21 Kyocera Corporation Parking assistance apparatus, on-vehicle camera, vehicle, and parking assistance method
CN110228436A (en) * 2019-06-27 2019-09-13 深圳市元征科技股份有限公司 A kind of vehicle drive parameter adjusting method, system, equipment and computer media
CN110556012A (en) * 2019-09-16 2019-12-10 北京百度网讯科技有限公司 Lane positioning method and vehicle positioning system
CN112417967A (en) * 2020-10-22 2021-02-26 腾讯科技(深圳)有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN112595329A (en) * 2020-12-25 2021-04-02 北京百度网讯科技有限公司 Vehicle position determining method and device and electronic equipment
CN112650220A (en) * 2020-12-04 2021-04-13 东风汽车集团有限公司 Automatic vehicle driving method, vehicle-mounted controller and system
CN112702690A (en) * 2020-12-14 2021-04-23 上海锐承通讯技术有限公司 Correction positioning method for mobile terminal, mobile terminal and terminal system
CN112833880A (en) * 2021-02-02 2021-05-25 北京嘀嘀无限科技发展有限公司 Vehicle positioning method, positioning device, storage medium, and computer program product
CN113095150A (en) * 2021-03-18 2021-07-09 武汉理工大学 Augmented reality-based traffic environment perception system, method, apparatus and medium
US20210245745A1 (en) * 2020-09-24 2021-08-12 Beijing Baidu Netcom Science And Technology Co., Ltd. Cruise control method, electronic device, vehicle and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant