CN113378694B - Method and device for generating target detection and positioning system and target detection and positioning - Google Patents

Method and device for generating target detection and positioning system and target detection and positioning

Info

Publication number
CN113378694B
CN113378694B (application CN202110635784.5A)
Authority
CN
China
Prior art keywords
map
point cloud
loss value
sample
target detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110635784.5A
Other languages
Chinese (zh)
Other versions
CN113378694A (en)
Inventor
方进
周定富
宋希彬
张良俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110635784.5A priority Critical patent/CN113378694B/en
Publication of CN113378694A publication Critical patent/CN113378694A/en
Application granted granted Critical
Publication of CN113378694B publication Critical patent/CN113378694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a method and a device for generating a target detection and positioning system, and a target detection and positioning method and device. It relates to the technical field of artificial intelligence, in particular to computer vision and deep learning, and can be applied to automatic driving scenarios. The specific implementation scheme is as follows: a sample set and a map are acquired; a sample is selected from the sample set, and the following training steps are performed: fusion features are extracted from the point cloud data in the selected sample and the map; the fusion features are input into a target detection model to obtain a prediction label set; a segmentation map is generated based on the point cloud data in the selected sample; a total loss value is calculated based on the prediction label set, the sample label set, the segmentation map, and the map; if the total loss value is less than a preset threshold, a target detection and positioning system is constructed from the target detection model. The embodiments can improve detection accuracy and enable accurate positioning.

Description

Method and device for generating target detection and positioning system and target detection and positioning
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to the field of computer vision and deep learning technologies, and more particularly, to a method and an apparatus for generating a target detection and positioning system and a target detection and positioning method and apparatus.
Background
For automatic driving, the perception system (target detection) serves as the vehicle's 'eyes': it directly affects downstream modules such as target tracking and path planning, and is therefore critical to driving safety. Meanwhile, the precision of the positioning system directly determines whether the ego vehicle can subsequently interact with the environment accurately, and thus influences downstream decisions.
In the prior art, map information is not applied within the target detection process itself; some automatic driving algorithms use a map to filter detection results, but the map is still not fused into the detection process. Moreover, such maps are very costly.
For the positioning system, a high-precision GPS (Global Positioning System) is costly. The alternative of retrieving the position by searching a database with image or point cloud information is also expensive: building such a database costs a great deal, and so does the search time.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, storage medium and computer program product for generating an object detection and localization system, and for object detection and localization.
According to a first aspect of the present disclosure, there is provided a method of generating an object detection and localization system, comprising: obtaining a sample set and a map, wherein each sample in the sample set comprises a frame of point cloud data and a sample label set corresponding to the point cloud data; selecting a sample from the sample set and performing the following training steps: extracting fusion features from the point cloud data in the selected sample and the map; inputting the fusion features into a target detection model to obtain a prediction label set; generating a segmentation map based on the point cloud data in the selected sample; calculating a total loss value based on the prediction label set, the sample label set, the segmentation map, and the map; and, if the total loss value is smaller than a preset threshold value, constructing a target detection and positioning system according to the target detection model.
According to a second aspect of the present disclosure, there is provided an object detection and localization method, comprising: acquiring an actual map according to the GPS positioning information of the current position, and acquiring point cloud data of the current position; inputting the point cloud data of the current position into the target detection and positioning system generated by the method of the first aspect, and outputting a detection result and a segmentation map; searching and matching the segmentation map against the actual map to determine the position deviation; and correcting the GPS positioning information according to the position deviation.
According to a third aspect of the present disclosure, there is provided an apparatus for generating an object detection and localization system, comprising: an acquisition unit configured to acquire a sample set and a map, wherein each sample in the sample set includes a frame of point cloud data and a sample label set corresponding to the point cloud data; and a training unit configured to select a sample from the sample set and to perform the following training steps: extracting fusion features from the point cloud data in the selected sample and the map; inputting the fusion features into a target detection model to obtain a prediction label set; generating a segmentation map based on the point cloud data in the selected sample; calculating a total loss value based on the prediction label set, the sample label set, the segmentation map, and the map; and, if the total loss value is less than a preset threshold value, constructing a target detection and positioning system according to the target detection model.
According to a fourth aspect of the present disclosure, there is provided an object detection and localization device, comprising: an acquisition unit configured to acquire an actual map according to the GPS positioning information of the current position and to collect point cloud data of the current position; a detection unit configured to input the point cloud data of the current position into the target detection and positioning system generated by the apparatus of the third aspect, and to output a detection result and a segmentation map; a determining unit configured to search and match the segmentation map against the actual map to determine the position deviation; and a correcting unit configured to correct the GPS positioning information according to the position deviation.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising: at least one processor. And a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first or second aspect.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of the first or second aspect.
According to a seventh aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to the first or second aspect.
The method and apparatus for generating a target detection and positioning system and for target detection and positioning provided by the embodiments of the disclosure integrate map information into the target detection and positioning system through the interface provided by the map. For target detection, accuracy can be improved; for positioning, dependence on a high-precision GPS can be reduced while accurate positioning is still achieved. The technology can be widely applied to automatic driving and intelligent driving systems, such as vehicle-mounted driver assistance systems, unmanned driving systems and other products.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method of generating an object detection and location system according to the present application;
FIG. 3 is a schematic illustration of an application scenario of a method of generating an object detection and localization system according to the present application;
FIG. 4 is a block diagram of one embodiment of an apparatus for generating an object detection and localization system according to the present application;
FIG. 5 is a flow diagram of one embodiment of a method for object detection and location according to the present application;
FIG. 6 is a schematic diagram of an embodiment of an apparatus for object detection and localization according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 illustrates an exemplary system architecture 100 to which a method of generating an object detection and localization system, an apparatus for generating an object detection and localization system, a method of object detection and localization, or an apparatus for object detection and localization may be applied, according to embodiments of the present application.
As shown in fig. 1, system architecture 100 may include unmanned vehicles (also known as autonomous vehicles) 101, 102, a network 103, a database server 104, and a server 105. Network 103 is the medium used to provide communication links between the unmanned vehicles 101, 102, database server 104, and server 105. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The unmanned vehicles 101 and 102 are equipped with driving control equipment and devices for acquiring point cloud data, such as laser radar and millimeter wave radar. The driving control equipment (also called the vehicle-mounted brain) is responsible for intelligent control of the unmanned vehicle. The driving control device may be a separately arranged controller, such as a Programmable Logic Controller (PLC), a single chip microcomputer, or an industrial controller; it may consist of other electronic devices that have input/output ports and an operation control function; or it may be a computer device installed with a vehicle driving control application.
It should be noted that, in practice, the unmanned vehicle may also be equipped with at least one sensor, such as a camera, a gravity sensor, a wheel speed sensor, etc. In some cases, the unmanned vehicle may further include GNSS (Global Navigation Satellite System) equipment, SINS (Strap-down Inertial Navigation System), and the like.
Database server 104 may be a database server that provides various services. For example, a database server may have a sample set stored therein. The sample set contains a large number of samples. Wherein the sample may include point cloud data and a sample label corresponding to the point cloud data. In this way, the user may also select a sample from the set of samples stored by the database server 104 via the unmanned vehicles 101, 102.
The server 105 may also be a server that provides various services, such as a backend server that supports various applications displayed on the unmanned vehicles 101, 102. The backend server may train the initial model using samples in a sample set collected by the unmanned vehicles 101, 102, and may send the training result (e.g., the generated target detection and positioning system) to the unmanned vehicles 101, 102. The unmanned vehicle can then use the generated target detection and positioning system to detect obstacles such as pedestrians and vehicles, so as to control the vehicle running state and guarantee driving safety. The unmanned vehicle can also correct its GPS through the target detection and positioning system, reducing GPS errors caused by insufficient precision.
Here, the database server 104 and the server 105 may be hardware or software. When they are hardware, they can be implemented as a distributed server cluster composed of a plurality of servers, or as a single server. When they are software, they may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein. Database server 104 and server 105 may also be servers of a distributed system, or servers that incorporate a blockchain. Database server 104 and server 105 may also be cloud servers, or smart cloud computing servers or smart cloud hosts with artificial intelligence technology.
It should be noted that the method for generating an object detection and location system or the method for object detection and location provided by the embodiment of the present application is generally executed by the server 105. Accordingly, the means for generating an object detection and localization system or the means for object detection and localization are typically also provided in the server 105. The method of object detection and localization may also be performed by an unmanned vehicle.
It is noted that database server 104 may not be provided in system architecture 100, as server 105 may perform the relevant functions of database server 104.
It should be understood that the number of unmanned vehicles, networks, database servers, and servers in fig. 1 are merely illustrative. There may be any number of unmanned vehicles, networks, database servers, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of generating an object detection and location system in accordance with the present application is shown. The method of generating an object detection and localization system may comprise the steps of:
step 201, a sample set and a map are obtained.
In this embodiment, the performing agent (e.g., server 105 shown in FIG. 1) of the method of generating an object detection and localization system may obtain the sample set in a variety of ways. For example, the executing entity may obtain the existing sample set stored therein from a database server (e.g., database server 104 shown in fig. 1) via a wired connection or a wireless connection. As another example, a user may collect a sample via an unmanned vehicle (e.g., unmanned vehicles 101, 102 shown in FIG. 1). In this way, the executive may receive samples collected by the unmanned vehicle and store the samples locally, thereby generating a sample set.
Each sample in the sample set includes a frame of point cloud data and a sample label set corresponding to the point cloud data. Each frame of point cloud data is acquired by a laser radar or a millimeter wave radar in one scene; the same type of point cloud data should be used throughout. The type and position of each point are labeled in advance, manually or automatically, to serve as sample labels; for example, points belonging to objects such as vehicles, pedestrians and green belts in one frame of point cloud data can be marked with cuboid bounding boxes.
When a sample is taken at a designated area, a map of the area is also obtained, which is located by a highly accurate, error-free GPS. The map can comprise layers such as a current road satellite map, road elements and the like.
At step 202, a sample is selected from a sample set.
In this embodiment, the executing subject may select a sample from the sample set obtained in step 201, and perform the training steps from step 203 to step 208. The selection manner and the number of samples are not limited in the present application. For example, the samples may be randomly selected, or the sample with the largest point cloud data label size may be selected from the samples.
Step 203, extracting fusion features from the point cloud data and the map in the selected sample.
In this embodiment, the fusion features may be extracted from the point cloud data and the map in the selected sample by a manual method or a neural network. The input of the neural network is point cloud data and a map, and the output is fusion characteristics. The neural network can be supervised trained by using pre-labeled point cloud data and a pre-labeled map as training samples. The training process is prior art and therefore will not be described further.
Optionally, the fusion features may be extracted by separately extracting features of the point cloud data and features of the map and then fusing them. The specific steps are as follows:
Step 2031, inputting the point cloud data in the selected sample into a point cloud feature extraction model to obtain point cloud features.
In this embodiment, the point cloud feature extraction model may be a neural network, such as a 3D variant of ResNet-50. After the selected sample passes through the point cloud feature extraction model, point cloud features are output, which may be feature maps or feature vectors.
Step 2032, inputting the map into an image feature extraction model to obtain image features.
In this embodiment, the image feature extraction model may be a neural network, such as common image feature extraction models like ResNet-101 or ResNet-50.
Step 2033, fusing the point cloud feature and the image feature to obtain a fused feature.
In this embodiment, the 3-dimensional point cloud features may be converted into 2-dimensional features by projecting them along the ground direction, and then fused with the image features. The specific fusion scheme may be any one of the following: weighted addition, 1 × 1 convolution, or channel-wise stacking of the information. Feature fusion exchanges information between the point cloud features and the image features, which ultimately improves performance for the two downstream tasks.
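As an illustrative aid only (not part of the original description), the following PyTorch-style sketch shows one way this fusion step could look; the module name, tensor shapes, and the choice of concatenation followed by a 1 × 1 convolution are assumptions.

```python
import torch
import torch.nn as nn

class PointCloudMapFusion(nn.Module):
    """Sketch: project 3D point cloud features to a 2D bird's-eye-view (BEV)
    grid along the ground direction, then fuse them with map image features."""

    def __init__(self, pc_channels: int, img_channels: int, out_channels: int):
        super().__init__()
        # A 1x1 convolution over the concatenated channels is one of the
        # fusion options mentioned above; weighted addition would be another.
        self.fuse = nn.Conv2d(pc_channels + img_channels, out_channels, kernel_size=1)

    def forward(self, pc_feat_3d: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
        # pc_feat_3d: (B, C_pc, D, H, W) voxel features from the point cloud branch
        # img_feat:   (B, C_img, H, W)   features from the map image branch
        bev_feat = pc_feat_3d.max(dim=2).values   # collapse the height axis -> (B, C_pc, H, W)
        return self.fuse(torch.cat([bev_feat, img_feat], dim=1))

# Example with assumed shapes:
# fusion = PointCloudMapFusion(pc_channels=64, img_channels=64, out_channels=128)
# fused = fusion(torch.randn(1, 64, 16, 200, 200), torch.randn(1, 64, 200, 200))
```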
And step 204, inputting the fusion characteristics into a target detection model to obtain a prediction label set.
In this embodiment, the target detection model is a neural network, for example an RPN (Region Proposal Network). The output of the target detection model is the detection result: some of the point cloud data is enclosed in detection boxes, and a prediction label set for the point cloud data, i.e., the predicted obstacle types, is obtained.
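Purely for illustration, a minimal detection head over the fused features might look as follows; the seven-parameter box encoding and the layer widths are assumptions and do not come from the patent.

```python
import torch.nn as nn

class SimpleDetectionHead(nn.Module):
    """Sketch: from the fused BEV features, predict per-cell class scores
    (the prediction label set) and a 3D box (x, y, z, l, w, h, yaw)."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.cls_head = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        self.box_head = nn.Conv2d(in_channels, 7, kernel_size=1)

    def forward(self, fused_feat):
        return self.cls_head(fused_feat), self.box_head(fused_feat)
```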
Step 205, a segmentation map is generated based on the point cloud data in the selected sample.
In this embodiment, the point cloud data may be input into a composition model, which outputs the segmentation map. The composition model is a neural network and can be trained with supervision using labeled point cloud data as training samples. The training process of the composition model is prior art and is therefore not described in detail.
Optionally, the point cloud features can be input into a map segmentation model. The map segmentation model is a neural network model, such as DeepLab, used to identify the semantics of each point in the point cloud, e.g., lane, green belt, etc. The map segmentation model outputs the segmentation map.
Step 206, a total loss value is calculated based on the prediction labelset, the sample labelset, the segmentation map, and the map.
In this embodiment, the total loss value may be calculated as a weighted sum of a first loss value between the prediction label set and the sample label set, and a second loss value between the segmentation map and the map.
The prediction tag set and the sample tag set can be used as parameters and input into a specified first loss function (loss function), so that a first loss value between the prediction tag set and the sample tag set can be calculated.
In this embodiment, the first loss function is generally used to measure the degree of disparity between the predicted values (e.g., the predicted labelsets) and the actual values (e.g., the sample labelsets) of the model. It is a non-negative real-valued function. In general, the smaller the first loss value, the better the robustness of the target detection model, the image feature extraction model, and the point cloud feature extraction model. The first loss function may be set according to actual requirements.
The output segmentation map is used as a query to search and match against the map, and the position offset between the segmentation map and the map can be determined by a feature matching algorithm. The position offset may be input into a specified second loss function, from which a second loss value can be calculated. In general, the smaller the second loss value, the better the robustness of the map segmentation model, the image feature extraction model, and the point cloud feature extraction model. The second loss function may be set according to actual requirements.
The weighted sum of the first loss value and the second loss value is taken as the total loss value. The weights may be set as desired: if the purpose of the current training is mainly positioning, the weight of the second loss value may be set higher; if the purpose is mainly target detection, the weight of the first loss value may be set higher. The weights may also be adjusted according to how well each model has converged; for example, if the target detection model is currently converging well, the weight of its corresponding first loss value may be set lower, and if the map segmentation model is currently converging poorly, the weight of its corresponding second loss value may be set higher.
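The weighting described above can be sketched as follows (a minimal illustration; the default weight values are assumptions):

```python
def total_loss(first_loss, second_loss, detect_weight=1.0, locate_weight=1.0):
    """Weighted sum of the detection loss (prediction labels vs. sample labels)
    and the localization loss (position offset between segmentation map and map).
    Raising locate_weight emphasizes positioning; raising detect_weight
    emphasizes target detection."""
    return detect_weight * first_loss + locate_weight * second_loss
```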
And step 207, if the total loss value is smaller than the preset threshold value, constructing a target detection and positioning system according to the target detection model.
In this embodiment, when the total loss value falls below the predetermined threshold, the model output may be considered close to, or an approximation of, the true value. The predetermined threshold may be set according to actual requirements. If the total loss value is smaller than the preset threshold value, training of the target detection model is complete, and a target detection and positioning system can be formed from the neural networks used for extracting the fusion features and generating the segmentation map, for use in target detection and positioning.
Optionally, the target detection and positioning system may include a point cloud feature extraction model, an image feature extraction model, a map segmentation model, and a target detection model. The point cloud feature extraction model, the image feature extraction model and the map segmentation model may already be trained and can be used directly. In that case, only the target detection model is trained in steps 202-208, but when the target detection system is applied, the point cloud feature extraction model, the image feature extraction model and the map segmentation model are still used.
In step 208, if the total loss value is not less than the predetermined threshold, the relevant parameters of the target detection model are adjusted, and steps 202 to 208 continue to be executed.
In this embodiment, if the total loss value is not less than the predetermined threshold, which indicates that training of the target detection model is not complete, the relevant parameters of the target detection model are adjusted; for example, the weights in each convolution layer of the target detection model are modified using the back propagation technique. The process may then return to step 202 to re-select a sample from the sample set, so that the training steps described above can be continued.
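A hedged sketch of the outer training loop for steps 202-208 is given below; the `models` container and its method names are assumptions made only to keep the example self-contained.

```python
def train_until_converged(models, optimizer, sample_loader, map_tensor,
                          threshold: float, max_steps: int = 10000):
    """Illustrative training loop: select a sample, compute the total loss,
    and either stop (loss below threshold) or backpropagate and adjust the
    target detection model's parameters."""
    for step, (points, sample_labels) in enumerate(sample_loader):
        if step >= max_steps:
            break
        fused = models.extract_fused_features(points, map_tensor)      # step 203
        pred_labels = models.detector(fused)                            # step 204
        seg_map = models.segmenter(models.point_features(points))      # step 205
        loss = models.total_loss(pred_labels, sample_labels,
                                 seg_map, map_tensor)                   # step 206
        if loss.item() < threshold:                                     # step 207
            return models                                               # training complete
        optimizer.zero_grad()                                           # step 208: adjust parameters
        loss.backward()
        optimizer.step()
    return models
```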
Optionally, a point cloud feature extraction model, an image feature extraction model, a map segmentation model, and an object detection model may be included in the object detection and localization system. The point cloud feature extraction model, the image feature extraction model and the map segmentation model are untrained models and need to be jointly trained together with the target detection model. If the total loss value is not less than the preset threshold value, the training of the point cloud feature extraction model, the image feature extraction model, the map segmentation model and the target detection model is not finished, and then relevant parameters of the point cloud feature extraction model, relevant parameters of the image feature extraction model, relevant parameters of the map segmentation model and relevant parameters of the target detection model are adjusted.
The method and the device for generating the target detection and positioning system can be widely applied to automatic driving and driving assistance, and are used for improving the precision of a sensing system and a positioning system. The performance of the automatic driving algorithm, particularly the positioning precision and the target detection accuracy are further improved, and the safety of the product is improved. Meanwhile, the technical scheme disclosed by the invention adopts a low-cost solution, and the problem of high cost in the actual landing of the automatic driving can be relieved.
In some optional implementations of this embodiment, inputting the point cloud data in the selected sample into the point cloud feature extraction model to obtain the point cloud features includes: dividing the point cloud data in the selected sample into a three-dimensional grid set with a fixed resolution; and inputting the three-dimensional grid set into the point cloud feature extraction model to obtain the point cloud features. The three-dimensional sparse radar point cloud is gridded, i.e., the three-dimensional point cloud is divided into three-dimensional grids with a fixed resolution, finally yielding a tensor of size (H × W × C). For acceleration, the process may be parallelized on GPU devices, with each point quantized and fed into the corresponding grid cell. Converting the point cloud data into a three-dimensional grid for processing improves the data processing speed.
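For illustration, a simple occupancy-grid voxelization could look like the sketch below; the grid shape and the covered coordinate range are assumptions, since the patent only states that a fixed-resolution grid yielding an (H × W × C) tensor is used.

```python
import numpy as np

def voxelize(points: np.ndarray,
             grid_shape=(40, 400, 400),
             point_range=((-3.0, 1.0), (-40.0, 40.0), (0.0, 80.0))):
    """Divide a point cloud of shape (N, 3+) into a fixed-resolution 3D grid.
    Each in-range point is quantized into its corresponding grid cell."""
    grid = np.zeros(grid_shape, dtype=np.float32)
    mins = np.array([r[0] for r in point_range])
    maxs = np.array([r[1] for r in point_range])
    cell = (maxs - mins) / np.array(grid_shape)
    inside = np.all((points[:, :3] >= mins) & (points[:, :3] < maxs), axis=1)
    idx = ((points[inside, :3] - mins) / cell).astype(np.int64)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # simple occupancy; per-voxel features are also possible
    return grid
```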
In some optional implementations of this embodiment, the point cloud feature extraction model is a sparse convolution network or a three-dimensional convolution network. On the basis of the three-dimensional grid, a deep learning technology is used for feature extraction, and two common technical schemes, namely sparse convolution and three-dimensional convolution, are used. Through a multi-layer neural network, grid voxels can be converted into feature data with higher dimensions. Specifically, the three-dimensional tensor can extract more abstract features through sparse three-dimensional convolution or three-dimensional convolution network, and the more abstract features are finally converted into point cloud features through multilayer neural network. The speed and accuracy of feature extraction are improved.
With further reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method of generating an object detection and localization system according to this embodiment. In the application scenario of fig. 3, a user randomly selects a sample from a sample set, where the sample includes a frame of point cloud data, and the sample is labeled as a vehicle. A map of the area where the sample was taken, including road elements such as green belts, lanes and buildings, is also obtained. The point cloud data is input into the point cloud feature extraction model to obtain point cloud features. The map is input into the image feature extraction model to obtain image features. The point cloud features and the image features are then fused, and the fused features are input into the target detection model to obtain a prediction label set. A first loss value is calculated based on the difference between the prediction label set and the sample label set. The point cloud features are input into the map segmentation model to obtain a segmentation map, and a second loss value is calculated according to the position offset between the segmentation map and the map. The sum of the first loss value and the second loss value is taken as the total loss value. If the total loss value is less than a preset threshold value, training of the point cloud feature extraction model, the image feature extraction model, the map segmentation model and the target detection model is complete, and the target detection and positioning system is constructed. Otherwise, the relevant parameters of the point cloud feature extraction model, the image feature extraction model, the map segmentation model and the target detection model are adjusted, a sample is reselected, and training continues to reduce the total loss value until it falls below the preset threshold.
Referring to fig. 4, a flowchart 400 of one embodiment of a method for object detection and localization provided herein is shown. The method of object detection and localization may comprise the steps of:
step 401, obtaining an actual map according to the GPS positioning information of the current position, and collecting point cloud data of the current position.
In the present embodiment, the execution subject (e.g., the server 105 shown in fig. 1) of the method of object detection and localization may acquire point cloud data of the current position in various ways. For example, the executing entity may obtain the point cloud data stored therein from a database server (e.g., database server 104 shown in fig. 1) through a wired connection or a wireless connection. As another example, the executive may also receive point cloud data for a current location acquired by an unmanned vehicle (e.g., unmanned vehicles 101, 102 shown in fig. 1). The laser radar continuously scans and collects point cloud data in the driving process of the unmanned vehicle. The detection target is to judge whether the current position has an obstacle, and the position and the category of the obstacle. The execution subject may also be an unmanned vehicle 101, 102.
In addition, a map of the current position needs to be acquired, positioning can be performed according to the GPS of the unmanned vehicle, and then the map is acquired from a database, and the map can also be acquired from a map server of a third party. The accuracy of the GPS at this time may not be high, and the position obtained by positioning may deviate from the true position.
Step 402, inputting the point cloud data of the current position into a target detection and positioning system, and outputting a detection result and a segmentation map.
In this embodiment, the executing entity may input the point cloud data acquired in step 401 into a point cloud feature extraction model of the target detection and localization system, input a map into an image feature extraction model, and then input the fusion features into the target detection model through feature extraction and feature fusion. And finally outputting the detection result of the area to be detected. The detection result can be used for describing whether the area to be detected has obstacles or not, and the positions and the types of the obstacles. And inputting the point cloud characteristics into a map segmentation model to obtain a segmentation map.
In this embodiment, the object detection and location system may be generated using the method described above in the embodiment of FIG. 2. For a specific generation process, reference may be made to the related description of the embodiment in fig. 2, which is not described herein again.
And step 403, searching and matching the segmentation map and the actual map to determine the position deviation.
In this embodiment, the output segmentation map is used as a query to search and match against the actual map, and the position deviation is determined by a feature matching algorithm (for example, the Scale-Invariant Feature Transform (SIFT) matching algorithm). Image feature matching is prior art and is therefore not described in detail.
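As a non-authoritative sketch of such a matching step, the snippet below uses OpenCV's SIFT implementation to estimate a mean pixel displacement between the two maps and converts it to meters; the ratio-test threshold and the meters_per_pixel calibration are assumptions.

```python
import cv2
import numpy as np

def estimate_position_offset(seg_map: np.ndarray, actual_map: np.ndarray,
                             meters_per_pixel: float) -> np.ndarray:
    """Match SIFT features between the predicted segmentation map and the
    actual map (both 8-bit single-channel images) and return the mean
    2D displacement in meters."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(seg_map, None)
    kp2, des2 = sift.detectAndCompute(actual_map, None)
    matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if not good:
        return np.zeros(2)
    offsets = [np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt) for m in good]
    return np.mean(offsets, axis=0) * meters_per_pixel
```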
And step 404, correcting the GPS positioning information according to the position deviation.
In this embodiment, the GPS correction can be performed by directly using the positional deviation as the deviation of the unmanned vehicle GPS. For example, the accuracy of the current GPS is 100 meters, which may cause the obtained map to deviate from the actual map. It can be determined through steps 401 to 403 that the position deviation of the divided map calculated from the point cloud data from the actual map obtained through the GPS is 50 meters, and then the GPS can be corrected. And steps 401-403 can be repeated to correct the GPS positioning information for multiple times, so that multiple position deviations are obtained, and the average value is used as the deviation of the unmanned vehicle GPS for GPS correction. For example, if the first calculated positional deviation is 50 meters, the second calculated positional deviation is 30 meters, and the third calculated positional deviation is 40 meters, the GPS positioning information is corrected by an average value of 40 meters.
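The averaging described in this step can be sketched as follows (an illustration only; the east/north coordinate convention is an assumption):

```python
def correct_gps(gps_position, position_deviations):
    """Correct the GPS position by the average of several measured position
    deviations, as in the 50 m / 30 m / 40 m example above (average 40 m).
    gps_position and each deviation are (east, north) tuples in meters."""
    if not position_deviations:
        return gps_position
    avg_east = sum(d[0] for d in position_deviations) / len(position_deviations)
    avg_north = sum(d[1] for d in position_deviations) / len(position_deviations)
    return (gps_position[0] - avg_east, gps_position[1] - avg_north)
```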
Alternatively, the actual map may be reacquired after GPS corrections, and steps 402-404 may continue. The positional deviation can be reduced. Correcting the GPS again, and repeatedly executing the steps 401-404 until the position deviation is 0, thereby realizing the precision calibration of the GPS.
It should be noted that the method for detecting and positioning an object in this embodiment can be used to test the object detecting and positioning system generated in the above embodiments. And then the target detection and positioning system can be continuously optimized according to the test result. The method may also be a practical application method of the object detection and positioning system generated by the above embodiments. The target detection and positioning system generated by the embodiments is adopted to detect and position the target, which is beneficial to improving the performance of the target detection and positioning system. Such as the type and position of the found obstacle are more accurate, etc. The GPS of the unmanned vehicle can be corrected, so that the dependence of the unmanned vehicle on the high-precision GPS is reduced, the cost is saved, the positioning precision is improved, and the safety of the unmanned vehicle is improved.
As shown in fig. 5, the apparatus 500 for generating an object detection and positioning system of this embodiment may include: an acquisition unit 501 and a training unit 502. The acquisition unit 501 is configured to acquire a sample set and a map, where each sample in the sample set includes a frame of point cloud data and a sample label set corresponding to the point cloud data. The training unit 502 is configured to select a sample from the sample set and to perform the following training steps: extracting fusion features from the point cloud data in the selected sample and the map; inputting the fusion features into a target detection model to obtain a prediction label set; generating a segmentation map based on the point cloud data in the selected sample; calculating a total loss value based on the prediction label set, the sample label set, the segmentation map, and the map; and, if the total loss value is smaller than a preset threshold value, constructing a target detection and positioning system according to the target detection model.
In some optional implementations of this embodiment, the training unit 502 is further configured to: if the total loss value is not less than the preset threshold value, adjusting the relevant parameters of the target detection model, reselecting the sample from the sample set, and continuing to execute the training step.
In some optional implementations of this embodiment, the training unit 502 is further configured to: inputting point cloud data in the selected sample into a point cloud feature extraction model to obtain point cloud features; inputting a map into an image feature extraction model to obtain image features; and fusing the point cloud characteristics and the image characteristics to obtain fused characteristics.
In some optional implementations of this embodiment, the training unit 502 is further configured to: and dividing the point cloud data in the selected sample into a three-dimensional grid set with a fixed resolution. And inputting the three-dimensional grid set into a point cloud feature extraction model to obtain point cloud features.
In some optional implementations of this embodiment, the point cloud feature extraction model used by the training unit 502 is a sparse convolution network or a three-dimensional convolution network.
In some optional implementations of this embodiment, the training unit 502 is further configured to: inputting the point cloud characteristics into a map segmentation model to obtain a segmentation map; calculating a first loss value according to the prediction label set and the sample label set; calculating a second loss value according to the position offset between the divided map and the map; a total loss value is calculated based on the first loss value and the second loss value.
In some optional implementations of this embodiment, the training unit 502 is further configured to: if the total loss value is smaller than a preset threshold value, constructing a target detection and positioning system according to the point cloud feature extraction model, the image feature extraction model, the map segmentation model and the target detection model; and if the total loss value is not less than the preset threshold value, adjusting relevant parameters of the point cloud feature extraction model, the image feature extraction model, the map segmentation model and the target detection model.
With continued reference to FIG. 6, the present application provides one embodiment of an apparatus for object detection and localization as an implementation of the methods illustrated in the above figures. The embodiment of the device corresponds to the embodiment of the method shown in fig. 4, and the device can be applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for object detection and localization of this embodiment may include: an acquisition unit 601 configured to acquire an actual map according to the GPS positioning information of the current position and to collect point cloud data of the current position; a detection unit 602 configured to input the point cloud data of the current position into the target detection and positioning system generated by the apparatus 500 and to output a detection result and a segmentation map; a determination unit 603 configured to search and match the segmentation map against the actual map and determine the position deviation; and a correction unit 604 configured to correct the GPS positioning information according to the position deviation.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of flows 200 or 400.
A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of flow 200 or 400.
A computer program product comprising a computer program which, when executed by a processor, implements the method of flow 200 or 400.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 performs the various methods and processes described above, such as the method of generating an object detection and localization system. For example, in some embodiments, the method of generating an object detection and localization system may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM703 and executed by the computing unit 701, one or more steps of the method of generating an object detection and localization system described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured by any other suitable means (e.g. by means of firmware) to perform the method of generating an object detection and localization system.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a server of a distributed system or a server incorporating a blockchain. The server can also be a cloud server, or an intelligent cloud computing server or an intelligent cloud host with artificial intelligence technology.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (14)

1. A method of generating an object detection and location system, comprising:
acquiring a sample set and a map, wherein each sample in the sample set comprises a frame of point cloud data and a sample label set corresponding to the point cloud data;
selecting samples from the sample set, and performing the following training steps: extracting fusion features from the point cloud data in the selected sample and the map; inputting the fusion characteristics into a target detection model to obtain a prediction label set; generating a segmentation map based on point cloud data in the selected sample; calculating a total loss value based on the prediction labelset, the sample labelset, the segmentation map, and the map; if the total loss value is less than a preset threshold value, constructing an object detection and positioning system according to the object detection model, wherein the total loss value is the weighted sum of a first loss value between the prediction label set and the sample label set and a second loss value between the segmentation map and the map, if the aim of the current training is mainly positioning, the weight of the second loss value is set to be higher than that of the first loss value, and if the aim of the current training is mainly target detection, the weight of the first loss value is set to be higher than that of the second loss value;
the extracting of the fusion features from the point cloud data in the selected sample and the map includes: inputting point cloud data in the selected sample into a point cloud feature extraction model to obtain point cloud features; inputting the map into an image feature extraction model to obtain image features; projecting the 3-dimensional point cloud features in the ground direction, converting the point cloud features into 2-dimensional features, and fusing the 2-dimensional features with the image features to obtain fused features, wherein the fused scheme comprises any one of the following steps: adding weights, performing 1-by-1 convolution, and superposing information;
generating a segmentation map based on point cloud data in the selected sample, wherein the generating the segmentation map based on the point cloud data in the selected sample comprises: inputting the point cloud characteristics into a map segmentation model to obtain a segmentation map; and
said calculating a total loss value based on said prediction tagset, said sample tagset, said segmentation map, and said map, comprising: calculating a first loss value from the prediction tag set and the sample tag set; calculating a second loss value according to a position offset between the divided map and the map; calculating a total loss value according to the first loss value and the second loss value;
wherein said calculating a second loss value according to the location offset between the segmentation map and the map comprises:
searching and matching the actual map by using the segmentation map as a key, and determining the position offset between the segmentation map and the map by using a feature matching algorithm;
and inputting the position deviation into a specified second loss function, and calculating to obtain a second loss value between the position deviation and the specified second loss function.
2. The method of claim 1, wherein the method further comprises:
if the total loss value is not less than the preset threshold value, adjusting relevant parameters of the target detection model, reselecting a sample from the sample set, and continuing to perform the training steps.
3. The method of claim 1, wherein the inputting the point cloud data in the selected sample into a point cloud feature extraction model to obtain the point cloud features comprises:
dividing the point cloud data in the selected sample into a three-dimensional grid set with a fixed resolution; and
inputting the three-dimensional grid set into a point cloud feature extraction model to obtain the point cloud features.
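As a non-limiting illustration of the gridding step in claim 3 (the grid resolution, axis order, and metric ranges below are placeholders, not values from the patent), a fixed-resolution occupancy grid can be built as follows:

```python
import numpy as np


def voxelize(points: np.ndarray,
             grid_shape=(40, 400, 400),
             lo=(-3.0, -40.0, 0.0),
             hi=(1.0, 40.0, 70.0)) -> np.ndarray:
    """Bucket an (N, 3) point cloud given as (z, y, x) metres into a
    fixed-resolution occupancy grid, in the spirit of claim 3."""
    grid = np.zeros(grid_shape, dtype=np.float32)
    lo, hi = np.asarray(lo), np.asarray(hi)
    voxel_size = (hi - lo) / np.asarray(grid_shape)
    idx = np.floor((points - lo) / voxel_size).astype(int)
    # Drop points that fall outside the configured range.
    keep = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    z, y, x = idx[keep].T
    grid[z, y, x] = 1.0          # mark occupied voxels
    return grid
```

The resulting grid set is what would then be fed to the point cloud feature extraction model (claim 4 names a sparse or three-dimensional convolutional network for that role).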
4. The method of claim 1, wherein the point cloud feature extraction model is a sparse convolutional network or a three-dimensional convolutional network.
5. The method of claim 1, wherein the constructing a target detection and positioning system according to the target detection model comprises:
constructing a target detection and positioning system according to the point cloud feature extraction model, the image feature extraction model, the map segmentation model and the target detection model; and
the method further comprises:
if the total loss value is not less than the preset threshold value, adjusting relevant parameters of the point cloud feature extraction model, the image feature extraction model, the map segmentation model, and the target detection model.
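A minimal sketch of how the four sub-models named in claim 5 could be chained at inference time (module names, signatures, and the wiring are hypothetical, not the patent's implementation):

```python
import torch.nn as nn


class DetectAndLocalize(nn.Module):
    """Wires four trained sub-models into one detection-and-positioning system."""

    def __init__(self, pc_encoder: nn.Module, img_encoder: nn.Module,
                 map_seg_head: nn.Module, det_head: nn.Module, fuser: nn.Module):
        super().__init__()
        self.pc_encoder = pc_encoder      # point cloud feature extraction model
        self.img_encoder = img_encoder    # image feature extraction model
        self.map_seg_head = map_seg_head  # map segmentation model
        self.det_head = det_head          # target detection model
        self.fuser = fuser                # fusion step (e.g. FuseAndLoss.fuse above)

    def forward(self, voxels, map_image):
        pc_feats = self.pc_encoder(voxels)
        img_feats = self.img_encoder(map_image)
        fused = self.fuser(pc_feats, img_feats)
        # Returns the detection result and the segmentation map used for localization.
        return self.det_head(fused), self.map_seg_head(pc_feats)
```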
6. A target detection and positioning method, comprising:
acquiring an actual map according to GPS positioning information of a current position, and acquiring point cloud data of the current position;
inputting the point cloud data of the current position into a target detection and positioning system generated by the method of any one of claims 1-5, and outputting a detection result and a segmentation map;
searching and matching the segmentation map against the actual map to determine a position offset; and
correcting the GPS positioning information according to the position offset.
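To make the matching-and-correction step of claim 6 concrete, a brute-force cross-correlation stand-in for the feature matching might look like the sketch below; the search window, map scale, and the assumption that both maps are equally sized binary bird's-eye-view rasters are editorial simplifications:

```python
import numpy as np


def correct_gps(gps_xy: np.ndarray, seg_map: np.ndarray, actual_map: np.ndarray,
                metres_per_pixel: float = 0.1, search: int = 10) -> np.ndarray:
    """Slide the predicted segmentation map over the actual map, take the shift
    with the best overlap as the position offset, and shift the GPS fix by it."""
    best_score, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(actual_map, dy, axis=0), dx, axis=1)
            score = float((seg_map * shifted).sum())   # overlap as a matching score
            if score > best_score:
                best_score, best_off = score, (dx, dy)
    # Convert the pixel offset into metres and correct the GPS position.
    return gps_xy + metres_per_pixel * np.array(best_off)
```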
7. An apparatus for generating a target detection and positioning system, comprising:
an acquisition unit configured to acquire a sample set and a map, wherein each sample in the sample set includes a frame of point cloud data and a sample tag set corresponding to the point cloud data;
a training unit configured to select samples from the sample set and to perform the following training steps: extracting fusion features from the point cloud data in the selected sample and the map; inputting the fusion features into a target detection model to obtain a prediction label set; generating a segmentation map based on point cloud data in the selected sample; calculating a total loss value based on the prediction label set, the sample label set, the segmentation map, and the map; and if the total loss value is less than a preset threshold value, constructing a target detection and positioning system according to the target detection model, wherein the total loss value is a weighted sum of a first loss value between the prediction label set and the sample label set and a second loss value between the segmentation map and the map, and wherein, if the current training is aimed mainly at positioning, the weight of the second loss value is set higher than that of the first loss value, and, if the current training is aimed mainly at target detection, the weight of the first loss value is set higher than that of the second loss value;
wherein the training unit is further configured to: input the point cloud data in the selected sample into a point cloud feature extraction model to obtain point cloud features; input the map into an image feature extraction model to obtain image features; and project the 3-dimensional point cloud features in the ground direction to convert them into 2-dimensional features, and fuse the 2-dimensional features with the image features to obtain the fusion features, wherein the fusion scheme comprises any one of: weighted addition, 1×1 convolution, and information superposition;
wherein the training unit is further configured to: input the point cloud features into a map segmentation model to obtain the segmentation map; calculate the first loss value from the prediction label set and the sample label set; calculate the second loss value according to a position offset between the segmentation map and the map; and calculate the total loss value from the first loss value and the second loss value;
wherein the calculating the second loss value according to the position offset between the segmentation map and the map comprises:
searching and matching the map by using the segmentation map as a key, and determining the position offset between the segmentation map and the map through a feature matching algorithm; and
inputting the position offset into a specified second loss function to calculate the second loss value.
8. The apparatus of claim 7, wherein the training unit is further configured to:
if the total loss value is not less than the preset threshold value, adjust relevant parameters of the target detection model, reselect samples from the sample set, and continue to perform the training steps.
9. The apparatus of claim 7, wherein the training unit is further configured to:
divide the point cloud data in the selected sample into a three-dimensional grid set with a fixed resolution; and
input the three-dimensional grid set into a point cloud feature extraction model to obtain the point cloud features.
10. The apparatus of claim 7, wherein the point cloud feature extraction model is a sparse convolutional network or a three-dimensional convolutional network.
11. The apparatus of claim 7, wherein the training unit is further configured to:
if the total loss value is less than the preset threshold value, construct a target detection and positioning system according to the point cloud feature extraction model, the image feature extraction model, the map segmentation model, and the target detection model; and
if the total loss value is not less than the preset threshold value, adjust relevant parameters of the point cloud feature extraction model, the image feature extraction model, the map segmentation model, and the target detection model.
12. A target detection and positioning apparatus, comprising:
an acquisition unit configured to acquire an actual map according to GPS positioning information of a current position and acquire point cloud data of the current position;
a detection unit configured to input the point cloud data of the current position into a target detection and positioning system generated by the apparatus of any one of claims 7-11, and to output a detection result and a segmentation map;
a determination unit configured to search and match the segmentation map against the actual map to determine a position offset; and
a correction unit configured to correct the GPS positioning information according to the position offset.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-6.
CN202110635784.5A 2021-06-08 2021-06-08 Method and device for generating target detection and positioning system and target detection and positioning Active CN113378694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110635784.5A CN113378694B (en) 2021-06-08 2021-06-08 Method and device for generating target detection and positioning system and target detection and positioning

Publications (2)

Publication Number Publication Date
CN113378694A CN113378694A (en) 2021-09-10
CN113378694B true CN113378694B (en) 2023-04-07

Family

ID=77576422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110635784.5A Active CN113378694B (en) 2021-06-08 2021-06-08 Method and device for generating target detection and positioning system and target detection and positioning

Country Status (1)

Country Link
CN (1) CN113378694B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114429631B (en) * 2022-01-27 2023-11-14 北京百度网讯科技有限公司 Three-dimensional object detection method, device, equipment and storage medium
WO2023231212A1 (en) * 2022-06-02 2023-12-07 合众新能源汽车股份有限公司 Prediction model training method and apparatus, and map prediction method and apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108052103A (en) * 2017-12-13 2018-05-18 中国矿业大学 The crusing robot underground space based on depth inertia odometer positions simultaneously and map constructing method
CN109583415A (en) * 2018-12-11 2019-04-05 兰州大学 A kind of traffic lights detection and recognition methods merged based on laser radar with video camera
CN112862877A (en) * 2021-04-09 2021-05-28 北京百度网讯科技有限公司 Method and apparatus for training image processing network and image processing

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108732603B (en) * 2017-04-17 2020-07-10 百度在线网络技术(北京)有限公司 Method and device for locating a vehicle
CN107577646A (en) * 2017-08-23 2018-01-12 上海莫斐信息技术有限公司 A kind of high-precision track operation method and system
CN108398705A (en) * 2018-03-06 2018-08-14 广州小马智行科技有限公司 Ground drawing generating method, device and vehicle positioning method, device
US11520347B2 (en) * 2019-01-23 2022-12-06 Baidu Usa Llc Comprehensive and efficient method to incorporate map features for object detection with LiDAR
CN112258618B (en) * 2020-11-04 2021-05-14 中国科学院空天信息创新研究院 Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map
CN112434119A (en) * 2020-11-13 2021-03-02 武汉中海庭数据技术有限公司 High-precision map production device based on heterogeneous data fusion

Also Published As

Publication number Publication date
CN113378694A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN109521756B (en) Obstacle motion information generation method and apparatus for unmanned vehicle
CN113902897B (en) Training of target detection model, target detection method, device, equipment and medium
CN113378693B (en) Method and device for generating target detection system and detecting target
CN110979346B (en) Method, device and equipment for determining lane where vehicle is located
CN113377888B (en) Method for training object detection model and detection object
CN113264066A (en) Obstacle trajectory prediction method and device, automatic driving vehicle and road side equipment
CN113378694B (en) Method and device for generating target detection and positioning system and target detection and positioning
CN116783620A (en) Efficient three-dimensional object detection from point clouds
CN111563450A (en) Data processing method, device, equipment and storage medium
CN111310840A (en) Data fusion processing method, device, equipment and storage medium
CN113724388B (en) High-precision map generation method, device, equipment and storage medium
CN114034295A (en) High-precision map generation method, device, electronic device, medium, and program product
CN115205391A (en) Target prediction method based on three-dimensional laser radar and vision fusion
CN114677655A (en) Multi-sensor target detection method and device, electronic equipment and storage medium
CN113688730A (en) Obstacle ranging method, apparatus, electronic device, storage medium, and program product
CN113205041A (en) Structured information extraction method, device, equipment and storage medium
CN113592015B (en) Method and device for positioning and training feature matching network
CN114528941A (en) Sensor data fusion method and device, electronic equipment and storage medium
CN113111787A (en) Target detection method, device, equipment and storage medium
CN113326796A (en) Object detection method, model training method and device and electronic equipment
CN115952248A (en) Pose processing method, device, equipment, medium and product of terminal equipment
CN115908992A (en) Binocular stereo matching method, device, equipment and storage medium
CN114267027A (en) Image processing method and device
CN113901903A (en) Road identification method and device
CN114674328A (en) Map generation method, map generation device, electronic device, storage medium, and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant