CN116821854B - Matching fusion method and related device for target projection - Google Patents


Info

Publication number
CN116821854B
Authority
CN
China
Prior art keywords
dimensional
area
sensors
region
sensor
Prior art date
Legal status
Active
Application number
CN202311101947.7A
Other languages
Chinese (zh)
Other versions
CN116821854A
Inventor
傅泽卿
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311101947.7A priority Critical patent/CN116821854B/en
Publication of CN116821854A publication Critical patent/CN116821854A/en
Application granted granted Critical
Publication of CN116821854B publication Critical patent/CN116821854B/en

Abstract

Embodiments of the present application can be applied to the field of automatic driving. The application provides a matching fusion method and a related device for target projection. The method comprises the following steps: acquiring three-dimensional data of acquisition objects acquired by each of two sensors to be fused; when the two sensors are of different types, projection-mapping each piece of obtained three-dimensional data onto a preset two-dimensional coordinate system to obtain a corresponding two-dimensional region; performing layer ordering on at least one two-dimensional region obtained from one sensor according to the distance between the corresponding acquisition object and that sensor, to obtain an ordering result; obtaining, based on the ordering result, the region integrity of each of the at least one two-dimensional region; and acquiring, for one two-dimensional region, a matching two-dimensional region whose overlapping degree with it is greater than a first threshold and whose matching degree of region integrity is greater than a second threshold, and fusing the three-dimensional data corresponding to the two-dimensional region and the matching two-dimensional region. In this way, the accuracy of multi-sensor fusion data can be improved.

Description

Matching fusion method and related device for target projection
Technical Field
The application relates to the technical field of automatic driving, in particular to a matching fusion method and a related device of target projection.
Background
Currently, intelligent transportation typically employs camera sensors and radar sensors to collect data about the environment. A camera sensor can acquire appearance attributes of a target in the environment, such as its color and size, but cannot determine the target's specific three-dimensional position; a radar sensor can sense the target's three-dimensional position, but cannot acquire its appearance attributes. Since the data collected by any single sensor therefore has shortcomings, fusing multiple sensors has become a clear trend.
Multi-sensor fusion falls into two types: fusion of sensors of the same type and fusion of sensors of different types. For example, fusing camera sensors with each other, or radar sensors with each other, is same-type fusion, while fusing a camera sensor with a radar sensor is multi-type fusion. When sensors of different types are fused, calibration parameters must be set so that the data of the different sensors can be converted into one another, and position compensation must be performed to resolve the motion errors of the different sensors. In addition, occlusion between the targets sensed by different sensors must be taken into account.
However, existing fusion methods are sensitive to the configuration of the sensors (e.g., their number and angles). When the number of sensors is small or a sensor's angle is set improperly, there is a probability that data collected by different types of sensors on different targets is erroneously matched to the same target, so that the attributes of the matched target become confused and subsequent judgments about that (matched) target are wrong.
For example, suppose the camera sensor collects data on white car A and white car B, and the radar sensor collects data on white car B. Because of the camera sensor's angle, the collected white car A occludes a large part of white car B. In this case there is a probability that the white-car-B data collected by the radar sensor is mismatched to the white car A collected by the camera sensor, so that the subsequent attributes of white car A become problematic: the camera sensor judges that white car A changes lanes, while the radar sensor judges that white car A (actually white car B) continues along its original lane, and a vehicle equipped with both sensors finds it difficult to choose between the two.
Therefore, there is a need for a multi-sensor fusion method that reduces the sensitivity to the sensor configuration and improves the accuracy of the multi-sensor fusion data.
Disclosure of Invention
The embodiments of the present application provide a matching fusion method and a related device for target projection in automatic driving, which are used to reduce sensitivity to the sensor configuration and improve the accuracy of multi-sensor fusion data.
The specific technical scheme provided by the embodiment of the application is as follows:
in a first aspect of the present application, a matching fusion method for target projection is provided, including:
Acquiring three-dimensional data of at least one acquisition object acquired by each of two sensors to be fused;
when the two sensors are sensors of different types, respectively mapping the obtained three-dimensional data projections to a preset two-dimensional coordinate system to obtain corresponding two-dimensional areas;
for the two sensors, the following operations are performed respectively: at least one two-dimensional area obtained based on one sensor is subjected to layer ordering according to the distance between a corresponding acquisition object and the one sensor, and a corresponding ordering result is obtained; and based on the sorting result, respectively obtaining the respective region integrity of the at least one two-dimensional region, wherein the region integrity represents: the degree of occlusion of the corresponding two-dimensional region;
for at least one two-dimensional area obtained based on one of the two sensors, the following operations are performed, respectively: acquiring a matched two-dimensional region with the overlapping degree of the two-dimensional region being larger than a first threshold and the matching degree of the region integrity being larger than a second threshold, and fusing three-dimensional data corresponding to the two-dimensional region and the matched two-dimensional region respectively; wherein the one matching two-dimensional region is obtained based on another sensor.
In a second aspect of the present application, there is provided an apparatus for matching fusion of target projections, comprising:
the receiving and transmitting unit is used for acquiring three-dimensional data of at least one acquisition object acquired by each of the two sensors to be fused;
the processing unit is used for respectively mapping each obtained three-dimensional data projection to a preset two-dimensional coordinate system when the two sensors are sensors of different types, so as to obtain a corresponding two-dimensional area; for the two sensors, the following operations are performed respectively: at least one two-dimensional area obtained based on one sensor is subjected to layer ordering according to the distance between a corresponding acquisition object and the one sensor, and a corresponding ordering result is obtained; and based on the sorting result, respectively obtaining the respective region integrity of the at least one two-dimensional region, wherein the region integrity represents: the degree of occlusion of the corresponding two-dimensional region; for at least one two-dimensional area obtained based on one of the two sensors, the following operations are performed, respectively: acquiring a matched two-dimensional region with the overlapping degree of the two-dimensional region being larger than a first threshold and the matching degree of the region integrity being larger than a second threshold, and fusing three-dimensional data corresponding to the two-dimensional region and the matched two-dimensional region respectively; wherein the one matching two-dimensional region is obtained based on another sensor.
Optionally, the processing unit is configured to, before acquiring the three-dimensional data of the at least one acquisition object acquired by each of the two sensors to be fused: acquire position information and angle information of each of the two sensors; determine, based on the obtained position information and angle information, the respective acquisition ranges of the two sensors; and, when an overlapping portion exists between the obtained acquisition ranges, take the two sensors as the two sensors to be fused.
Optionally, the processing unit is configured to, before acquiring the position information and the angle information of each of the two sensors: acquire attribute information of each of the two sensors, where the attribute information characterizes whether the corresponding sensor is a roadside sensor or a vehicle-mounted sensor; and determine, based on the obtained attribute information, that at least one of the two sensors is a vehicle-mounted sensor.
Optionally, the processing unit is configured to acquire an original data set acquired by each of the two sensors before acquiring three-dimensional data of at least one acquired object acquired by each of the two sensors to be fused; wherein each of the raw data sets comprises: data subsets acquired by corresponding sensors at each acquisition time point respectively, wherein each data subset comprises: raw data of each of the at least one acquisition object obtained at one acquisition time point; the two sensors use different acquisition time points;
For each acquisition time point used by one of the two sensors, the following operations are performed:
acquiring, for one acquisition time point used by one sensor, each subset of data acquired by the other sensor before and after the one acquisition time point;
for the at least one acquisition object, the following operations are performed: interpolation processing is carried out on the original data of one acquisition object before and after the acquisition time point, so that interpolation data of the one acquisition object at the acquisition time point is obtained;
and respectively obtaining the three-dimensional data of the at least one acquisition object acquired by each of the two sensors to be fused at one acquisition time point according to the original data of one sensor at the one acquisition time point and the interpolation data of the other sensor at the one acquisition time point.
Optionally, the processing unit is configured to, when the two sensors are sensors of the same class, respectively convert each obtained three-dimensional data into a preset world three-dimensional coordinate system, and obtain corresponding converted three-dimensional data;
for at least one converted three-dimensional data obtained by one of the two sensors, the following operations are performed: according to the coordinate information in one piece of converted three-dimensional data, obtaining one piece of matched three-dimensional data with the distance from the coordinate information of the one piece of converted three-dimensional data being smaller than a third threshold value, and fusing the one piece of converted three-dimensional data and the one piece of matched three-dimensional data; wherein the one matching three-dimensional data is converted three-dimensional data obtained by another sensor.
Optionally, when acquiring the one matching two-dimensional region whose overlapping degree with the one two-dimensional region is greater than the first threshold and whose matching degree of region integrity is greater than the second threshold, the processing unit is configured to obtain, based on the ordering result, the overlapping degree and the matching degree of region integrity between the one two-dimensional region and a reference two-dimensional region; the reference two-dimensional region is the two-dimensional region of the other sensor that occupies the same position in the layer ordering as the one two-dimensional region;
when the overlapping degree is greater than the first threshold and the matching degree of region integrity is greater than the second threshold, determine the reference two-dimensional region as the one matching two-dimensional region;
when the overlapping degree is not greater than the first threshold or the matching degree of region integrity is not greater than the second threshold, obtain the overlapping degree and the matching degree of region integrity between the one two-dimensional region and other two-dimensional regions, the other two-dimensional regions being the two-dimensional regions of the other sensor except the reference two-dimensional region; and take one other two-dimensional region whose overlapping degree with the one two-dimensional region is greater than the first threshold and whose matching degree of region integrity is greater than the second threshold as the one matching two-dimensional region.
Optionally, the overlapping degree is obtained according to an intersection ratio of the two-dimensional region and the two-dimensional region to be matched; the two-dimensional region to be matched comprises the reference two-dimensional region and the other two-dimensional regions.
Optionally, when obtaining, based on the ordering result, the overlapping degree and the matching degree of region integrity between the one two-dimensional region and the reference two-dimensional region, the processing unit is configured to obtain the layer position of the one two-dimensional region, and mark the region integrity of the one two-dimensional region as 1 when it is located on the first layer; when the one two-dimensional region is not located on the first layer, determine its region integrity according to the intersection ratio of the two-dimensional region of the previous layer and the one two-dimensional region;
obtain the layer position of the reference two-dimensional region, and mark the region integrity of the reference two-dimensional region as 1 when it is located on the first layer; when the reference two-dimensional region is not located on the first layer, determine its region integrity according to the intersection ratio of the two-dimensional region of the previous layer and the reference two-dimensional region;
and determine the matching degree of region integrity according to the region integrity of the one two-dimensional region and the region integrity of the reference two-dimensional region.
In a third aspect, an embodiment of the present application provides a computer device, comprising a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform any one of the matching fusion methods for target projection in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium comprising a computer program which, when run on a computer device, causes the computer device to perform any one of the matching fusion methods for target projection in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product comprising a computer program stored in a computer-readable storage medium; when a processor of a computer device reads the computer program from the computer-readable storage medium, the processor executes the computer program, so that the computer device performs any one of the matching fusion methods for target projection in the first aspect.
The application has the following beneficial effects:
In the present application, after the three-dimensional data of at least one acquisition object acquired by the two sensors to be fused at the same acquisition time point is obtained, the three-dimensional data obtained by the two sensors is projection-mapped into the same two-dimensional coordinate system, and the two-dimensional region corresponding to each acquisition object of each sensor is obtained. Converting the three-dimensional data into two-dimensional regions makes it convenient to decide, based on the two-dimensional regions, which three-dimensional data should be fused; even when the positions in the three-dimensional data of the two sensors lie far apart and are hard to match directly, the corresponding three-dimensional data, once projected into two-dimensional regions in the same two-dimensional coordinate system, can still be matched, which avoids missed matches.
Then, the two-dimensional regions of the acquisition objects are layer-ordered according to the distance between each acquisition object and the corresponding sensor, yielding a layer-ordering result for the acquisition objects of each of the two sensors; based on the layer-ordering results, the region integrity of the two-dimensional region of each acquisition object of the two sensors can be determined. In this way, when several two-dimensional regions occlude one another, the region integrity of each two-dimensional region can be determined from the layer ordering, and the matching degree of region integrity between acquisition objects can be determined from those region integrities, which alleviates the mismatching of acquisition objects caused by occlusion between layers and improves the accuracy of the three-dimensional data subsequently selected for fusion.
Finally, the two-dimensional regions whose data need to be fused are determined jointly from the overlapping degree between the two-dimensional regions of the acquisition objects of the two sensors and from the matching degree of region integrity. Because the overlapping degree and the matching degree of region integrity are considered at the same time, when an acquisition object of one sensor overlaps strongly with several acquisition objects of the other sensor, the three-dimensional data to be fused can be determined with the help of the matching degree of region integrity; conversely, when the region integrity of an acquisition object of one sensor closely matches that of several acquisition objects of the other sensor, the overlapping degree can be used to decide. Therefore, even when acquisition objects are occluded, the three-dimensional data to be fused can be determined accurately, improving the accuracy of the multi-sensor fusion data.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described, and it is apparent that the drawings in the following description are only embodiments of the present application, and other drawings may be obtained according to the provided drawings without inventive effort for those skilled in the art.
Fig. 1 is a flow chart of a matching fusion method of target projection according to an embodiment of the present application;
fig. 2A is a schematic diagram of an overlapping portion of acquisition ranges of a sensor 1 and a sensor 2 according to an embodiment of the present application;
fig. 2B is a schematic diagram of an embodiment of the present application where there is no overlapping portion between the acquisition ranges of the sensor 3 and the sensor 4;
FIG. 3 is a schematic flow chart of determining a corresponding road side sensor according to pose information of a vehicle according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an acquisition range of 6 on-board sensors on a vehicle according to an embodiment of the present application;
fig. 5 is a schematic diagram of unifying the acquisition time points of a sensor 1 and a sensor 2 at the 10 ms time point according to an embodiment of the present application;
FIG. 6 is a schematic diagram of projection mapping of three-dimensional data into corresponding two-dimensional regions according to an embodiment of the present application;
FIG. 7 is a diagram of a layer ordering result of 4 two-dimensional regions according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the overlapping degree of 2 two-dimensional regions according to an embodiment of the present application;
FIG. 9 is a schematic diagram of matching degree of overlapping degree and region integrity of a plurality of two-dimensional regions according to an embodiment of the present application;
FIG. 10 is a schematic diagram of three-dimensional data fusion of similar sensors provided by an embodiment of the present application;
fig. 11 is a schematic diagram of a matching fusion device 1100 for target projection according to an embodiment of the present application;
fig. 12 is a schematic diagram of a matching fusion apparatus 1200 for target projection according to another embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application. Embodiments of the application and features of the embodiments may be combined with one another arbitrarily without conflict. Also, while a logical order is depicted in the flowchart, in some cases, the steps depicted or described may be performed in a different order than presented herein.
It will be appreciated that in the following detailed description of the application, data relating to projection of objects and the like is referred to, and that when embodiments of the application are applied to a particular product or technology, relevant permissions or consents need to be obtained, and that the collection, use and processing of relevant data is required to comply with relevant laws and regulations and standards of the relevant country and region. For example, where relevant data is required, this may be implemented by recruiting relevant volunteers and signing the relevant agreement of volunteer authorisation data, and then using the data of these volunteers; alternatively, by implementing within the scope of the authorized allowed organization, relevant recommendations are made to the organization's internal members by implementing the following embodiments using the organization's internal member's data; alternatively, the relevant data used in the implementation may be analog data, for example, analog data generated in a virtual scene.
In order to facilitate understanding of the technical solution provided by the embodiments of the present application, some key terms used in the embodiments of the present application are explained here:
1. target projection
The coordinates of the three-dimensional data of the object are converted to the coordinates of the two-dimensional region, thereby projection-mapping the three-dimensional data of the object into the two-dimensional region.
2. Radar coordinate system
The radar coordinate system may be described as the position of the target relative to the radar sensor, denoted [ XL, YL, ZL ], where the origin is the geometric center of the radar sensor, the XL axis is horizontally forward, the YL axis is horizontally to the left, and the ZL axis is vertically upward.
3. Camera coordinate system
The camera coordinate system may be described as the position of the object relative to the camera sensor, denoted as [ XC, YC, ZC ], where the origin is the camera optical center, the XC axis is parallel to the X-axis of the image coordinate system, the YC axis is parallel to the Y-axis of the image coordinate system, and the ZC axis is parallel to the camera optical axis, i.e. the shooting direction of the camera lens.
4. Image coordinate system
The image coordinate system takes the center of the image as an origin, and the X axis and the Y axis are respectively parallel to two vertical sides of the image plane, and the coordinates of the image coordinate system are expressed as (X, Y).
5. World coordinate system
In a three-dimensional scene, the world coordinate system is not tied to any particular object: it is the coordinate system constructed by taking some fixed point in the scene as the origin, i.e., the coordinate system of the whole scene. The X axis of the world coordinate system points east, the Y axis points north, and the Z axis points toward the center of the earth.
6. Cross-over ratio
The intersection ratio, or intersection over union (IoU), is one of the evaluation indexes most commonly used for tasks such as target detection and semantic segmentation; it is the ratio of the intersection to the union of two regions. The intersection ratio is 1 when the two regions overlap completely and 0 when they do not overlap at all.
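For illustration, the intersection ratio of two axis-aligned rectangular regions can be computed as in the following Python sketch; the function name and the (x_min, y_min, x_max, y_max) rectangle representation are assumptions made here for illustration and are not prescribed by the application.

    def iou(rect_a, rect_b):
        # Intersection over union of two axis-aligned rectangles,
        # each given as (x_min, y_min, x_max, y_max).
        ax1, ay1, ax2, ay2 = rect_a
        bx1, by1, bx2, by2 = rect_b
        # Width and height of the overlapping part (0 if the rectangles do not touch).
        inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = inter_w * inter_h
        area_a = (ax2 - ax1) * (ay2 - ay1)
        area_b = (bx2 - bx1) * (by2 - by1)
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0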
The technical scheme of the embodiments of the present application relates to automatic driving and image processing technology. Automatic driving, also called unmanned driving, computer driving or wheeled mobile robotics, is a cutting-edge technology that relies on computers and artificial intelligence to complete complete, safe and effective driving without manual operation. In the 21st century, with the continuous increase in the number of automobile users, problems faced by road traffic such as congestion and safety accidents have become more serious. Supported by Internet-of-vehicles and artificial intelligence technology, automatic driving can coordinate travel routes and planned times, greatly improving travel efficiency and reducing energy consumption to a certain extent. Automatic driving can also help avoid potential safety hazards such as drunk driving and fatigued driving, reduce driver errors and improve safety. In the application scenario of automatic driving, sensors are needed to obtain a more complete, accurate and stable perception of the environment, which involves image processing.
Image processing is a technique in which images are analyzed by a computer to achieve a desired result. Image processing generally refers to digital image processing; a digital image is a large two-dimensional array obtained by shooting with equipment such as industrial cameras, video cameras and scanners, in which the elements of the array are called pixels and their values are called gray values. Image processing techniques generally comprise three parts: image compression; enhancement and restoration; and matching, description and recognition.
In the present application, at least one sensor is arranged on a vehicle, and the purpose of matching and fusing the acquired target projections is to provide an automatic driving service, which involves automatic driving technology; when determining the acquisition objects to be fused, the three-dimensional data is mapped to corresponding two-dimensional regions and the two-dimensional regions are matched, which involves image processing technology.
The following briefly describes the design concept of the embodiment of the present application:
In the application scenario of automatic driving, sensors are required to perceive the environment. Different types of sensors have different advantages and disadvantages. For example, a camera sensor can only sense attributes such as the existence, approximate distance and appearance of a target within its field of view, but cannot accurately sense the target's specific three-dimensional position; a radar sensor can sense the specific three-dimensional position and rough shape of a target, but cannot accurately sense specific appearance attributes such as the vehicle type, vehicle color or license plate content. It is therefore important to fuse the targets sensed by different types of sensors. However, when fusing the advantages of different classes of sensors, the matching between the different classes of sensors poses a challenge to the fusion technique. Existing fusion techniques place high requirements on the sensor configuration and are sensitive to the number, angles and similar aspects of the sensors. For example, a camera sensor is limited by camera imaging: there is a probability that occlusion exists between the acquired target images, and when the occlusion is severe, target B is completely occluded by target A. If the radar sensor senses target A and target B at the same time, then, because the camera sensor only collects data on target A at that moment and the three-dimensional positions of target A and target B in the radar data are very close, there is a probability that the target B collected by the radar sensor is erroneously matched with the target A collected by the camera sensor, so that the fused data is also wrong.
In view of this, the following embodiments of the present application are provided.
Three-dimensional data acquired by the two sensors to be fused at the same time is acquired, corresponding to at least one acquisition object for each sensor. Because part of the three-dimensional data of the two sensors may lie far apart, and to avoid missing matches for such data, the three-dimensional data can be projection-mapped into the same two-dimensional coordinate system to obtain the two-dimensional region corresponding to each acquisition object.
In order to determine the occlusion between the two-dimensional regions of the objects, the two-dimensional regions of the acquisition objects are layer-ordered according to the distance between each acquisition object and the corresponding sensor, so that the degree of occlusion of each two-dimensional region, i.e. its region integrity, can be calculated based on the layer ordering, and the matching degree of region integrity between the acquisition objects of the two sensors can then be determined from it.
At the same time, the overlapping degree of the two-dimensional regions between the acquisition objects of the two sensors can be calculated. By combining the overlapping degree with the matching degree of region integrity, when an acquisition object of one sensor overlaps strongly with several acquisition objects of the other sensor, the matching degree of region integrity helps to accurately determine the three-dimensional data to be fused; conversely, when the region integrity of an acquisition object of one sensor closely matches that of several acquisition objects of the other sensor, the overlapping degree is used. Therefore, even when the configuration of the two sensors is imperfect and acquisition objects are occluded, the three-dimensional data to be fused can be determined accurately, improving the accuracy of the multi-sensor fusion data and solving the problem that current methods are sensitive to the sensor configuration.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and embodiments of the present application and features of the embodiments may be combined with each other without conflict.
As shown in fig. 1, the flow of the matching fusion method of the target projection provided by the application is specifically as follows:
step 100: three-dimensional data of at least one acquisition object acquired by each of two sensors to be fused are acquired.
For example, before acquiring three-dimensional data acquired by each of two sensors to be fused, it is necessary to determine whether the two sensors are fusion-enabled sensors.
Specifically, the position information and angle information of each of the two sensors is acquired, and the acquisition range of each sensor is determined from that position and angle information; when the acquisition ranges of the two sensors have an overlapping portion, the two sensors are regarded as the two sensors to be fused. The two sensors may be of different classes, for example one camera sensor and one radar sensor; or they may be of the same class, for example two camera sensors or two radar sensors. The radar sensor may be a laser radar sensor, a millimeter-wave radar sensor or any other type of radar sensor, which the present application does not limit.
Specifically, as shown in fig. 2A, the acquisition ranges of sensor 1 and sensor 2 have an overlapping portion, so sensor 1 and sensor 2 are two sensors to be fused; as shown in fig. 2B, the acquisition ranges of sensor 3 and sensor 4 have no overlapping portion, so sensor 3 and sensor 4 are not two sensors to be fused.
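As an illustration of this overlap check, the following Python sketch models each acquisition range as a planar sector given by the sensor's position, heading, field of view and maximum range, and tests overlap approximately by sampling points of one sector and checking whether any of them falls inside the other; this data model and the sampling approximation are assumptions made for illustration only, not the method prescribed by the application.

    import math

    def point_in_sector(px, py, sx, sy, heading, fov, max_range):
        # True if point (px, py) lies inside the sector of the sensor at (sx, sy).
        dx, dy = px - sx, py - sy
        if math.hypot(dx, dy) > max_range:
            return False
        angle = math.atan2(dy, dx)
        diff = abs((angle - heading + math.pi) % (2 * math.pi) - math.pi)
        return diff <= fov / 2

    def ranges_overlap(sensor_a, sensor_b, samples=50):
        # Each sensor is a dict: {'x', 'y', 'heading', 'fov', 'range'}.
        # Sample points inside sensor_a's sector and test them against sensor_b's sector.
        for i in range(samples):
            ang = sensor_a['heading'] + sensor_a['fov'] * (i / (samples - 1) - 0.5)
            for j in range(1, samples + 1):
                r = sensor_a['range'] * j / samples
                px = sensor_a['x'] + r * math.cos(ang)
                py = sensor_a['y'] + r * math.sin(ang)
                if point_in_sector(px, py, sensor_b['x'], sensor_b['y'],
                                   sensor_b['heading'], sensor_b['fov'], sensor_b['range']):
                    return True
        return False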
In addition, the attribute information of the two sensors needs to be acquired to determine whether each sensor is a roadside sensor or a vehicle-mounted sensor, and at least one of the two sensors to be fused must be a vehicle-mounted sensor. Specifically, when one of the two sensors is a vehicle-mounted sensor and the other is a roadside sensor, the corresponding roadside sensor needs to be determined from the pose information of the vehicle on which the vehicle-mounted sensor is mounted. For example, if the pose information shows that the vehicle is on street A during a certain period of time, the roadside sensor to be fused during that period must also be on street A.
Specifically, a schematic flow chart of determining a corresponding road side sensor according to pose information of a vehicle is shown in fig. 3.
Step 301: and acquiring pose information of the vehicle and a corresponding positioning time point.
Specifically, pose information and corresponding positioning time points of a vehicle where the vehicle-mounted sensor is located are obtained. The pose information refers to a specific geographic position of the vehicle and the direction of the vehicle. The positioning time point refers to a specific time point corresponding to a specific geographic position of the vehicle.
Step 302: and determining a corresponding road side sensor according to the pose information of the vehicle and the corresponding positioning time point.
Specifically, after the pose information of the vehicle is obtained in step 301, a plurality of roadside sensors can be determined from the pose information and the corresponding positioning time points. For example, if the vehicle travels on street A during a certain period of time, the determined roadside sensors are the roadside sensors on street A; from these, a roadside sensor whose acquisition range partially overlaps that of the vehicle-mounted sensor of the vehicle is selected as the corresponding roadside sensor.
Illustratively, the case of a vehicle carrying 6 vehicle-mounted sensors is described in detail. As shown in fig. 4, vehicle 1 carries 6 on-board sensors, which may be radar sensors or camera sensors: sensor 1, sensor 2, sensor 3, sensor 4, sensor 5 and sensor 6. Sensor 1 and sensor 2 are located at the front of vehicle 1, sensor 3 and sensor 4 on the body of vehicle 1, and sensor 5 and sensor 6 at the rear of vehicle 1.
Specifically, the dotted-line portions in the figure represent the acquisition range of each sensor. As can be seen from fig. 4, the acquisition range of sensor 2 partly overlaps both that of sensor 1 and that of sensor 4, so sensor 2 and sensor 1 may be two sensors to be fused, and sensor 2 and sensor 4 may also be two sensors to be fused. The acquisition ranges of sensor 3 and sensor 5 also partly overlap, so they may also be two sensors to be fused. The acquisition range of sensor 6 does not overlap that of any other sensor, so sensor 6 is not a sensor to be fused.
For example, because different sensors use different acquisition time points, the acquisition time points of the two sensors need to be unified before their three-dimensional data can be fused. Therefore, before the three-dimensional data of at least one acquisition object acquired by the two sensors to be fused is obtained, the raw data sets collected by the two sensors are acquired. Each raw data set consists of the data subsets collected by the corresponding sensor at its own acquisition time points, and each data subset comprises the raw data of at least one acquisition object collected by the sensor at one time point. After the raw data sets are acquired, for one acquisition time point of one of the two sensors, the data subsets collected by the other sensor before and after that time point are obtained. Then, for each acquisition object in those data subsets, the raw data of the object before the time point and the raw data of the object after the time point are interpolated to obtain the interpolated data of the object at that time point. Finally, the raw data of the one sensor at the acquisition time point and the interpolated data of the other sensor at the same time point are used as the three-dimensional data of the at least one acquisition object acquired by the two sensors to be fused at that acquisition time point. The interpolation method is linear interpolation, and the interpolated data is three-dimensional data.
Specifically, two sensors to be fused are set to be a sensor 1 and a sensor 2, the sensor 1 collects data at three collection time points of 10ms, 15ms and 20ms, the sensor 2 collects data at three collection time points of 8ms, 12ms and 16ms, and the collection time points of the sensor 1 are taken as a reference to unify the collection time points.
As shown in fig. 5, a specific explanation will be given by taking an acquisition time point of 10ms of the sensor 1 as an example. The data subset of the sensor 2 before 10ms is the data subset corresponding to 8ms, and the data subset of the sensor 2 after 10ms is the data subset corresponding to 12 ms. The data subset corresponding to 8ms of the sensor 2 comprises three-dimensional data of the acquisition object a, three-dimensional data of the acquisition object B and three-dimensional data of the acquisition object C, and the data subset corresponding to 12ms of the sensor 2 also comprises three-dimensional data of the acquisition object a, three-dimensional data of the acquisition object B and three-dimensional data of the acquisition object C. For the acquisition object A, interpolation is carried out on three-dimensional data of the acquisition object A in 8ms and 12ms to obtain interpolation data of the acquisition object A in 10ms, and similarly, interpolation data of the acquisition object B and the acquisition object C in 10ms are obtained. Thus, three-dimensional data of the sensor 1 at 10ms and interpolation data of the sensor 2 at 10ms are used as three-dimensional data to be fused by the two sensors.
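A minimal Python sketch of this linear interpolation step is given below; it assumes each raw record carries a timestamp in milliseconds and a three-dimensional center-point position (other fields such as size and orientation angle could be interpolated in the same way), and the field names and example numbers are illustrative assumptions.

    def interpolate_position(rec_before, rec_after, t):
        # rec_before / rec_after: dicts with 't' (ms) and 'center' = (x, y, z),
        # acquired by the same sensor for the same acquisition object.
        t0, t1 = rec_before['t'], rec_after['t']
        alpha = (t - t0) / (t1 - t0)          # e.g. (10 - 8) / (12 - 8) = 0.5
        x0, y0, z0 = rec_before['center']
        x1, y1, z1 = rec_after['center']
        return (x0 + alpha * (x1 - x0),
                y0 + alpha * (y1 - y0),
                z0 + alpha * (z1 - z0))

    # Example: acquisition object A of sensor 2, interpolated to sensor 1's 10 ms time point.
    a_8ms  = {'t': 8,  'center': (20.0, 3.0, 0.5)}
    a_12ms = {'t': 12, 'center': (22.0, 3.2, 0.5)}
    a_10ms = interpolate_position(a_8ms, a_12ms, 10)   # -> (21.0, 3.1, 0.5)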
Step 110: when the two sensors are sensors of different types, respectively mapping the obtained three-dimensional data projections to a preset two-dimensional coordinate system to obtain corresponding two-dimensional areas.
For example, when the two sensors to be fused in step 100 are sensors of different types, the three-dimensional data obtained by each of the two sensors can be projection-mapped onto a preset two-dimensional coordinate system to obtain the two-dimensional region corresponding to each piece of three-dimensional data. Specifically, the preset two-dimensional coordinate system may be the image coordinate system of a picture, or a coordinate system with any point of the picture as the origin, for example a coordinate system with the upper-left corner of the picture as the origin, or one with the upper-right corner as the origin. The picture here refers to a blank image with fixed length and width.
Specifically, the three-dimensional data of each acquisition object acquired by a sensor is represented as a cuboid, and comprises the center-point coordinates, the length, width and height of the cuboid, the orientation angle, and the coordinates of the cuboid's eight vertices. The corresponding two-dimensional region is obtained by projection-mapping this three-dimensional data: the coordinates of the cuboid's eight vertices are projected onto the preset two-dimensional coordinate system to form an irregular figure, and the smallest rectangle that encloses the irregular figure is taken as the corresponding two-dimensional region. Specifically, as shown in fig. 6, the irregular figure ABCDE is obtained by projection-mapping the eight vertices of the cuboid corresponding to the three-dimensional data, and the smallest rectangle enclosing the irregular figure ABCDE is the corresponding two-dimensional region.
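The projection-mapping step can be sketched as follows; the helper project_point, which maps a three-dimensional point to coordinates in the preset two-dimensional coordinate system, depends on the sensor calibration and is assumed to be available rather than taken from the application.

    def cuboid_to_2d_region(vertices_3d, project_point):
        # vertices_3d: the eight cuboid vertices from one piece of three-dimensional data.
        # project_point: callable mapping a 3D point to (u, v) in the preset 2D coordinate system.
        pts = [project_point(v) for v in vertices_3d]   # irregular figure as in fig. 6
        us = [p[0] for p in pts]
        vs = [p[1] for p in pts]
        # Smallest axis-aligned rectangle enclosing the projected figure.
        return (min(us), min(vs), max(us), max(vs))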
Step 120: for the two sensors, the following operations are performed, respectively: at least one two-dimensional area obtained based on one sensor is subjected to layer sequencing according to the distance between a corresponding acquisition object and one sensor, and a corresponding sequencing result is obtained; and based on the sequencing result, respectively obtaining the respective region integrity of at least one two-dimensional region, wherein the region integrity represents: the degree of occlusion of the corresponding two-dimensional region.
Illustratively, after the two-dimensional regions are acquired in step 110, the two-dimensional region of each acquisition object can be layer-ordered according to the distance between the acquisition object and the corresponding sensor. In general, within the picture containing several two-dimensional regions, a closer two-dimensional region has a larger area and a farther one a smaller area; however, several two-dimensional regions may have similar areas, in which case it is difficult to judge their relative distances and occlusion relationships from area alone, so layer ordering is required.
Specifically, for the camera sensor of the two sensors, the three-dimensional data of each acquisition object acquired by the camera sensor lies in the camera coordinate system, so the distance between each acquisition object and the camera sensor can be determined from the center-point coordinates in its three-dimensional data. The two-dimensional regions of the acquisition objects of the camera sensor are then layer-ordered by distance, from small to large, i.e. from the first layer to the Q-th layer, where Q is the number of two-dimensional regions corresponding to the camera sensor. The closer the distance, the earlier the layer; the farther the distance, the later the layer. After the ordering result is obtained, the region integrity of each two-dimensional region can be calculated from the layer it occupies.
Likewise, for the radar sensor of the two sensors, the three-dimensional data of each acquisition object acquired by the radar sensor lies in the radar coordinate system, so the distance between each acquisition object and the radar sensor can be determined from the center-point coordinates in its three-dimensional data, and the two-dimensional regions of the acquisition objects of the radar sensor are layer-ordered by distance. The ordering manner is the same as for the camera sensor and is not repeated here.
Specifically, when a two-dimensional region is located on the first layer, no other two-dimensional region occludes it, so its region integrity is 1. When a two-dimensional region is not located on the first layer, its region integrity is determined from the intersection ratio between it and the two-dimensional region of the previous layer. The intersection ratio is 1 when the two regions overlap completely and 0 when they do not overlap at all; the smaller the intersection ratio, the less the two-dimensional region is occluded and the higher its region integrity. In addition, the region integrity of a two-dimensional region can also be expressed in other ways based on the intersection ratio, which the present application does not limit.
Specifically, suppose the camera sensor acquires three-dimensional data of 4 acquisition objects, and the ordering result after the 4 pieces of three-dimensional data are mapped into 4 two-dimensional regions is as shown in fig. 7: two-dimensional region A is on the first layer, two-dimensional region B on the second layer, two-dimensional region C on the third layer, and two-dimensional region D on the fourth layer. Therefore, two-dimensional region A is not occluded and its region integrity is 1; two-dimensional region B is occluded by two-dimensional region A, and its region integrity is 1 − (A ∩ B)/(A ∪ B); two-dimensional region C is occluded by two-dimensional region B, and its region integrity is 1 − (B ∩ C)/(B ∪ C); two-dimensional region D is occluded by two-dimensional region C, and its region integrity is 1 − (C ∩ D)/(C ∪ D).
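The layer ordering and region-integrity computation can be sketched in Python as follows, reusing the iou sketch given earlier; each detection is assumed to carry its projected two-dimensional rectangle and the distance of its acquisition object to the sensor, and all field names are illustrative.

    def layer_sort_and_integrity(detections):
        # detections: list of dicts with 'rect' = (x_min, y_min, x_max, y_max)
        # and 'distance' (distance of the acquisition object to this sensor).
        ordered = sorted(detections, key=lambda d: d['distance'])   # layer 1 = closest
        for layer, det in enumerate(ordered, start=1):
            det['layer'] = layer
            if layer == 1:
                det['integrity'] = 1.0                 # first layer: not occluded
            else:
                prev = ordered[layer - 2]              # two-dimensional region of the previous layer
                det['integrity'] = 1.0 - iou(prev['rect'], det['rect'])
        return ordered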
Step 130: for at least one two-dimensional region obtained based on one of the two sensors, the following operations are performed, respectively: and acquiring a matching two-dimensional region with the overlapping degree of the matching two-dimensional region being larger than a first threshold and the matching degree of the region integrity being larger than a second threshold, and fusing three-dimensional data corresponding to each of the two-dimensional region and the matching two-dimensional region, wherein one matching two-dimensional region is acquired based on another sensor.
Illustratively, after the two-dimensional regions of each sensor and their region integrities are obtained in step 120, the overlapping degree between two-dimensional regions of the two sensors and the matching degree of their region integrity can be obtained. The overlapping degree of two two-dimensional regions is determined by their intersection ratio.
Specifically, as shown in fig. 8, let two-dimensional region M be a two-dimensional region of the camera sensor and two-dimensional region N a two-dimensional region of the radar sensor, with the four corner points of region M at (1, 1), (5, 1), (1, 3) and (5, 3), and the four corner points of region N at (3, 2), (7, 2), (3, 4) and (7, 4). The intersection of region M and region N is 2 ((5 − 3) × (3 − 2)), and their union is 14 ((5 − 1) × (3 − 1) + (7 − 3) × (4 − 2) − 2), so the intersection ratio of region M and region N is 2/14, i.e. their overlapping degree is 2/14.
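Using the iou sketch given earlier, this example can be reproduced as follows:

    region_m = (1, 1, 5, 3)          # two-dimensional region M of the camera sensor
    region_n = (3, 2, 7, 4)          # two-dimensional region N of the radar sensor
    print(iou(region_m, region_n))   # 2 / 14, i.e. about 0.143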
Specifically, the matching degree of the region integrity of two two-dimensional regions can be determined in various ways, as long as closer region-integrity values yield a higher matching degree and more divergent values yield a lower one. Illustratively, the matching degree of region integrity may be calculated as follows: if the region integrity of one two-dimensional region is 1 and the region integrity of the other is also 1, the matching degree of their region integrity is 100%; if the region integrity of one two-dimensional region is 1 and that of the other is 3/5, the matching degree may be 60% ((3/5)/1). Many other calculation manners are possible and are not enumerated here.
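One possible formula for the matching degree of region integrity, consistent with the 100% and 60% examples above, is the ratio of the smaller region integrity to the larger one; this is only one of the calculation manners the description allows.

    def integrity_match(integrity_a, integrity_b):
        # 1.0 vs 1.0 -> 1.0 (100%); 1.0 vs 3/5 -> 0.6 (60%).
        if integrity_a == 0 or integrity_b == 0:
            return 0.0
        return min(integrity_a, integrity_b) / max(integrity_a, integrity_b)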
Determining, for each two-dimensional region of one of the two sensors, the matching two-dimensional region in the other sensor can be implemented in two ways:
in the first embodiment, for one of the two sensors (illustrated as a camera sensor), the degree of overlap of one two-dimensional region a in the camera sensor and all the two-dimensional regions in the radar sensor and the degree of matching of the region integrity of the two-dimensional regions are calculated and determined. When the overlapping degree of the two-dimensional areas is larger than a first threshold and the matching degree of the integrity of the areas is larger than a second threshold, the two-dimensional areas in the radar sensor are the matching two-dimensional areas of the two-dimensional area A in the camera sensor. Wherein the first threshold and the second threshold are determined based on empirical values.
Specifically, as shown in fig. 9, two-dimensional regions A1, A2 and A3 are two-dimensional regions of the camera sensor, and two-dimensional regions B1, B2 and B3 are two-dimensional regions of the radar sensor. Suppose the two-dimensional region of the radar sensor corresponding to region A1 is to be determined, the first threshold is 1/5, and the second threshold is 5/6. The four corner points of region A1 are (1, 2), (6, 2), (1, 6) and (6, 6); those of region B1 are (2, 1), (7, 1), (2, 5) and (7, 5); those of region B2 are (3, 4), (8, 4), (3, 7) and (8, 7); and those of region B3 are (7, 6), (10, 6), (7, 9) and (10, 9). Therefore, the overlapping degree of A1 and B1 is 3/7 (12/28), the overlapping degree of A1 and B2 is 6/29, and the overlapping degree of A1 and B3 is 0. Meanwhile, the matching degrees of region integrity must also be calculated. Region A1 is on the first layer, so its region integrity is 1; region B1 is on the first layer, so its region integrity is 1; region B2 is on the second layer, with its region integrity set to 3/4; region B3 is on the third layer, with its region integrity set to 7/8. The matching degree of region integrity of A1 and B1 is therefore 100%, that of A1 and B2 is 75%, and that of A1 and B3 is 87.5%.
Therefore, the overlapping degree of region A1 and region B2 is greater than the first threshold, but their matching degree of region integrity is less than the second threshold; the matching degree of region integrity of region A1 and region B3 is greater than the second threshold, but their overlapping degree is less than the first threshold; the overlapping degree of region A1 and region B1 is greater than the first threshold and their matching degree of region integrity is greater than the second threshold, so region B1 is the matching two-dimensional region of region A1.
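The first embodiment can be summarised by the following sketch, which, for one two-dimensional region of one sensor, scans all two-dimensional regions of the other sensor and returns the first one satisfying both thresholds; it reuses the iou and integrity_match sketches above, and the default threshold values are simply the ones used in the example.

    def find_matching_region(region, candidates, overlap_thresh=1/5, integrity_thresh=5/6):
        # region / candidates: dicts with 'rect' and 'integrity', as produced by the earlier sketches;
        # candidates are the two-dimensional regions obtained from the other sensor.
        for cand in candidates:
            overlap = iou(region['rect'], cand['rect'])
            match = integrity_match(region['integrity'], cand['integrity'])
            if overlap > overlap_thresh and match > integrity_thresh:
                return cand                          # matching two-dimensional region
        return None                                  # no match, so no fusion for this region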
In the second embodiment, for one of the two sensors (again taking the camera sensor as an example), the overlapping degree and the matching degree of region integrity between one two-dimensional region A of the camera sensor and the reference two-dimensional region of the radar sensor are calculated based on the layer-ordering result. The reference two-dimensional region is the two-dimensional region of the radar sensor that occupies the same position in the layer ordering as region A does in the camera sensor. For example, if region A is third in the camera sensor's layer ordering, its reference two-dimensional region is the region that is also third in the radar sensor's layer ordering. If the overlapping degree between the two is greater than the first threshold and the matching degree of region integrity is greater than the second threshold, the reference two-dimensional region is the matching two-dimensional region of region A. If the overlapping degree of region A and the reference two-dimensional region is not greater than the first threshold, or the matching degree of region integrity is not greater than the second threshold, then the overlapping degree and the matching degree of region integrity between region A and the other two-dimensional regions of the radar sensor (those other than the reference two-dimensional region) are calculated, and the other two-dimensional region whose overlapping degree is greater than the first threshold and whose matching degree of region integrity is greater than the second threshold is taken as the matching two-dimensional region.
Among these other two-dimensional regions, the ones whose layer ordering is close to that of the reference two-dimensional region, that is, close to the layer ordering of the two-dimensional region A, may be checked first, because a two-dimensional region in the radar sensor whose layer ordering is closer to that of the two-dimensional region A is more likely to be the matching two-dimensional region of the two-dimensional region A than the remaining two-dimensional regions. A sketch of this search order is given below.
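Continuing the previous sketch (and reusing its illustrative iou() and integrity_match() helpers), the search order of this second embodiment might look roughly as follows; the tuple layout of the candidates is an assumption made only for illustration.

```python
# Sketch of the second embodiment's search order. Each candidate from the other
# sensor is a tuple (layer_index, rectangle, region_integrity); iou() and
# integrity_match() are the illustrative helpers from the previous sketch.

def find_match_by_layer(region, integrity, layer_idx, candidates,
                        t_overlap=1/5, t_integrity=5/6):
    # Check the reference region (same layer index) first, then the remaining
    # regions in order of increasing layer-index distance from the query region.
    ordered = sorted(candidates, key=lambda c: abs(c[0] - layer_idx))
    for _, rect, comp in ordered:
        if iou(region, rect) > t_overlap and integrity_match(integrity, comp) > t_integrity:
            return rect
    return None
```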
After the matching two-dimensional area corresponding to the two-dimensional area A of the camera sensor is determined through the first embodiment or the second embodiment, it can be concluded that the two areas correspond to the same acquisition object, and the three-dimensional data corresponding to each of them are then fused to obtain more complete three-dimensional data for that acquisition object.
For example, if it is determined that a detection from the camera sensor and a detection from the radar sensor match the same acquisition object (for example, acquisition object A), the three-dimensional data of acquisition object A from the two sensors are fused. For acquisition object A, the camera sensor acquires the center point coordinate of acquisition object A, the length, width and height of the cuboid corresponding to acquisition object A, and the orientation angle of acquisition object A; the radar sensor likewise acquires the center point coordinate, the length, width and height of the corresponding cuboid, and the orientation angle of the corresponding cuboid. In general, the data acquired by the radar sensor are more accurate, so the radar sensor's center point coordinate, cuboid length, width and height, and cuboid orientation angle for acquisition object A are preferred. Besides three-dimensional data, the camera sensor also acquires two-dimensional data of acquisition object A, such as its color, texture and shape. The three-dimensional data of acquisition object A acquired by the radar sensor are therefore fused with the two-dimensional data acquired by the camera sensor, yielding the three-dimensional center point coordinate, the length, width, height and orientation angle of the corresponding cuboid, together with two-dimensional information such as color, texture and shape. A sketch of this fusion is given below.
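The following minimal sketch illustrates this camera-radar fusion under the assumption that each matched detection is a dictionary; all field names are illustrative and are not prescribed by the present application.

```python
# Illustrative sketch of fusing matched detections from a radar and a camera:
# the radar's geometric fields are kept, the camera's appearance fields are
# attached. Field names are assumptions, not the patent's data format.

def fuse_camera_radar(radar_obj: dict, camera_obj: dict) -> dict:
    fused = {
        # geometry: the radar is assumed to be the more accurate source
        "center": radar_obj["center"],      # (x, y, z)
        "size": radar_obj["size"],          # (length, width, height)
        "heading": radar_obj["heading"],    # orientation angle
    }
    # appearance: only the camera can provide these
    for key in ("color", "texture", "shape", "license_plate"):
        if key in camera_obj:
            fused[key] = camera_obj[key]
    return fused
```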
In addition, the present application can fuse not only sensors of different types but also sensors of the same type.
For example, after the two sensors to be fused are determined in step 100 and their acquisition time points are unified, when the two sensors are of the same type, the three-dimensional data of the two sensors are respectively converted into a preset world three-dimensional coordinate system to obtain the corresponding converted three-dimensional data.
Illustratively, for the at least one piece of converted three-dimensional data obtained by one of the two sensors, the following operations are performed respectively: according to the coordinate information in one piece of converted three-dimensional data, the matching three-dimensional data whose distance from that coordinate information is smaller than a third threshold is acquired, and the converted three-dimensional data is fused with the corresponding matching three-dimensional data, where one piece of matching three-dimensional data is converted three-dimensional data obtained by the other sensor. Specifically, the third threshold is determined from empirical values: on highways, urban expressways and similar scenes, vehicles travel faster, so the third threshold is correspondingly larger; on ordinary urban roads, roads inside residential areas and similar scenes, vehicles travel more slowly, so the third threshold is correspondingly smaller.
For example, when the two sensors are both radar sensors, fusing their three-dimensional data means fusing the center point coordinate of the three-dimensional data, the length, width and height of the cuboid corresponding to the three-dimensional data, and the orientation angle of the acquisition object corresponding to the three-dimensional data. Specifically, the values to keep may be selected according to the accuracy of the three-dimensional data acquired by each of the two sensors. For example, if the first of the two sensors is more accurate for the center point coordinate, the center point coordinate from the first sensor is selected as the fused center point coordinate; if the second sensor is more accurate for the length, width and height of the corresponding cuboid, the length, width and height from the second sensor are selected as the fused values. After the fused three-dimensional data is obtained, the three-dimensional data of the two sensors is updated, and the fused three-dimensional data is taken as the three-dimensional data acquired by the two sensors.
For example, when the two sensors are both camera sensors, fusing their three-dimensional data likewise means fusing the center point coordinate, the length, width and height of the corresponding cuboid, and the orientation angle of the corresponding acquisition object, in the same way as for two radar sensors, which is not repeated here. In addition, since a camera sensor also acquires two-dimensional data, the fusion for camera sensors further includes fusion of the two-dimensional data. For example, if the two-dimensional data acquired by one of the two sensors includes the color and lines of the acquisition object and the two-dimensional data acquired by the other sensor includes the license plate number, the fused two-dimensional data comprises the color, the lines and the license plate number. After the fusion, the two-dimensional data and three-dimensional data of the two sensors are updated, and the fused two-dimensional and three-dimensional data are taken as the data acquired by the two sensors. A sketch of this same-type fusion is given below.
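A minimal sketch of this same-type fusion is given below, assuming each detection is a dictionary and that an accuracy ranking per attribute is available; the field names and the accuracy map are illustrative assumptions only.

```python
# Sketch of same-type fusion: each geometric attribute is taken from whichever
# sensor is considered more accurate for that attribute, and two-dimensional
# appearance attributes (camera sensors only) are merged as a union.

def fuse_same_type(obj1: dict, obj2: dict, more_accurate: dict) -> dict:
    """more_accurate maps an attribute name to 1 or 2 (which sensor to trust)."""
    fused = {}
    for attr in ("center", "size", "heading"):
        fused[attr] = obj1[attr] if more_accurate.get(attr, 1) == 1 else obj2[attr]
    # union of appearance attributes, if any (present for camera sensors)
    for src in (obj1, obj2):
        for key, value in src.get("appearance", {}).items():
            fused.setdefault("appearance", {})[key] = value
    return fused
```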
Specifically, as shown in fig. 10, suppose the third threshold is 3, one piece of converted three-dimensional data obtained by the sensor 1 is three-dimensional data A, one piece of converted three-dimensional data obtained by the sensor 2 is three-dimensional data B, and the sensor 1 and the sensor 2 are both camera sensors or both radar sensors. The center point coordinate of the three-dimensional data A is (3, 4) and that of the three-dimensional data B is (1, 4). The distance between the two coordinates is therefore 2, which is smaller than the third threshold of 3, so the two pieces of three-dimensional data can be fused.
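The distance test of this example can be sketched as follows, assuming plain Euclidean distance over the given center coordinates; the function name and the default threshold are illustrative only.

```python
# Sketch of the distance test from the figure-10 example: two converted detections
# are fused when the distance between their center coordinates is below the third
# threshold.

import math

def within_third_threshold(center_a, center_b, third_threshold=3.0):
    return math.dist(center_a, center_b) < third_threshold

print(within_third_threshold((3, 4), (1, 4)))   # distance 2 < 3, so A and B are fused
```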
By way of example, after the data of different types of sensors are matched and fused and/or the data of sensors of the same type are matched and fused, the data directly acquired by the sensors may still contain certain deviations. To make the motion trajectory of the acquisition object more coherent, a 3D target detection algorithm may therefore be used to optimize the three-dimensional data corresponding to the acquisition object, and the deviations in some of the sensors are eliminated through the filtering processing of the 3D target detection algorithm. The 3D target detection algorithm may be the AB3DMOT algorithm or any other algorithm, which is not limited by the present application. In addition, the acceleration and the angular velocity of the acquisition object can be further obtained with the 3D target detection algorithm, making the data of the acquisition object richer.
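As an illustration only, and explicitly not the AB3DMOT algorithm itself, the following sketch applies a constant-velocity Kalman filter to one coordinate of an acquisition object's trajectory; it merely shows the kind of filtering that smooths a track and also yields a velocity estimate. All matrices, noise values and names are assumptions.

```python
# Minimal constant-velocity Kalman filter on a single coordinate of one
# acquisition object, to illustrate trajectory smoothing with a velocity estimate.

import numpy as np

def smooth_track(positions, dt=0.1, q=1e-2, r=1e-1):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
    H = np.array([[1.0, 0.0]])              # only the position is observed
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    x = np.array([[positions[0]], [0.0]])   # state: [position, velocity]
    P = np.eye(2)
    smoothed = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return smoothed
```

In practice the same recursion would be applied jointly to the three-dimensional center coordinates, size and heading, as defined by whichever tracking algorithm is actually chosen.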
For example, after the data of the acquisition object is optimized with the 3D target detection algorithm, the corresponding data can be output according to the user's requirements; the output may be the three-dimensional data of the acquisition object, the two-dimensional data of the acquisition object, or the data of the different sensors corresponding to the acquisition object. In other words, any data produced in the data fusion process can be output on demand.
The division of units in the embodiments of the present application is schematic and is merely a division by logical function; another division manner may be used in actual implementation. In addition, the functional units in the embodiments of the present application may be integrated in one processor, may exist separately and physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units.
The embodiment of the present application further provides a matching fusion device 1100 for target projection, as shown in fig. 11, where the matching fusion device 1100 for target projection includes: a processing module 1110 and a transceiver module 1120.
The transceiver module 1120 may include a receiving module and a transmitting module. The processing module 1110 is configured to control and manage the actions of the matching fusion device 1100 for target projection. The transceiver module 1120 is used to support communication of the matched fusion device 1100 of the target projection with other devices. Optionally, the matching fusion device 1100 of the target projection may further include a storage module for storing program code and data of the matching fusion device 1100 of the target projection.
Alternatively, the various modules in the matched fusion device 1100 of the target projection may be implemented in software.
Alternatively, the processing module 1110 may be a processor or a controller, and the transceiver module 1120 may be a communication interface, a transceiver circuit, or the like, where "communication interface" is a general term and, in a specific implementation, may include multiple interfaces; the storage module may be a memory.
In one possible implementation, the matching fusion apparatus 1100 of the target projection is applicable to a radio access controller device or a radio access point device;
the transceiver module 1120 is configured to acquire three-dimensional data of at least one acquisition object acquired by each of the two sensors to be fused;
the processing module 1110 is configured to map each obtained piece of three-dimensional data by projection to a preset two-dimensional coordinate system when the two sensors are sensors of different types, so as to obtain a corresponding two-dimensional region; for the two sensors, the following operations are performed respectively: at least one two-dimensional region obtained based on one sensor is subjected to layer ordering according to the distance between the corresponding acquisition object and that sensor, and a corresponding ordering result is obtained; based on the ordering result, the respective region integrity of the at least one two-dimensional region is obtained, where the region integrity represents the degree of occlusion of the corresponding two-dimensional region; for at least one two-dimensional region obtained based on one of the two sensors, the following operations are performed respectively: a matching two-dimensional region whose degree of overlap with the two-dimensional region is greater than a first threshold and whose matching degree of region integrity is greater than a second threshold is acquired, and the three-dimensional data corresponding to the two-dimensional region and the matching two-dimensional region respectively are fused; wherein one matching two-dimensional region is obtained based on the other sensor.
The embodiment of the present application further provides another matching fusion apparatus 1200 for target projection, where the matching fusion apparatus 1200 for target projection may be a terminal device or a chip system inside the terminal device, as shown in fig. 12, including:
a communication interface 1201, a memory 1202 and a processor 1203;
wherein, the matching fusion device 1200 of the target projection communicates with other devices, such as receiving and sending messages, through the communication interface 1201; a memory 1202 for storing program instructions; the processor 1203 is configured to call up program instructions stored in the memory 1202 and execute the method according to the obtained program.
The processor 1203 calls the communication interface 1201 and the program instructions stored in the memory 1202 to execute:
acquiring three-dimensional data of at least one acquisition object acquired by each of two sensors to be fused;
when the two sensors are sensors of different types, respectively mapping each obtained three-dimensional data projection to a preset two-dimensional coordinate system to obtain a corresponding two-dimensional region;
for the two sensors, the following operations are performed respectively: at least one two-dimensional region obtained based on one sensor is subjected to layer ordering according to the distance between the corresponding acquisition object and that sensor, and a corresponding ordering result is obtained; based on the ordering result, the respective region integrity of the at least one two-dimensional region is obtained, where the region integrity represents the degree of occlusion of the corresponding two-dimensional region;
For at least one two-dimensional region obtained based on one of the two sensors, the following operations are performed respectively: a matching two-dimensional region whose degree of overlap with the two-dimensional region is greater than a first threshold and whose matching degree of region integrity is greater than a second threshold is acquired, and the three-dimensional data corresponding to the two-dimensional region and the matching two-dimensional region respectively are fused; wherein one matching two-dimensional region is obtained based on the other sensor.
The specific connection medium between the communication interface 1201, the memory 1202 and the processor 1203 is not limited in the above embodiments of the present application; it may be, for example, a bus, and the bus may be classified into an address bus, a data bus, a control bus, and the like.
In the embodiment of the present application, the processor may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution.
In the embodiment of the present application, the memory may be a nonvolatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory such as a random-access memory (RAM). The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
The embodiment of the present application also provides a computer readable storage medium including program code for causing a computer to execute the steps of the method provided in the embodiment of the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A method for matching fusion of target projections, the method comprising:
Acquiring three-dimensional data of at least one acquisition object acquired by each of two sensors to be fused;
when the two sensors are sensors of different types, respectively mapping the obtained three-dimensional data projections to a preset two-dimensional coordinate system to obtain corresponding two-dimensional areas;
for the two sensors, the following operations are performed respectively: at least one two-dimensional area obtained based on one sensor is subjected to layer ordering according to the distance between a corresponding acquisition object and the one sensor, and a corresponding ordering result is obtained; and based on the sorting result, respectively obtaining the respective region integrity of the at least one two-dimensional region, wherein the region integrity represents: the degree of occlusion of the corresponding two-dimensional region;
for at least one two-dimensional area obtained based on one of the two sensors, the following operations are performed, respectively: acquiring a matching two-dimensional area associated with a two-dimensional area, and fusing three-dimensional data corresponding to each of the two-dimensional area and the matching two-dimensional area; wherein the one matching two-dimensional area is obtained based on another sensor, and the following condition is satisfied: the overlapping degree of the one two-dimensional area and the one matching two-dimensional area is larger than a first threshold value, and the matching degree between the area integrity of the one two-dimensional area and the area integrity of the one matching two-dimensional area is larger than a second threshold value;
Wherein the obtaining the respective region integrity of the at least one two-dimensional region based on the sorting result includes:
when the two-dimensional area is positioned on the first layer, the area integrity of the two-dimensional area is recorded as 1; and when the two-dimensional area is not positioned on the first layer, determining a previous layer of the current layer on which the two-dimensional area is positioned, and determining the area integrity of the two-dimensional area according to the intersection ratio of the two-dimensional area of the previous layer and the two-dimensional area.
2. The method of claim 1, further comprising, prior to the acquiring three-dimensional data of at least one acquisition object acquired by each of the two sensors to be fused:
respectively acquiring the position information and the angle information of each of the two sensors;
based on the obtained position information and the angle information, respectively determining the respective acquisition ranges of the two sensors;
and when an overlapping part exists between the acquired acquisition ranges, the two sensors are used as the two sensors to be fused.
3. The method of claim 2, further comprising, prior to separately acquiring the position information and the angle information of each of the two sensors:
Respectively acquiring attribute information of each of the two sensors, wherein the attribute information represents: the corresponding sensor is a road side sensor or a vehicle-mounted sensor;
and determining at least one sensor of the two sensors as an on-vehicle sensor based on the obtained attribute information.
4. The method of claim 1, wherein prior to acquiring three-dimensional data of at least one acquisition object acquired by each of the two sensors to be fused, further comprising:
acquiring original data sets acquired by the two sensors respectively; wherein each of the raw data sets comprises: data subsets acquired by corresponding sensors at each acquisition time point respectively, wherein each data subset comprises: raw data of each of the at least one acquisition object obtained at one acquisition time point; the two sensors use different acquisition time points;
for each acquisition time point used by one of the two sensors, the following operations are performed:
acquiring, for one acquisition time point used by one sensor, each subset of data acquired by the other sensor before and after the one acquisition time point;
for the at least one acquisition object, the following operations are performed: interpolation processing is carried out on the original data of one acquisition object before and after the acquisition time point, so that interpolation data of the one acquisition object at the acquisition time point is obtained;
And respectively obtaining the three-dimensional data of the at least one acquisition object acquired by each of the two sensors to be fused at one acquisition time point according to the original data of one sensor at the one acquisition time point and the interpolation data of the other sensor at the one acquisition time point.
5. The method as recited in claim 4, further comprising:
when the two sensors are sensors of the same type, respectively converting the obtained three-dimensional data into a preset world three-dimensional coordinate system to obtain corresponding converted three-dimensional data;
for at least one converted three-dimensional data obtained by one of the two sensors, the following operations are performed: according to the coordinate information in one piece of converted three-dimensional data, obtaining one piece of matched three-dimensional data with the distance from the coordinate information of the one piece of converted three-dimensional data being smaller than a third threshold value, and fusing the one piece of converted three-dimensional data and the one piece of matched three-dimensional data; wherein the one matching three-dimensional data is converted three-dimensional data obtained by another sensor.
6. The method according to any of claims 1-4, wherein said obtaining a matching two-dimensional region having a degree of overlap with a two-dimensional region greater than a first threshold and a degree of match of region integrity greater than a second threshold, specifically comprises:
Based on the sequencing result, acquiring the overlapping degree of a two-dimensional region and a reference two-dimensional region and the matching degree of the region integrity; the reference two-dimensional area is a two-dimensional area which is in the same layer sequence as the two-dimensional area in the other sensor;
when the overlapping degree is greater than the first threshold and the matching degree of the region integrity is greater than the second threshold, determining the reference two-dimensional region as the matching two-dimensional region;
when the overlapping degree is not greater than the first threshold value or the matching degree of the region integrity is not greater than the second threshold value, acquiring the overlapping degree of the two-dimensional region and other two-dimensional regions and the matching degree of the region integrity, wherein the other two-dimensional regions are two-dimensional regions except the reference two-dimensional region in the other sensor; and taking one other two-dimensional region with the overlapping degree of the two-dimensional region being larger than the first threshold and the matching degree of the region integrity being larger than the second threshold as the one matching two-dimensional region.
7. The method of claim 6, wherein the degree of overlap is obtained from an intersection ratio of the one two-dimensional region and the two-dimensional region to be matched; the two-dimensional region to be matched comprises the reference two-dimensional region and the other two-dimensional regions.
8. A matched fusion device for projection of an object, comprising:
the receiving and transmitting unit is used for acquiring three-dimensional data of at least one acquisition object acquired by each of the two sensors to be fused;
the processing unit is used for respectively mapping each obtained three-dimensional data projection to a preset two-dimensional coordinate system when the two sensors are sensors of different types, so as to obtain a corresponding two-dimensional area; for the two sensors, the following operations are performed respectively: at least one two-dimensional area obtained based on one sensor is subjected to layer ordering according to the distance between a corresponding acquisition object and the one sensor, and a corresponding ordering result is obtained; and based on the ordering result, respectively obtaining the respective region integrity of the at least one two-dimensional region, wherein the region integrity represents: the degree of occlusion of the corresponding two-dimensional region; for at least one two-dimensional area obtained based on one of the two sensors, the following operations are performed, respectively: acquiring a matching two-dimensional area associated with one two-dimensional area, and fusing three-dimensional data corresponding to the two-dimensional area and the matching two-dimensional area respectively; wherein the one matching two-dimensional area is obtained based on another sensor, and the following condition is satisfied: the overlapping degree of the one two-dimensional area and the one matching two-dimensional area is larger than a first threshold value, and the matching degree between the area integrity of the one two-dimensional area and the area integrity of the one matching two-dimensional area is larger than a second threshold value;
Wherein the obtaining the respective region integrity of the at least one two-dimensional region based on the sorting result includes:
when the two-dimensional area is positioned on the first layer, the area integrity of the two-dimensional area is recorded as 1; and when the two-dimensional area is not positioned on the first layer, determining a previous layer of the current layer on which the two-dimensional area is positioned, and determining the area integrity of the two-dimensional area according to the intersection ratio of the two-dimensional area of the previous layer and the two-dimensional area.
9. A computer readable non-volatile storage medium, characterized in that the computer readable non-volatile storage medium stores a program which, when run on a computer, causes the computer to implement the method of any one of claims 1 to 7.
10. A computer device, comprising:
a memory for storing a computer program;
a processor for invoking a computer program stored in said memory, performing the method according to any of claims 1 to 7 in accordance with the obtained program.
CN202311101947.7A 2023-08-30 2023-08-30 Matching fusion method and related device for target projection Active CN116821854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311101947.7A CN116821854B (en) 2023-08-30 2023-08-30 Matching fusion method and related device for target projection

Publications (2)

Publication Number Publication Date
CN116821854A (en) 2023-09-29
CN116821854B (en) 2023-12-08

Family

ID=88124322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311101947.7A Active CN116821854B (en) 2023-08-30 2023-08-30 Matching fusion method and related device for target projection

Country Status (1)

Country Link
CN (1) CN116821854B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110244322A (en) * 2019-06-28 2019-09-17 东南大学 Pavement construction robot environment sensory perceptual system and method based on Multiple Source Sensor
CN110371108A (en) * 2019-06-14 2019-10-25 浙江零跑科技有限公司 Cartborne ultrasound wave radar and vehicle-mounted viewing system fusion method
CN110415342A (en) * 2019-08-02 2019-11-05 深圳市唯特视科技有限公司 A kind of three-dimensional point cloud reconstructing device and method based on more merge sensors
WO2020248614A1 (en) * 2019-06-10 2020-12-17 商汤集团有限公司 Map generation method, drive control method and apparatus, electronic equipment and system
CN116229408A (en) * 2022-11-22 2023-06-06 重庆邮电大学 Target identification method for fusing image information and laser radar point cloud information
CN116246142A (en) * 2023-04-07 2023-06-09 浙江大学 Three-dimensional scene perception method for multi-sensor data fusion requirement

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112243698B (en) * 2020-10-22 2021-08-13 安徽农业大学 Automatic walnut picking and collecting method based on multi-sensor fusion technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Improving the Performance of 3D Shape Measurement of Moving Objects by Fringe Projection and Data Fusion; Chengpu Duan et al.; IEEE Access, Vol. 9, pp. 34682-34691 *
Research on Mapping and Localization of Unmanned Vehicles Based on Multi-Sensor Fusion; Ma Zhenqiang; China Master's Theses Full-text Database, No. 02, pp. 1-107 *

Also Published As

Publication number Publication date
CN116821854A (en) 2023-09-29

Similar Documents

Publication Publication Date Title
US11367291B2 (en) Traffic signal display estimation system
CN112180373B (en) Multi-sensor fusion intelligent parking system and method
CN111595357B (en) Visual interface display method and device, electronic equipment and storage medium
US11282164B2 (en) Depth-guided video inpainting for autonomous driving
CN111209956A (en) Sensor data fusion method, and vehicle environment map generation method and system
US20220414917A1 (en) Method and apparatus for obtaining 3d information of vehicle
US11699235B2 (en) Way to generate tight 2D bounding boxes for autonomous driving labeling
CN109583312A (en) Lane detection method, apparatus, equipment and storage medium
JP2021197009A (en) Risk determination system and risk determination program
CN116821854B (en) Matching fusion method and related device for target projection
CN113895429A (en) Automatic parking method, system, terminal and storage medium
CN113420714A (en) Collected image reporting method and device and electronic equipment
CN115661014A (en) Point cloud data processing method and device, electronic equipment and storage medium
CN116137655A (en) Intelligent vehicle system and control logic for surround view enhancement
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
CN115346193A (en) Parking space detection method and tracking method thereof, parking space detection device, parking space detection equipment and computer readable storage medium
CN114141055B (en) Parking space detection device and method of intelligent parking system
US20240144491A1 (en) Object tracking device
CN114842458B (en) Obstacle detection method, obstacle detection device, vehicle, and storage medium
JP7425223B2 (en) Object tracking device and object tracking method
Schneider et al. An evaluation framework for stereo-based driver assistance
US20230252638A1 (en) Systems and methods for panoptic segmentation of images for autonomous driving
CN116682091A (en) Obstacle sensing method and device for automatic driving vehicle
CN114877903A (en) Lane-level path planning method, system and equipment fusing traffic live-action information and storage medium
CN117949004A (en) Navigation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant