CN109859254B - Method and device for sending information in automatic driving - Google Patents


Info

Publication number
CN109859254B
Authority
CN
China
Prior art keywords
preset
matching
image
determining
matching data
Prior art date
Legal status
Active
Application number
CN201910151676.3A
Other languages
Chinese (zh)
Other versions
CN109859254A (en)
Inventor
程凯 (Cheng Kai)
王军 (Wang Jun)
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910151676.3A
Publication of CN109859254A
Application granted
Publication of CN109859254B
Legal status: Active
Anticipated expiration


Abstract

An embodiment of the disclosure discloses a method and a device for sending information in automatic driving. One embodiment of the method comprises: acquiring a target image, where the target image includes a first number of image regions of a predetermined type and a second number of polygons, the polygons representing the positions, displayed in the target image, of objects of a preset category indicated by point cloud data after the point cloud data is mapped based on preset calibration parameters; generating a target number of matching data pairs based on the first number of image regions of the predetermined type and the second number of polygons; for each matching data pair of the target number of matching data pairs, determining the deviation between the positions, in the target image, of the image region of the predetermined type and the corresponding polygon included in the pair; and sending information indicating that the preset calibration parameters are abnormal based on a comparison of the at least one obtained deviation with a preset threshold. The embodiment realizes online detection of whether the preset calibration parameters are abnormal.

Description

Method and device for sending information in automatic driving
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a method and a device for sending information in automatic driving.
Background
With the development of automatic driving technology, the combined application of multiple sensors is becoming more and more widespread. Because calibration parameters describe the coordinate-system transformations between different sensors, when the calibration parameters do not match the installed sensors, key parameters such as the positions and speeds of obstacles perceived by the automatic driving system deviate from those in the actual environment, which seriously affects the safety of the automatic driving technology. Therefore, ensuring that the calibration parameters are correct is an important part of ensuring safe driving of an autonomous vehicle.
A related approach is to manually verify the preset calibration parameters of the radar and the camera before the vehicle enters the autonomous driving mode. If the verification passes, the preset calibration parameters are considered correct.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for sending information in automatic driving.
In a first aspect, an embodiment of the present disclosure provides a method for sending information in automatic driving, the method including: acquiring a target image, where the target image includes a first number of image regions of a predetermined type and a second number of polygons, the polygons representing the positions, displayed in the target image, of objects of a preset category indicated by point cloud data after the point cloud data is mapped based on preset calibration parameters; generating a target number of matching data pairs based on the first number of image regions of the predetermined type and the second number of polygons, where each matching data pair includes an image region of the predetermined type and the polygon corresponding to that image region; for each matching data pair of the target number of matching data pairs, determining the deviation between the positions, in the target image, of the image region of the predetermined type and the corresponding polygon included in the pair; and sending information indicating that the preset calibration parameters are abnormal based on a comparison of the at least one obtained deviation with a preset threshold.
In some embodiments, the generating a target number of matching data pairs includes: generating a third number of candidate matching data pairs based on the first number of image regions of the predetermined type and the second number of polygons, wherein the candidate matching data pairs include polygons corresponding to the image regions of the predetermined type and the image regions of the predetermined type; for a candidate matching data pair in the third number of candidate matching data pairs, determining the matching degree between the image area of the predetermined type included in the candidate matching data pair and the polygon corresponding to the image area of the predetermined type; and determining a target number of matching data pairs from the third number of candidate matching data pairs according to the obtained at least one matching degree.
In some embodiments, the polygon is a rectangle; and the determining the matching degree between the image region of the predetermined type included in the candidate matching data pair and the polygon corresponding to that image region includes: determining a circumscribed rectangle of the image region of the predetermined type included in the candidate matching data pair; acquiring the length and the width of the circumscribed rectangle and the length and the width of the polygon corresponding to the image region of the predetermined type; determining a shape matching degree according to the acquired length and width of the circumscribed rectangle and the length and width of the polygon corresponding to the image region of the predetermined type; acquiring the coordinates of a preset reference point of the circumscribed rectangle, as well as the coordinates of at least one point, and the number of points, in the point cloud data surrounded by the polygon corresponding to the image region of the predetermined type; and determining a distance matching degree according to the acquired coordinates of the preset reference point of the circumscribed rectangle, and the coordinates of the at least one point, and the number of points, in the point cloud data surrounded by the polygon corresponding to the image region of the predetermined type.
In some embodiments, the determining a target number of matching data pairs from the third number of candidate matching data pairs includes: for candidate matching data pairs in the third number of candidate matching data pairs, determining the candidate matching data pairs as matching data pairs in response to determining that the matching degree corresponding to the candidate matching data pairs meets a preset selection condition, wherein the preset selection condition includes at least one of the following: the shape matching degree is larger than a preset shape matching degree threshold value, and the distance matching degree is larger than a preset distance matching degree threshold value.
In some embodiments, the sending the information indicating that the preset calibration parameters are abnormal includes: determining a reference deviation from the obtained at least one deviation; and sending the information indicating that the preset calibration parameters are abnormal in response to determining that the reference deviation is greater than the preset threshold.
In some embodiments, the deviation includes an offset vector indicating the difference between the positions, in the target image, of an image region of the predetermined type and the polygon corresponding to that image region; and the determining a reference deviation from the obtained at least one deviation includes: selecting an offset vector from the obtained at least one offset vector, and performing the following determining steps: determining the differences between the modulus of the selected offset vector and the moduli of the other offset vectors of the at least one offset vector; for each of the determined at least one difference value, determining the difference value as a matching difference value in response to determining that the difference value is less than a preset difference threshold; determining the number of the determined matching difference values and the sum of the determined at least one matching difference value; in response to determining that the number of selections reaches a preset number of selections, selecting an offset vector from the obtained at least one offset vector as the reference deviation based on the number of matching difference values and the sum of matching difference values corresponding to each obtained offset vector; and in response to determining that the number of selections is smaller than the preset number of selections, selecting an unselected offset vector from the obtained at least one offset vector and continuing to perform the determining steps.
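The iterative selection of a reference deviation from offset vectors can be sketched as follows. This is a minimal Python sketch under stated assumptions: 2-D offset vectors, and an illustrative `diff_threshold` value (the patent gives no concrete threshold); it is not the patent's implementation.

```python
import math

def pick_reference_deviation(offsets, diff_threshold=2.0):
    """Select a reference deviation from a list of 2-D offset vectors.

    For each vector, compare its modulus against every other vector's
    modulus; differences below diff_threshold count as "matching
    difference values". The vector with the most matches wins, with ties
    broken by the smallest sum of matching difference values.
    diff_threshold is an illustrative assumption, not from the patent.
    """
    moduli = [math.hypot(dx, dy) for dx, dy in offsets]
    stats = []
    for i, m in enumerate(moduli):
        diffs = [abs(m - other) for j, other in enumerate(moduli) if j != i]
        matches = [d for d in diffs if d < diff_threshold]
        # key: more matches first, then smaller sum of matching differences
        stats.append((len(matches), -sum(matches), i))
    best = max(stats)
    return offsets[best[2]]
```

Intuitively, this prefers the offset vector whose magnitude agrees with the most other vectors, which makes the reference deviation robust to a few outlier matches.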
In some embodiments, the selecting an offset vector from the obtained at least one offset vector as the reference deviation includes: selecting, from the obtained at least one offset vector, the offset vector with the largest number of corresponding matching difference values as the reference deviation.
In some embodiments, the selecting an offset vector from the obtained at least one offset vector as the reference deviation further includes: in response to determining that there are at least two offset vectors with the largest number of matching difference values, selecting, from the at least two offset vectors with the largest number of matching difference values, the offset vector with the smallest sum of corresponding matching difference values as the reference deviation.
In a second aspect, an embodiment of the present disclosure provides an apparatus for transmitting information in automatic driving, the apparatus including: the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is configured to acquire a target image, the target image comprises a first number of image areas of a preset type and a second number of polygons, and the polygons are used for representing positions of preset category objects indicated after point cloud data are mapped based on preset calibration parameters and displayed in the target image; a generation unit configured to generate a target number of matching data pairs based on a first number of predetermined types of image regions and a second number of polygons, wherein the matching data pairs include predetermined types of image regions and polygons corresponding to the predetermined types of image regions; a determination unit configured to determine, for a matching data pair of a target number of matching data pairs, a deviation between positions in the target image of a predetermined type of image area and a corresponding polygon included in the matching data pair; and the sending unit is configured to send information representing the abnormity of the preset calibration parameters based on the comparison of the obtained at least one deviation and a preset threshold value.
In some embodiments, the generating unit includes: a generating module configured to generate a third number of candidate matching data pairs based on the first number of image regions of the predetermined type and the second number of polygons, wherein the candidate matching data pairs include polygons corresponding to the image regions of the predetermined type and the image regions of the predetermined type; a matching degree determination module configured to determine, for a candidate matching data pair of the third number of candidate matching data pairs, a matching degree between an image region of the predetermined type included in the candidate matching data pair and a polygon corresponding to the image region of the predetermined type; a data pair determination module configured to determine a target number of matching data pairs from the third number of candidate matching data pairs based on the obtained at least one matching degree.
In some embodiments, the polygon is a rectangle; the matching degree determination module includes: a circumscribed rectangle determination submodule configured to determine a circumscribed rectangle of the image region of the predetermined type included in the candidate matching data pair; the length and width acquisition sub-module is configured to acquire the length and width of the circumscribed rectangle and the length and width of a polygon corresponding to the image area of the preset type; the shape matching degree determining submodule is configured to determine the shape matching degree according to the length and the width of the obtained circumscribed rectangle and the length and the width of a polygon corresponding to the image area of the preset type; the point acquisition submodule is configured to acquire the coordinates of a preset reference point of the circumscribed rectangle, and the coordinates of at least one point in the point cloud data surrounded by the polygon corresponding to the image area of the preset type and the number of points; and the distance matching degree determining submodule is configured to determine the distance matching degree according to the coordinates of the acquired preset reference points of the circumscribed rectangle, the coordinates of at least one point in the point cloud data surrounded by the polygon corresponding to the image area of the preset type and the number of the points.
In some embodiments, the data pair determination module is further configured to: for candidate matching data pairs in the third number of candidate matching data pairs, determining the candidate matching data pairs as matching data pairs in response to determining that the matching degree corresponding to the candidate matching data pairs meets a preset selection condition, wherein the preset selection condition includes at least one of the following: the shape matching degree is larger than a preset shape matching degree threshold value, and the distance matching degree is larger than a preset distance matching degree threshold value.
In some embodiments, the sending unit includes: a reference deviation determination module configured to determine a reference deviation from the obtained at least one deviation; a sending module configured to send information characterizing an abnormality of a preset calibration parameter in response to determining that the reference deviation is greater than a preset threshold.
In some embodiments, the deviation comprises an offset vector indicating a difference between positions of a predetermined type of image region and a polygon corresponding to the predetermined type of image region in the target image; the reference deviation determination module is further configured to: selecting an offset vector from the at least one offset vector obtained, and performing the following determining steps: determining differences between the modulus of the selected offset vector and the moduli of other offset vectors of the at least one offset vector; for a difference value of the determined at least one difference value, determining the difference value as a matching difference value in response to determining that the difference value is less than a preset difference value threshold; determining a number of the determined matching difference values and a sum of the determined at least one matching difference value; in response to the fact that the number of selection times reaches the preset selection times, selecting an offset vector from the obtained at least one offset vector as a reference deviation based on the number of matching difference values corresponding to the obtained at least one offset vector and the sum of the matching difference values; and in response to the fact that the number of selection times is smaller than the preset number of selection times, selecting unselected offset vectors from the obtained at least one offset vector, and continuing to execute the determining step.
In some embodiments, the reference deviation determining module comprises: and the first reference deviation determining submodule is configured to select the offset vector with the maximum number of corresponding matching difference values from the obtained at least one offset vector as the reference deviation.
In some embodiments, the reference deviation determination module further comprises: a second reference deviation determination sub-module configured to, in response to determining that there are at least two offset vectors with the largest number of matching difference values, select, from the at least two offset vectors with the largest number of matching difference values, the offset vector with the smallest sum of corresponding matching difference values as the reference deviation.
In a third aspect, an embodiment of the present disclosure provides a terminal, including: one or more processors; and a storage device having one or more programs stored thereon; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method as described in any of the implementations of the first aspect.
According to the method and the device for sending information in automatic driving provided by the embodiments of the present disclosure, a target image is first acquired, where the target image includes a first number of image regions of a predetermined type and a second number of polygons, the polygons representing the positions, displayed in the target image, of objects of a preset category indicated by point cloud data after the point cloud data is mapped based on preset calibration parameters; then, a target number of matching data pairs is generated based on the first number of image regions of the predetermined type and the second number of polygons, where each matching data pair includes an image region of the predetermined type and the polygon corresponding to that image region; next, for each matching data pair of the target number of matching data pairs, the deviation between the positions, in the target image, of the image region of the predetermined type and the corresponding polygon included in the pair is determined; and finally, information indicating that the preset calibration parameters are abnormal is sent based on a comparison of the at least one obtained deviation with a preset threshold. Online detection of whether the preset calibration parameters are abnormal is thereby realized.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for sending information in autonomous driving according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for sending information in autonomous driving according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of yet another embodiment of a method for sending information in autonomous driving according to the present disclosure;
FIG. 5 is a schematic diagram illustrating an embodiment of an apparatus for transmitting information in autonomous driving according to the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and the features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary architecture 100 of a method for sending information in autonomous driving or an apparatus for sending information in autonomous driving to which the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a driving assistance system 105. The network 104 is a medium to provide a communication link between the terminal devices 101, 102, 103 and the driving assistance system 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 101, 102, 103 interact with the driving assistance system 105 via the network 104 to receive or transmit messages or the like. Terminal devices 101 and 102 may include various types of detection devices, which may include, but are not limited to, at least one of the following: LiDAR (Light Detection And Ranging), image sensors (e.g., cameras). The terminal device 103 may include a GPU (Graphics Processing Unit) having an image recognition function.
The driving assistance system 105 may be various data processing systems that support data processing and information transmission, such as a data processing system that determines whether a calibration parameter preset between sensors is abnormal. The driving assistance system may determine whether calibration parameters preset between the sensors are abnormal according to data acquired from the terminal devices 101, 102, 103, and may transmit prompt information in time when the calibration parameters are abnormal.
The driving assistance system may be hardware or software. When the driving assistance system is hardware, it may be implemented as a processor system composed of a plurality of processors, or may be implemented as a single processor. When the driving assistance system is software, it may be implemented as a plurality of software or software modules (for example, to provide distributed services), or may be implemented as a single software or software module. And is not particularly limited herein. It should be noted that the method for transmitting information provided by the embodiment of the present disclosure is generally performed by the driving assistance system 105, and accordingly, the apparatus for transmitting information is generally provided in the driving assistance system 105.
It should be understood that the number of terminal devices, networks, and driving assistance systems in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and driving assistance systems, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for sending information in autonomous driving according to the present disclosure is shown. The method for sending information in automatic driving comprises the following steps:
step 201, acquiring a target image.
In this embodiment, the target image may include a first number of image regions of a predetermined type and a second number of polygons. The image regions of the predetermined type may be used to indicate images of the predetermined type identified using image recognition techniques. An image of the predetermined type may be, for example, a vehicle image, a pedestrian image, or another obstacle image. The polygons may be used to represent the positions, displayed in the target image, of preset-category objects indicated by the point cloud data. An object of the preset category may be, for example, a vehicle, a pedestrian, or another obstacle. The position of a preset-category object displayed in the target image can be obtained by mapping the point cloud data onto the plane of the target image according to the preset calibration parameters. The point cloud data is associated with the image regions of the predetermined type in both acquisition time and object. As an example, for a vehicle equipped with a driving assistance system or an automatic driving system and mounted with a laser radar and a camera, the point cloud data and the image containing the image regions of the predetermined type may be generated from data whose acquisition-time difference between the laser radar and the camera is smaller than a preset time threshold.
In the present embodiment, the execution body of the method for sending information in automatic driving (such as the driving assistance system 105 shown in fig. 1) may acquire the target image through a wired or wireless connection. As an example, the execution body may first acquire the point cloud data and an initial image of the current frame from the LiDAR and the camera. Then, the execution body maps the acquired point cloud data onto the acquired initial image according to the preset calibration parameters. After that, the execution body may determine, using image recognition techniques and a point cloud obstacle detection algorithm, a first number of image regions of the predetermined type from the initial image, and a second number of polygons surrounding the identified obstacle point clouds from the point cloud data mapped onto the initial image, thereby obtaining the target image. As yet another example, the execution body may also acquire, from an electronic device to which it is communicatively connected (e.g., the terminal device 103 shown in fig. 1), a target image generated by a method similar to the above steps.
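The mapping of point cloud data onto the image plane according to calibration parameters can be sketched with a standard pinhole projection. The function name, the 4x4 extrinsic / 3x3 intrinsic decomposition, and the array shapes below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def project_points(points_lidar, extrinsic, intrinsic):
    """Map LiDAR points (N x 3) to pixel coordinates on the image plane.

    extrinsic: 4 x 4 LiDAR-to-camera transform; intrinsic: 3 x 3 camera
    matrix. Together they stand in for the "preset calibration
    parameters" in the text.
    """
    n = points_lidar.shape[0]
    homogeneous = np.hstack([points_lidar, np.ones((n, 1))])  # N x 4
    cam = (extrinsic @ homogeneous.T)[:3]                     # 3 x N, camera frame
    in_front = cam[2] > 0                                     # drop points behind the camera
    pix = intrinsic @ cam[:, in_front]
    pix = pix[:2] / pix[2]                                    # perspective divide
    return pix.T                                              # M x 2 (u, v) pixel coords
```

If the calibration parameters are wrong, the projected points (and hence the polygons drawn around them) land at shifted pixel positions, which is exactly the deviation the later steps measure.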
Step 202 generates a target number of matching data pairs based on the first number of predetermined types of image regions and the second number of polygons.
In this embodiment, the execution body may identify, in various ways, the image region of the predetermined type and the polygon that correspond to the same object, to form a matching data pair. A matching data pair can be used to represent an image region of the predetermined type and a polygon that correspond to the same object. Accordingly, a matching data pair may include an image region of the predetermined type and the polygon corresponding to that image region. The target number may be a preset number, or may be a number satisfying a preset condition (for example, the number of data pairs that can be matched within a preset time period). It is understood that the target number can be at minimum 1 and at maximum the smaller of the first number and the second number.
In some optional implementations of this embodiment, the executing entity may generate the target number of matching data pairs according to the following steps:
in a first step, a third number of candidate matching-data pairs is generated based on the first number of image regions of the predetermined type and the second number of polygons.
In these implementations, for each of a first number of predetermined types of image regions, the execution subject may group the predetermined type of image region and each of a second number of polygons into a candidate matching data pair, respectively. The candidate matching data pair may include a predetermined type of image area and a polygon corresponding to the predetermined type of image area. The third number may be a product of the first number and the second number.
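The candidate-pair generation described above is a plain Cartesian product; a minimal sketch (the function name is an illustrative choice):

```python
from itertools import product

def candidate_pairs(image_regions, polygons):
    """Pair every predetermined-type image region with every polygon.

    The number of candidate matching data pairs (the "third number")
    is therefore len(image_regions) * len(polygons).
    """
    return list(product(image_regions, polygons))
```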
And a second step of determining, for a candidate matching data pair of the third number of candidate matching data pairs, a matching degree between an image region of the predetermined type included in the candidate matching data pair and a polygon corresponding to the image region of the predetermined type.
In these implementations, for a candidate matching data pair in the third number of candidate matching data pairs, the executing body may determine the matching degree between the image area of the predetermined type included in the candidate matching data pair and the polygon corresponding to the image area of the predetermined type in various ways. As an example, the execution subject described above may first extract the outline of a predetermined type of image region from the predetermined type of image regions included in the pair of candidate matching data. The center of gravity of the contour of the image area of the predetermined type described above can then be determined. Thereafter, the distance between the center of gravity of the contour of the image area of the above-mentioned predetermined type and the center of gravity of the corresponding polygon is determined. It will be appreciated that the greater the distance, the lower the corresponding degree of matching. Finally, at least one and at most a third number of matching degrees can be obtained.
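The centroid-distance example above might be sketched as follows. The reciprocal decay and the `scale` normalizer are illustrative assumptions; the text only states that a greater distance means a lower matching degree:

```python
def centroid(points):
    """Center of gravity of a list of (x, y) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def centroid_match_degree(region_contour, polygon_vertices, scale=100.0):
    """Matching degree that decreases as the centroid distance grows."""
    cx, cy = centroid(region_contour)
    px, py = centroid(polygon_vertices)
    d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
    return 1.0 / (1.0 + d / scale)  # 1.0 when centroids coincide
```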
Alternatively, the polygon may be a rectangle. The executing body may further determine a matching degree between the image area of the predetermined type included in the candidate matching data pair and a polygon corresponding to the image area of the predetermined type according to the following steps:
and S1, determining the circumscribed rectangle of the image area of the preset type included by the candidate matching data pair.
And S2, acquiring the length and the width of the circumscribed rectangle and the length and the width of a polygon corresponding to the image area of the preset type.
In these implementations, the length in the horizontal direction is generally taken as the length of a rectangle; the length in the vertical direction is taken as the width of the rectangle.
And S3, determining the shape matching degree according to the length and the width of the obtained circumscribed rectangle and the length and the width of the polygon corresponding to the image area of the preset type.
In these implementations, the shape matching degree may be used to characterize the degree of similarity in shape between the circumscribed rectangle and the corresponding polygon. As an example, the shape matching degree may be expressed by the following formula:

$$\mathrm{Similarity}_{shape} = 1 - F\!\left(\left(\frac{w-1}{\sigma_w}\right)^2 + \left(\frac{h-1}{\sigma_h}\right)^2\right)$$

wherein $\mathrm{Similarity}_{shape}$ represents the shape matching degree between the circumscribed rectangle and the corresponding polygon; $F()$ represents the cumulative distribution function of the chi-squared distribution with 2 degrees of freedom; $\sigma_w$ is a preset parameter, which can be regarded as the standard deviation of $w$, and whose value range can be set to 0.3–0.5; $\sigma_h$ is also a preset parameter, which can be regarded as the standard deviation of $h$, and whose value range can be set to 0.6–0.8. $w$ and $h$ are respectively the components of the vector

$$(w, h)^\top = (w_c, h_c)^\top \,\backslash\, (w_l, h_l)^\top$$

wherein $(w_c, h_c)^\top$ denotes the vector composed of the length $w_c$ and width $h_c$ of the circumscribed rectangle; $(w_l, h_l)^\top$ denotes the vector composed of the length $w_l$ and width $h_l$ of the corresponding polygon; and the "$\backslash$" symbol denotes element-by-element division of the vectors.
It is understood that the closer the length and width of the circumscribed rectangle are to the length and width of the corresponding polygon, the greater the degree of shape matching.
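A sketch of one shape matching degree consistent with the definitions above, where F is the chi-squared CDF with 2 degrees of freedom (closed form 1 − exp(−x/2)), w and h are the element-wise ratios of the circumscribed rectangle's dimensions to the polygon's, and the σ defaults are assumptions drawn from the stated ranges:

```python
import math

def chi2_cdf_df2(x):
    """CDF of the chi-squared distribution with 2 degrees of freedom."""
    return 1.0 - math.exp(-x / 2.0)

def shape_similarity(wc, hc, wl, hl, sigma_w=0.4, sigma_h=0.7):
    """Shape matching degree between a circumscribed rectangle (wc, hc) and the
    corresponding polygon (wl, hl); sigma defaults are assumptions within the
    stated 0.3-0.5 and 0.6-0.8 ranges."""
    w = wc / wl  # element-wise ratio: rectangle length over polygon length
    h = hc / hl  # element-wise ratio: rectangle width over polygon width
    x = ((w - 1.0) / sigma_w) ** 2 + ((h - 1.0) / sigma_h) ** 2
    return 1.0 - chi2_cdf_df2(x)  # identical sizes give a similarity of 1
```

When the rectangle and polygon have identical dimensions the ratios are 1, the chi-squared statistic is 0, and the similarity is 1; the further the ratios drift from 1, the lower the similarity.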
And S4, acquiring the coordinates of a preset reference point of the circumscribed rectangle, the coordinates of at least one point in the point cloud data surrounded by the polygon corresponding to the image area of the predetermined type, and the number of such points.
And S5, determining the distance matching degree according to the acquired coordinates of the preset reference point of the circumscribed rectangle, the coordinates of the at least one point in the point cloud data surrounded by the polygon corresponding to the image area of the predetermined type, and the number of such points.
In these implementations, the distance matching degree may be used to characterize the distance between the circumscribed rectangle and the corresponding polygon. As an example, the distance matching degree may be expressed by the following formula:

$$\mathrm{Similarity}_{distance} = 1 - F\!\left(\left(\frac{w'}{\sigma_x}\right)^2 + \left(\frac{h'}{\sigma_y}\right)^2\right)$$

wherein $\mathrm{Similarity}_{distance}$ represents the distance matching degree between the circumscribed rectangle and the corresponding polygon; $F()$ represents the cumulative distribution function of the chi-squared distribution with 2 degrees of freedom; $\sigma_x$ is a preset parameter, which can be regarded as the standard deviation of $w'$, and whose value range can be set to 0.4–0.5; $\sigma_y$ is also a preset parameter, which can be regarded as the standard deviation of $h'$, and whose value range can be set to 0.5–0.6. $w'$ and $h'$ are respectively the components of the vector

$$(w', h')^\top = \frac{1}{C}\sum_{i=1}^{C}\left|(x_i, y_i)^\top - (x_o, y_o)^\top\right|$$

wherein $C$ represents the number of points in the point cloud data surrounded by the polygon; $(x_i, y_i)^\top$ represents the vector formed by the x and y coordinates of the $i$th point in the point cloud data surrounded by the polygon on the target image, $x_i$ being the abscissa and $y_i$ the ordinate of that point on the target image; $(x_o, y_o)^\top$ represents the vector formed by the x, y coordinates of the center of the image area of the predetermined type, $x_o$ being the abscissa and $y_o$ the ordinate of that center on the target image; and the "$|\cdot|$" symbol denotes the component-by-component absolute value of a vector.
It is understood that the closer all the projection points in the point cloud data enclosed by the polygon are to the center of the predetermined type of image area, the greater the above distance matching degree.
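A sketch of one distance matching degree consistent with the definitions above; w' and h' are taken as the mean component-wise absolute offsets of the C enclosed points from the image-area center, and the σ defaults are assumptions from the stated ranges (in practice the offsets would need to be in units compatible with those σ values, e.g. normalized):

```python
import math

def distance_similarity(points, center, sigma_x=0.45, sigma_y=0.55):
    """Distance matching degree between the projected point cloud points enclosed
    by the polygon and the center (xo, yo) of the image area of the predetermined
    type. `points` is the list of C projections (xi, yi) on the target image."""
    xo, yo = center
    c = len(points)
    # (w', h'): mean component-wise absolute offset of the C points from the center
    w_p = sum(abs(x - xo) for x, _ in points) / c
    h_p = sum(abs(y - yo) for _, y in points) / c
    x_stat = (w_p / sigma_x) ** 2 + (h_p / sigma_y) ** 2
    # 1 - chi2_cdf(x, df=2) = exp(-x / 2): closer points -> higher similarity
    return math.exp(-x_stat / 2.0)
```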
Note that the larger the value of σ<sub>w</sub>, σ<sub>h</sub>, σ<sub>x</sub> or σ<sub>y</sub>, the lower the confidence of the quantity corresponding to its subscript. As an example, according to the values of the above parameters, the confidence of the width value in the shape matching degree is lower than that of the length value. This is because, in practice, LiDAR typically scans in horizontal lines, so the upper edge of an obstacle may fall between scan lines; the width value is therefore less accurate, while the length value is generally accurate.
And thirdly, determining a target number of matching data pairs from the third number of candidate matching data pairs according to the obtained at least one matching degree.
In these implementations, the executing body may determine the target number of matching data pairs from the third number of candidate matching data pairs in various ways. The target number may be a preset number, or may be a number determined according to practical applications, for example, the number of candidate matching data pairs whose matching degree is greater than a preset matching degree threshold. As an example, the executing body may also select a preset number of candidate matching data pairs in descending order of matching degree and determine the selected candidate matching data pairs as matching data pairs.
Optionally, the executing body may also determine the target number of matching data pairs from the third number of candidate matching data pairs according to the following steps: for a candidate matching data pair in the third number of candidate matching data pairs, determining the candidate matching data pair as a matching data pair in response to determining that the matching degree corresponding to the candidate matching data pair meets a preset selection condition. The preset selection condition may include at least one of the following: the shape matching degree is greater than a preset shape matching degree threshold; the distance matching degree is greater than a preset distance matching degree threshold.
It should be noted that the preset shape matching degree threshold is usually larger than the preset distance matching degree threshold. This is because, when the preset calibration parameters are abnormal, the distance matching degree may be low, whereas the shape matching degree is less affected by the preset calibration parameters. Setting the thresholds according to this size relationship helps to select the correct matching data pairs.
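The preset selection condition can be sketched as a filter over the candidate pairs; here both thresholds must pass (the text allows either), and the threshold values are illustrative, with the shape threshold deliberately larger than the distance threshold:

```python
def select_matching_pairs(candidates, shape_thresh=0.6, dist_thresh=0.3):
    """Filter candidate matching data pairs by the preset selection condition.
    `candidates` is a list of (pair, (shape_degree, distance_degree)) tuples.
    Both matching degrees must exceed their thresholds; the threshold values
    are illustrative assumptions, not values from the patent."""
    return [
        pair
        for pair, (shape_deg, dist_deg) in candidates
        if shape_deg > shape_thresh and dist_deg > dist_thresh
    ]
```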
For a matching data pair of the target number of matching data pairs, a deviation between the image area of the predetermined type and the position of the corresponding polygon in the target image is determined, step 203.
In this embodiment, for a matching data pair in the target number of matching data pairs, the executing body may determine, in various ways, the deviation between the positions in the target image of the image area of the predetermined type included in the matching data pair and of the corresponding polygon. As an example, the executing body may determine the distance between the center of gravity of the contour of the image area of the predetermined type included in the matching data pair and the center of gravity of the corresponding polygon, using the method described in the second step of the aforementioned step 202. The distance between the centers of gravity corresponding to the matching data pair is then determined as the deviation between the above positions.
And step 204, sending information representing that the preset calibration parameter is abnormal based on the comparison between the obtained at least one deviation and a preset threshold value.
In this embodiment, based on the comparison between the obtained at least one deviation and the preset threshold, the executing body may determine whether to send information indicating that the preset calibration parameter is abnormal in various ways. As an example, the executing body may determine the average of the obtained at least one deviation; then, in response to determining that the average is greater than the preset threshold, the executing body may send information characterizing that the preset calibration parameter is abnormal. As yet another example, the executing body may randomly select any number of deviations from the obtained at least one deviation, compare each selected deviation with the preset threshold, and then, in response to determining that the proportion of selected deviations greater than the preset threshold exceeds a preset ratio, send information indicating that the preset calibration parameter is abnormal.
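The two example decision rules can be sketched as follows; combining them with `or`, the `ratio_limit` default, and the sampling behavior are illustrative assumptions rather than the patent's prescribed logic:

```python
import random

def should_alert(deviations, threshold, sample_size=None, ratio_limit=0.5):
    """Decide whether to send information characterizing an abnormal preset
    calibration parameter. Rule 1: the average deviation exceeds the threshold.
    Rule 2: the fraction of (optionally sampled) deviations above the threshold
    exceeds ratio_limit. The text presents these as alternatives; combining
    them with `or` here is an illustrative choice."""
    avg_rule = sum(deviations) / len(deviations) > threshold
    sample = deviations if sample_size is None else random.sample(deviations, sample_size)
    ratio_rule = sum(d > threshold for d in sample) / len(sample) > ratio_limit
    return avg_rule or ratio_rule
```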
In this embodiment, the information characterizing the abnormality of the preset calibration parameter may be in various forms. As an example, it may be in the form of text such as numbers, letters, chinese characters, etc. As a further example, it may also be a control signal of the circuit, so that an indicator light connected thereto starts to flash or a buzzer starts to alarm.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of a method for sending information in autonomous driving according to an embodiment of the present disclosure. In the application scenario of fig. 3, a driving assistance system mounted on a vehicle first acquires a target image 301. The target image 301 includes obstacle regions 302, 303 and polygons 304, 305. The obstacle regions 302 and 303 are obtained by recognizing an image captured by a camera mounted on the vehicle. The polygons 304 and 305 are convex hulls generated by clustering point cloud data acquired by a laser radar mounted on the vehicle and having a time close to the image acquisition time. Then, the driving support system generates 2 matching data pairs, each of which is: obstacle region 302-polygon 304, obstacle region 303-polygon 305. Then, the driving support system may determine, as the deviation, the distances between the barycenter of the obstacle region included in each of the matching data pairs and the barycenter of the corresponding polygon (point a to point x, point b to point y). In response to determining that the average of the 2 deviations is greater than the preset threshold, the driving assistance system may send a message for controlling the indicator light to blink, so as to prompt that the preset calibration parameter is abnormal.
At present, the prior art generally collects data from each sensor in a specific scene before the vehicle enters the automatic driving mode, and then manually checks whether the obstacle point cloud data of the laser radar can be correctly projected, through the preset calibration parameters, onto the corresponding obstacle in the camera's field of view. However, this offline calibration checking method has poor real-time performance, which makes it impossible to verify the preset calibration parameters continuously. In particular, when the position of a sensor changes due to jolting or other causes during automatic driving or assisted driving, the correctness of the preset calibration parameters cannot be fed back in time, which may cause potential safety hazards. In the method provided by the embodiment of the disclosure, the target image including the point cloud data and the image area of the predetermined type is obtained in real time, and online detection of whether the preset calibration parameters are abnormal is realized according to the positional deviation between the projection of the point cloud data through the preset calibration parameters and the image area of the predetermined type included in the image acquired by the camera. Moreover, prompt information is sent after the parameter abnormality is determined, which effectively improves the safety and reliability of driving assistance.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for sending information in autonomous driving is shown. The process 400 of the method for sending information in autonomous driving includes the following steps:
step 401, a target image is acquired.
Step 402 generates a target number of matching data pairs based on a first number of predetermined types of image regions and a second number of polygons.
For a matching data pair of the target number of matching data pairs, a deviation between the image area of the predetermined type and the position of the corresponding polygon in the target image is determined, step 403.
Step 401, step 402, and step 403 are respectively the same as step 201, step 202, and step 203 in the foregoing embodiment, and the above description of step 201, step 202, and step 203 also applies to step 401, step 402, and step 403, and is not repeated here.
A reference deviation is determined 404 from the at least one deviation obtained.
In the present embodiment, the execution body described above may determine the reference deviation from the obtained at least one deviation in various ways. Wherein the reference deviation may reflect an overall level of the at least one deviation obtained. As an example, the above-mentioned reference deviation may include, but is not limited to, at least one of: maximum, minimum, median.
In some optional implementations of the present embodiment, the deviation may include an offset vector indicating a difference between positions of the predetermined type of image area and a polygon corresponding to the predetermined type of image area in the target image. The executing entity may further determine a reference deviation from the obtained at least one deviation according to the following steps:
A first step of selecting an offset vector from the obtained at least one offset vector and performing the following determining steps: determining the differences between the modulus of the selected offset vector and the moduli of the other offset vectors in the at least one offset vector; for each of the determined at least one difference, determining the difference as a matching difference in response to determining that the difference is less than a preset difference threshold; determining the number of the determined matching differences and the sum of the determined at least one matching difference; and, in response to determining that the number of selections has reached the preset number of selections, selecting an offset vector from the obtained at least one offset vector as the reference deviation, based on the number of matching differences and the sum of matching differences corresponding to each of the obtained offset vectors.
In these implementations, the executing body may select, as the reference deviation, the offset vector with the smallest sum of corresponding matching differences from the obtained at least one offset vector. In response to determining that there are at least two offset vectors with the smallest sum of matching differences, the executing body may further select, as the reference deviation, the offset vector with the largest number of corresponding matching differences from among those offset vectors.
Optionally, the executing body may instead select, as the reference deviation, the offset vector with the largest number of corresponding matching differences from the obtained at least one offset vector.
Optionally, in response to determining that there are at least two offset vectors with the largest number of matching differences, the executing body may further select, as the reference deviation, the offset vector with the smallest sum of corresponding matching differences from among those at least two offset vectors.
A second step of, in response to determining that the number of selections is smaller than the preset number of selections, selecting an unselected offset vector from the obtained at least one offset vector and continuing to perform the determining steps.
In these implementations, the first and second steps are based on the idea of the RANSAC (Random Sample Consensus) algorithm: the reference deviation is determined directly from the obtained at least one offset vector without fitting a model, thereby saving computation time. In addition, through these steps, the determined reference deviation has a higher confidence and can more accurately reflect the deviation between the projection of the point cloud data and the image area of the predetermined type, so that the determination of whether the preset calibration parameters are abnormal is more accurate and reliable.
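The RANSAC-style selection above can be sketched as follows, using the "most matches, then smallest sum" tie-breaking variant; for brevity every offset vector is examined rather than a preset number of random selections, and `diff_threshold` corresponds to the preset difference threshold:

```python
import math

def reference_offset(offset_vectors, diff_threshold):
    """RANSAC-style selection of a reference offset without fitting a model.
    For each candidate, differences between its modulus and the moduli of the
    other vectors that fall below diff_threshold count as matching differences;
    the candidate with the most matches wins, ties broken by the smallest sum."""
    moduli = [math.hypot(x, y) for x, y in offset_vectors]
    best_key, best_vec = None, None
    for i, m in enumerate(moduli):
        matches = [abs(m - moduli[j]) for j in range(len(moduli))
                   if j != i and abs(m - moduli[j]) < diff_threshold]
        key = (-len(matches), sum(matches))  # most matches, then smallest sum
        if best_key is None or key < best_key:
            best_key, best_vec = key, offset_vectors[i]
    return best_vec
```

A vector whose modulus agrees with many other vectors (an inlier of the consensus set) is preferred, so an outlier projection error does not become the reference deviation.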
Step 405, in response to determining that the reference deviation is greater than the preset threshold, sending information representing that the preset calibration parameter is abnormal.
In this embodiment, in response to determining that the reference deviation determined in step 404 is greater than the preset threshold, the executing entity may send information indicating that the preset calibration parameter is abnormal.
In this embodiment, the information characterizing the abnormality of the preset calibration parameter may be in various forms. By way of example, the text may be in the form of numbers, letters, Chinese characters, and the like. As a further example, it may also be a control signal of the circuit, so that an indicator light connected thereto starts to flash or a buzzer starts to alarm.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the flow 400 of the method for sending information in automatic driving in the present embodiment embodies the steps of determining a reference deviation from the obtained at least one deviation, and sending information indicating that the preset calibration parameter is abnormal in response to determining that the reference deviation is greater than the preset threshold value. Therefore, the solution described in this embodiment can improve the sensitivity for detecting whether the preset calibration parameter is abnormal by determining the reference deviation. Particularly, the confidence coefficient of the reference deviation is improved by utilizing the idea of the RANSAC algorithm, so that the deviation between the projection of the point cloud data and the image area of the preset type is reflected more accurately, and the sensitivity of detecting whether the preset calibration parameter is abnormal is improved.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present disclosure provides an embodiment of an apparatus for sending information in automatic driving, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for transmitting information in autonomous driving provided by the present embodiment includes an acquisition unit 501, a generation unit 502, a determination unit 503, and a transmission unit 504. The acquiring unit 501 is configured to acquire a target image, where the target image includes a first number of image areas of a predetermined type and a second number of polygons, and the polygons are used to represent positions of the point cloud data, which are mapped based on preset calibration parameters, where indicated preset category objects are displayed in the target image; a generating unit 502 configured to generate a target number of matching data pairs based on a first number of image regions of a predetermined type and a second number of polygons, wherein the matching data pairs include image regions of the predetermined type and polygons corresponding to the image regions of the predetermined type; a determining unit 503 configured to determine, for a matching data pair of the target number of matching data pairs, a deviation between positions in the target image of an image area of a predetermined type and a corresponding polygon included in the matching data pair; a sending unit 504 configured to send information characterizing an abnormality of the preset calibration parameter based on a comparison of the obtained at least one deviation with a preset threshold.
In the present embodiment, in the apparatus 500 for transmitting information: the specific processing of the obtaining unit 501, the generating unit 502, the determining unit 503 and the sending unit 504 and the technical effects brought by the processing can refer to the related descriptions of step 201, step 202, step 203 and step 204 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementations of this embodiment, the generating unit 502 may include: a generating module (not shown), a matching degree determining module (not shown), and a data pair determining module (not shown). Wherein the generating module may be configured to generate a third number of candidate matching data pairs based on the first number of image regions of the predetermined type and the second number of polygons, wherein the candidate matching data pairs include polygons corresponding to the image regions of the predetermined type and the image regions of the predetermined type; the matching degree determination module may be configured to determine, for a candidate matching data pair of the third number of candidate matching data pairs, a matching degree between an image region of the predetermined type included in the candidate matching data pair and a polygon corresponding to the image region of the predetermined type; the data pair determination module may be configured to determine a target number of matching data pairs from the third number of candidate matching data pairs based on the obtained at least one degree of matching.
In some optional implementations of this embodiment, the polygon may be a rectangle; the matching degree determination module may include: the device comprises an external rectangle determining submodule (not shown in the figure), a length and width obtaining submodule (not shown in the figure), a shape matching degree determining submodule (not shown in the figure), a point obtaining submodule (not shown in the figure) and a distance matching degree determining submodule (not shown in the figure). Wherein the circumscribed rectangle determining sub-module may be configured to determine a circumscribed rectangle of the image region of the predetermined type included in the candidate matching data pair; the length and width acquisition submodule can be configured to acquire the length and width of the circumscribed rectangle and the length and width of a polygon corresponding to the image area of the predetermined type; the shape matching degree determination submodule can be configured to determine the shape matching degree according to the length and the width of the obtained circumscribed rectangle and the length and the width of a polygon corresponding to the image area of the preset type; the point acquisition submodule may be configured to acquire coordinates of a preset reference point of the circumscribed rectangle, coordinates of at least one point in the point cloud data surrounded by a polygon corresponding to the predetermined type of image area, and the number of points; the distance matching degree determination sub-module may be configured to determine the distance matching degree according to the coordinates of the acquired preset reference point of the circumscribed rectangle, the coordinates of at least one point in the point cloud data surrounded by the polygon corresponding to the predetermined type of image area, and the number of points.
In some optional implementations of this embodiment, the data pair determination module may be further configured to: for candidate matching data pairs in the third number of candidate matching data pairs, determining the candidate matching data pairs as matching data pairs in response to determining that the matching degree corresponding to the candidate matching data pairs meets a preset selection condition, wherein the preset selection condition includes at least one of the following: the shape matching degree is larger than a preset shape matching degree threshold value, and the distance matching degree is larger than a preset distance matching degree threshold value.
In some optional implementations of this embodiment, the sending unit 504 may include: a reference deviation determining module (not shown in the figure) and a sending module (not shown in the figure). Wherein the reference deviation determination module may be configured to determine a reference deviation from the obtained at least one deviation; the sending module may be configured to send information characterizing an abnormality of the preset calibration parameter in response to determining that the reference deviation is greater than the preset threshold.
In some optional implementations of the present embodiment, the deviation may include an offset vector indicating a difference between positions of the image area of the predetermined type and a polygon corresponding to the image area of the predetermined type in the target image; the reference deviation determination module may be further configured to: selecting an offset vector from the at least one offset vector obtained, and performing the following determining steps: determining differences between the modulus of the selected offset vector and the moduli of other offset vectors of the at least one offset vector; for a difference value of the determined at least one difference value, determining the difference value as a matching difference value in response to determining that the difference value is less than a preset difference value threshold; determining a number of the determined match differences and a sum of the determined at least one match difference; in response to the fact that the number of selection times reaches the preset selection times, selecting an offset vector from the obtained at least one offset vector as a reference deviation based on the number of matching difference values corresponding to the obtained at least one offset vector and the sum of the matching difference values; and in response to the fact that the number of the selection times is smaller than the preset number of the selection times, selecting unselected offset vectors from the obtained at least one offset vector, and continuing to execute the determining step.
In some optional implementations of this embodiment, the reference deviation determining module may include: and the first reference deviation determining submodule is configured to select the offset vector with the maximum number of corresponding matching difference values from the obtained at least one offset vector as the reference deviation.
In some optional implementations of this embodiment, the reference deviation determining module may further include: a second reference deviation determining sub-module configured to, in response to determining that there are at least two offset vectors with the largest number of matching differences, select, as the reference deviation, the offset vector with the smallest sum of corresponding matching differences from among those at least two offset vectors.
In the apparatus provided by the above embodiment of the present disclosure, a target image is obtained by the obtaining unit 501, where the target image includes a first number of image areas of a predetermined type and a second number of polygons, and the polygons are used to represent positions of the point cloud data, which are displayed in the target image, of preset category objects indicated after the point cloud data is mapped based on preset calibration parameters; then, the generation unit 502 generates a target number of matching data pairs based on the first number of predetermined types of image areas and the second number of polygons, wherein the matching data pairs include predetermined types of image areas and polygons corresponding to the predetermined types of image areas; thereafter, for a matching data pair of the target number of matching data pairs, the determining unit 503 determines a deviation between the predetermined type of image area included in the matching data pair and the position of the corresponding polygon in the target image; finally, based on the comparison of the obtained at least one deviation with a preset threshold, the sending unit 504 sends information characterizing that the preset calibration parameter is abnormal. Therefore, the online detection of whether the preset calibration parameters are abnormal is realized.
Referring now to fig. 6, a block diagram of an electronic device (e.g., the terminal device of fig. 1) 600 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a vehicle-mounted terminal (e.g., an automatic driving system) and a stationary terminal such as a desktop computer. The terminal device shown in fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device. The computer readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: acquire a target image, wherein the target image comprises a first number of image areas of a predetermined type and a second number of polygons, and the polygons are used for representing, in the target image, the positions of objects of a preset category indicated by point cloud data after the point cloud data is mapped based on preset calibration parameters; generate a target number of matching data pairs based on the first number of image areas of the predetermined type and the second number of polygons, wherein a matching data pair comprises an image area of the predetermined type and a polygon corresponding to the image area of the predetermined type; for a matching data pair of the target number of matching data pairs, determine a deviation between the positions, in the target image, of the image area of the predetermined type and the corresponding polygon included in the matching data pair; and send information representing an abnormality of the preset calibration parameters based on a comparison of the obtained at least one deviation with a preset threshold value.
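As a minimal illustrative sketch (not part of the claimed implementation), the four operations the program causes the terminal device to perform could be arranged as follows in Python. The `Box` helper, the function names, and the use of the mean offset modulus as a stand-in for the reference deviation are assumptions introduced here for illustration:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned box standing in for an image area or a projected polygon.
    (Hypothetical helper; the patent does not prescribe this representation.)"""
    x: float  # center x in the target image
    y: float  # center y in the target image
    w: float  # width
    h: float  # height

def deviation(region: Box, polygon: Box) -> tuple:
    """Offset vector between the positions of a matched image area and polygon."""
    return (polygon.x - region.x, polygon.y - region.y)

def should_report_anomaly(pairs, threshold: float) -> bool:
    """Given matched (image area, polygon) pairs, decide whether to send
    information indicating that the preset calibration parameters are abnormal.
    The mean offset modulus is used here as a simple stand-in for the
    'reference deviation'; the claims define a more elaborate consensus choice."""
    offsets = [deviation(r, p) for r, p in pairs]
    moduli = [(dx * dx + dy * dy) ** 0.5 for dx, dy in offsets]
    reference = sum(moduli) / len(moduli)
    return reference > threshold
```

When the projected polygons sit on top of their matched image areas, the reference deviation is near zero and no anomaly message is sent; a systematic shift of the projections past the threshold triggers the message.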
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor comprising an acquisition unit, a generation unit, a determination unit, and a sending unit. The names of these units do not constitute a limitation on the units themselves; for example, the acquisition unit may also be described as a unit for acquiring a target image, wherein the target image comprises a first number of image areas of a predetermined type and a second number of polygons, and the polygons are used for representing, in the target image, the positions of objects of a preset category indicated by the point cloud data after the point cloud data is mapped based on the preset calibration parameters.
The foregoing description is only a description of preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above-mentioned technical features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, a technical solution formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (14)

1. A method for sending information in autonomous driving, comprising:
acquiring a target image, wherein the target image comprises a first number of image areas of a predetermined type and a second number of polygons, and the polygons are used for representing, in the target image, the positions of objects of a preset category indicated by point cloud data after the point cloud data is mapped based on preset calibration parameters;
generating a target number of matching data pairs based on the first number of image areas of the predetermined type and the second number of polygons, wherein a matching data pair comprises an image area of the predetermined type and a polygon corresponding to the image area of the predetermined type, the shape matching degree of the image area and the polygon included in a matching data pair is greater than a preset shape matching degree threshold, the distance matching degree of the image area and the polygon included in a matching data pair is greater than a preset distance matching degree threshold, and the distance matching degree is positively correlated with the distance from the center of the image area to a projection point in the point cloud data surrounded by the polygon;
for a matching data pair of the target number of matching data pairs, determining a deviation between the positions, in the target image, of the image area of the predetermined type and the corresponding polygon included in the matching data pair;
sending information representing that the preset calibration parameter is abnormal based on the comparison of the obtained at least one deviation with a preset threshold value;
the sending of the information representing that the preset calibration parameter is abnormal includes:
determining a reference deviation from the obtained at least one deviation, wherein the reference deviation reflects an overall level of the obtained at least one deviation;
in response to determining that the reference deviation is greater than a preset threshold, sending information representing that the preset calibration parameter is abnormal;
wherein the deviation comprises an offset vector indicating a difference between positions of a predetermined type of image area and a polygon corresponding to the predetermined type of image area in the target image; and
said determining a reference deviation from the obtained at least one deviation comprises:
selecting an offset vector from the obtained at least one offset vector, and performing the following determining steps: determining the difference between the modulus of the selected offset vector and the modulus of each of the other offset vectors of the at least one offset vector; for a difference of the determined at least one difference, in response to determining that the difference is less than a preset difference threshold, determining the difference as a matching difference; determining the number of the determined matching differences and the sum of the determined at least one matching difference; in response to determining that the number of selections reaches a preset number of selections, selecting an offset vector from the obtained at least one offset vector as the reference deviation based on the number of matching differences corresponding to the obtained at least one offset vector and the sum of the matching differences;
and in response to determining that the number of selections is smaller than the preset number of selections, selecting an unselected offset vector from the obtained at least one offset vector, and continuing to execute the determining steps.
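The determining steps of claim 1 amount to a consensus search over the moduli of the offset vectors. A minimal Python sketch follows; the function and parameter names (`reference_deviation`, `diff_threshold`, `num_selections`) are assumptions introduced for illustration, and the sketch illustrates the procedure rather than reproducing the patented implementation:

```python
import math

def reference_deviation(offsets, diff_threshold, num_selections):
    """Pick the offset vector whose modulus agrees with the most others.

    For each of the first num_selections vectors, count how many other
    vectors have a modulus within diff_threshold of it (the 'matching
    differences') and sum those differences; the vector with the largest
    count is chosen, ties broken by the smallest sum (cf. claims 5 and 6)."""
    moduli = [math.hypot(dx, dy) for dx, dy in offsets]
    stats = []  # (matching-difference count, matching-difference sum, index)
    for i in range(min(num_selections, len(offsets))):
        diffs = [abs(moduli[i] - m) for j, m in enumerate(moduli) if j != i]
        matches = [d for d in diffs if d < diff_threshold]
        stats.append((len(matches), sum(matches), i))
    # largest count first; among ties, the smallest sum wins
    best = max(stats, key=lambda s: (s[0], -s[1]))
    return offsets[best[2]]
```

Because a miscalibration shifts most matched pairs by a similar amount, the vector chosen this way reflects the overall level of the deviations while remaining robust to a few outlier matches.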
2. The method of claim 1, wherein the generating a target number of matching data pairs comprises:
generating a third number of candidate matching data pairs based on the first number of image areas of the predetermined type and the second number of polygons, wherein a candidate matching data pair includes an image area of the predetermined type and a polygon corresponding to the image area of the predetermined type;
for a candidate matching data pair in the third number of candidate matching data pairs, determining a matching degree between an image area of a predetermined type included in the candidate matching data pair and a polygon corresponding to the image area of the predetermined type;
and determining a target number of matching data pairs from the third number of candidate matching data pairs according to the obtained at least one matching degree.
3. The method of claim 2, wherein the polygon is a rectangle; and
the determining the matching degree between the image area of the predetermined type and the polygon corresponding to the image area of the predetermined type included in the candidate matching data pair includes:
determining a circumscribed rectangle of the image area of the predetermined type included in the candidate matching data pair;
acquiring the length and the width of the circumscribed rectangle and the length and the width of a polygon corresponding to the image area of the preset type;
determining the shape matching degree according to the length and the width of the obtained circumscribed rectangle and the length and the width of a polygon corresponding to the image area of the preset type;
acquiring coordinates of a preset reference point of the circumscribed rectangle, coordinates of at least one point in the point cloud data surrounded by a polygon corresponding to the image area of the preset type and the number of the at least one point;
and determining the distance matching degree according to the coordinates of the acquired preset reference points of the circumscribed rectangle, the coordinates of at least one point in the point cloud data surrounded by the polygon corresponding to the image area of the preset type and the number of the at least one point.
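Claim 3 does not fix closed-form formulas for the two matching degrees. The following Python sketch shows one plausible instantiation; both scoring functions are assumptions introduced for illustration, not the formulas used by the patentee:

```python
import math

def shape_matching_degree(rect_l, rect_w, poly_l, poly_w):
    """Illustrative shape score in (0, 1]: equals 1 when the circumscribed
    rectangle and the corresponding polygon share the same length and width,
    and shrinks as either dimension diverges. (Assumed formula.)"""
    return (min(rect_l, poly_l) / max(rect_l, poly_l)) * \
           (min(rect_w, poly_w) / max(rect_w, poly_w))

def distance_matching_degree(ref_point, enclosed_points):
    """Illustrative distance score: the mean distance from the circumscribed
    rectangle's preset reference point to the projected points enclosed by
    the polygon.  Claim 1 states the degree is positively correlated with
    this distance, so the mean distance itself serves as the score here."""
    n = len(enclosed_points)
    return sum(math.dist(ref_point, p) for p in enclosed_points) / n
```

A candidate pair is retained when both scores clear their respective preset thresholds, as recited in claim 4.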
4. The method of claim 3, wherein said determining a target number of matching data pairs from the third number of candidate matching data pairs comprises:
for the candidate matching data pairs in the third number of candidate matching data pairs, determining the candidate matching data pairs as matching data pairs in response to determining that the matching degree corresponding to the candidate matching data pairs meets a preset selection condition, wherein the preset selection condition includes at least one of the following: the shape matching degree is larger than a preset shape matching degree threshold value, and the distance matching degree is larger than a preset distance matching degree threshold value.
5. The method of claim 1, wherein said selecting an offset vector from the obtained at least one offset vector as the reference deviation comprises:
and selecting the offset vector with the maximum number of corresponding matching difference values from the obtained at least one offset vector as the reference deviation.
6. The method of claim 5, wherein said selecting an offset vector from the obtained at least one offset vector as the reference deviation further comprises:
in response to determining that there are at least two offset vectors with the largest number of matching difference values, selecting, from the at least two offset vectors with the largest number of matching difference values, the offset vector with the smallest sum of corresponding matching difference values as the reference deviation.
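The selection rules of claims 5 and 6 (largest number of matching differences, with ties broken by the smallest sum of matching differences) can be sketched as follows; the candidate-triple layout is an assumption introduced for illustration:

```python
def select_reference(candidates):
    """candidates: list of (offset_vector, match_count, match_sum) triples,
    where match_count and match_sum are the number and sum of the matching
    differences computed for that vector.  Per claim 5, the vector with the
    largest count wins; per claim 6, ties are broken by the smallest sum."""
    best_count = max(count for _, count, _ in candidates)
    tied = [c for c in candidates if c[1] == best_count]
    return min(tied, key=lambda c: c[2])[0]
```

The tie-break favors the vector whose agreeing neighbors cluster most tightly around its modulus, i.e. the most consistent estimate of the overall deviation.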
7. An apparatus for transmitting information in autonomous driving, comprising:
an acquisition unit configured to acquire a target image, wherein the target image comprises a first number of image areas of a predetermined type and a second number of polygons, and the polygons are used for representing, in the target image, the positions of objects of a preset category indicated by the point cloud data after the point cloud data is mapped based on preset calibration parameters;
a generating unit configured to generate a target number of matching data pairs based on the first number of image areas of the predetermined type and the second number of polygons, wherein a matching data pair comprises an image area of the predetermined type and a polygon corresponding to the image area of the predetermined type, the shape matching degree of the image area and the polygon included in a matching data pair is greater than a preset shape matching degree threshold, the distance matching degree of the image area and the polygon included in a matching data pair is greater than a preset distance matching degree threshold, and the distance matching degree is positively correlated with the distance from the center of the image area to a projection point in the point cloud data surrounded by the polygon;
a determination unit configured to determine, for a matching data pair of the target number of matching data pairs, a deviation between positions in the target image of an image area of a predetermined type and a corresponding polygon included in the matching data pair;
a sending unit configured to send information characterizing that the preset calibration parameter is abnormal based on a comparison of the obtained at least one deviation with a preset threshold;
wherein the transmitting unit includes:
a reference deviation determination module configured to determine a reference deviation from the obtained at least one deviation, wherein the reference deviation reflects an overall level of the obtained at least one deviation;
a sending module configured to send information characterizing that the preset calibration parameter is abnormal in response to determining that the reference deviation is greater than a preset threshold;
wherein the deviation comprises an offset vector indicating a difference between positions of a predetermined type of image area and a polygon corresponding to the predetermined type of image area in the target image; the reference deviation determination module is further configured to:
selecting an offset vector from the obtained at least one offset vector, and performing the following determining steps: determining the difference between the modulus of the selected offset vector and the modulus of each of the other offset vectors of the at least one offset vector; for a difference of the determined at least one difference, in response to determining that the difference is less than a preset difference threshold, determining the difference as a matching difference; determining the number of the determined matching differences and the sum of the determined at least one matching difference; in response to determining that the number of selections reaches a preset number of selections, selecting an offset vector from the obtained at least one offset vector as the reference deviation based on the number of matching differences corresponding to the obtained at least one offset vector and the sum of the matching differences;
and in response to determining that the number of selections is smaller than the preset number of selections, selecting an unselected offset vector from the obtained at least one offset vector, and continuing to execute the determining steps.
8. The apparatus of claim 7, wherein the generating unit comprises:
a generating module configured to generate a third number of candidate matching data pairs based on the first number of image areas of the predetermined type and the second number of polygons, wherein a candidate matching data pair includes an image area of the predetermined type and a polygon corresponding to the image area of the predetermined type;
a matching degree determination module configured to determine, for a candidate matching data pair of the third number of candidate matching data pairs, a matching degree between an image region of a predetermined type included in the candidate matching data pair and a polygon corresponding to the image region of the predetermined type;
a data pair determination module configured to determine a target number of matching data pairs from the third number of candidate matching data pairs based on the obtained at least one matching degree.
9. The apparatus of claim 8, wherein the polygon is a rectangle; the matching degree determination module comprises:
a circumscribed rectangle determination submodule configured to determine a circumscribed rectangle of the image region of the predetermined type included in the candidate matching data pair;
the length and width acquisition sub-module is configured to acquire the length and width of the circumscribed rectangle and the length and width of a polygon corresponding to the image area of the preset type;
the shape matching degree determining submodule is configured to determine the shape matching degree according to the length and the width of the obtained circumscribed rectangle and the length and the width of a polygon corresponding to the image area of the preset type;
a point acquisition submodule configured to acquire coordinates of a preset reference point of the circumscribed rectangle, coordinates of at least one point in the point cloud data surrounded by a polygon corresponding to the predetermined type of image area, and the number of the at least one point;
and the distance matching degree determining submodule is configured to determine the distance matching degree according to the coordinates of the acquired preset reference points of the circumscribed rectangle, the coordinates of at least one point in the point cloud data surrounded by the polygon corresponding to the image area of the preset type and the number of the at least one point.
10. The apparatus of claim 9, wherein the data pair determination module is further configured to:
for the candidate matching data pairs in the third number of candidate matching data pairs, determining the candidate matching data pairs as matching data pairs in response to determining that the matching degree corresponding to the candidate matching data pairs meets a preset selection condition, where the preset selection condition includes at least one of: the shape matching degree is larger than a preset shape matching degree threshold value, and the distance matching degree is larger than a preset distance matching degree threshold value.
11. The apparatus of claim 7, wherein the reference deviation determination module comprises:
a first reference deviation determination sub-module configured to select, from the obtained at least one offset vector, the offset vector having the largest number of corresponding matching difference values as the reference deviation.
12. The apparatus of claim 11, wherein the reference deviation determination module further comprises:
a second reference deviation determination sub-module configured to select, in response to determining that there are at least two offset vectors having the largest number of matching difference values, the offset vector having the smallest sum of corresponding matching difference values from the at least two offset vectors having the largest number of matching difference values as the reference deviation.
13. A terminal, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-6.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN201910151676.3A 2019-02-28 2019-02-28 Method and device for sending information in automatic driving Active CN109859254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910151676.3A CN109859254B (en) 2019-02-28 2019-02-28 Method and device for sending information in automatic driving


Publications (2)

Publication Number Publication Date
CN109859254A CN109859254A (en) 2019-06-07
CN109859254B true CN109859254B (en) 2022-08-30

Family

ID=66899462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910151676.3A Active CN109859254B (en) 2019-02-28 2019-02-28 Method and device for sending information in automatic driving

Country Status (1)

Country Link
CN (1) CN109859254B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160317B (en) * 2021-04-29 2024-04-16 福建汇川物联网技术科技股份有限公司 PTZ target tracking control method and device, PTZ control equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101425129B (en) * 2008-10-22 2010-09-08 浙江万里学院 Target abnormal detecting method and device based on JPEG image
CN101882313B (en) * 2010-07-14 2011-12-21 中国人民解放军国防科学技术大学 Calibration method of correlation between single line laser radar and CCD (Charge Coupled Device) camera
US9767545B2 (en) * 2013-07-16 2017-09-19 Texas Instruments Incorporated Depth sensor data with real-time processing of scene sensor data
CN103559791B (en) * 2013-10-31 2015-11-18 北京联合大学 A kind of vehicle checking method merging radar and ccd video camera signal
CN103871071B (en) * 2014-04-08 2018-04-24 北京经纬恒润科技有限公司 Join scaling method outside a kind of camera for panoramic parking system
US10338209B2 (en) * 2015-04-28 2019-07-02 Edh Us Llc Systems to track a moving sports object
CN105069813B (en) * 2015-07-20 2018-03-23 阔地教育科技有限公司 A kind of method and device of stable detection moving target
CN105976312B (en) * 2016-05-30 2019-03-01 北京建筑大学 Point cloud autoegistration method based on point feature histogram
CN106875451B (en) * 2017-02-27 2020-09-08 安徽华米智能科技有限公司 Camera calibration method and device and electronic equipment
CN109118542B (en) * 2017-06-22 2021-11-23 阿波罗智能技术(北京)有限公司 Calibration method, device, equipment and storage medium between laser radar and camera
CN107392947B (en) * 2017-06-28 2020-07-28 西安电子科技大学 2D-3D image registration method based on contour coplanar four-point set

Also Published As

Publication number Publication date
CN109859254A (en) 2019-06-07

Similar Documents

Publication Publication Date Title
EP3627180B1 (en) Sensor calibration method and device, computer device, medium, and vehicle
US11776155B2 (en) Method and apparatus for detecting target object in image
EP4177836A1 (en) Target detection method and apparatus, and computer-readable medium and electronic device
CN112580571A (en) Vehicle running control method and device and electronic equipment
CN110781779A (en) Object position detection method and device, readable storage medium and electronic equipment
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN109635868B (en) Method and device for determining obstacle type, electronic device and storage medium
CN109859254B (en) Method and device for sending information in automatic driving
CN111678488B (en) Distance measuring method and device, computer readable storage medium and electronic equipment
CN111710188B (en) Vehicle alarm prompting method, device, electronic equipment and storage medium
CN109827610B (en) Method and device for verifying sensor fusion result
CN114724116B (en) Vehicle traffic information generation method, device, equipment and computer readable medium
US20230082079A1 (en) Training agent trajectory prediction neural networks using distillation
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium
CN114724115B (en) Method, device and equipment for generating obstacle positioning information and computer readable medium
CN115393423A (en) Target detection method and device
CN114677848A (en) Perception early warning system, method, device and computer program product
CN114674328A (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN114419564A (en) Vehicle pose detection method, device, equipment, medium and automatic driving vehicle
WO2020100540A1 (en) Information processing device, information processing system, information processing method, and program
CN114445320A (en) Method and device for evaluating image segmentation quality, electronic equipment and storage medium
CN113191368B (en) Method and device for matching markers
CN112668371A (en) Method and apparatus for outputting information
CN116168366B (en) Point cloud data generation method, model training method, target detection method and device
CN113963322B (en) Detection model training method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant