WO2022228150A1 - In-vehicle radar occupancy identification method and device, and vehicle-mounted radar - Google Patents

In-vehicle radar occupancy identification method and device, and vehicle-mounted radar

Info

Publication number
WO2022228150A1
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
point cloud
frame
vehicle
area
Prior art date
Application number
PCT/CN2022/087064
Other languages
English (en)
French (fr)
Inventor
郜丽敏
尹学良
秘石
黄小浦
刘坤明
李梦
包红燕
秦屹
Original Assignee
森思泰克河北科技有限公司
Priority date
Filing date
Publication date
Application filed by 森思泰克河北科技有限公司
Publication of WO2022228150A1 publication Critical patent/WO2022228150A1/zh

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/06Systems determining position data of a target
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications

Definitions

  • the present application belongs to the technical field of radar, and in particular relates to an in-vehicle radar occupancy identification method and device, and an in-vehicle radar.
  • in-vehicle seat occupancy recognition can use radar monitoring technology.
  • the technology obtains point cloud data from the echo signals received by the in-vehicle radar receiver, and then determines the space occupancy in the vehicle based on the arrangement characteristics of the point cloud data.
  • however, this method is easily restricted by the size of the car cabin, making it difficult to quickly and reliably locate targets within each region, so the occupancy recognition effect is not good.
  • the present application provides an in-vehicle radar occupancy identification method, device and in-vehicle radar to solve the problem of poor occupancy recognition effect of in-vehicle radar in the prior art.
  • a first aspect of the present application provides an in-vehicle radar occupancy identification method, the method comprising:
  • acquiring the area division matrix and spatial position matrix corresponding to a target vehicle, and multi-frame point cloud data obtained by the vehicle-mounted radar scanning the interior of the target vehicle; the elements in the area division matrix represent the position information of the multiple areas into which the target vehicle is divided, and the elements in the spatial position matrix represent the position information of the multiple spaces corresponding to each area;
  • counting, according to the spatial position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point clouds falling in each space, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data;
  • determining, based on the area division matrix, the segmentation threshold corresponding to each element in the point cloud statistics matrix of each frame, and classifying and assigning values to the elements in the point cloud statistics matrix of each frame based on the segmentation threshold corresponding to each element, to obtain the first matrix corresponding to each frame of point cloud data;
  • determining the initial centroid position of each frame of point cloud data in the spatial position matrix according to the nearest neighbor clustering method and the first matrix corresponding to each frame of point cloud data;
  • determining, according to the initial centroid position of each frame of point cloud data in the spatial position matrix, the area of the target vehicle in which occupancy exists.
  • a second aspect of the present application provides an in-vehicle radar occupancy identification device, the device comprising:
  • the data acquisition module is used to acquire the area division matrix and spatial position matrix corresponding to the target vehicle, and the multi-frame point cloud data obtained by scanning the interior of the target vehicle; the elements in the area division matrix represent the position information of the multiple areas into which the target vehicle is divided, and the elements in the spatial position matrix represent the position information of the multiple spaces corresponding to each area;
  • a point cloud statistical matrix determination module configured to count the number of point clouds falling in each space according to the spatial position matrix corresponding to the target vehicle and each frame of point cloud data, and obtain a point cloud statistical matrix corresponding to each frame of point cloud data;
  • the first matrix determination module is used to determine the segmentation threshold corresponding to each element in the point cloud statistics matrix of each frame based on the area division matrix, and to classify and assign values to the elements in the point cloud statistics matrix of each frame based on the segmentation threshold corresponding to each element, to obtain the first matrix corresponding to each frame of point cloud data;
  • the centroid position determination module is used to determine the initial centroid position of each frame of point cloud data in the spatial position matrix according to the nearest neighbor clustering method and the first matrix corresponding to each frame of point cloud data;
  • the occupancy recognition module is used for determining the area where the target vehicle has occupancy according to the initial centroid position of each frame of point cloud data in the spatial position matrix.
  • a third aspect of the present application provides an in-vehicle radar, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above in-vehicle radar occupancy identification method when executing the computer program.
  • a fourth aspect of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the above in-vehicle radar occupancy identification method are implemented.
  • the beneficial effect of the in-vehicle radar occupancy identification method provided by the present application is that it adopts the following steps to perform occupancy identification, so as to quickly determine whether each target area in the vehicle is occupied.
  • the area division matrix, spatial position matrix and multi-frame point cloud data obtained by the vehicle-mounted radar scanning the interior of the target vehicle are obtained.
  • the spatial position matrix corresponding to the target vehicle and the point cloud data of each frame the number of point clouds falling in each space is counted, and the point cloud statistical matrix corresponding to the point cloud data of each frame is obtained.
  • the segmentation threshold corresponding to each element in the point cloud statistics matrix of each frame is determined based on the area division matrix, and the elements in the point cloud statistics matrix of each frame are classified and assigned values based on the segmentation threshold corresponding to each element, to obtain the first matrix corresponding to each frame of point cloud data.
  • the initial centroid position of each frame of point cloud data in the spatial position matrix is determined according to the nearest neighbor clustering method and the first matrix corresponding to each frame of point cloud data. According to the initial centroid position of each frame of point cloud data in the spatial position matrix, the occupied area of the target vehicle is determined.
  • the in-vehicle radar occupancy identification method provided by the present application spatially divides multiple target areas in the vehicle and can quickly determine the in-vehicle occupancy based on the number of point clouds in each space, thereby improving the in-vehicle occupancy recognition effect.
  • FIG. 1 is a schematic flowchart of a method for identifying an in-vehicle radar occupancy provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of an in-vehicle area of a target vehicle provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of element position coordinates in a spatial position matrix provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an in-vehicle radar occupancy identification device provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a vehicle-mounted radar provided by an embodiment of the present application.
  • the present application provides an in-vehicle radar occupancy identification method.
  • the flow of the method is shown in Figure 1.
  • the in-vehicle radar occupancy identification method provided by the present application may include steps S101 to S105, which are respectively described as follows:
  • S101 Acquire a region division matrix corresponding to a target vehicle, a spatial position matrix, and multi-frame point cloud data obtained by scanning the interior of the target vehicle by a vehicle-mounted radar; the elements in the region division matrix represent the positions of the multiple regions divided by the target vehicle information, the elements in the spatial position matrix represent the position information of multiple spaces corresponding to each area.
  • the onboard radar may be mounted on the roof of the target vehicle.
  • I represents the number of horizontal area divisions
  • J represents the number of vertical area divisions
  • (x i-1 , x i , y j-1 , y j ) represents the two-dimensional position coordinates of the area in the top-down plane.
  • n, N, m, and M denote the element indices and bounds used in the previous embodiment, while i, I, j, and J denote those used in this embodiment.
  • the elements of the area division matrix in the previous embodiment are in one-to-one correspondence with the elements of the area division matrix in this embodiment; likewise, the spatial position matrix and its elements are substantially the same in both embodiments.
  • the vehicle radar may be a millimeter wave radar.
  • Millimeter waves have a strong ability to penetrate fog, smoke, and dust, and can achieve multi-dimensional resolution such as distance, speed, azimuth, and elevation.
  • Millimeter-wave radar has a wide field of view (FOV), large effective bandwidth and high detection accuracy.
  • mmWave radars are able to identify micro-moving targets (such as breathing targets).
  • the millimeter-wave radar has the characteristics of small size and light weight, and can be installed in a concealed manner without destroying the interior shape of the car, maintaining the integrity and beauty of the interior shape.
  • Vehicle radar can use a multiple-input multiple-output (MIMO) antenna design. This design is highly integrated, including the RF front end and signal processing modules, as well as rich peripheral interfaces.
  • the radar signal processing algorithm can be implemented in the MCU (Micro Controller Unit, microcontroller unit).
  • the radar can send the recognition result to the body controller through CAN (Controller Area Network).
  • the vehicle radar can also implement a sleep/wake-up function and support remote firmware upgrade over the air (FOTA, Firmware Over The Air).
  • Vehicle radar is generally activated with vehicle ignition (IG-ON). After starting to work, the vehicle radar sends electromagnetic waves of a specific frequency to the interior space of the vehicle through the transmitting antenna. Electromagnetic waves are reflected by objects and produce echo signals. The receiving antenna receives the echo signal, and through ADC sampling and band-pass filtering processing, out-of-band interference signals are filtered out to a certain extent, and point cloud data corresponding to different distances, azimuth angles and elevation angles of a single frame in the vehicle are obtained.
  • the "single frame" in the previous sentence means a single frame of point cloud data. Each single frame of point cloud data includes a large number of data points, and different data points correspond to different distances, azimuth angles and pitch angles.
  • a sorting process may be performed, so that the point cloud data are arranged from near to far according to the distance.
  • Each echo signal corresponds to a single frame of point cloud data.
  • multiple single-frame point cloud data can be obtained, that is, multi-frame point cloud data.
  • FIG. 2 shows a schematic top view of the interior of a target vehicle.
  • the on-board radar M1 can be installed in the middle of the roof of the target vehicle, and is located between the roof layer and the sheet metal layer, which is a non-exposed installation.
  • the roof layer and the sheet metal layer are the inner and outer layers that make up the vehicle shell.
  • the roof layer is the inner layer and is generally made of non-metallic materials, which will not affect the emission of electromagnetic waves from the vehicle radar to the interior space of the vehicle.
  • the interior space can be divided into five areas, namely the areas corresponding to the two front seats (1, 2) and the areas corresponding to the three rear seats (3, 4, 5). Each of these five areas is then divided into cubes, so that each area yields multiple cube spaces.
  • each of the above five areas can also be divided into cuboids to obtain cuboid spaces.
  • the size of the above-mentioned cuboid space may be 5 cm × 5 cm × 120 cm, where 120 cm is the approximate interior height of the vehicle. That is, each of the above five areas is divided into several cuboid spaces; the projection of each cuboid space on the horizontal plane is a square with a side length of 5 cm, and the height of each cuboid space is 120 cm.
  • the top-view section can be used as the coordinate plane, and a certain corner of the top-view section can be used as the coordinate origin.
  • the two-dimensional coordinates of the vertical projection of each area on the coordinate plane can be used as the position information of that area in the area division matrix (that is, as the elements of the area division matrix), and the two-dimensional coordinates of the vertical projection of each cuboid space on the coordinate plane can be used as the position information of that cuboid space in the spatial position matrix (that is, as the elements of the spatial position matrix).
  • the orthographic projection of the cuboid space on the above coordinate plane is a rectangle.
  • the two-dimensional coordinates of the cuboid space represent the coordinates of the endpoints of the rectangle.
  • (x n-1 , x n , y m-1 , y m ) represents the spatial position corresponding to the element in the nth row and the mth column of the spatial position matrix, that is, the position corresponding to a nm in Figure 3.
  • a rectangle has four endpoint coordinates.
  • (x n-1 , y m-1 ) represents the coordinates of the lower left endpoint
  • (x n-1 , y m ) represents the coordinates of the upper left endpoint.
  • the coordinate information of the four endpoints of the rectangle are all contained in (x n-1 , x n , y m-1 , y m ).
  • (x i-1 , x i , y j-1 , y j ) may represent the location information of the area corresponding to the element in the i-th row and the j-th column in the area division matrix.
  • the above-mentioned two-dimensional coordinates refer to coordinates in a two-dimensional plane, which does not mean that each coordinate contains only two values.
  • (x n-1 ,x n ,y m-1 ,y m ) contains four values, but it is a two-dimensional coordinate.
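As an illustrative sketch (not the patented implementation), a spatial position matrix of the kind described above, whose element in row n, column m is (x n-1, x n, y m-1, y m), could be built on a uniform grid as follows; the origin, cell size and grid shape here are assumptions for illustration:

```python
# Illustrative sketch only: build a spatial position matrix whose element in
# row n, column m is (x_{n-1}, x_n, y_{m-1}, y_m) -- the top-view projection
# of one cuboid space. Origin, cell size, and grid shape are assumptions.

def build_spatial_position_matrix(x0, y0, n_rows, n_cols, cell=0.05):
    matrix = []
    for n in range(1, n_rows + 1):
        row = []
        for m in range(1, n_cols + 1):
            # (x_{n-1}, x_n, y_{m-1}, y_m): the four values bound the
            # rectangle that the cuboid space projects onto.
            row.append((x0 + (n - 1) * cell, x0 + n * cell,
                        y0 + (m - 1) * cell, y0 + m * cell))
        matrix.append(row)
    return matrix
```

With the default cell of 0.05 m this matches the 5 cm square projection mentioned above; the 120 cm height is implicit, since only the top-view rectangle is stored.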
  • step S102 according to the spatial position matrix corresponding to the target vehicle and each frame of point cloud data, count the number of point clouds falling in each space, and obtain a point cloud statistics matrix corresponding to each frame of point cloud data.
  • Point cloud data consists of information from a large number of data points. Moreover, the point cloud data carries the position information of each data point. According to the position information of each data point, the vehicle-mounted radar can determine which cuboid space the point cloud corresponding to a certain point cloud data falls in. From another point of view, the vehicle radar can count the number of point clouds in each cuboid space, so as to obtain the point cloud statistical matrix.
  • the point cloud statistics matrix can be expressed as [k 11 , k 12 , k 13 , ..., k mn ] m*n , where k mn represents the number of point clouds in the space corresponding to the element in the nth row and mth column of the spatial position matrix.
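A hedged sketch of step S102: the point format ((x, y) top-view projections) and the rectangle layout of the spatial position matrix are assumptions carried over from the illustration above, not the patented implementation:

```python
# Illustrative sketch of step S102: count how many points of one frame fall
# into each space. frame_points is assumed to be (x, y) top-view projections;
# each spatial_matrix element is an (x_lo, x_hi, y_lo, y_hi) rectangle.

def point_cloud_statistics(frame_points, spatial_matrix):
    counts = [[0] * len(row) for row in spatial_matrix]
    for px, py in frame_points:
        placed = False
        for r, row in enumerate(spatial_matrix):
            for c, (x_lo, x_hi, y_lo, y_hi) in enumerate(row):
                if x_lo <= px < x_hi and y_lo <= py < y_hi:
                    counts[r][c] += 1  # point falls inside this space
                    placed = True
                    break
            if placed:
                break
    return counts
```

On a uniform grid the cell index could be computed arithmetically instead of scanned; the linear scan is kept here only because it works for any rectangle layout.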
  • step S103 determine the segmentation threshold corresponding to each element in the point cloud statistics matrix of each frame based on the area division matrix, and classify the elements in the point cloud statistics matrix of each frame based on the segmentation threshold corresponding to each element Assign the value to obtain the first matrix corresponding to the point cloud data of each frame.
  • the specific implementation process of S103 in FIG. 1 may include the following steps:
  • S201 According to the position information corresponding to each area in the area division matrix, determine the area to which each element in the point cloud statistics matrix belongs; each area corresponds to a segmentation threshold;
  • S202 Determine the segmentation threshold corresponding to each element in the point cloud statistics matrix according to the area to which each element in the point cloud statistics matrix belongs.
  • the specific implementation process of S103 in FIG. 1 may further include the following steps:
  • for each element in the point cloud statistics matrix of each frame: if the number of point clouds corresponding to the element is greater than the segmentation threshold corresponding to the element, the element is assigned the first value; if the number of point clouds corresponding to the element is less than or equal to the segmentation threshold corresponding to the element, the element is assigned the second value; the first value is not equal to the second value.
  • Each area can have different thresholds, and according to these different thresholds, the vehicle-mounted radar can assign values to the elements of the point cloud statistics matrix corresponding to each space.
  • the sizes and segmentation thresholds of the five regions used in the decision can be determined through extensive testing.
  • the first value may be 1 and the second value may be 0.
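The classification-and-assignment step above can be sketched as follows; the region-lookup table and threshold values are invented examples, since the description only says the per-region thresholds are determined through testing:

```python
# Illustrative sketch of the classification-and-assignment step: compare each
# count against the segmentation threshold of the region the cell belongs to.
# region_of and thresholds are hypothetical stand-ins for the area division
# matrix lookup described in the text.

def classify(statistics, region_of, thresholds, first_value=1, second_value=0):
    # statistics[r][c]: point-cloud count for that space;
    # region_of[r][c]: id of the region the space belongs to;
    # thresholds: region id -> segmentation threshold for that region.
    return [[first_value if k > thresholds[region_of[r][c]] else second_value
             for c, k in enumerate(row)]
            for r, row in enumerate(statistics)]
```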
  • step S104 Determine the initial centroid position of each frame of point cloud data in the spatial position matrix according to the nearest neighbor clustering method and the first matrix corresponding to each frame of point cloud data.
  • the specific implementation process of S104 in FIG. 1 may include the following steps:
  • for each element D_mn in the first matrix corresponding to each frame of point cloud data: if the value of D_mn is the first value, traverse the values of D_(m+1)n, D_m(n+1) and D_(m+1)(n+1); if the values of D_(m+1)n, D_m(n+1) and D_(m+1)(n+1) are all the first value, then D_(m+1)n, D_m(n+1) and D_(m+1)(n+1) are updated to the third value;
  • the elements whose value is the third value in the first matrix are taken as the target elements of the first matrix, and the position information of the spaces corresponding to the target elements is obtained based on the spatial position matrix;
  • the position information of the spaces corresponding to the target elements of the first matrix is averaged to obtain the initial centroid position, in the spatial position matrix, of the point cloud data corresponding to that first matrix.
  • the nearest neighbor analysis is performed on the first matrix, traversing the matrix data of each area. Specifically: in the first step, if D_mn is equal to 1, the values of D_(m+1)n, D_m(n+1) and D_(m+1)(n+1) are traversed; in the second step, if D_(m+1)n, D_m(n+1) and D_(m+1)(n+1) are all 1, then D_(m+1)n, D_m(n+1) and D_(m+1)(n+1) are set to "2" to form a new first matrix; in the third step, the position information of the elements whose value is "2" in the updated first matrix is extracted from the spatial position matrix.
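The traversal and centroid computation of step S104 can be sketched as below, under one reading of the text: the neighbour checks are made against the original first matrix (D_mn itself keeps the first value while its three neighbours are promoted to the third value), and the centroid is taken as the mean of the centres of the promoted cells. Both choices are assumptions for illustration:

```python
# Illustrative sketch of step S104: promote the three neighbours of every
# dense 2x2 block to the third value, then average the centres of the
# promoted spaces to get the initial centroid. Reading choices are noted
# in the lead-in; this is not the patented implementation.

def initial_centroid(first, spatial_matrix, first_value=1, third_value=2):
    rows, cols = len(first), len(first[0])
    D = [row[:] for row in first]
    for m in range(rows - 1):
        for n in range(cols - 1):
            if (first[m][n] == first_value
                    and first[m + 1][n] == first_value
                    and first[m][n + 1] == first_value
                    and first[m + 1][n + 1] == first_value):
                # D_(m+1)n, D_m(n+1), D_(m+1)(n+1) -> third value
                D[m + 1][n] = D[m][n + 1] = D[m + 1][n + 1] = third_value
    targets = [(m, n) for m in range(rows) for n in range(cols)
               if D[m][n] == third_value]
    if not targets:
        return None  # no dense 2x2 block found in this frame
    # centre of each target space from its (x_lo, x_hi, y_lo, y_hi) element
    centres = [((spatial_matrix[m][n][0] + spatial_matrix[m][n][1]) / 2,
                (spatial_matrix[m][n][2] + spatial_matrix[m][n][3]) / 2)
               for m, n in targets]
    return (sum(x for x, _ in centres) / len(centres),
            sum(y for _, y in centres) / len(centres))
```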
  • step S105 Determine the area where the target vehicle has occupied space according to the initial centroid position of each frame of point cloud data in the spatial position matrix.
  • the specific implementation process of S105 in FIG. 1 may include the following steps:
  • if the initial centroid positions indicate occupancy in the first area, the first area is determined to be an occupied area; the first area is any area within the target vehicle.
  • for each frame, the data position representing the region containing the centroid is set to '1', and the positions of the remaining regions are set to '0'.
  • the number of occurrences of '1' in the M-frame results of each region is then counted; if the number of occurrences of '1' exceeds 70% of the total number of frames, the seat in that area is considered occupied.
  • the "M" in "M-frame results" here represents the number of frames; its meaning is different from that of the "M" in "M represents the number of vertical area divisions" above, and the two M values may be the same or different.
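The frame-voting decision above can be sketched as follows; the data layout (a list of per-frame {region id: flag} dictionaries) is an assumption, while the >70% rule comes from the description:

```python
# Illustrative sketch of the voting step: each frame flags a region with 1
# when the initial centroid falls inside it; a region whose '1' count exceeds
# 70% of the M frames is reported as occupied.

def occupied_regions(frame_flags, ratio=0.7):
    total = len(frame_flags)
    counts = {}
    for flags in frame_flags:
        for region, flag in flags.items():
            counts[region] = counts.get(region, 0) + flag
    # occupied when the number of '1' results exceeds ratio * total frames
    return {region for region, c in counts.items() if c > ratio * total}
```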
  • the onboard radar sends the occupancy information to the body controller.
  • the body controller displays occupancy information on the body display/dashboard to remind the driver and occupant to fasten their seat belts.
  • the in-vehicle radar occupancy identification method first obtains the area division matrix and spatial position matrix corresponding to the target vehicle, and the multi-frame point cloud data obtained by the vehicle-mounted radar scanning the interior of the target vehicle; then, according to the spatial position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point clouds falling in each space is counted to obtain the point cloud statistics matrix corresponding to each frame of point cloud data; the segmentation threshold corresponding to each element in the point cloud statistics matrix of each frame is determined based on the area division matrix, and the elements in the point cloud statistics matrix of each frame are classified and assigned values based on those thresholds to obtain the first matrix corresponding to each frame of point cloud data; finally, the initial centroid position of each frame of point cloud data in the spatial position matrix is determined according to the nearest neighbor clustering method and the first matrix corresponding to each frame of point cloud data, and the occupied areas of the target vehicle are determined from those initial centroid positions.
  • the occupancy recognition method provided by the embodiment of the present application can quickly determine the occupancy situation in the vehicle based on the number of point clouds in each space by spatially dividing multiple regions in the target vehicle, thereby improving the occupancy recognition effect in the vehicle.
  • the present application provides an in-vehicle radar occupancy identification device 100 .
  • the apparatus includes a data acquisition module 110 , a point cloud statistics matrix determination module 120 , a first matrix determination module 130 , a centroid position determination module 140 and an occupancy identification module 150 .
  • the data acquisition module 110 is used to acquire the area division matrix and spatial position matrix corresponding to the target vehicle, and the multi-frame point cloud data obtained by scanning the interior of the target vehicle with the vehicle-mounted radar; the elements in the area division matrix represent the position information of the multiple areas into which the target vehicle is divided, and the elements in the spatial position matrix represent the position information of the multiple spaces corresponding to each area;
  • the point cloud statistical matrix determination module 120 is configured to count the number of point clouds falling in each space according to the spatial position matrix corresponding to the target vehicle and each frame of point cloud data, and obtain a point cloud statistical matrix corresponding to each frame of point cloud data;
  • the first matrix determination module 130 is configured to determine the segmentation threshold corresponding to each element in the point cloud statistics matrix of each frame based on the area division matrix, and to classify and assign values to the elements in the point cloud statistics matrix of each frame based on the segmentation threshold corresponding to each element, to obtain the first matrix corresponding to each frame of point cloud data;
  • the centroid position determination module 140 is configured to determine the initial centroid position of each frame of point cloud data in the spatial position matrix according to the nearest neighbor clustering method and the first matrix corresponding to each frame of point cloud data;
  • the occupancy identification module 150 is configured to determine the area where the target vehicle has occupancy according to the initial centroid position of each frame of point cloud data in the spatial position matrix.
  • the in-vehicle radar occupancy identification device 100 provided by the embodiment of the present application has various modules capable of implementing the in-vehicle radar occupancy identification method provided by another embodiment of the present application. Therefore, the in-vehicle radar occupancy recognition device 100 can spatially divide multiple regions in the target vehicle, and can quickly determine the in-vehicle occupancy according to the number of point clouds in each space, thereby improving the in-vehicle occupancy recognition effect.
  • the on-board radar can be installed on the top of the target vehicle;
  • the first matrix determination module 130 in FIG. 4 may include a region determination unit.
  • the area determination unit is used to determine the area to which each element in the point cloud statistics matrix belongs according to the position information corresponding to each area in the area division matrix; each area corresponds to a segmentation threshold;
  • the segmentation threshold obtaining unit is configured to determine the segmentation threshold corresponding to each element in the point cloud statistics matrix according to the region to which each element in the point cloud statistics matrix belongs.
  • the first matrix determination module 130 in FIG. 4 may further include a classification assignment unit.
  • the classification and assignment unit is used, for each element in the point cloud statistics matrix of each frame, to assign the element the first value if the number of point clouds corresponding to the element is greater than the segmentation threshold corresponding to the element, and to assign the element the second value if the number of point clouds corresponding to the element is less than or equal to that segmentation threshold; the first value is not equal to the second value.
  • centroid position determination module 140 in FIG. 4 may implement the following steps:
  • for each element D_mn in the first matrix of each frame: if the value of D_mn is the first value, traverse the values of D_(m+1)n, D_m(n+1) and D_(m+1)(n+1); if the values of D_(m+1)n, D_m(n+1) and D_(m+1)(n+1) are all the first value, then the values of D_(m+1)n, D_m(n+1) and D_(m+1)(n+1) are updated to the third value;
  • the elements whose value is the third value in the first matrix of the frame are taken as the target elements of that first matrix, and the position information of the spaces corresponding to those target elements is obtained based on the spatial position matrix;
  • the position information of the spaces corresponding to the target elements of the first matrix of the frame is averaged to obtain the initial centroid position of the point cloud data of the frame in the spatial position matrix.
  • the occupancy recognition module 150 in FIG. 4 may implement the following steps:
  • if the initial centroid positions indicate occupancy in the first area, the first area is determined to be an occupied area; the first area is any area within the target vehicle.
  • FIG. 5 is a schematic diagram of a vehicle-mounted radar provided by an embodiment of the present application.
  • the vehicle-mounted radar 5 includes a processor 50 , a memory 51 , and a computer program 52 stored in the memory 51 and executable on the processor 50 .
  • the processor 50 executes the computer program 52
  • the steps of the in-vehicle radar occupancy identification method provided in the previous embodiment are implemented, for example, steps S101 to S105 shown in FIG. 1 .
  • the functions of each module/unit in the in-vehicle radar occupancy identification device provided in the previous embodiment can also be implemented, for example, the functions of the modules 110 to 150 shown in FIG. 4 .
  • the computer program 52 may be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to implement the technical solutions provided by the embodiments of the present application.
  • These modules/units can be a series of computer program instruction segments capable of performing specific functions, and these instruction segments can describe the execution process of the computer program 52 in the vehicle-mounted radar 5 .
  • the vehicle-mounted radar 5 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the vehicle-mounted radar may include, but is not limited to, the processor 50 and the memory 51 .
  • the vehicle-mounted radar 5 shown is only an example and does not constitute a limitation on the vehicle-mounted radar 5.
  • the on-board radar 5 may include more or fewer components than shown, or combine some components, or use different components.
  • the vehicle-mounted radar 5 may also include input and output devices, network access devices, buses, and the like.
  • the processor 50 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processors, DSPs), application specific integrated circuits (Application Specific Integrated Circuits, ASICs), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general purpose processor may be a microprocessor or any other conventional processor or the like.
  • the memory 51 may be an internal storage unit of the in-vehicle radar 5 , such as a hard disk or a memory of the in-vehicle radar 5 .
  • the memory 51 may also be an external storage device of the vehicle-mounted radar 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the vehicle-mounted radar 5.
  • the memory 51 may also include both an internal storage unit of the vehicle-mounted radar 5 and an external storage device.
  • the memory 51 is used to store the computer program 52 and other programs and data required by the in-vehicle radar 5 .
  • the memory 51 can also be used to temporarily store data that has been output or is to be output.
  • the disclosed vehicle-mounted radar and method may be implemented in other ways.
  • the vehicle-mounted radar embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division, and other division methods may be used in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
  • the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated modules/units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the present application can implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium, and when the computer program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, some intermediate form, or the like.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

An in-vehicle radar occupancy identification method and apparatus, and a vehicle-mounted radar, applicable to the field of radar technology. The method comprises: acquiring a region division matrix and a space position matrix corresponding to a target vehicle, and multiple frames of point cloud data obtained by a vehicle-mounted radar scanning the interior of the target vehicle (S101); counting, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space, to obtain the corresponding point cloud statistics matrix (S102); classifying and assigning values to the elements of each frame's point cloud statistics matrix on the basis of segmentation thresholds, to obtain a first matrix corresponding to each frame of point cloud data (S103); determining, according to a nearest-neighbour clustering method and each frame's first matrix, the initial centroid position of each frame of point cloud data in the space position matrix (S104); and determining, on the basis of the initial centroid positions, the regions of the target vehicle in which occupancy exists (S105). By dividing the interior into spaces, the method can quickly determine the in-vehicle occupancy state from the number of point cloud points in each space, thereby improving the in-vehicle occupancy identification effect.

Description

In-vehicle radar occupancy identification method and apparatus, and vehicle-mounted radar
This patent application claims priority to Chinese patent application No. CN202110483820.0 filed on April 30, 2021. The disclosure of the prior application is incorporated into this application by reference in its entirety.
Technical Field
This application belongs to the field of radar technology, and in particular relates to an in-vehicle radar occupancy identification method and apparatus, and a vehicle-mounted radar.
Background
As people's awareness of automobile safety performance keeps rising, the demand for in-vehicle seat occupancy identification technology is becoming increasingly urgent. At present, in-vehicle seat occupancy identification can use radar monitoring technology, which obtains point cloud data from the echo signals received by the radar receiver scanning the vehicle interior, and then determines the in-vehicle occupancy state from the distribution characteristics of the point cloud data. However, this method is easily constrained by the size of the vehicle cabin, cannot conveniently and quickly find the optimal position of a region, and yields a poor occupancy identification effect.
Technical Problem
This application provides an in-vehicle radar occupancy identification method and apparatus, and a vehicle-mounted radar, to solve the problem in the prior art that the in-vehicle radar occupancy identification effect is poor.
Technical Solution
A first aspect of this application provides an in-vehicle radar occupancy identification method, comprising:
acquiring a region division matrix and a space position matrix corresponding to a target vehicle, and multiple frames of point cloud data obtained by a vehicle-mounted radar scanning the interior of the target vehicle; the elements in the region division matrix represent the position information of the multiple regions into which the target vehicle is divided, and the elements in the space position matrix represent the position information of the multiple spaces corresponding to each region;
counting, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data;
determining, on the basis of the region division matrix, the segmentation threshold corresponding to each element in each frame's point cloud statistics matrix, and classifying and assigning values to the elements of each frame's point cloud statistics matrix on the basis of the segmentation thresholds corresponding to the elements, to obtain the first matrix corresponding to each frame of point cloud data;
determining, according to a nearest-neighbour clustering method and the first matrix corresponding to each frame of point cloud data, the initial centroid position of each frame of point cloud data in the space position matrix;
determining, according to the initial centroid positions of the frames of point cloud data in the space position matrix, the regions of the target vehicle in which occupancy exists.
A second aspect of this application provides an in-vehicle radar occupancy identification apparatus, comprising:
a data acquisition module, configured to acquire a region division matrix and a space position matrix corresponding to a target vehicle, and multiple frames of point cloud data obtained by a vehicle-mounted radar scanning the interior of the target vehicle; the elements in the region division matrix represent the position information of the multiple regions into which the target vehicle is divided, and the elements in the space position matrix represent the position information of the multiple spaces corresponding to each region;
a point cloud statistics matrix determination module, configured to count, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data;
a first matrix determination module, configured to determine, on the basis of the region division matrix, the segmentation threshold corresponding to each element in each frame's point cloud statistics matrix, and to classify and assign values to the elements of each frame's point cloud statistics matrix on the basis of the segmentation thresholds corresponding to the elements, to obtain the first matrix corresponding to each frame of point cloud data;
a centroid position determination module, configured to determine, according to a nearest-neighbour clustering method and the first matrix corresponding to each frame of point cloud data, the initial centroid position of each frame of point cloud data in the space position matrix;
an occupancy identification module, configured to determine, according to the initial centroid positions of the frames of point cloud data in the space position matrix, the regions of the target vehicle in which occupancy exists.
A third aspect of this application provides a vehicle-mounted radar, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the in-vehicle radar occupancy identification method described above.
A fourth aspect of this application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the in-vehicle radar occupancy identification method described above.
Beneficial Effects
Compared with the prior art, the beneficial effect of the in-vehicle radar occupancy identification method provided by this application is as follows. The method adopts the steps below to quickly determine whether a person or object exists in a target region of the vehicle:
First, the region division matrix and the space position matrix corresponding to the target vehicle, and multiple frames of point cloud data obtained by the vehicle-mounted radar scanning the interior of the target vehicle, are acquired. Then, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space is counted, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data. On the basis of the region division matrix, the segmentation threshold corresponding to each element in each frame's point cloud statistics matrix is determined, and the elements of each frame's point cloud statistics matrix are classified and assigned values on the basis of their segmentation thresholds, to obtain the first matrix corresponding to each frame of point cloud data. Finally, the initial centroid position of each frame of point cloud data in the space position matrix is determined according to a nearest-neighbour clustering method and the first matrix corresponding to each frame of point cloud data, and the regions of the target vehicle in which occupancy exists are determined according to these initial centroid positions.
By dividing the multiple target regions within the vehicle into spaces, the in-vehicle radar occupancy identification method provided by this application can quickly determine the in-vehicle occupancy state from the number of point cloud points in each space, thereby improving the in-vehicle occupancy identification effect.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of this application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of the in-vehicle radar occupancy identification method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of the in-vehicle regions of a target vehicle provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the position coordinates of the elements in the space position matrix provided by an embodiment of this application;
FIG. 4 is a schematic diagram of the in-vehicle radar occupancy identification apparatus provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the vehicle-mounted radar provided by an embodiment of this application.
Detailed Description
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and techniques are set forth to provide a thorough understanding of the embodiments of this application. However, it should be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits and methods are omitted, so that unnecessary details do not obscure the description of this application.
To illustrate the technical solutions described in this application, specific embodiments are described below.
In an embodiment, this application provides an in-vehicle radar occupancy identification method, the flow of which is shown in FIG. 1. Referring to FIG. 1, the method may include steps S101 to S105, described in turn below:
S101: acquire a region division matrix and a space position matrix corresponding to a target vehicle, and multiple frames of point cloud data obtained by a vehicle-mounted radar scanning the interior of the target vehicle; the elements in the region division matrix represent the position information of the multiple regions into which the target vehicle is divided, and the elements in the space position matrix represent the position information of the multiple spaces corresponding to each region.
In an embodiment, the vehicle-mounted radar may be mounted on the interior roof of the target vehicle. The region division matrix (denoted A) may be: A = {(x_{n-1}, x_n, y_{m-1}, y_m) | n ∈ [1, N], m ∈ [1, M]}, where N is the number of transverse region divisions, M is the number of longitudinal region divisions, and (x_{n-1}, x_n, y_{m-1}, y_m) is the two-dimensional position coordinate of a region in the top view. The space position matrix (denoted B) may be: B = {(x_{i-1}, x_i, y_{j-1}, y_j) | i ∈ [1, I], j ∈ [1, J]}, where I is the number of transverse space divisions, J is the number of longitudinal space divisions, and (x_{i-1}, x_i, y_{j-1}, y_j) is the two-dimensional position coordinate of a space in the top view.
In another embodiment, the space position matrix (denoted A) may be: A = {(x_{n-1}, x_n, y_{m-1}, y_m) | n ∈ [1, N], m ∈ [1, M]}, where N is the number of transverse space divisions and M is the number of longitudinal space divisions; and the region division matrix (denoted B) may be: B = {(x_{i-1}, x_i, y_{j-1}, y_j) | i ∈ [1, I], j ∈ [1, J]}, where I is the number of transverse region divisions and J is the number of longitudinal region divisions. It should be noted that this embodiment has no substantive difference from the previous one; only the letters change. For example, the previous embodiment denotes the region division matrix by A and this embodiment denotes it by B; although the letters differ, the two are substantively identical. Likewise, the previous embodiment denotes the elements of the region division matrix by n, N, m and M while this embodiment uses i, I, j and J; although the letters differ, the elements correspond one-to-one and are the same. Similarly, the space position matrix and its elements in this embodiment are substantively identical to those of the previous embodiment.
The vehicle-mounted radar may be a millimetre-wave radar. Millimetre waves penetrate fog, smoke and dust well, and enable multi-dimensional resolution of range, velocity, azimuth and elevation. A millimetre-wave radar has a wide field of view (FOV), a large effective bandwidth and high detection accuracy. Using special high-frequency signal processing algorithms, a millimetre-wave radar can identify micro-motion targets (for example, a breathing target). At the same time, a millimetre-wave radar is small and light, can be installed concealed, and does not damage the interior styling of the vehicle, keeping it complete and attractive.
The vehicle-mounted radar may adopt a multiple-input multiple-output (MIMO) design. This design is highly integrated, comprising a radio-frequency front end, a signal processing module and abundant peripheral interfaces. The radar signal processing algorithms can be implemented in the MCU (Micro Controller Unit) inside the vehicle-mounted radar. The radar can send the identification results to the body controller via CAN (Controller Area Network). The vehicle-mounted radar can also implement a sleep/wake-up function and support FOTA (Firmware Over the Air) remote upgrading.
The vehicle-mounted radar is generally activated with vehicle ignition (IG-ON). After it starts working, the vehicle-mounted radar transmits electromagnetic waves of a specific frequency into the vehicle interior through a transmitting antenna. The electromagnetic waves are reflected when they meet an object, producing an echo signal. The receiving antenna receives the echo signal, which undergoes ADC sampling and band-pass filtering to filter out out-of-band interference to some extent, yielding the point cloud data of different ranges, azimuths and elevations corresponding to a single frame inside the vehicle. The "single frame" in the previous sentence means a single frame of point cloud data. Each single frame of point cloud data comprises a large number of point cloud points, and different points correspond to different ranges, azimuths and elevations. By way of example, after these point cloud data are acquired, they can be sorted by range from near to far. Each echo signal corresponds to one single frame of point cloud data; over a period of time, multiple single frames, that is, multiple frames of point cloud data, are obtained.
FIG. 2 shows a top view of the interior of a target vehicle. As shown in FIG. 2, the vehicle-mounted radar M1 may be mounted in the middle of the roof of the target vehicle, between the headliner layer and the sheet-metal layer, as a non-exposed installation. The headliner layer and the sheet-metal layer are the inner and outer layers of the vehicle shell. The headliner layer is the inner layer and is generally made of non-metallic material, which does not affect the radar's transmission of electromagnetic waves into the vehicle interior. As shown in FIG. 2, viewed from above, the vehicle interior can be divided into five regions, namely the regions corresponding to the two front seats (1, 2) and the regions corresponding to the three rear seats (3, 4, 5). Each of these five regions is divided into cubes, so that each region yields multiple cubic spaces.
The above five regions can also be divided into cuboids, to obtain cuboid spaces. By way of example, the size of such a cuboid space may be 5 cm × 5 cm × 120 cm, where 120 cm is the approximate interior height of the vehicle. That is, each of the five regions is divided into a number of cuboid spaces, each of which projects onto the horizontal plane as a square with a 5 cm side, and each of which has a height of 120 cm.
Since the vehicle-mounted radar is mounted on the roof, the top-view cross-section can be taken as the coordinate plane, with one of its corners as the coordinate origin. In this way, the two-dimensional coordinates of the vertical projection of each region onto this coordinate plane can serve as the position information of that region in the region division matrix (that is, the elements of the region division matrix), and the two-dimensional coordinates of the vertical projection of each cuboid space onto this plane can serve as the position information of that space in the space position matrix (that is, the elements of the space position matrix).
The orthographic projection of a cuboid space onto the above coordinate plane is a rectangle, and the two-dimensional coordinates of the space denote the corner coordinates of that rectangle. As shown in FIG. 3, (x_{n-1}, x_n, y_{m-1}, y_m) denotes the position of the space corresponding to the element in row n, column m of the space position matrix, that is, the position corresponding to a_{nm} in FIG. 3. A rectangle has four corner coordinates: (x_{n-1}, y_{m-1}) is the lower-left corner and (x_{n-1}, y_m) is the upper-left corner; by analogy, the coordinates of all four corners of the rectangle are contained in (x_{n-1}, x_n, y_{m-1}, y_m). Similarly, (x_{i-1}, x_i, y_{j-1}, y_j) can denote the position information of the region corresponding to the element in row i, column j of the region division matrix. It should be noted that the above two-dimensional coordinates are coordinates in a two-dimensional plane; this does not mean each coordinate contains only two values. For example, (x_{n-1}, x_n, y_{m-1}, y_m) contains four values but is a two-dimensional coordinate.
Referring again to FIG. 1, step S102: count, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data.
Point cloud data consist of the information of a large number of data points and carry the position information of each data point. From the position information of each data point, the vehicle-mounted radar can determine which cuboid space the point of a given piece of point cloud data falls in. From another perspective, the vehicle-mounted radar can count the number of points in each cuboid space, thereby obtaining the point cloud statistics matrix. The point cloud statistics matrix can be expressed as [k_{11}, k_{12}, k_{13}, …, k_{mn}]_{m×n}, where k_{mn} denotes the number of point cloud points in the space corresponding to the element in row n, column m of the space position matrix.
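A possible implementation of this counting step is sketched below. The patent gives no code; the helper `count_points` and its parameters are assumptions, using the top-view (x, y) position of each point to index the grid cell it falls in.

```python
# Count how many points of one frame fall into each top-view grid cell,
# producing the point cloud statistics matrix k (rows x cols of counts).

def count_points(frame, x0, y0, cell, cols, rows):
    """frame: iterable of (x, y) point positions in metres."""
    k = [[0] * cols for _ in range(rows)]
    for x, y in frame:
        i = int((x - x0) // cell)   # column index from the x coordinate
        j = int((y - y0) // cell)   # row index from the y coordinate
        if 0 <= i < cols and 0 <= j < rows:
            k[j][i] += 1            # point lies inside cell (row j, col i)
    return k

# Toy frame: two points in the first cell, one point in cell (row 1, col 2)
k = count_points([(0.01, 0.01), (0.02, 0.03), (0.12, 0.07)],
                 x0=0.0, y0=0.0, cell=0.05, cols=4, rows=4)
```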
Referring again to FIG. 1, step S103: determine, on the basis of the region division matrix, the segmentation threshold corresponding to each element in each frame's point cloud statistics matrix, and classify and assign values to the elements of each frame's point cloud statistics matrix on the basis of the segmentation thresholds corresponding to the elements, to obtain the first matrix corresponding to each frame of point cloud data.
In an embodiment, the specific implementation of S103 in FIG. 1 may include the following steps:
       S201: determine, according to the position information corresponding to each region in the region division matrix, the region to which each element in the point cloud statistics matrix belongs; each region corresponds to one segmentation threshold;
       S202: determine, according to the region to which each element in the point cloud statistics matrix belongs, the segmentation threshold corresponding to each element in the point cloud matrix.
In an embodiment, the specific implementation of S103 in FIG. 1 may further include the following step:
for each element in each frame's point cloud statistics matrix, if the number of point cloud points corresponding to the element is greater than the segmentation threshold corresponding to the element, assign the element the first value; if the number of point cloud points corresponding to the element is less than or equal to the segmentation threshold corresponding to the element, assign the element the second value; the first value is not equal to the second value.
The regions may have different thresholds, and according to these different thresholds, the vehicle-mounted radar can assign values to the elements of the point cloud statistics matrix corresponding to the spaces.
Specifically, the extents and thresholds of the five decision regions can be determined through extensive testing.
By way of example, the first value may be 1 and the second value may be 0.
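The threshold classification of S103 can be sketched as below. This is an interpretation, not the application's code; the region lookup function and the per-region thresholds are hypothetical stand-ins for the test-determined values mentioned above.

```python
# Binarize the statistics matrix k: an element becomes 1 (first value)
# if its count exceeds the threshold of the region its cell belongs to,
# and 0 (second value) otherwise.

def binarize(k, thresholds, region_of):
    """k: counts matrix; thresholds: dict region id -> threshold;
    region_of(j, i): region id of the cell at row j, column i."""
    return [
        [1 if k[j][i] > thresholds[region_of(j, i)] else 0
         for i in range(len(k[0]))]
        for j in range(len(k))
    ]

# Toy case: one region covering the whole grid, threshold 3
k = [[5, 0], [2, 9]]
D = binarize(k, {"front": 3}, lambda j, i: "front")
```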
Referring again to FIG. 1, step S104: determine, according to a nearest-neighbour clustering method and the first matrix corresponding to each frame of point cloud data, the initial centroid position of each frame of point cloud data in the space position matrix.
In an embodiment, the specific implementation of S104 in FIG. 1 may include the following steps:
for any element D_{mn} in the first matrix corresponding to each frame of point cloud data, if the value of D_{mn} is the first value, traverse the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)}; if the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} are all the first value, update the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} to the third value;
for the first matrix corresponding to any frame of point cloud data, take the elements of that first matrix whose value is the third value as the target elements of that first matrix, and obtain, on the basis of the space position matrix, the position information of the spaces corresponding to those target elements; average the position information of the spaces corresponding to the target elements, to obtain the initial centroid position, in the space position matrix, of the point cloud data corresponding to that first matrix.
For the first matrix corresponding to any frame of point cloud data, nearest-neighbourhood analysis is performed on it, traversing the matrix data of each region. Specifically, in the first step, if D_{mn} equals 1, the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} are traversed; in the second step, if D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} are all 1, then D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} are set to "2", forming a new first matrix; in the third step, the position information of the elements whose value is "2" in the updated first matrix is extracted from the space position matrix.
After the position information, in the space position matrix, of the elements whose value is "2" in the first matrix is obtained, the position data contained in this position information are averaged, to obtain the initial centroid position of the point cloud data corresponding to that first matrix.
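The neighbour pass and centroid computation above can be sketched as follows. This is one interpretation of the described procedure, not verified against the application: wherever D_{mn} and its three neighbours are all 1, the three neighbours are set to 2, and the centroid is taken as the mean of the cell-centre positions of all elements equal to 2 (averaging cell centres is an assumed simplification of averaging the cells' position information).

```python
# Mark dense neighbourhoods with the third value (2), then average the
# marked cells' centre positions to get the frame's initial centroid.

def mark_and_centroid(D, cell=0.05):
    rows, cols = len(D), len(D[0])
    M = [row[:] for row in D]                     # updated first matrix
    for m in range(rows - 1):
        for n in range(cols - 1):
            if D[m][n] == D[m + 1][n] == D[m][n + 1] == D[m + 1][n + 1] == 1:
                M[m + 1][n] = M[m][n + 1] = M[m + 1][n + 1] = 2
    # centre position of every cell whose value is 2
    pts = [((n + 0.5) * cell, (m + 0.5) * cell)
           for m in range(rows) for n in range(cols) if M[m][n] == 2]
    if not pts:
        return M, None
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return M, (cx, cy)

# Toy first matrix: a 2 x 2 block of 1s in the upper-left corner
D = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
M, c = mark_and_centroid(D)
```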
Referring again to FIG. 1, step S105: determine, according to the initial centroid positions of the frames of point cloud data in the space position matrix, the regions of the target vehicle in which occupancy exists.
In an embodiment, the specific implementation of S105 in FIG. 1 may include the following step:
if the ratio of the number of frames of point cloud data whose initial centroid position lies in a first region to the total number of frames of point cloud data is greater than a first threshold, determine that the first region is a region in which occupancy exists; the first region is any region within the target vehicle.
If, after judgment, an initial centroid position lies in any one of the five regions, the data position representing that region is set to '1' and the remaining regions are set to '0'. After filtering, the number of occurrences of 1 in the M-frame results of each region is counted; if the percentage of occurrences of '1' in the total exceeds 70%, the seat in that region is considered occupied. It should be noted that the "M" in "M-frame results" here denotes the number of frames, which is different from the meaning of "M" in "M is the number of longitudinal region divisions" above; the two values of M may or may not be equal.
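A minimal sketch of this per-region decision is given below. The 70% figure is taken from the embodiment text; the function name and data layout are illustrative assumptions.

```python
# A region is occupied if the fraction of frames whose initial centroid
# falls in that region exceeds the first threshold (70% here).

def occupied_regions(centroid_region_per_frame, num_regions, ratio=0.7):
    """centroid_region_per_frame: region index of each frame's centroid."""
    total = len(centroid_region_per_frame)
    hits = [0] * num_regions
    for r in centroid_region_per_frame:
        hits[r] += 1
    return [i for i, h in enumerate(hits) if h / total > ratio]

# Toy run over 10 frames: region 3 hit in 9 of 10 frames (90% > 70%)
frames = [3, 3, 3, 3, 1, 3, 3, 3, 3, 3]
occ = occupied_regions(frames, num_regions=5)
```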
If a seat is occupied (for example, a driver or passenger is in the seat), the vehicle-mounted radar sends the occupancy information to the body controller, which displays the occupancy information on the body display/instrument panel to remind the occupant to fasten the seat belt.
The in-vehicle radar occupancy identification method provided by the embodiments of this application first acquires the region division matrix and the space position matrix corresponding to the target vehicle, and multiple frames of point cloud data obtained by the vehicle-mounted radar scanning the interior of the target vehicle; then counts, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data; determines, on the basis of the region division matrix, the segmentation threshold corresponding to each element in each frame's point cloud statistics matrix, and classifies and assigns values to the elements of each frame's point cloud statistics matrix on that basis, to obtain the first matrix corresponding to each frame of point cloud data; finally determines, according to a nearest-neighbour clustering method and the first matrix corresponding to each frame of point cloud data, the initial centroid position of each frame of point cloud data in the space position matrix, and determines, according to these initial centroid positions, the regions of the target vehicle in which occupancy exists. By dividing the multiple regions within the target vehicle into spaces, the occupancy identification method provided by the embodiments of this application can quickly determine the in-vehicle occupancy state from the number of point cloud points in each space, thereby improving the in-vehicle occupancy identification effect.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
In an embodiment, as shown in FIG. 4, this application provides an in-vehicle radar occupancy identification apparatus 100. The apparatus includes a data acquisition module 110, a point cloud statistics matrix determination module 120, a first matrix determination module 130, a centroid position determination module 140 and an occupancy identification module 150.
The data acquisition module 110 is configured to acquire a region division matrix and a space position matrix corresponding to a target vehicle, and multiple frames of point cloud data obtained by a vehicle-mounted radar scanning the interior of the target vehicle; the elements in the region division matrix represent the position information of the multiple regions into which the target vehicle is divided, and the elements in the space position matrix represent the position information of the multiple spaces corresponding to each region.
The point cloud statistics matrix determination module 120 is configured to count, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data.
The first matrix determination module 130 is configured to determine, on the basis of the region division matrix, the segmentation threshold corresponding to each element in each frame's point cloud statistics matrix, and to classify and assign values to the elements of each frame's point cloud statistics matrix on the basis of the segmentation thresholds corresponding to the elements, to obtain the first matrix corresponding to each frame of point cloud data.
The centroid position determination module 140 is configured to determine, according to a nearest-neighbour clustering method and the first matrix corresponding to each frame of point cloud data, the initial centroid position of each frame of point cloud data in the space position matrix.
The occupancy identification module 150 is configured to determine, according to the initial centroid positions of the frames of point cloud data in the space position matrix, the regions of the target vehicle in which occupancy exists.
The in-vehicle radar occupancy identification apparatus 100 provided by the embodiments of this application has the modules capable of implementing the in-vehicle radar occupancy identification method provided by another embodiment of this application. The in-vehicle radar occupancy identification apparatus 100 can therefore divide the multiple regions within the target vehicle into spaces, and can quickly determine the in-vehicle occupancy state from the number of point cloud points in each space, thereby improving the in-vehicle occupancy identification effect.
In an embodiment, the vehicle-mounted radar may be mounted on the interior roof of the target vehicle; the region division matrix (denoted A) may be: A = {(x_n, x_{n+1}, y_m, y_{m+1}) | n ∈ [0, N], m ∈ [0, M]}, where N is the number of transverse region divisions, M is the number of longitudinal region divisions, and (x_n, x_{n+1}, y_m, y_{m+1}) is the two-dimensional position coordinate of a region in the top view. The space position matrix (denoted B) may be: B = {(x_i, x_{i+1}, y_j, y_{j+1}) | i ∈ [0, I], j ∈ [0, J]}, where I is the number of transverse space divisions, J is the number of longitudinal space divisions, and (x_i, x_{i+1}, y_j, y_{j+1}) is the two-dimensional position coordinate of a space in the top view.
In an embodiment, the first matrix determination module 130 in FIG. 4 may include a region determination unit and a segmentation threshold acquisition unit.
The region determination unit is configured to determine, according to the position information corresponding to each region in the region division matrix, the region to which each element in the point cloud statistics matrix belongs; each region corresponds to one segmentation threshold.
The segmentation threshold acquisition unit is configured to determine, according to the region to which each element in the point cloud statistics matrix belongs, the segmentation threshold corresponding to each element in the point cloud matrix.
In an embodiment, the first matrix determination module 130 in FIG. 4 may further include a classification assignment unit.
The classification assignment unit is configured to, for each element in each frame's point cloud statistics matrix, assign the element the first value if the number of point cloud points corresponding to the element is greater than the segmentation threshold corresponding to the element, and assign the element the second value if the number of point cloud points corresponding to the element is less than or equal to the segmentation threshold corresponding to the element; the first value is not equal to the second value.
In an embodiment, the centroid position determination module 140 in FIG. 4 may implement the following steps:
for any element D_{mn} in each frame's first matrix, if the value of D_{mn} is the first value, traverse the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)}; if the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} are all the first value, update the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} to the third value;
for any frame's first matrix, take the elements of that first matrix whose value is the third value as the target elements of that first matrix, obtain, on the basis of the space position matrix, the position information of the spaces corresponding to those target elements, and average that position information, to obtain the initial centroid position of that frame of point cloud data in the space position matrix.
In an embodiment, the occupancy identification module 150 in FIG. 4 may implement the following step:
if the ratio of the number of frames of point cloud data whose initial centroid position lies in a first region to the total number of frames of point cloud data is greater than a first threshold, determine that the first region is a region in which occupancy exists; the first region is any region within the target vehicle.
FIG. 5 is a schematic diagram of the vehicle-mounted radar provided by an embodiment of this application. As shown in FIG. 5, the vehicle-mounted radar 5 includes a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50. When executing the computer program 52, the processor 50 implements the steps of the in-vehicle radar occupancy identification method provided by the preceding embodiments, for example steps S101 to S105 shown in FIG. 1. When executing the computer program 52, the processor 50 can also implement the functions of the modules/units of the in-vehicle radar occupancy identification apparatus provided by the preceding embodiments, for example the functions of the modules 110 to 150 shown in FIG. 4.
The computer program 52 may be divided into one or more modules/units, which can be stored in the memory 51 and executed by the processor 50 to implement the technical solutions provided by the embodiments of this application. These modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments can describe the execution process of the computer program 52 in the vehicle-mounted radar 5. The vehicle-mounted radar 5 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The vehicle-mounted radar may include, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will understand that FIG. 5 is merely an example of the vehicle-mounted radar 5 and does not constitute a limitation on the vehicle-mounted radar 5; the vehicle-mounted radar 5 may include more or fewer components than shown, a combination of some components, or different components. For example, the vehicle-mounted radar 5 may also include input/output devices, network access devices, buses and the like.
The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any other conventional processor.
The memory 51 may be an internal storage unit of the vehicle-mounted radar 5, such as a hard disk or memory of the vehicle-mounted radar 5. The memory 51 may also be an external storage device of the vehicle-mounted radar 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card equipped on the vehicle-mounted radar 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the vehicle-mounted radar 5. The memory 51 is used to store the computer program 52 and the other programs and data required by the vehicle-mounted radar 5. The memory 51 may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the above division of functional units and modules is used only as an example; in practical applications, the above functions can be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus can be divided into different functional units or modules, to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another, and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or recorded in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed vehicle-mounted radar and method may be implemented in other ways. The vehicle-mounted radar embodiments described above are merely illustrative. The division of the modules or units is only a logical function division; in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place, or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. On this understanding, this application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of the technical features therein can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included in the protection scope of this application.

Claims (11)

  1. An in-vehicle radar occupancy identification method, characterized by comprising:
    a data acquisition step: acquiring a region division matrix and a space position matrix corresponding to a target vehicle, and multiple frames of point cloud data obtained by a vehicle-mounted radar scanning the interior of the target vehicle; the elements in the region division matrix represent the position information of the multiple regions into which the target vehicle is divided, and the elements in the space position matrix represent the position information of the multiple spaces corresponding to each region;
    a point cloud statistics step: counting, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data;
    a classification assignment step: determining, on the basis of the region division matrix, the segmentation threshold corresponding to each element in each frame's point cloud statistics matrix, and classifying and assigning values to the elements of each frame's point cloud statistics matrix on the basis of the segmentation thresholds corresponding to the elements, to obtain the first matrix corresponding to each frame of point cloud data;
    a centroid determination step: determining, according to a nearest-neighbour clustering method and the first matrix corresponding to each frame of point cloud data, the initial centroid position of each frame of point cloud data in the space position matrix;
    an occupancy identification step: determining, according to the initial centroid positions of the frames of point cloud data in the space position matrix, the regions of the target vehicle in which occupancy exists.
  2. The in-vehicle radar occupancy identification method according to claim 1, characterized in that the vehicle-mounted radar is mounted on the interior roof of the target vehicle;
    the region division matrix is: A = {(x_{n-1}, x_n, y_{m-1}, y_m) | n ∈ [1, N], m ∈ [1, M]}, where N is the number of transverse region divisions and M is the number of longitudinal region divisions; (x_{n-1}, x_n, y_{m-1}, y_m) is the two-dimensional position coordinate of a region in the top view;
    the space position matrix is: B = {(x_{i-1}, x_i, y_{j-1}, y_j) | i ∈ [1, I], j ∈ [1, J]}, where I is the number of transverse space divisions and J is the number of longitudinal space divisions; (x_{i-1}, x_i, y_{j-1}, y_j) is the two-dimensional position coordinate of a space in the top view.
  3. The in-vehicle radar occupancy identification method according to claim 1, characterized in that the vehicle-mounted radar is mounted on the interior roof of the target vehicle;
    the space position matrix is: A = {(x_{n-1}, x_n, y_{m-1}, y_m) | n ∈ [1, N], m ∈ [1, M]}, where N is the number of transverse space divisions and M is the number of longitudinal space divisions; (x_{n-1}, x_n, y_{m-1}, y_m) is the two-dimensional position coordinate of a space in the top view;
    the region division matrix is: B = {(x_{i-1}, x_i, y_{j-1}, y_j) | i ∈ [1, I], j ∈ [1, J]}, where I is the number of transverse region divisions and J is the number of longitudinal region divisions; (x_{i-1}, x_i, y_{j-1}, y_j) is the two-dimensional position coordinate of a region in the top view.
  4. The in-vehicle radar occupancy identification method according to claim 1, characterized in that determining, on the basis of the region division matrix, the segmentation threshold corresponding to each element in the point cloud statistics matrix comprises:
    determining, according to the position information corresponding to each region in the region division matrix, the region to which each element in the point cloud statistics matrix belongs; each region corresponds to one segmentation threshold;
    determining, according to the region to which each element in the point cloud statistics matrix belongs, the segmentation threshold corresponding to each element in the point cloud matrix.
  5. The in-vehicle radar occupancy identification method according to claim 1, characterized in that classifying and assigning values to the elements of each frame's point cloud statistics matrix on the basis of the segmentation thresholds corresponding to the elements, to obtain the first matrix corresponding to each frame of point cloud data, comprises:
    for each element in each frame's point cloud statistics matrix, if the number of point cloud points corresponding to the element is greater than the segmentation threshold corresponding to the element, assigning the element the first value; if the number of point cloud points corresponding to the element is less than or equal to the segmentation threshold corresponding to the element, assigning the element the second value; the first value is not equal to the second value.
  6. The in-vehicle radar occupancy identification method according to claim 1, characterized in that the centroid determination step comprises:
    for any element D_{mn} in each frame's first matrix, if the value of D_{mn} is the first value, traversing the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)}; if the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} are all the first value, updating the values of D_{(m+1)n}, D_{m(n+1)} and D_{(m+1)(n+1)} to the third value;
    for any frame's first matrix, taking the elements of that first matrix whose value is the third value as the target elements of that first matrix, and obtaining, on the basis of the space position matrix, the position information of the spaces corresponding to those target elements; averaging the position information of the spaces corresponding to the target elements, to obtain the initial centroid position of that frame of point cloud data in the space position matrix.
  7. The in-vehicle radar occupancy identification method according to claim 1, characterized in that the occupancy identification step comprises:
    if the ratio of the number of frames of point cloud data whose initial centroid position lies in a first region to the total number of frames of point cloud data is greater than a first threshold, determining that the first region is a region in which occupancy exists; the first region is any region within the target vehicle.
  8. An in-vehicle radar occupancy identification apparatus, characterized by comprising:
    a data acquisition module, configured to acquire a region division matrix and a space position matrix corresponding to a target vehicle, and multiple frames of point cloud data obtained by a vehicle-mounted radar scanning the interior of the target vehicle; the elements in the region division matrix represent the position information of the multiple regions into which the target vehicle is divided, and the elements in the space position matrix represent the position information of the multiple spaces corresponding to each region;
    a point cloud statistics matrix determination module, configured to count, according to the space position matrix corresponding to the target vehicle and each frame of point cloud data, the number of point cloud points falling in each space, to obtain the point cloud statistics matrix corresponding to each frame of point cloud data;
    a first matrix determination module, configured to determine, on the basis of the region division matrix, the segmentation threshold corresponding to each element in each frame's point cloud statistics matrix, and to classify and assign values to the elements of each frame's point cloud statistics matrix on the basis of the segmentation thresholds corresponding to the elements, to obtain the first matrix corresponding to each frame of point cloud data;
    a centroid position determination module, configured to determine, according to a nearest-neighbour clustering method and the first matrix corresponding to each frame of point cloud data, the initial centroid position of each frame of point cloud data in the space position matrix;
    an occupancy identification module, configured to determine, according to the initial centroid positions of the frames of point cloud data in the space position matrix, the regions of the target vehicle in which occupancy exists.
  9. The in-vehicle radar occupancy identification apparatus according to claim 8, characterized in that the vehicle-mounted radar is mounted on the interior roof of the target vehicle;
    the region division matrix is: A = {(x_{n-1}, x_n, y_{m-1}, y_m) | n ∈ [1, N], m ∈ [1, M]}, where N is the number of transverse region divisions and M is the number of longitudinal region divisions; (x_{n-1}, x_n, y_{m-1}, y_m) is the two-dimensional position coordinate of a region in the top view;
    the space position matrix is: B = {(x_{i-1}, x_i, y_{j-1}, y_j) | i ∈ [1, I], j ∈ [1, J]}, where I is the number of transverse space divisions and J is the number of longitudinal space divisions; (x_{i-1}, x_i, y_{j-1}, y_j) is the two-dimensional position coordinate of a space in the top view.
  10. A vehicle-mounted radar, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
  11. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
PCT/CN2022/087064 2021-04-30 2022-04-15 In-vehicle radar occupancy identification method and apparatus, and vehicle-mounted radar WO2022228150A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110483820.0A CN113219446A (zh) 2021-04-30 2021-04-30 In-vehicle radar occupancy identification method and apparatus, and vehicle-mounted radar
CN202110483820.0 2021-04-30

Publications (1)

Publication Number Publication Date
WO2022228150A1 true WO2022228150A1 (zh) 2022-11-03

Family

ID=77090687

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/087064 WO2022228150A1 (zh) 2021-04-30 2022-04-15 In-vehicle radar occupancy identification method and apparatus, and vehicle-mounted radar

Country Status (2)

Country Link
CN (1) CN113219446A (zh)
WO (1) WO2022228150A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113219446A (zh) * 2021-04-30 2021-08-06 森思泰克河北科技有限公司 车内雷达占位识别方法、装置及车载雷达
CN113561911B (zh) * 2021-08-12 2023-03-17 森思泰克河北科技有限公司 车辆控制方法、装置、毫米波雷达及存储介质
WO2023016350A1 (zh) * 2021-08-12 2023-02-16 森思泰克河北科技有限公司 占位识别方法及应用其的车辆控制方法
CN113734046B (zh) * 2021-08-17 2023-09-19 江苏星图智能科技有限公司 基于雷达的车内位置分区人员检测方法、装置以及设备
CN113791426B (zh) * 2021-09-10 2024-07-16 深圳市唯特视科技有限公司 雷达p显界面生成方法、装置、计算机设备及存储介质
CN113640779B (zh) * 2021-10-15 2022-05-03 北京一径科技有限公司 雷达失效判定方法及装置、存储介质
CN115469330B (zh) * 2022-10-28 2023-06-06 深圳市云鼠科技开发有限公司 子图的构建方法、装置、终端设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018214084A1 (en) * 2017-05-25 2018-11-29 Bayerische Motoren Werke Aktiengesellschaft Method and apparatus for representing environmental elements, system, and vehicle/robot
CN109711410A (zh) * 2018-11-20 2019-05-03 北方工业大学 Rapid three-dimensional object segmentation and recognition method, apparatus and system
CN110033457A (zh) * 2019-03-11 2019-07-19 北京理工大学 Target point cloud segmentation method
CN110879399A (zh) * 2019-10-08 2020-03-13 驭势科技(浙江)有限公司 Method and apparatus for processing point cloud data, vehicle, electronic device, and medium
CN111144228A (zh) * 2019-12-05 2020-05-12 山东超越数控电子股份有限公司 Obstacle recognition method based on 3D point cloud data, and computer device
CN111368604A (zh) * 2018-12-26 2020-07-03 北京图森智途科技有限公司 Parking control method, device and system
CN111427032A (zh) * 2020-04-24 2020-07-17 森思泰克河北科技有限公司 Room wall contour recognition method based on millimetre-wave radar, and terminal device
CN113219446A (zh) * 2021-04-30 2021-08-06 森思泰克河北科技有限公司 In-vehicle radar occupancy identification method and apparatus, and vehicle-mounted radar

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359614B (zh) * 2018-10-30 2021-06-11 百度在线网络技术(北京)有限公司 Laser point cloud plane recognition method, apparatus, device, and medium
CN110799989A (zh) * 2019-04-20 2020-02-14 深圳市大疆创新科技有限公司 Obstacle detection method, device, movable platform, and storage medium


Also Published As

Publication number Publication date
CN113219446A (zh) 2021-08-06

Similar Documents

Publication Publication Date Title
WO2022228150A1 (zh) In-vehicle radar occupancy identification method and apparatus, and vehicle-mounted radar
DE112015001807B4 (de) Millimetre-wave driver fatigue detection apparatus and method of operation
US20210261154A1 Continuous obstacle detection method, device, and system, and storage medium
CN111427032B (zh) Room wall contour recognition method based on millimetre-wave radar, and terminal device
CN106971185B (zh) License plate localization method and apparatus based on a fully convolutional network
CN110008891B (zh) Pedestrian detection and localization method and apparatus, in-vehicle computing device, and storage medium
TW201500246A (zh) Vehicle occupant number monitor, vehicle occupant number monitoring method, and computer-readable recording medium
CN112613344B (zh) Vehicle lane-occupation detection method and apparatus, computer device, and readable storage medium
DE102018112151A1 (de) Method and apparatus for classifying lidar data for object detection
GB2577734A Emergency vehicle detection
WO2023284764A1 (zh) Radar detection method and apparatus for living bodies inside a vehicle, and terminal device
CN112926526A (zh) Parking detection method and system based on millimetre-wave radar
CN109214256A (zh) Traffic icon detection method and apparatus, and vehicle
CN117416375A (zh) Vehicle avoidance method, apparatus, device, and storage medium
CN116189116B (zh) Traffic state perception method and system
CN113829994A (zh) Early-warning method and apparatus based on horn sound outside the vehicle, automobile, and medium
CN109360137B (zh) Vehicle accident assessment method, computer-readable storage medium, and server
Mahlisch et al. Heterogeneous fusion of Video, LIDAR and ESP data for automotive ACC vehicle tracking
CN113705406A (zh) Traffic indication signal detection method and related apparatus, device, and medium
US20230368485A1 Preprocessing method and electronic device for radar point cloud
US20230400553A1 Electronic device, method for controlling electronic device, and program
EP4418000A1 Radar data processing by DNN
CN115131964B (zh) Tunnel traffic flow perception system
CN118314363B (zh) Target tracking method and apparatus, storage medium, and computer device
US11837085B1 Method and system for rapid graduated motor vehicle detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794621

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22794621

Country of ref document: EP

Kind code of ref document: A1