WO2022217522A1 - Target perception method and device, detection system, movable platform and storage medium


Info

Publication number
WO2022217522A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
point cloud
time window
perception
point cloud data
Prior art date
Application number
PCT/CN2021/087327
Other languages
English (en)
Chinese (zh)
Inventor
杨帅
朱晏辰
陈亚林
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/087327
Publication of WO2022217522A1

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00

Definitions

  • The present application relates to the field of intelligent perception, and in particular to a target perception method, device, detection system, movable platform and computer-readable storage medium.
  • In the related art, target perception is usually performed based on acquired point cloud frames, so as to determine the environmental conditions around the movable platform and provide guidance information for the motion control of the movable platform.
  • In this approach, the maximum frequency of target perception is determined by the frequency of the point cloud frames. For example, for a movable platform equipped with a scanning lidar whose point cloud frames are acquired at 10 Hz, the target perception frequency achievable by the platform can reach at most 10 Hz. The environment around the movable platform may contain objects with different attributes, such as different moving speeds or different distances from the radar, and objects with different attributes often require different perception frequencies: fast-moving objects require a higher perception frequency, and so do closer objects.
  • Therefore, the limited target perception frequency in the related art may prevent objects from being perceived in time, which in turn leads to insufficient perception sensitivity and potential safety risks.
  • In view of this, the embodiments of the present application provide a target perception method, device, detection system, movable platform and computer-readable storage medium.
  • According to a first aspect, a target perception method is provided, comprising: in the process of acquiring the point cloud data constituting one point cloud frame, extracting the target point cloud data within at least one time window inside that frame, the target point cloud data corresponding to a target area in the field of view of the point cloud frame; obtaining the target perception result of each time window based on the target point cloud data extracted in that window; and outputting the target perception result of the time window; wherein the point cloud frame includes point cloud data obtained by the radar scanning the target area multiple times, and the extraction frequency of the point cloud data within the target duration is at least twice the acquisition frequency of the point cloud frame.
  • According to a second aspect, a target perception apparatus is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the method described in the first aspect of the present application is implemented.
  • According to a third aspect, a detection system is provided, comprising: a light source for emitting a light pulse sequence; a scanning module for changing the optical path of the light pulse sequence so as to scan the field of view; a detection module for detecting the light beams of the light pulse sequence reflected by objects to obtain point cloud data, wherein each point cloud point datum in the point cloud data is used to indicate the position of the corresponding point cloud point; an output module for continuously outputting point cloud frames, each point cloud frame including multiple point cloud point data; and a perception module configured to perform the following operations: in the process of acquiring the point cloud data constituting one point cloud frame, extract the target point cloud data within at least one time window inside that frame, the target point cloud data corresponding to a target area in the field of view of the point cloud frame; obtain the target perception result of each time window based on the target point cloud data extracted in that window; and output the target perception result of the time window; wherein the point cloud frame includes point cloud data obtained by the scanning module scanning the target area multiple times, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame.
  • According to a fourth aspect, a movable platform is provided, including a radar and the target perception apparatus according to the second aspect of the embodiments of the present application.
  • According to a fifth aspect, a computer-readable storage medium is provided, on which computer instructions are stored; when the computer instructions are executed, the method according to the first aspect of the embodiments of the present application is implemented.
  • Applying the solution provided by the embodiments of the present application, the target point cloud data within at least one time window inside one point cloud frame is extracted, the target perception result of each time window is obtained based on the target point cloud data extracted in that window, and finally the target perception result of the time window is output. The extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame.
  • FIG. 1 is a schematic diagram of an application scenario of a target sensing method according to an exemplary embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a radar according to an exemplary embodiment of the present application.
  • FIG. 3 is a flow chart of a target perception method according to an exemplary embodiment of the present application.
  • FIG. 4A is a schematic diagram showing a result of scanning a target area by a radar according to an exemplary embodiment of the present application.
  • FIG. 4B is a schematic diagram showing the scanning result of another radar on a target area according to an exemplary embodiment of the present application.
  • FIG. 5A is a schematic diagram of a first process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
  • FIG. 5B is a schematic diagram of a second process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
  • FIG. 5C is a schematic diagram of a third process of collecting point cloud data of different time windows according to an exemplary embodiment of the present application.
  • FIG. 5D is a schematic diagram of a fourth process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
  • FIG. 6 is a flow chart of obtaining the target perception result of each time window based on the point cloud data extracted in each time window according to an exemplary embodiment of the present application.
  • FIG. 7 is a schematic diagram of a radar scanning its surrounding environment according to an exemplary embodiment of the present application.
  • FIG. 8 is a flow chart of determining the length of a time window according to historical target perception results according to an exemplary embodiment of the present application.
  • FIG. 9 is a flow chart of determining the length of a time window based on the target perception object according to an exemplary embodiment of the present application.
  • FIG. 10 is a flow chart showing the output of a first target perception result according to an exemplary embodiment of the present application.
  • FIG. 11 is a flow chart showing the associated output of a target perception result according to an exemplary embodiment of the present application.
  • FIG. 12 is a flow chart showing the output of a second target perception result according to an exemplary embodiment of the present application.
  • FIG. 13 is a flow chart of selecting a target perception mode according to an exemplary embodiment of the present application.
  • FIG. 14 is a flow chart showing the output of a motion trajectory according to an exemplary embodiment of the present application.
  • FIG. 15 is a flow chart showing the output of a predicted motion trajectory according to an exemplary embodiment of the present application.
  • FIG. 16 is a schematic diagram of a biprism scanning assembly according to an exemplary embodiment of the present application.
  • FIG. 17A is a schematic diagram showing the result of scanning a target area based on a biprism scanning component according to an exemplary embodiment of the present application.
  • FIG. 17B is a schematic diagram showing the result of scanning a target area based on another biprism scanning component according to an exemplary embodiment of the present application.
  • FIG. 18 is a schematic diagram of a triangular prism scanning assembly according to an exemplary embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a target sensing apparatus according to an exemplary embodiment of the present application.
  • Although the terms "first", "second", "third", etc. may be used in this application to describe various information, such information should not be limited by these terms; these terms are only used to distinguish information of the same type from each other.
  • For example, without departing from the scope of the present application, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • The word "if" as used herein can be interpreted as "at the time of", "when", or "in response to determining".
  • FIG. 1 shows a schematic diagram of an application scenario.
  • As shown in FIG. 1, a smart car 102 equipped with a lidar 101 can use the lidar 101 to obtain point cloud frames and perform target perception on them, obtaining target perception results of the surrounding environment that further guide the operation of the smart car.
  • the radar may include a transmitter 201 , a collimating element 202 , a scanning module 203 and a detector 204 .
  • the transmitter 201 may be used to emit light pulses.
  • the transmitter 201 may include at least one light-emitting chip, which may emit laser beams at certain time intervals.
  • The collimating element 202 can be used to collimate the light pulses emitted by the transmitter; it may be, for example, a collimating lens or another element capable of collimating a light beam.
  • the scanning module 203 can be used to change the propagation direction of the collimated light beam, so that the light beam is irradiated on different points.
  • The scanning module may include at least one optical element that can reflect, refract or diffract the light beam, such as a prism, mirror or galvanometer, thereby changing the propagation path of the beam.
  • The optical element can be rotated by a driver. In this way, when the transmitter continuously emits light pulses, different pulses can be emitted in different directions so as to reach different positions, realizing the radar's scanning of a certain area. When an object is present in the scanned area, the light beam is reflected by the object back to the radar and detected by the detector 204. In this way, the radar can collect point cloud data containing surrounding environment information.
  • the foregoing embodiment is only an exemplary description of the radar, and the radar may also have other structures, which are not limited in the embodiments of the present application.
  • A single point of point cloud data carries relatively limited environmental information, and processing the data every time a single point is acquired would place extremely high demands on the computing speed of the system. Therefore, the multiple point cloud data obtained by the radar scanning its field of view (FOV) are usually stored first.
  • A common practice is to output the point cloud acquired within a certain period of time as one point cloud frame. After the point cloud frame is acquired, target perception can be performed based on the point cloud data in one or more frames, so as to obtain the target objects contained in the surrounding environment of the radar and their related information.
  • In this manner, the frequency of target perception is limited by the acquisition frequency of the point cloud frames (i.e., the frame rate). For example, for a radar system whose point cloud frame rate is 10 Hz, the target perception frequency can reach at most 10 Hz.
  • For a static environment, or one containing only slow-moving objects, a lower target perception frequency may be appropriate. The situation changes when the environment to be perceived contains fast-moving target objects, whose perception matters more. Taking a smart car as an example, perceiving a fast-moving target object in the environment and responding to it in time is key to ensuring safe driving, and is also an important factor restricting the wide application of smart cars.
  • In view of this, an embodiment of the present application provides a target perception method, as shown in FIG. 3. The method includes:
  • Step 301: In the process of acquiring the point cloud data constituting one point cloud frame, extract the target point cloud data within at least one time window inside that frame, the target point cloud data corresponding to the target area in the field of view of the point cloud frame;
  • Step 302: Obtain the target perception result of each time window based on the target point cloud data extracted in that window;
  • Step 303: Output the target perception result of the time window.
  • Here, the point cloud frame includes point cloud data obtained by the radar scanning the target area multiple times, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame.
  • In some scenarios, the radar scans the focused area (that is, the target area) in its field of view multiple times, so as to obtain a point cloud frame whose point cloud density reaches a certain value and thereby realize focused monitoring of the target area.
  • FIG. 4A and FIG. 4B are schematic diagrams of the scanning results of the target area (the area within the rectangular frame) in the field of view by a radar with a point cloud frame rate of 10 Hz, within time ranges of 50 ms and 100 ms, respectively. As the number of scans increases, point cloud data of higher density can be obtained, so that smaller objects in the surrounding environment can be perceived and target object information can be obtained more comprehensively and with higher spatial resolution.
  • Since the point cloud frame contains the point cloud data obtained by the radar scanning the target area many times, the inventors of the present application found that, in addition to performing target perception on whole point cloud frames, the target point cloud data within certain time windows of the frame can first be extracted. There can be multiple time windows, and the target point cloud data of each window includes the point cloud data obtained by the radar scanning the target area at least once.
  • Target perception is then performed based on the target point cloud data extracted from each time window, so that at least two target perception results for the target area are acquired in the process of acquiring one point cloud frame, that is, super frame rate perception, which improves the radar's perception sensitivity, real-time performance, and safety.
  • FIGS. 5A to 5D respectively show the process of collecting point cloud data in different time windows.
  • In the related art, point cloud frames are usually acquired first, such as the first, second and third point cloud frames shown in FIG. 5A to FIG. 5D, and target perception is performed based on each point cloud frame.
  • By contrast, in the target perception method provided by the above embodiments of the present application, target perception does not wait for all the point cloud data of a point cloud frame to be acquired; instead, the target point cloud data within at least one time window is extracted. For example, the target point cloud data of a time window T1 is extracted, and target perception is performed on the extracted data.
  • The extraction frequency of the point cloud data in the time window T1 is at least twice the acquisition frequency of the point cloud frame; that is, the acquisition time of one point cloud frame spans at least two time windows T1. The acquisition frequency of the target perception results is therefore greater than the frame rate of the point cloud frames, realizing super frame rate perception.
  • The at least one time window inside the frame may consist of multiple time windows of the same length; for example, as shown in FIG. 5A, within the duration of one point cloud frame, all the windows used to extract target point cloud data have duration T1. It may also consist of multiple time windows of different lengths; for example, as shown in FIG. 5B, within the duration of one point cloud frame, the windows may have several different durations such as T1, T2 and T3.
  • It is also possible to extract target point cloud data with windows of one length and, in parallel, with windows of another length: for example, extraction with windows of duration T1 and with windows of duration T2 can proceed synchronously, the point cloud data extracted by the T1 windows and the T2 windows overlaps, and the two parallel sets of windows each extract point cloud data from the same point cloud frame, as in the sketch below.
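  • A sketch of such parallel extraction, assuming timestamped points and two hypothetical window lengths t1 and t2; grouping the same stream under both lengths reproduces the overlap described above.

```python
def parallel_window_extraction(points, t1, t2):
    """Group the same timestamped points into two parallel sets of windows.

    points: list of (timestamp, point) from one point cloud frame.
    A point falls into one window of each length, so the data extracted by
    the t1 windows and the t2 windows overlaps, as in FIG. 5B/5C.
    """
    t0 = points[0][0]
    grouped = {}
    for length in (t1, t2):
        windows = {}
        for ts, p in points:
            windows.setdefault(int((ts - t0) / length), []).append(p)
        grouped[length] = [windows[i] for i in sorted(windows)]
    return grouped
```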
  • Time windows of different lengths are used for target perception of objects with different attributes, the attributes including at least one of the following: the type of the object, the size of the object, the distance of the object relative to the radar, the moving speed of the object, and so on. The correspondence between the length of the time window and the attributes of the object is described later.
  • The above at least one time window is a time window within one point cloud frame. When there are multiple time windows, their lengths may be the same or different, which is not limited in this embodiment of the present application.
  • The extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame. Therefore, compared with target perception based on whole point cloud frames, the method provided by the embodiments of the present application realizes super frame rate perception of the target area: multiple target perception results are obtained in the process of acquiring one point cloud frame, enabling faster perception of the surrounding environment and improving the real-time performance, sensitivity and safety of target perception.
  • In some embodiments, the radar scans the key areas in its field of view multiple times to obtain point cloud frames whose point cloud density reaches a certain value, realizing focused monitoring of the target area. Which areas the radar scans multiple times depends on its scanning components.
  • In some embodiments, the radar can perform multiple scans of a local area of its field of view, e.g., when its scanning components include galvanometers and/or mirrors. In other embodiments, the radar cannot select the area it scans multiple times and can only scan the entire field of view multiple times, e.g., when its scanning component includes a rotating prism. Therefore, the target area may be the entire field of view of the radar or a partial area of it, which is not limited in this embodiment of the present application.
  • As described above, the target point cloud data of a time window includes the point cloud data obtained by the radar scanning the target area at least once.
  • The specific number of times the radar scans the target area within a time window is not limited in this embodiment of the present application and can be chosen by those skilled in the art according to actual needs. For application scenarios with relatively low accuracy requirements on the perception results, for instance when the objects to be recognized are relatively large, the requirements on point cloud density are not high, and perception results meeting the requirements can be obtained from the point cloud data of a single scan within the time window. For application scenarios with relatively high accuracy requirements, for instance when the objects to be recognized are relatively small, the density requirements are somewhat higher; the target area can be scanned multiple times to obtain point cloud data of sufficient density, and the perception results are then obtained from the data of those scans.
  • The number of times the radar scans the target area may be determined from multiple experiments, pre-calculated from a physical model, or determined by other means, which is likewise not limited in the embodiments of the present application. Several specific embodiments are given below for illustration.
  • Since the target point cloud data in a time window includes the point cloud data of at least one scan of the target area, the number of scans also affects the perception results: when the window contains only a few scans of the target area, the accuracy of the perception results obtained from it is relatively low but the perception frequency is relatively high; when the window contains many scans, the perception frequency is relatively low but high-accuracy perception results can be obtained.
  • In some embodiments, the duration of the point cloud frame includes multiple consecutive time windows, and extracting the target point cloud data within at least one time window inside the frame includes: extracting the target point cloud data in each of the multiple consecutive time windows, for example, as shown in FIGS. 5A to 5C.
  • These may be multiple windows of the same length or of different lengths, and extraction with same-length windows may also be combined with extraction with different-length windows, which is not limited in this embodiment of the present application.
  • In other embodiments, extracting the target point cloud data within at least one time window inside the frame includes: extracting the target point cloud data in each of multiple non-consecutive time windows.
  • These, too, may be multiple windows of the same length (as shown in FIG. 5D) or of different lengths, and extraction with same-length windows may also be combined with extraction with different-length windows, which is not limited in this embodiment of the present application.
  • When target point cloud data is extracted in each of multiple consecutive time windows, the target area can be continuously monitored; when it is extracted in multiple non-consecutive windows, certain computing resources can be saved. Those skilled in the art can choose the continuity of the time windows according to the actual application, so as to adapt to different application requirements.
  • As mentioned above, the time windows may be of different lengths, and windows of different lengths are used to perform target perception on objects with different attributes.
  • In some embodiments, two time windows of different lengths do not overlap each other, as shown in FIG. 5B. In this case, each window corresponds to point cloud data in its own time range and is used only for super-frequency perception of objects with one attribute.
  • For example, suppose the time window T1 is used to perceive a dog at a distance of 5 m from the radar and the time window T2 is used to perceive a person at a distance of 5 m; then the target point cloud data obtained within one time range can be used to perceive only the dog at 5 m, or only the person at 5 m.
  • In other embodiments, two time windows of different lengths may overlap. In this case, the point cloud data in the overlapping time range is used for super-frequency perception of objects with at least two attributes.
  • Again taking the example where the time window T1 is used to perceive the dog 5 m from the radar and the time window T2 is used to perceive the person 5 m from the radar: when T1 and T2 overlap, the target point cloud data obtained in the overlapping time range is used to perceive both the dog at 5 m and the person at 5 m.
  • In this way, the point cloud data obtained in the same time range can be used to perceive multiple objects with different attributes at the same time, which makes full use of the acquired point cloud data, improves its utilization, and yields richer target perception results.
  • In some embodiments, the point cloud data obtained by the radar contains information such as the three-dimensional coordinates, and/or color, and/or reflectivity of the target objects located in the surrounding environment of the radar.
  • After the detector of the radar detects the return light signal from a target object to obtain point cloud data, the radar may either forward the detected raw signal directly to other control units for data processing without processing it itself, or first perform certain data processing on the point cloud data to obtain the corresponding depth information or coordinate information.
  • In the former case, the raw signal within the time window can be directly extracted for data processing and target perception. In the latter case, point cloud data can first be extracted according to data extraction rules applied to the depth or coordinate information of the points, and target perception is then performed on the extracted target point cloud data.
  • In some embodiments, the duration of the point cloud frame includes at least two time windows of different lengths, and the data extraction rules corresponding to windows of different lengths are different.
  • The data extraction rules corresponding to windows of different lengths may be unified rules pre-set by developers according to the needs of the application scenario, or an initial rule set at the start of target perception and then automatically adjusted during subsequent perception, based on the actual perception effect of each window (including perception accuracy, perception speed, and so on), until a rule suitable for the corresponding window length is determined.
  • The data extraction rules may also be other rules, which are not specifically limited in this embodiment of the present application.
  • In some embodiments, the target point cloud data extracted in time windows of different lengths corresponds to point cloud data in different depth ranges within the target area.
  • As mentioned above, in some embodiments the radar first processes the point cloud data to obtain the corresponding depth or coordinate information. In this case, for windows of different lengths, point cloud data satisfying different conditions in the target area can be extracted, and target perception can be performed on the extracted data: the point cloud data is first filtered so that, for each window length, the points within the corresponding depth range are extracted, the depth range corresponding to the length of the time window.
  • In other embodiments, the target point cloud data extracted in windows of different lengths corresponds to point cloud data in different orientations within the target area: the point cloud data is filtered so that, for each window length, the points in the corresponding orientation are extracted, the orientation corresponding to the length of the time window.
  • That is, for time windows of different lengths, target point cloud data can be extracted according to different data extraction rules. Specific examples are given below; those skilled in the art should understand that the following embodiments are only exemplary, and windows of different lengths may also use other data extraction rules, which are not limited in the embodiments of the present application.
  • In some embodiments, the data extraction rule includes: for time windows of different lengths, extracting the target point cloud data that satisfies a preset condition, the preset condition corresponding to the length of the window; or, for time windows of different lengths, extracting all the point cloud data in each window.
  • Since the point cloud data contains depth information or coordinate information, for windows of different lengths used to perceive objects with different attributes, it can first be determined whether each point in the window satisfies the preset condition corresponding to that window length. If so, the point is extracted as target point cloud data for super frame rate perception; if not, it is not extracted.
  • For example, the multiple point cloud data acquired within the time window T1 may correspond to different depths. When the target to be perceived is determined, say a dog at a distance of 5 m from the radar, then among the point cloud data acquired within T1, only the points at a depth of 5 m are useful for perceiving it and the rest are redundant; therefore only the points at a depth of 5 m need to be extracted as the target point cloud data before performing target perception.
  • The preset condition is not limited to the target point cloud data lying at a preset depth; it may also be that the coordinates of the target point cloud data lie at preset coordinates, or another condition, which is not limited in this embodiment of the present application.
  • Alternatively, when all the point cloud data in each time window is extracted, target perception can be performed directly on all of it, and super frame rate perception can likewise be implemented for the target object corresponding to the window.
  • Time windows of different lengths are used to perceive target objects with different attributes. A target object closer to the radar often requires a higher perception frequency, while a target object farther from the radar has lower frequency requirements than a closer one.
  • Therefore, for a shorter time window, the point cloud data with smaller depth can be extracted as the target point cloud data, so that closer target objects are perceived at a higher frequency; for a longer window, the point cloud data with larger depth can be extracted, so that somewhat farther target objects are perceived at a lower frequency.
  • Based on this, in some embodiments the data extraction rule includes: extracting target point cloud data at a first depth for a first time window, and extracting target point cloud data at a second depth for a second time window whose length is greater than that of the first, the first depth being smaller than the second depth.
  • That is, for the shorter window, target point cloud data with smaller depth is extracted for super frame rate perception of close-range objects, and for the longer window, target point cloud data with larger depth is extracted for super frame rate perception of distant objects, as in the sketch below. This meets the perception requirements of target objects at different distances in the real world, so that the movable platform carrying the radar can learn the situation of objects at different distances and respond in time, improving operational safety.
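  • A minimal sketch of this rule, assuming Euclidean range from the radar origin as depth; the window lengths and depth ranges in DEPTH_RULES are illustrative values, not values given by the patent.

```python
import numpy as np

# Hypothetical rule table: the shorter window maps to the smaller depth range.
DEPTH_RULES = {
    0.025: (0.0, 10.0),    # first (short) window: close-range objects, meters
    0.100: (10.0, 50.0),   # second (long) window: distant objects, meters
}

def extract_by_depth(points_xyz: np.ndarray, window_length: float) -> np.ndarray:
    """Keep only the points whose depth falls in the range tied to this window."""
    near, far = DEPTH_RULES[window_length]
    depth = np.linalg.norm(points_xyz, axis=1)       # (N, 3) -> per-point range
    return points_xyz[(depth >= near) & (depth < far)]
```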
  • In some embodiments, step 302, obtaining the target perception result of a time window based on the point cloud data extracted in it, may be performed directly on all the extracted point cloud data to obtain the super frame rate perception result corresponding to the window.
  • In other embodiments, step 302, obtaining the target perception result of each time window based on the point cloud data extracted in it, as shown in FIG. 6, includes:
  • Step 601: Acquire the depths or coordinates of all the point cloud data, and determine the point cloud data satisfying a preset depth or coordinates;
  • Step 602: According to the point cloud data satisfying the preset depth or coordinates, acquire a target perception result including a target perception object located at the preset depth.
  • That is, even if all the point cloud data in the time window is extracted in step 301, the extracted data can still be screened again before target perception is performed on it, selecting the points that satisfy the preset depth or coordinates as the target point cloud data. In this way, a target perception result for a specific depth can be obtained, and the perception process saves certain computing resources, as in the sketch below.
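  • Steps 601 and 602 as a sketch, under the same depth-as-range assumption; perceive and the tolerance band tol are hypothetical.

```python
import numpy as np

def perceive_at_depth(points_xyz, depth, tol=0.5, perceive=None):
    """Screen extracted points to a preset depth band [depth - tol, depth + tol],
    then run perception on what remains (step 601 followed by step 602)."""
    rng = np.linalg.norm(points_xyz, axis=1)
    selected = points_xyz[np.abs(rng - depth) <= tol]
    return perceive(selected) if perceive else selected
```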
  • As mentioned above, time windows of different lengths are used to perceive objects with different attributes, the attributes including at least one of the following: the type of the object, the size of the object, the distance of the object relative to the radar, the moving speed of the object, and so on.
  • In some embodiments, step 302, acquiring the target perception results of the time windows based on the point cloud data extracted in each window, includes: acquiring target perception results containing different types of target perception objects based on time windows of different lengths. For example, based on the time window of length T1, a target perception result containing a dog is acquired; based on the time window of length T2, a target perception result containing a truck is acquired.
  • In some embodiments, acquiring target perception results containing different types of target perception objects based on time windows of different lengths includes: acquiring the results according to multiple preset target perception methods, where each method corresponds to one window length and is used to identify one preset target object.
  • For example, suppose the target perception objects to be recognized are a dog and a truck; the dog corresponds to a time window of length T1 and the truck to a time window of length T2, with T1 not equal to T2; the dog is perceived with a first target perception method and the truck with a second target perception method.
  • Then, in step 301, the target point cloud data in the time windows T1 and T2 inside the frame is extracted; in step 302, the first method is applied to the data extracted in T1 to obtain a first target perception result containing the dog, and the second method is applied to the data extracted in T2 to obtain a second target perception result containing the truck.
  • The multiple different target perception methods may be based on multiple different neural network models, each used to identify one preset target object. Each method may also be an algorithm other than a neural network model, such as a traditional feature-based recognition algorithm used to identify a preset target object, which is not limited in this embodiment of the present application.
  • Since each target perception method corresponds to one window length and is used to identify one preset target, the perception accuracy for each window is relatively high.
  • In other embodiments, acquiring target perception results containing different types of target perception objects based on time windows of different lengths includes: acquiring the results according to a single preset target perception method, where this same method is used to identify multiple preset target objects.
  • Continuing the example, the objects to be identified are a dog (corresponding to a window of length T1) and a truck (corresponding to a window of length T2, with T1 not equal to T2), and a third target perception method can perceive both dogs and trucks. Then the third method can be applied to the target point cloud data extracted in T1 and in T2 to obtain, respectively, the first target perception result containing the dog and the second containing the truck.
  • The single target perception method may be based on one neural network model used to identify multiple preset target objects, or on an algorithm other than a neural network model, which is not limited in this embodiment of the present application.
  • Since one target perception method then corresponds to windows of multiple lengths and identifies multiple preset targets, perception with this method occupies less storage space and has wider applicability. Both variants are sketched below.
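  • The two variants can be expressed as a simple dispatch: either each window length is routed to its own perception routine, or one shared multi-class model handles all windows. The routine names and window lengths are hypothetical placeholders.

```python
def perceive_dog(points): ...      # placeholder per-length perception methods
def perceive_truck(points): ...

PER_WINDOW_METHOD = {
    0.025: perceive_dog,    # window of length T1
    0.100: perceive_truck,  # window of length T2
}

def run_window(points, window_length, shared_model=None):
    """Apply the per-length method, or one shared model if supplied."""
    if shared_model is not None:                  # single method, many targets
        return shared_model(points)
    return PER_WINDOW_METHOD[window_length](points)
```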
  • As mentioned above, time windows of different lengths are used for target perception of objects with different attributes. The following introduces how the window lengths corresponding to objects with different attributes are determined.
  • In some embodiments, the length of the time window is predetermined. There may be various methods for presetting it, which is not limited in this embodiment of the present application; several specific examples are given next.
  • For example, after the software and hardware parameters of the radar are determined, multiple experiments may be performed with the radar on different target objects in different target areas, and the time windows corresponding to the different target objects are determined from these experiments, the length of each window corresponding to its target object.
  • For another example, the length of the time window may be determined based on a predetermined algorithm. There may be multiple such algorithms, which are not limited in this embodiment of the present application. An example of a specific algorithm is given below.
  • FIG. 7 is a schematic diagram of the radar scanning its surrounding environment according to an embodiment of the present application, where 701 is the radar, 702 is a target object located in the target area of the radar, point A and point B are the spatial positions corresponding to two adjacent point cloud data obtained by the radar while scanning its target area, d represents the distance from the target object to the radar, h represents the size of the target object, and r represents the point cloud angular resolution of the radar.
  • The change rule of the radar's point cloud angular resolution in the target area can be recorded and analyzed in advance. Suppose the radar scans the target area $X$ times within the duration of acquiring one point cloud frame, where $X$ is a positive integer. The point cloud angular resolution $r$ of the radar in the target area as a function of the number of scans $x$ can then be recorded as $r = f(x)$, $x \in \{1, 2, \dots, X\}$, with $f(x)$ decreasing as $x$ grows.
  • The effective perception capability of the radar is closely related to the angular resolution: the smaller the angular resolution, the farther the effective perception distance; or, at the same distance, the smaller the angular resolution, the smaller the target object the radar can perceive. For the target object of size $h$ at distance $d$, the angular resolution required to perceive it should satisfy $r \leq \arctan(h/d)$.
  • Let $\Delta t$ denote the time interval between two scans of the target area by the radar. Then target point cloud data for the target object can actually be taken from the target area once every $t$, where $t$ is the length of the time window mentioned above and can be written as $t = x_{\min} \cdot \Delta t$, with $x_{\min}$ the smallest number of scans $x$ satisfying $f(x) \leq \arctan(h/d)$.
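  • Under the reconstruction above, the window length can be computed as sketched below; resolution_after is a hypothetical, radar-specific table of measured angular resolution versus number of scans, and the numbers in the usage line are purely illustrative.

```python
import math

def window_length(h, d, delta_t, resolution_after, max_scans=100):
    """Smallest time window in which a target of size h (m) at distance d (m)
    becomes resolvable, given delta_t (s) between scans of the target area."""
    required = math.atan(h / d)                  # resolution needed for (h, d)
    for x in range(1, max_scans + 1):
        if resolution_after(x) <= required:      # enough scans accumulated
            return x * delta_t                   # window length t = x * delta_t
    raise ValueError("target not resolvable within max_scans scans")

# Toy usage with an illustrative resolution model r = 0.2 / x rad:
t = window_length(h=0.5, d=5.0, delta_t=0.01,
                  resolution_after=lambda x: 0.2 / x)   # -> 0.03 s
```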
  • In this way, one or more target perception objects may be determined in advance, and the lengths of the time windows corresponding to them can then be determined with the above algorithm. Besides being preset, the length of the time window can also be determined in other ways.
  • In some embodiments, the length of the time window is determined according to historical target perception results.
  • The historical target perception results include at least one of the following: a target perception result obtained based on a point cloud frame before the current time window, or a target perception result obtained based on the point cloud data in a time window before the current time window.
  • That is, the target perception object need not be determined in advance; instead, the length of the current time window can be adaptively determined from the historical perception results obtained before it. For example, if the perception result of the previous point cloud frame shows a moving car 5 m in front of the radar, the window length corresponding to the car can be determined with the above algorithm, and super frame rate perception is performed on the car at the highest frequency allowed. This approach does not depend on presetting, has wider applicability and flexibility, and can be applied in complex environments.
  • In some embodiments, as shown in FIG. 8, determining the length of the time window according to historical target perception results includes:
  • Step 801: Determine the target perception object according to the historical target perception results;
  • Step 802: Determine the length of the time window based on the target perception object.
  • First, the target perception object may be determined based on the historical target perception results; this determination can be implemented with reference to related technologies. For example, the objects contained in the historical results may be determined and then screened under certain conditions to determine the target perception object. The condition may be that the size of the object satisfies a certain size, that the depth of the object satisfies a certain depth, that the object is a certain kind of object, and so on. The length of the time window is then determined based on the determined target perception object, according to the various embodiments described above.
  • Next, the length of the time window is determined based on the target perception object. In some embodiments, the window length corresponding to the target perception object may be looked up in a preset mapping between perception objects and window lengths and used as the length of the current window. The length of the time window can also be determined in other ways.
  • In some embodiments, determining the length of the time window based on the target perception object, as shown in FIG. 9, includes:
  • Step 901: Based on the target perception object, determine the motion speed of the object and/or the depth of the object and/or the target point cloud angular resolution;
  • Step 902: Determine the length of the time window based on the motion speed and/or the depth and/or the target point cloud angular resolution.
  • After the target perception object is determined, its various attribute information can be obtained. For example, from a target perception object determined by one or more historical perception results, the distance of the object from the radar (that is, its depth) and its size can be obtained; from an object determined across multiple historical results, its motion speed can be determined.
  • Based on the depth, the window length corresponding to that depth can be looked up in a preset mapping between depth and window length and used as the current window length; or, combined with a preset size (height) value, the algorithm described above or another algorithm can be applied to determine the current window length.
  • Based on the motion speed, the window length corresponding to that speed can likewise be looked up in a preset mapping between motion speed and window length, or the algorithm described above or another algorithm can be applied to determine the current window length.
  • These embodiments of determining the window length based on the motion speed and/or the depth and/or the target point cloud angular resolution are only illustrative; the length may also be determined from this information in other ways, which is not limited in this embodiment of the present application. A sketch follows below.
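  • A sketch of steps 901 and 902 with hypothetical lookup tables: faster or closer objects get shorter windows, and when both attributes are known, the stricter (shorter) window wins. All thresholds and window values are assumed.

```python
# (speed threshold m/s, window s) and (depth bound m, window s); assumed values.
SPEED_TO_WINDOW = [(10.0, 0.01), (3.0, 0.025), (0.0, 0.05)]
DEPTH_TO_WINDOW = [(5.0, 0.01), (20.0, 0.025), (50.0, 0.05)]

def adapt_window(speed=None, depth=None):
    """Map the attributes of a historically perceived object to a window length."""
    candidates = []
    if speed is not None:   # first row whose speed threshold the object reaches
        candidates.append(next((w for s, w in SPEED_TO_WINDOW if speed >= s), 0.05))
    if depth is not None:   # first row whose depth bound contains the object
        candidates.append(next((w for d, w in DEPTH_TO_WINDOW if depth <= d), 0.1))
    return min(candidates) if candidates else None
```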
  • In practice, when there are multiple target perception objects with different motion speeds, time windows of different lengths may be set accordingly. For example, when the target area contains both a fast-running pedestrian and a stationary pedestrian, the fast-running pedestrian has a greater impact on the movable platform carrying the radar and should be perceived within a shorter time window, so that the object is perceived in real time and can be responded to quickly.
  • Similarly, when multiple target perception objects correspond to different depths (that is, different distances relative to the radar), time windows of different lengths can be set. For example, when the target area contains a car only 5 m from the radar and another car 10 m away, the closer car has the greater impact on the movable platform carrying the radar and should be perceived within a shorter time window, so that it can be responded to quickly and in time.
  • In the above embodiments, the target perception objects with different speeds and depths are exemplified as objects of the same kind; they may also be objects of different kinds, which is not limited in this embodiment of the present application.
  • In some embodiments, in the process of acquiring the point cloud data constituting one point cloud frame, after the target point cloud data within at least one time window is extracted, target perception can be performed directly, in real time, on the point cloud data of each window, and the target perception result of the window is output; the flow chart of this output is shown in FIG. 10.
  • Note that FIG. 10 only takes the extraction of point cloud data within one time window, and its super-frequency perception, as an example.
  • In specific implementations, point cloud data in multiple different time windows may be extracted simultaneously, and the different windows may overlap, so as to obtain multiple target perception results containing different target perception objects.
  • In addition, since the point cloud frame contains the point cloud data obtained by the radar scanning its entire field of view, the point cloud density corresponding to the frame is higher than that of a time window, and more detailed target perception results over more of the space can be obtained from it. Therefore, in some embodiments, as shown in FIG. 11, the method may further include:
  • Step 1101: Obtain the perception result of the point cloud frame;
  • Step 1102: Associate the perception result of the point cloud frame with multiple first perception results and output them;
  • where each first perception result is a target perception result acquired based on the point cloud data of one time window during the acquisition of the point cloud data constituting the frame.
  • The flow chart of the above embodiment is shown in FIG. 12.
  • In some embodiments, the multiple first perception results are output in the order in which they are acquired, and in this process the perception results of the point cloud frames are continuously output as well, so that the frame results and the multiple first results are output in correspondence. Other associated output manners may also be used, which are not limited in this embodiment of the present application.
  • Outputting the perception result of the point cloud frame in association with the multiple first perception results allows the more detailed results obtained from the frame and the more real-time results obtained from the windows to be output together, satisfying both the spatial-sensitivity and the temporal-sensitivity requirements of perception; one possible realization is sketched below.
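  • One possible associated output, sketched as a time-ordered merge; the tuple format and the "window"/"frame" tags are assumptions, not the patent's protocol.

```python
def associated_output(window_results, frame_results):
    """Merge per-window and per-frame perception results into one stream
    ordered by acquisition time (one way to realize step 1102).

    Both inputs are lists of (timestamp, result); results are tagged so a
    consumer can tell real-time window results from detailed frame results.
    """
    tagged = ([(t, "window", r) for t, r in window_results]
              + [(t, "frame", r) for t, r in frame_results])
    for t, kind, result in sorted(tagged, key=lambda item: item[0]):
        yield t, kind, result
```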
  • In some embodiments, before the target point cloud data within at least one time window inside the frame is extracted, as shown in FIG. 13, the method further includes:
  • Step 1301: Receive a trigger instruction;
  • Step 1302: Under the trigger of the trigger instruction, output target perception mode selection information, the target perception modes at least including a super frame rate perception mode, and/or a normal frame rate perception mode;
  • Step 1303: When the super frame rate perception mode is determined, extract the target point cloud data within at least one time window and output the acquired target perception result; when the normal frame rate perception mode is determined, perform target perception on the point cloud frame and output the acquired target perception result.
  • That is, in the super frame rate mode, the target point cloud data within at least one time window is extracted and the obtained perception result output, while in the normal mode, target perception is performed on whole point cloud frames and those results are output.
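  • A sketch of steps 1301 to 1303 as a simple dispatch; the mode names and pipeline callables are assumed.

```python
SUPER_FRAME_RATE, NORMAL_FRAME_RATE = "super", "normal"

def handle_trigger(instruction, windowed_pipeline, frame_pipeline):
    """Route perception to the super-frame-rate (per-window) pipeline or the
    normal (per-frame) pipeline according to the trigger instruction."""
    if instruction == SUPER_FRAME_RATE:
        return windowed_pipeline()   # extract windowed data, perceive, output
    if instruction == NORMAL_FRAME_RATE:
        return frame_pipeline()      # perceive whole point cloud frames
    raise ValueError(f"unknown perception mode: {instruction!r}")
```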
  • In the above embodiments, the perception mode is determined based on the received trigger instruction and the target perception result is then output accordingly, which facilitates interaction with users and enhances the user experience.
  • In some embodiments, as shown in FIG. 14, the method may further include:
  • Step 1401: Acquire multiple target perception results of the same time windows;
  • Step 1402: Based on the acquired target perception results, obtain the motion trajectory of the target perception object corresponding to the multiple identical time windows;
  • Step 1403: Output the motion trajectory.
  • In the above embodiments, the motion trajectory of the target perception object is determined and output based on the target perception results of multiple identical time windows, so that the user or monitoring personnel can respond accordingly based on the trajectory; a sketch follows below.
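  • Steps 1401 to 1403 sketched with an assumed result format: the object's perceived position per window, ordered by time, forms the trajectory.

```python
import numpy as np

def build_trajectory(window_results):
    """window_results: list of (timestamp, (x, y, z)) for one object, e.g. the
    centroid perceived in each same-length window (an assumed format)."""
    traj = sorted(window_results, key=lambda item: item[0])
    times = np.array([t for t, _ in traj])
    positions = np.array([p for _, p in traj])
    return times, positions
```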
  • In some embodiments, as shown in FIG. 15, the method may further include:
  • Step 1501: Based on the motion trajectory, obtain the predicted motion trajectory of the target perception object;
  • Step 1502: Output the predicted motion trajectory.
  • Step 1501, obtaining the predicted motion trajectory of the target perception object based on the motion trajectory, may be implemented with reference to related technologies or with an algorithm developed independently by developers, which is not limited in this embodiment of the present application. It can be seen from the above embodiments that determining the predicted trajectory of the target perception object based on the obtained perception results enables relatively reliable predictions for the target objects around the radar, which in turn allows a more timely response to them and improves the sensitivity and safety of the system.
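  • Continuing the sketch above, one of the simplest predictors is constant-velocity extrapolation; the patent leaves the prediction algorithm open, so this stands in for any related-art method.

```python
import numpy as np

def predict_trajectory(times, positions, horizon, steps=5):
    """Extrapolate the trajectory from build_trajectory() at constant velocity
    over `horizon` seconds, sampled at `steps` future instants."""
    velocity = (positions[-1] - positions[0]) / (times[-1] - times[0])
    future_t = times[-1] + np.linspace(horizon / steps, horizon, steps)
    future_p = positions[-1] + np.outer(future_t - times[-1], velocity)
    return future_t, future_p
```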
  • the target area in each of the above embodiments may be the entire area of the radar field of view, or may be a partial area of the radar field of view.
  • the central area of the field of view of the acquired point cloud frame is usually the area that needs to be monitored. Therefore, in some embodiments, the target area is a central area in the field of view of the point cloud frame.
  • the target area may also be another area set based on application requirements, which is not limited in this embodiment of the present application.
  • the radar scans the target area multiple times, which may be achieved by controlling the rotational speed of at least one component in the scanning module of the radar to reach at least a preset threshold in the target area.
  • the threshold may be determined based on a predetermined target perception object, or may be determined based on a target perception object determined in historical frames, and the embodiment of the present application does not limit the determination of the threshold.
  • the scanning module includes a double-prism scanning assembly composed of a first prism and a second prism, and the threshold is determined by the rotational speed corresponding to the first prism and the second prism each rotating two revolutions within the time window; wherein the rotational speeds of the first prism and the second prism are equal in magnitude and opposite in direction.
  • the schematic diagram of the double-prism scanning assembly composed of the first prism and the second prism can be shown in FIG. 16, wherein 1601 is the first prism and 1602 is the second prism.
  • the rotational speeds of the first prism and the second prism are resolved as w1 and -w1. With the radar's point cloud frame output rate at 10 Hz, super frame rate scanning with N > 1 can be achieved: when the first prism and the second prism have identical light-deflection ability, the target area is scanned 2N times within the duration of a single point cloud frame (as shown in FIG. 17A); when there is a slight difference in their deflection ability, the target area is scanned N times within the duration of a single point cloud frame (as shown in FIG. 17B).
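A minimal numeric sketch of the relationship above, assuming the speed multiplier N and the frame rate stated in the text; the 2N-vs-N distinction follows whether the two prisms deflect light identically:

```python
def scans_per_frame(n: int, identical_deflection: bool) -> int:
    # Equal-and-opposite prism speeds scaled by N repeat the scan pattern
    # N times per frame; identical deflection ability doubles this to 2N.
    return 2 * n if identical_deflection else n

FRAME_RATE_HZ = 10     # point cloud frame output rate from the text
N = 4                  # hypothetical speed multiplier, N > 1
print(scans_per_frame(N, identical_deflection=True))    # 8 scans per 100 ms frame
print(scans_per_frame(N, identical_deflection=False))   # 4 scans per 100 ms frame
```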
  • the scanning module includes a triangular-prism scanning component composed of a third prism, a fourth prism and a fifth prism, and the threshold is determined by the rotational speed corresponding to the fifth prism rotating two revolutions within the acquisition time window; wherein the rotational speeds of the third prism and the fourth prism are equal in magnitude and opposite in direction.
  • the schematic diagram of the triangular-prism scanning assembly composed of the third prism, the fourth prism and the fifth prism can be shown in FIG. 18, wherein 1801 to 1803 are the third prism, the fourth prism and the fifth prism, respectively.
  • the above embodiment is described with a specific example: when two of the prisms adopt the constant-velocity reverse strategy (that is, the pair with equal and opposite speeds), the rotational speeds of the third prism, the fourth prism and the fifth prism are resolved as w1, w2 and -w2.
  • the scanning module includes a scanning component composed of a sixth prism and a mirror; the threshold is jointly determined by the rotational speed corresponding to the sixth prism rotating two revolutions within the time window and the rotational speed corresponding to the mirror scanning a preset scanning range twice within the time window.
  • the rotational speeds of the sixth prism and the reflecting mirror are respectively w1 and w2, and both w1 and w2 are greater than or equal to the rotational speed r corresponding to one rotation of the driving component (such as a motor, etc.) of the scanning assembly.
  • if the rotational speeds of the sixth prism and the mirror are simultaneously increased to H times their respective original speeds, where H is a positive integer greater than or equal to 2, then the entire target area is scanned H times within the duration of one point cloud frame.
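A minimal sketch of the consequence of the H-times speed-up above: H scans of the target area per frame correspond to an effective window-level perception rate of H times the frame rate (the 10 Hz figure is the example rate from the text):

```python
def effective_window_rate(frame_rate_hz: float, h: int) -> float:
    # H scans of the target area per frame means window-level perception
    # can run at H times the frame rate.
    if h < 2:
        raise ValueError("H must be a positive integer >= 2 in this embodiment")
    return frame_rate_hz * h

print(effective_window_rate(10.0, 5))   # 50.0 Hz window-level perception
```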
  • the scanning module includes a galvanometer, and the threshold is determined by the rotational speed corresponding to the galvanometer scanning a preset scanning range twice within the time window.
  • with the target perception method above, in the process of acquiring the point cloud data constituting one frame of point cloud frame, the target point cloud data within at least one time window in the frame is extracted, the target perception result of each time window is obtained based on the target point cloud data extracted in that window, and the target perception result of the time window is output; the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • an embodiment of the present application further provides a target sensing device, the schematic diagram of which is shown in FIG. 19 .
  • the target sensing device 1901 includes: a processor 1902 and a memory 1903, and a computer program stored in the memory 1903 and executable on the processor 1902, the processor 1902 implements the following steps when executing the program:
  • in the process of acquiring the point cloud data constituting one frame of point cloud frame, the target point cloud data in at least one time window within the frame is extracted, the target point cloud data corresponding to the target area in the field of view of the point cloud frame;
  • the target perception result of the time window is acquired based on the target point cloud data extracted in each time window, and the target perception result of the time window is output.
  • the point cloud frame includes point cloud data obtained by the radar scanning the target area for multiple times, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • the target point cloud data of the time window includes point cloud data obtained by the radar scanning the target area at least once.
  • the duration of the point cloud frame includes multiple consecutive time windows, and extracting the target point cloud data in at least one time window within the frame of the point cloud frame includes: separately extracting the target point cloud data within each of the multiple consecutive time windows.
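A minimal sketch of this core super-frame-rate loop, assuming a hypothetical point record (timestamp t, position xyz) and a placeholder rectangular target-area test; points of one frame duration are binned into consecutive time windows and perceived per window:

```python
def in_target_area(point) -> bool:
    # Placeholder central-region test; a real system would use the
    # configured target area within the field of view.
    x, y, _ = point["xyz"]
    return abs(x) < 5.0 and abs(y) < 5.0

def window_results(points, frame_duration=0.1, n_windows=4):
    # Bin points of one frame duration into consecutive time windows and
    # run (placeholder) perception on each window's target point cloud.
    window_len = frame_duration / n_windows
    results = []
    for k in range(n_windows):
        lo, hi = k * window_len, (k + 1) * window_len
        target = [p for p in points if lo <= p["t"] < hi and in_target_area(p)]
        results.append({"window": k, "n_target_points": len(target)})
    return results

points = [{"t": 0.005, "xyz": (1.0, 2.0, 0.5)}, {"t": 0.080, "xyz": (9.0, 0.0, 0.5)}]
print(window_results(points))
```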
  • two time windows of different lengths overlap.
  • the duration of the point cloud frame includes at least two time windows of different lengths, wherein the data extraction rules corresponding to the time windows of different lengths are different.
  • the target point cloud data extracted in time windows of different lengths corresponds to point cloud data in different depth ranges in the target area; or, the target point cloud data extracted in time windows of different lengths corresponds to point cloud data in different orientations in the target area.
  • the data extraction rules corresponding to time windows of different lengths are different, and the data extraction rules include: for time windows of different lengths, extracting target point cloud data that meets a preset condition, the preset condition corresponding to the length of the time window; or, for time windows of different lengths, extracting all the point cloud data in each time window.
  • the preset condition includes at least one of the following: the target point cloud data is located at a preset depth, or the coordinates of the target point cloud data are located at preset coordinates.
  • the data extraction rules include: for a first time window, extracting target point cloud data at a first depth, and for a second time window whose length is greater than that of the first time window, extracting target point cloud data at a second depth, where the first depth is less than the second depth.
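A minimal sketch of this depth-dependent rule, under the assumption that shorter windows keep near points (which need faster perception) and longer windows keep far points; the window-length and depth thresholds are illustrative:

```python
def extract_by_depth_rule(points, window_length: float, split_depth=20.0):
    # Shorter windows keep near points (needing faster perception);
    # longer windows keep far points.
    if window_length <= 0.025:                    # "first" (short) window
        return [p for p in points if p["depth"] < split_depth]
    return [p for p in points if p["depth"] >= split_depth]

points = [{"depth": 8.0}, {"depth": 35.0}]
print(extract_by_depth_rule(points, window_length=0.01))   # near points only
print(extract_by_depth_rule(points, window_length=0.05))   # far points only
```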
  • when the data extraction rule is, for time windows of different lengths, to extract all the point cloud data in each time window, obtaining the target perception result of the time window based on the point cloud data extracted in each time window includes: acquiring a target perception result including the target perception object located at the preset depth.
  • the acquiring the target perception results of the time windows based on the point cloud data extracted in each time window respectively includes: acquiring target perception results including different types of target perception objects based on time windows of different lengths.
  • acquiring target perception results containing different types of target perception objects includes: acquiring the target perception results according to multiple preset different target perception methods, wherein each of the target perception methods corresponds to a time window of one length and is used to identify a preset target object; or, acquiring the target perception result according to a preset same type of target perception method, wherein the same type of target perception method is used to identify multiple preset target objects.
  • the multiple different target perception methods include: performing target perception based on multiple different neural network models, each of which is used to identify a preset target object; the same type of target perception method includes: performing target perception based on the same neural network model, wherein the same neural network model is used to recognize multiple preset target objects.
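A minimal sketch of routing windows to per-length models, with hypothetical model names; the fallback to a single shared model corresponds to the "same type" scheme described above:

```python
MODELS_BY_WINDOW_LENGTH = {
    0.01: "pedestrian_detector",   # short window: fast / near objects
    0.05: "vehicle_detector",      # longer window: larger / farther objects
}

def perceive_window(points, window_length: float) -> dict:
    # Pick the model dedicated to this window length, falling back to a
    # single shared model (the "same type" scheme) when none is registered.
    model = MODELS_BY_WINDOW_LENGTH.get(window_length, "shared_detector")
    # Placeholder inference: a real model would return detections here.
    return {"model": model, "n_points": len(points)}

print(perceive_window([object()] * 3, 0.01))   # routed to pedestrian_detector
print(perceive_window([object()] * 3, 0.02))   # routed to shared_detector
```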
  • the length of the time window is preset; or,
  • the length of the time window is determined in the following manner: the length of the time window is determined according to the historical target perception result; wherein the historical target perception result includes at least one of the following: a target perception result obtained based on a point cloud frame before the current time window, or a target perception result obtained based on a time window before the current time window.
  • determining the length of the time window according to the historical target perception result includes: determining a target perception object according to the historical target perception result; and determining the length of the time window based on the target perception object.
  • determining the length of the time window based on the target perception object includes: determining, based on the target perception object, the movement speed of the target perception object and/or the depth of the target perception object and/or the target point cloud angular resolution; and determining the length of the time window based on the movement speed and/or the depth and/or the target point cloud angular resolution.
  • determining the length of the time window based on the movement speed and/or the depth and/or the target angular resolution includes: determining a time window of a first length for a target perception object with a first movement speed, and determining a time window of a second length for a target perception object with a second movement speed, where the first movement speed is greater than the second movement speed and the first length is smaller than the second length; and/or, determining a time window of a third length for a target perception object at a third depth, and determining a time window of a fourth length for a target perception object at a fourth depth, where the third depth is smaller than the fourth depth and the third length is smaller than the fourth length.
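A minimal sketch of this window-length policy (faster or nearer objects get shorter windows, hence a higher perception rate); all constants are assumed for illustration:

```python
def window_length_for(speed_mps=None, depth_m=None) -> float:
    # Start from a full frame duration and shrink the window for fast or
    # near objects, which need a higher perception frequency.
    length = 0.1
    if speed_mps is not None:
        length = min(length, 0.01 if speed_mps > 10.0 else 0.05)
    if depth_m is not None:
        length = min(length, 0.02 if depth_m < 20.0 else 0.08)
    return length

print(window_length_for(speed_mps=15.0))               # 0.01 (fast object)
print(window_length_for(depth_m=50.0))                 # 0.08 (far object)
print(window_length_for(speed_mps=5.0, depth_m=10.0))  # 0.02 (near object)
```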
  • the method further includes: acquiring a perception result of the point cloud frame; and outputting the perception result of the point cloud frame in association with a plurality of first perception results; wherein each first perception result is a target perception result acquired based on the point cloud data of one time window in the process of acquiring the point cloud data constituting the point cloud frame.
  • before extracting the target point cloud data in at least one time window within the frame of the point cloud frame, the method further includes: receiving a trigger instruction; under the trigger of the trigger instruction, outputting target perception mode selection information, the target perception mode at least including: a super frame rate perception mode and/or a normal frame rate perception mode; when the super frame rate perception mode is determined, extracting the target point cloud data in at least one time window and outputting the acquired target perception result; when the normal frame rate perception mode is determined, performing target perception on the point cloud frame and outputting the acquired target perception result.
  • the method further includes: acquiring multiple target perception results of the same time window; based on the acquired multiple target perception results, acquiring the motion trajectories of the target perception objects corresponding to the multiple same time windows; outputting the motion trajectory.
  • the method further includes: acquiring a predicted motion trajectory of the target sensing object based on the motion trajectory; and outputting the predicted motion trajectory.
  • the target area is a central area in the field of view of the point cloud frame.
  • the radar scans the target area multiple times, which is achieved by controlling the rotational speed of at least one component in the scanning module of the radar to reach at least a preset threshold in the target area.
  • the scanning module includes a double-prism scanning component composed of a first prism and a second prism, and the threshold is determined by the rotational speed corresponding to the first prism and the second prism each rotating two revolutions within the time window; the rotational speeds of the first prism and the second prism are equal in magnitude and opposite in direction; or,
  • the scanning module includes a triangular-prism scanning component composed of a third prism, a fourth prism and a fifth prism, and the threshold is determined by the rotational speed corresponding to the fifth prism rotating two revolutions within the acquisition time window; the rotational speeds of the third prism and the fourth prism are equal in magnitude and opposite in direction; or,
  • the scanning module includes a scanning component composed of a sixth prism and a reflecting mirror, and the threshold is jointly determined by the rotational speed corresponding to the sixth prism rotating two revolutions within the time window and the rotational speed corresponding to the reflecting mirror scanning a preset scanning range twice within the time window; or,
  • the scanning module includes a galvanometer, and the threshold value is determined by the rotation speed corresponding to the galvanometer scanning twice a preset scanning range within the time window.
  • with the target sensing device provided by the embodiments of the present application, in the process of acquiring the point cloud data constituting a frame of point cloud frame, the target point cloud data in at least one time window within the frame can be extracted, the target perception result of each time window is obtained based on the target point cloud data extracted in that window, and the target perception result of the time window is output; the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • an embodiment of the present application also provides a detection system, the detection system including: a light source for emitting a light pulse sequence; a scanning module for changing the optical path of the light pulse sequence to scan the field of view; and a detection module for detecting the light beam of the light pulse sequence reflected by an object to obtain point cloud data, wherein each point cloud point datum in the point cloud data is used to indicate the object corresponding to that point cloud point.
  • an output module for continuously outputting point cloud frames, each point cloud frame including multiple point cloud point data
  • a perception module for performing the following operations: in the process of acquiring the point cloud data constituting one frame of point cloud frame, extracting the target point cloud data in at least one time window within the frame, the target point cloud data corresponding to the target area in the field of view of the point cloud frame; obtaining the target perception result of the time window based on the target point cloud data extracted in each time window; and outputting the target perception result of the time window; wherein the point cloud frame includes point cloud data acquired by the scanning module scanning the target area multiple times, and the frequency of extracting point cloud data within the time window is at least twice the acquisition frequency of the point cloud frames.
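A structural sketch of how these modules could be composed, with placeholder callables standing in for the detection, output and perception modules (the light source and scanner are hardware and are omitted); only the data path from raw returns to window-level results is shown:

```python
class DetectionSystem:
    def __init__(self, detect_fn, output_fn, perceive_fn):
        self.detect = detect_fn        # detection module: raw returns -> points
        self.output = output_fn        # output module: streams point cloud frames
        self.perceive = perceive_fn    # perception module: window-level results

    def run_frame(self, raw_returns):
        points = self.detect(raw_returns)
        self.output(points)                 # normal point cloud frame output
        return self.perceive(points)        # super-frame-rate window results

system = DetectionSystem(
    detect_fn=lambda raw: [{"t": r * 0.01, "xyz": (0.0, 0.0, 1.0)} for r in raw],
    output_fn=lambda pts: None,
    perceive_fn=lambda pts: [{"window": 0, "n_target_points": len(pts)}],
)
print(system.run_frame(range(3)))
```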
  • the sensing module may also be used to execute the steps in the methods of the foregoing embodiments, and details are not described herein again in this application.
  • with the detection system above, in the process of acquiring the point cloud data constituting one frame of point cloud frame, the target point cloud data within at least one time window in the frame can be extracted, the target perception result of each time window is obtained based on the target point cloud data extracted in that window, and the target perception result of the time window is output; the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • an embodiment of the present application also provides a movable platform, where the movable platform includes a radar and the target sensing device described in each of the foregoing embodiments.
  • the movable platform may be a smart car, a robot, an unmanned aerial vehicle, etc., which is not limited in this embodiment of the present application.
  • with the movable platform above, in the process of acquiring the point cloud data constituting a frame of point cloud frame, the target point cloud data in at least one time window within the frame can be extracted, the target perception result of each time window is obtained based on the target point cloud data extracted in that window, and the target perception result of the time window is output; the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • the present application also provides a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, any one of the foregoing method steps is implemented.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples (a non-exhaustive list) of computer readable storage media include: electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), Erasable Programmable Read Only Memory (EPROM or flash memory), fiber optics, portable compact disk read only memory (CDROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • the above-mentioned apparatus can execute the methods provided by all the foregoing embodiments of the present application, and has corresponding functional modules and beneficial effects for executing the above-mentioned methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a target detection method, comprising the following steps: in the process of acquiring point cloud data constituting a frame of a point cloud frame, extracting target point cloud data in at least one time window within the frame of the point cloud frame, the target point cloud data corresponding to a target area in a field of view of the point cloud frame (301); acquiring a target perception result of the time window on the basis of the target point cloud data extracted in each time window, respectively (302); and outputting the target perception result of the time window (303), the point cloud frame comprising point cloud data acquired by a radar performing multiple scans of the target area, and the extraction frequency of the point cloud data in the time window being at least twice the acquisition frequency of the point cloud frame. The invention further relates to a target detection device, a detection system, a mobile platform and a computer-readable storage medium.
PCT/CN2021/087327 2021-04-14 2021-04-14 Procédé et dispositif de détection de cible, système de détection, plateforme mobile et support de stockage WO2022217522A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087327 WO2022217522A1 (fr) 2021-04-14 2021-04-14 Procédé et dispositif de détection de cible, système de détection, plateforme mobile et support de stockage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087327 WO2022217522A1 (fr) 2021-04-14 2021-04-14 Procédé et dispositif de détection de cible, système de détection, plateforme mobile et support de stockage

Publications (1)

Publication Number Publication Date
WO2022217522A1 true WO2022217522A1 (fr) 2022-10-20

Family

ID=83640004

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087327 WO2022217522A1 (fr) 2021-04-14 2021-04-14 Procédé et dispositif de détection de cible, système de détection, plateforme mobile et support de stockage

Country Status (1)

Country Link
WO (1) WO2022217522A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558838A (zh) * Object recognition method and system
CN109558854A (zh) * Obstacle perception method and apparatus, electronic device and storage medium
US10345437B1 (en) * 2018-08-06 2019-07-09 Luminar Technologies, Inc. Detecting distortion using other sensors
CN111060923A (zh) * Multi-lidar vehicle driving obstacle detection method and system
CN111190183A (zh) * Sliding window integration scheme for target detection in a radar system
CN112578406A (zh) * Vehicle environment information perception method and apparatus


Similar Documents

Publication Publication Date Title
KR102614323B1 (ko) Generating a three-dimensional map of a scene using passive and active measurements
US11821988B2 (en) Ladar system with intelligent selection of shot patterns based on field of view data
US11620835B2 (en) Obstacle recognition method and apparatus, storage medium, and electronic device
CN109598066B (zh) 预测模块的效果评估方法、装置、设备和存储介质
EP3812793B1 (fr) Procédé, système et équipement de traitement d'informations, et support de stockage informatique
US11609329B2 (en) Camera-gated lidar system
JP7239703B2 (ja) 領域外コンテキストを用いたオブジェクト分類
JP6696697B2 (ja) Information processing device, vehicle, information processing method, and program
CN105100780B (zh) 使用选择的像素阵列分析的光学安全监视
US10849543B2 (en) Focus-based tagging of sensor data
CN110713087B (zh) Elevator door state detection method and device
WO2020243962A1 (fr) Procédé de détection d'objet, dispositif électronique et plateforme mobile
CN106934347B (zh) Obstacle recognition method and device, computer equipment, and readable medium
CN111157977B (zh) 用于自动驾驶车辆的、使用时间-数字转换器和多像素光子计数器的lidar峰值检测
US20220179094A1 (en) Systems and methods for implementing a tracking camera system onboard an autonomous vehicle
WO2021007320A1 (fr) Détection d'objet dans des nuages de points
WO2022198637A1 (fr) Procédé et système de filtrage de bruit en nuage de points et plate-forme mobile
WO2022188279A1 (fr) Procédé et appareil de détection, et radar laser
KR20220110034A (ko) Method for generating intensity information with an extended expression range reflecting the geometric characteristics of an object, and lidar device performing such method
US20230139578A1 (en) Predicting agent trajectories in the presence of active emergency vehicles
US11994589B2 (en) Vapor detection in lidar point cloud
WO2022217522A1 (fr) Procédé et dispositif de détection de cible, système de détection, plateforme mobile et support de stockage
US11774596B2 (en) Streaming object detection within sensor data
JP2020154913A (ja) Object detection device and method, traffic support server, computer program, and sensor device
TWI843116B (zh) Moving object detection method and device, electronic equipment, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21936411

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21936411

Country of ref document: EP

Kind code of ref document: A1