WO2022217522A1 - Target sensing method and device, detection system, movable platform and storage medium - Google Patents

Target sensing method and device, detection system, movable platform and storage medium

Info

Publication number
WO2022217522A1
WO2022217522A1 (PCT/CN2021/087327)
Authority
WO
WIPO (PCT)
Prior art keywords
target
point cloud
time window
perception
cloud data
Prior art date
Application number
PCT/CN2021/087327
Other languages
French (fr)
Chinese (zh)
Inventor
杨帅 (Yang Shuai)
朱晏辰 (Zhu Yanchen)
陈亚林 (Chen Yalin)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2021/087327 priority Critical patent/WO2022217522A1/en
Publication of WO2022217522A1 publication Critical patent/WO2022217522A1/en

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00

Definitions

  • the present application relates to the field of intelligent perception, and in particular, to a target perception method, device, detection system, movable platform and computer-readable storage medium.
  • In the related art, target perception is usually performed based on the acquired point cloud frames, so as to determine the environmental conditions around the movable platform and provide guidance information for the motion control of the platform.
  • In the related art, the maximum frequency of target perception is determined by the frequency of the point cloud frames. For example, for a movable platform equipped with a scanning lidar whose point cloud frame acquisition frequency is 10 Hz, the target perception frequency the platform can achieve is at most 10 Hz. The environment around the movable platform may contain objects with different attributes, such as different moving speeds or different distances from the radar, and objects with different attributes often require different perception frequencies: a fast-moving object requires a higher perception frequency, and so does a nearby object.
  • Therefore, the limited target perception frequency in the related art may prevent an object from being perceived in time, leading to insufficient target perception sensitivity and potential safety risks.
  • In view of this, the embodiments of the present application provide a target perception method, a device, a movable platform, and a computer-readable storage medium.
  • According to a first aspect, a target perception method is provided, comprising: in the process of acquiring the point cloud data constituting one point cloud frame, extracting the target point cloud data within at least one time window of that frame, the target point cloud data corresponding to a target area in the field of view of the point cloud frame; obtaining, for each time window, the target perception result of that window based on the target point cloud data extracted in it; and outputting the target perception result of the time window. The point cloud frame includes point cloud data obtained by the radar scanning the target area multiple times, and the extraction frequency of the point cloud data within the target duration is at least twice the acquisition frequency of the point cloud frame.
  • According to a second aspect, a target perception apparatus is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the method described in the first aspect of the present application is implemented.
  • According to a third aspect, a detection system is provided, comprising: a light source for emitting a light pulse sequence; a scanning module for changing the optical path of the light pulse sequence so as to scan the field of view; and a detection module for detecting the light beams of the light pulse sequence reflected by objects to obtain point cloud data, wherein each point cloud point datum in the point cloud data indicates information of the corresponding point cloud point.
  • The detection system further comprises an output module for continuously outputting point cloud frames, each point cloud frame including multiple point cloud point data;
  • and a perception module for performing the following operations: in the process of acquiring the point cloud data, extracting the target point cloud data within at least one time window of the point cloud frame, the target point cloud data corresponding to the target area in the field of view of the point cloud frame; obtaining, for each time window, the target perception result of that window based on the target point cloud data extracted in it; and outputting the target perception result of the time window. The point cloud frame includes point cloud data obtained by the scanning module scanning the target area multiple times, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame.
  • According to a fourth aspect, a movable platform is provided, comprising a radar and the target perception apparatus according to the second aspect of the embodiments of the present application.
  • According to a fifth aspect, a computer-readable storage medium is provided, on which computer instructions are stored; when the computer instructions are executed, the method of the first aspect of the embodiments of the present application is implemented.
  • In the above embodiments, the target point cloud data within at least one time window of a point cloud frame is extracted, the target perception result of each time window is obtained based on the target point cloud data extracted in that window, and the target perception result of the time window is then output. The extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • FIG. 1 is a schematic diagram of an application scenario of a target sensing method according to an exemplary embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a radar according to an exemplary embodiment of the present application.
  • FIG. 3 is a flow chart of a target perception method according to an exemplary embodiment of the present application.
  • FIG. 4A is a schematic diagram showing a result of scanning a target area by a radar according to an exemplary embodiment of the present application.
  • FIG. 4B is a schematic diagram showing the scanning result of another radar on a target area according to an exemplary embodiment of the present application.
  • FIG. 5A is a schematic diagram of a first process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
  • FIG. 5B is a schematic diagram of a second process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
  • FIG. 5C is a schematic diagram of a third process of collecting point cloud data of different time windows according to an exemplary embodiment of the present application.
  • FIG. 5D is a schematic diagram of a fourth process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
  • FIG. 6 is a flow chart of obtaining the target perception result of each time window based on the point cloud data extracted in each time window according to an exemplary embodiment of the present application.
  • FIG. 7 is a schematic diagram of a radar scanning its surrounding environment according to an exemplary embodiment of the present application.
  • FIG. 8 is a flow chart of determining the length of a time window according to a historical target perception result according to an exemplary embodiment of the present application.
  • FIG. 9 is a flow chart of determining the length of a time window based on the target perception object according to an exemplary embodiment of the present application.
  • FIG. 10 is a flow chart showing the output of a first target perception result according to an exemplary embodiment of the present application.
  • FIG. 11 is a flow chart showing the correlated output of a target perception result according to an exemplary embodiment of the present application.
  • FIG. 12 is a flow chart showing the output of a second target perception result according to an exemplary embodiment of the present application.
  • FIG. 13 is a flow chart of selecting a target perception model according to an exemplary embodiment of the present application.
  • FIG. 14 is a flow chart showing the output of a motion trajectory according to an exemplary embodiment of the present application.
  • FIG. 15 is a flow chart showing the output of a predicted motion trajectory according to an exemplary embodiment of the present application.
  • FIG. 16 is a schematic diagram of a biprism scanning assembly according to an exemplary embodiment of the present application.
  • FIG. 17A is a schematic diagram showing the result of scanning a target area based on a biprism scanning component according to an exemplary embodiment of the present application.
  • FIG. 17B is a schematic diagram showing the result of scanning a target area based on another biprism scanning component according to an exemplary embodiment of the present application.
  • FIG. 18 is a schematic diagram of a triangular prism scanning assembly according to an exemplary embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of a target sensing apparatus according to an exemplary embodiment of the present application.
  • Although the terms first, second, third, etc. may be used in this application to describe various information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • For example, without departing from the scope of the present application, the first information may also be referred to as the second information, and similarly, the second information may also be referred to as the first information.
  • The word "if" as used herein may be interpreted as "at the time of", "when", or "in response to determining".
  • FIG. 1 shows a schematic diagram of an application scenario.
  • A smart car 102 equipped with a lidar 101 can use the lidar 101 to obtain point cloud frames and perform target perception on them to obtain target perception results for the surrounding environment, which further guide the operation of the smart car.
  • the radar may include a transmitter 201 , a collimating element 202 , a scanning module 203 and a detector 204 .
  • the transmitter 201 may be used to emit light pulses.
  • the transmitter 201 may include at least one light-emitting chip, which may emit laser beams at certain time intervals.
  • the collimating element 202 can be used for collimating the light pulse emitted by the transmitter, and it can specifically be a collimating lens or other elements capable of collimating the light beam.
  • the scanning module 203 can be used to change the propagation direction of the collimated light beam, so that the light beam is irradiated on different points.
  • the scanning module may include at least one optical element that can reflect, refract, diffract, etc. the light beam, such as a prism, mirror, galvanometer, etc., thereby changing the propagation path of the light beam.
  • the optical element can be rotated under the drive of the driver. In this way, when the transmitter continuously emits light pulses, different light pulses can be emitted in different directions, so as to reach different positions and realize the scanning of a certain area by the radar. When an object is present in the scanned area, the light beam is reflected by the object back to the radar and detected by the radar detector 204 . In this way, the radar can collect point cloud data containing surrounding environment information.
  • the foregoing embodiment is only an exemplary description of the radar, and the radar may also have other structures, which are not limited in the embodiments of the present application.
  • A single point of point cloud data carries relatively limited environmental information, and processing each point cloud datum as soon as it is acquired would place extremely high demands on the computing speed of the system. Therefore, the multiple point cloud data obtained by the radar scanning its field of view (FOV) are usually stored first.
  • a common practice is to output the point cloud acquired within a certain period of time as a frame of point cloud frame. After the point cloud frame is acquired, target perception can be performed based on the point cloud data in one or more frames of point cloud frames, so as to obtain the target object contained in the surrounding environment of the radar and related information of the target object.
  • In the related art, the frequency of target perception is limited by the acquisition frequency of the point cloud frames (i.e., the frame rate). For example, for a radar system whose point cloud frame rate is 10 Hz, the target perception frequency can reach at most 10 Hz.
  • For some objects, a lower perception frequency may be sufficient. However, the environment to be perceived often contains fast-moving target objects, and perceiving them is more significant. Taking a smart car as an example, perceiving a fast-moving target object in the environment and responding to it in time is a key issue in ensuring the safe driving of smart cars, and is also an important factor restricting their wide application.
  • an embodiment of the present application provides a target sensing method, as shown in FIG. 3 .
  • the target perception methods described above include:
  • Step 301: In the process of acquiring the point cloud data constituting one point cloud frame, extract the target point cloud data in at least one time window within that frame, the target point cloud data corresponding to the target area in the field of view of the point cloud frame;
  • Step 302: Obtain the target perception result of the time window based on the target point cloud data extracted in each time window;
  • Step 303: Output the target perception result of the time window.
  • the point cloud frame includes point cloud data obtained by the radar scanning the target area for multiple times, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
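Steps 301 to 303 can be sketched as a streaming loop. This is a minimal Python sketch under assumptions: timestamps are in milliseconds, and `perceive` and `in_target_area` are hypothetical callbacks, since the patent does not specify an implementation.

```python
def stream_perception(points_ms, window_ms, perceive, in_target_area):
    """Consume point cloud data as it arrives and emit one perception
    result per time window, instead of waiting for the whole frame.
    points_ms: iterable of (timestamp_ms, point) in arrival order."""
    window_start = None
    buf = []
    for t, p in points_ms:
        if window_start is None:
            window_start = t
        if in_target_area(p):          # step 301: extract target-area points
            buf.append(p)
        if t - window_start >= window_ms:
            yield perceive(buf)        # steps 302-303: perceive and output
            buf, window_start = [], t
```

With a 10 Hz frame (100 ms) and a 25 ms window, this yields roughly four perception results per frame instead of one, which satisfies the condition that the extraction frequency be at least twice the frame acquisition frequency.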
  • In some embodiments, the radar scans a focused area (that is, the target area) within its field of view multiple times to obtain point cloud data, producing point cloud frames whose point cloud density reaches a certain value and realizing key monitoring of the target area. FIGS. 4A and 4B schematically show the scanning results of the target area (the area within the rectangular frame) in the field of view, obtained by a radar with a point cloud frame rate of 10 Hz over time ranges of 50 ms and 100 ms, respectively. As the number of scans increases, denser point cloud data is obtained, so that smaller objects in the surrounding environment can be perceived and target object information can be obtained more comprehensively and with higher spatial resolution.
  • Since the point cloud frame contains the point cloud data obtained by the radar scanning the target area many times, the inventors of the present application found that, in addition to performing target perception on complete point cloud frames, the target point cloud data within certain time windows of a frame can first be extracted. There can be multiple time windows, and the target point cloud data of each time window includes the point cloud data obtained by the radar scanning the target area at least once. Target perception is then performed based on the target point cloud data extracted from each time window, so that at least two target perception results for the target area are obtained during the acquisition of a single point cloud frame, that is, super-frame-rate perception, which has the beneficial effects of improving the sensitivity, real-time performance, and safety of radar perception.
  • FIGS. 5A to 5D respectively show the process of collecting point cloud data in different time windows.
  • In the related art, complete point cloud frames are usually acquired first, such as the first, second, and third point cloud frames shown in FIGS. 5A to 5D.
  • Target perception is then performed based on each point cloud frame.
  • In the target perception method provided by the above embodiments of the present application, target perception is not performed only after all the point cloud data of a point cloud frame has been acquired; instead, the target point cloud data within at least one time window is extracted as the frame is being acquired.
  • For example, the target point cloud data within the time window T1 is extracted, and target perception is performed based on the extracted target point cloud data.
  • the extraction frequency of the point cloud data in the time window T1 is at least twice the acquisition frequency of the point cloud frame, that is, the acquisition time of one point cloud frame is at least twice the time window T1
  • the acquisition frequency of the target perception results is greater than the frequency of the point cloud frame, so as to realize super frame rate perception.
  • The at least one time window within the point cloud frame may consist of multiple time windows of the same length: for example, as shown in FIG. 5A, within the duration of one point cloud frame, all the time windows used to extract target point cloud data have the same duration T1. It may also consist of multiple time windows of different lengths: for example, as shown in FIG. 5B, within the duration of one point cloud frame, the time windows may include several different durations such as T1, T2, and T3. It is also possible to extract target point cloud data within multiple time windows of the same length while simultaneously extracting it within time windows of a different length: for example, while extraction is performed with time windows of duration T1, extraction is also performed synchronously with time windows of duration T2; the point cloud data extracted by the T1 windows and the T2 windows overlap, and the two parallel sets of time windows both extract point cloud data from the point cloud frame.
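The parallel extraction with windows of two different lengths described above can be sketched as follows; the bucketing scheme and the `(timestamp_ms, point)` format are assumptions, not the patent's method:

```python
def bucket_by_windows(points_ms, window_lengths_ms):
    """Assign each point to one bucket per window length. Windows of
    different lengths therefore overlap in time and share the same
    point cloud data, as with the synchronous T1 and T2 windows."""
    buckets = {T: {} for T in window_lengths_ms}
    for t, p in points_ms:
        for T in window_lengths_ms:
            # Integer division maps a timestamp to its window index.
            buckets[T].setdefault(t // T, []).append(p)
    return buckets
```

Every point lands in exactly one bucket per window length, so the data extracted by the two parallel sets of windows overlaps rather than being split between them.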
  • In some embodiments, time windows of different lengths are used for target perception of objects with different attributes, the attributes including at least one of: the type of the object, the size of the object, the distance of the object relative to the radar, the moving speed of the object, and so on. The correspondence between the length of the time window and the attributes of the object is described later.
  • the above at least one time window is a time window within a point cloud frame.
  • the lengths of the time windows may be the same or different, which is not limited in this embodiment of the present application.
  • The extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame. Therefore, compared with target perception based on complete point cloud frames, the target perception method provided by the embodiments of the present application realizes super-frame-rate perception of the target area: multiple target perception results are obtained while a single point cloud frame is being acquired, enabling faster perception of the surrounding environment and improving the real-time performance, sensitivity, and safety of target perception.
  • In some embodiments, the radar scans key areas in its field of view multiple times to obtain point cloud frames whose point cloud density reaches a certain value, realizing focused monitoring of the target area. The area that can be scanned multiple times depends on the radar's scanning components. In some embodiments, the radar can scan a local area of its field of view multiple times, for example when its scanning components include galvanometers and/or mirrors; in other embodiments, the radar cannot select the area it scans repeatedly and can only scan the entire field of view multiple times, for example when its scanning component includes a rotating prism. Therefore, the target area may be the entire field of view of the radar or a partial area of it, which is not limited in this embodiment of the present application.
  • In some embodiments, the target point cloud data of the time window includes the point cloud data obtained by the radar scanning the target area at least once.
  • The specific number of times the radar scans the target area within the time window is not limited in this embodiment of the present application and can be determined by those skilled in the art according to actual needs. For example, in application scenarios with relatively low accuracy requirements on the target perception results, such as when the object to be recognized is relatively large, the required point cloud density is not very high, and target perception results meeting the requirements can be obtained from the point cloud data of a single scan within the time window. In application scenarios with relatively high accuracy requirements, such as when the object to be recognized is relatively small, the required point cloud density is somewhat higher; the target area can be scanned multiple times within the time window to obtain sufficiently dense point cloud data, and target perception results meeting the requirements are then obtained from the point cloud data of the multiple scans.
  • The number of times the radar scans the target area may be determined from multiple experiments, pre-calculated from a physical model, or determined in other ways, which is likewise not limited in the embodiments of the present application. Several specific embodiments are given below for illustration.
  • Since the target point cloud data in the time window includes the point cloud data obtained by the radar scanning the target area at least once, the number of scans affects the target perception results. When the target point cloud data of the time window includes point cloud data from only a few scans of the target area, the accuracy of the resulting target perception is relatively low but the perception frequency is relatively high; when it includes point cloud data from many scans, the perception frequency is relatively low but high-accuracy target perception results can be obtained.
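The frequency/accuracy trade-off described above can be made concrete with a small calculation; the frame rate and scans-per-frame figures used here are illustrative assumptions, not values from the patent.

```python
def window_tradeoff(frame_rate_hz, scans_per_frame, window_ms):
    """Return (perception frequency in Hz, scans of the target area
    per window).  A shorter window raises the perception frequency
    but lowers the number of scans, and hence the point density,
    behind each perception result."""
    frame_ms = 1000.0 / frame_rate_hz
    perception_hz = 1000.0 / window_ms
    scans_per_window = scans_per_frame * window_ms / frame_ms
    return perception_hz, scans_per_window
```

For a hypothetical 10 Hz radar that scans the target area 8 times per frame, a 25 ms window gives 40 Hz perception from 2 scans each, while a 50 ms window gives 20 Hz perception from 4 scans each.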
  • the duration of the point cloud frame includes multiple time windows.
  • In some embodiments, extracting the target point cloud data in at least one time window within the point cloud frame includes: respectively extracting the target point cloud data in each of multiple consecutive time windows, for example, as shown in FIGS. 5A to 5C.
  • The time windows may be multiple time windows of the same length or multiple time windows of different lengths; target point cloud data may also be extracted both within multiple time windows of the same length and within multiple time windows of different lengths, which is not limited in this embodiment of the present application.
  • In some embodiments, extracting the target point cloud data in at least one time window within the point cloud frame includes: respectively extracting the target point cloud data in each of multiple non-consecutive time windows. The time windows may be multiple time windows of the same length (as shown in FIG. 5D) or multiple time windows of different lengths; target point cloud data may also be extracted both within multiple time windows of the same length and within multiple time windows of different lengths, which is not limited in this embodiment of the present application.
  • When the target point cloud data is extracted in each of multiple consecutive time windows and target perception is performed, the target area can be continuously monitored; when the target point cloud data is extracted in each of multiple non-consecutive time windows and target perception is performed, certain computing resources can be saved.
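One way to express the consecutive and non-consecutive layouts above is with a stride parameter; this formulation and the parameter names are illustrative, not from the patent. A stride equal to the window length gives back-to-back windows, while a larger stride leaves gaps between windows.

```python
def window_starts(frame_start_ms, frame_end_ms, length_ms, stride_ms):
    """Start times of the time windows inside one frame.
    stride_ms == length_ms: consecutive windows (continuous monitoring).
    stride_ms > length_ms: gaps between windows, so the perception work
    for points falling in the gaps is skipped, saving compute."""
    starts, t = [], frame_start_ms
    while t + length_ms <= frame_end_ms:
        starts.append(t)
        t += stride_ms
    return starts
```

Over a 100 ms frame with 25 ms windows, a stride of 25 ms monitors the whole frame with four windows, whereas a stride of 40 ms runs only two windows and leaves the rest of the frame unprocessed.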
  • Those skilled in the art can select the continuity of the time window according to the actual application situation, so as to adapt to different application requirements.
  • In some embodiments, the time windows are of different lengths, and the time windows of different lengths are used to perform target perception on objects with different attributes.
  • In some embodiments, two time windows of different lengths do not overlap each other. As shown in FIG. 5B, when two time windows of different lengths do not overlap, each corresponds to the point cloud data within its own time range and is used for super-frequency perception of objects with only one attribute. For example, if the time window T1 is used to perceive a dog at a distance of 5 m from the radar and the time window T2 is used to perceive a person at a distance of 5 m from the radar, then the target point cloud data obtained within a given time range can only be used to perceive either the dog at 5 m or the person at 5 m.
  • In other embodiments, two time windows of different lengths may overlap. In that case, the point cloud data in the overlapping time range is essentially used for super-frequency perception of objects with at least two attributes. Continuing the example in which the time window T1 is used to perceive the dog at a distance of 5 m from the radar and the time window T2 is used to perceive a person at a distance of 5 m, when the time windows T1 and T2 overlap, the target point cloud data obtained in the overlapping time range is used to perceive both the dog at 5 m and the person at 5 m. In this way, the point cloud data obtained in the same time range can be used to perceive multiple objects with different attributes at the same time, which makes full use of the acquired point cloud data, improves its utilization, and yields richer target perception results.
  • The point cloud data obtained by the radar contains information such as the three-dimensional coordinates, color, and/or reflectivity of target objects in the radar's surrounding environment.
  • After the radar's detector detects the return light signal from the target object to obtain point cloud data, the radar may either forward the detected raw signal directly to other control units for data processing without processing it itself, or first perform certain data processing on the point cloud data to obtain the corresponding depth or coordinate information. In the former case, the raw signal within the time window can be extracted directly for data processing and target perception. In the latter case, point cloud data can first be extracted according to data extraction rules applied to the depth or coordinate information of the point cloud, and target perception is then performed on the extracted target point cloud data.
  • the duration of the point cloud frame includes at least two time windows of different lengths, wherein the data extraction rules corresponding to the time windows of different lengths are different.
  • The data extraction rules corresponding to time windows of different lengths may be unified rules pre-set by developers according to the needs of the application scenario, or an initial data extraction rule set at the initial stage of target perception; in the latter case, during subsequent target perception, the initial rule is automatically adjusted based on the actual effect of target perception in each time window, including its accuracy and speed, until a rule suitable for the corresponding window length is determined.
  • the data extraction rule for the time window may also be other data extraction rules, which are not specifically limited in this embodiment of the present application.
  • the target point cloud data extracted in time windows of different lengths correspond to point cloud data in different depth ranges in the target area.
  • In some embodiments, the radar first performs certain data processing on the point cloud data to obtain the corresponding depth or coordinate information. In this case, for time windows of different lengths, point cloud data satisfying different conditions in the target area can be extracted, and target perception can be performed based on the extracted point cloud data.
  • point cloud data can be filtered and extracted first, corresponding to time windows of different lengths, and point cloud data in different depth ranges can be extracted.
  • the depth range corresponds to the length of the time window.
  • the target point cloud data extracted in time windows of different lengths correspond to point cloud data in different directions in the target area.
  • point cloud data can be filtered and extracted first, corresponding to time windows of different lengths, and point cloud data in different orientations can be extracted.
  • the orientation corresponds to the length of the time window.
  • target point cloud data extraction can be performed based on different data extraction rules.
  • specific examples are given. Those skilled in the art should understand that the following embodiments are only exemplary descriptions, and time windows of different lengths may also correspond to other data extraction rules, which are not limited in the embodiments of the present application.
  • the data extraction rule includes: for time windows of different lengths, extracting target point cloud data that satisfies a preset condition, where the preset condition corresponds to the length of the time window; or, for time windows of different lengths, extracting all point cloud data in each time window.
  • the point cloud data contains depth information or coordinate information, etc.
  • For time windows of different lengths used to perceive objects with different attributes, it is possible to first determine whether each point cloud datum in a time window satisfies the preset condition corresponding to that window length; if so, extract it as target point cloud data for super frame rate sensing; if not, do not extract it as target point cloud data for super frame rate sensing.
  • the multiple point cloud data acquired within the time window T1 may correspond to different depths.
  • if the target to be perceived is determined, for example a puppy at a distance of 5 m from the radar, then, of the multiple point cloud data acquired within the time window T1, only the point cloud data with a depth of about 5 m is useful for target perception, and the rest of the data is redundant. Therefore, only the point cloud data with a depth of 5 m need be extracted as the target point cloud data before target perception is performed.
  • the preset condition may be not only that the target point cloud data is located at a preset depth, but also that the coordinates of the target point cloud data are located at preset coordinates; of course, other preset conditions are possible, which are not limited in this embodiment of the present application.
  • the target perception can be directly performed on all the point cloud data in each time window.
  • super frame rate can be implemented for the target object corresponding to the time window.
  • Time windows of different lengths are used to sense target objects with different properties. For a target object closer to the radar, higher-frequency perception is often required, while for a target object farther from the radar, the requirement on perception frequency is lower than that for a closer one.
  • the point cloud data with a smaller depth can be extracted as the target point cloud data, and then the target objects with a closer distance can be sensed more frequently.
  • the point cloud data with a larger depth can be extracted as the target point cloud data, and then the target object with a slightly farther distance can be sensed at a lower frequency.
  • the data extraction rule includes: extracting target point cloud data at a first depth for a first time window, and extracting target point cloud data at a second depth for a second time window whose length is greater than that of the first time window, where the first depth is smaller than the second depth.
  • In this way, for a shorter time window, target point cloud data with a smaller depth are extracted to perform super frame rate perception on close-range target objects; for a longer time window, target point cloud data with a larger depth are extracted for super frame rate perception of distant objects. This can meet the perception requirements of target objects at different distances in the real world, so that the movable platform equipped with the radar can obtain the situation of target objects at different distances and make corresponding responses in time, improving the operational safety of the movable platform.
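  • As a minimal illustration of the rule above (the window lengths, depth thresholds, and point format here are hypothetical, not taken from the embodiment), "shorter window extracts near points, longer window extracts far points" can be sketched as:

```python
# Hypothetical sketch: each point is (timestamp, depth) in seconds/metres.
def extract_by_depth_rule(points, window_len, short_len=0.01, near_max=10.0):
    """Extract target point cloud data for one time window.

    For a short window (length <= short_len) keep points with depth
    below near_max (close-range, high-frequency perception); for a
    longer window keep points at or beyond near_max. All threshold
    values are illustrative assumptions.
    """
    if window_len <= short_len:
        return [p for p in points if p[1] < near_max]
    return [p for p in points if p[1] >= near_max]

points = [(0.001, 5.0), (0.002, 12.0), (0.003, 8.0), (0.004, 30.0)]
near = extract_by_depth_rule(points, window_len=0.005)  # short window
far = extract_by_depth_rule(points, window_len=0.05)    # long window
```

  A short window thus yields only the close returns, and a long window only the distant ones, matching the first depth < second depth rule.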
  • In step 302, the target perception result of each time window is obtained based on the point cloud data extracted in that time window.
  • In some embodiments, target perception may be performed directly on all the extracted point cloud data, obtaining the super frame rate perception result corresponding to the time window.
  • step 302 the target perception result of each time window is obtained based on the point cloud data extracted in each time window, as shown in FIG. 6, including:
  • Step 601 Acquire the depth or coordinates of all the point cloud data, and determine the point cloud data satisfying the preset depth or coordinates;
  • Step 602 according to the point cloud data satisfying a preset depth or coordinates, acquire a target perception result including a target perception object located at a preset depth.
  • In step 301, even if all point cloud data in the time window are extracted, the extracted point cloud data can still be screened again based on the above embodiment before target perception is performed: the point cloud data satisfying a preset depth or coordinates are selected as the target point cloud data for target perception, so that a target perception result for a certain depth can be obtained and the target perception process saves certain computing resources.
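  • Steps 601 and 602 can be sketched as follows (the tolerance value and point structure are assumptions for illustration; the actual perception step is replaced by a placeholder):

```python
def perceive_at_depth(cloud, preset_depth, tol=0.5):
    """Screen point cloud data to those near a preset depth, then run
    target perception only on the screened subset, saving compute.
    The tolerance tol (metres) is an assumed parameter."""
    screened = [p for p in cloud if abs(p["depth"] - preset_depth) <= tol]
    # Placeholder "perception": report how many returns lie at the depth.
    return {"depth": preset_depth, "num_points": len(screened)}

cloud = [{"depth": 5.1}, {"depth": 5.4}, {"depth": 9.8}]
result = perceive_at_depth(cloud, preset_depth=5.0)
```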
  • time windows of different lengths are used to perceive objects with different attributes
  • the attributes include at least one of the following: the type of the object, the size of the object, the distance of the object relative to the radar, the movement speed of the object, etc.
  • step 302 acquiring the target perception results of the time windows based on the point cloud data extracted in each time window, respectively, includes: acquiring objects containing different types of target perception objects based on time windows of different lengths Perceived results. For example, based on the time window of length T1, the target perception result containing the dog is acquired; based on the time window of length T2, the target perception result containing the truck is acquired.
  • acquiring target perception results containing different types of target perception objects based on time windows of different lengths includes: acquiring the target perception results according to multiple preset target perception methods, wherein each target perception method corresponds to a time window of one length and is used to identify a preset target object.
  • the target perception objects to be recognized are a dog and a truck; the dog corresponds to a time window of length T1, the truck corresponds to a time window of length T2, and T1 is not equal to T2; the dog is perceived using a first target perception method, and the truck is perceived using a second target perception method.
  • Then, in step 301, the target point cloud data in time window T1 and time window T2 within the point cloud frame are extracted; in step 302, target perception is performed on the target point cloud data extracted in time window T1 based on the first target perception method to obtain a first target perception result containing the dog, and target perception is performed on the target point cloud data extracted in time window T2 based on the second target perception method to obtain a second target perception result containing the truck.
  • the multiple different target sensing methods may be based on multiple different neural network models for target sensing, each of the neural network models being used to identify a preset target object.
  • each target perception method may also be an algorithm other than a neural network model, such as a traditional feature-based recognition algorithm, used to identify a preset target object; this is not limited in this embodiment of the present application.
  • Since each target perception method corresponds to a time window of one length and is used to identify a preset target object, the accuracy of target perception for that window will be relatively high.
  • step 302 based on time windows of different lengths, acquiring target perception results containing different types of target perception objects includes: acquiring the target perception results according to a preset same type of target perception method, Wherein, the same target perception method is used to identify multiple preset target objects.
  • the target perception objects to be identified are a dog and a truck; the dog corresponds to a time window of length T1, the truck corresponds to a time window of length T2, and T1 is not equal to T2; in addition, a third target perception method is preset, which can be used for target perception of both the dog and the truck.
  • target perception can be performed on the target point cloud data extracted in time window T1 and time window T2 based on the third target perception method, to obtain the first target perception result containing the dog and the second target perception result containing the truck, respectively.
  • the same target perception method may be based on the same neural network model for target perception, and the same neural network model is used to identify multiple preset target objects.
  • an algorithm other than the neural network model may also be used to identify various preset target objects, which is not limited in this embodiment of the present application.
  • Since one target perception method corresponds to time windows of multiple lengths and is used to identify multiple preset target objects, performing target perception with this method occupies less storage space and has wider applicability.
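  • The two alternatives, a perception method per window length versus one shared method, can be contrasted with a small dispatch sketch; the window lengths, labels, and recognizer behavior are hypothetical stand-ins for the neural network models:

```python
# Alternative 1: each window length maps to its own perception method.
per_window_methods = {
    0.01: lambda pts: "dog" if pts else None,    # short window, e.g. T1
    0.05: lambda pts: "truck" if pts else None,  # long window, e.g. T2
}

def perceive_per_window(window_len, pts):
    """Dispatch to the perception method preset for this window length."""
    return per_window_methods[window_len](pts)

# Alternative 2: one shared method recognizes every preset object,
# occupying less storage but doing more work per call.
def shared_method(pts):
    labels = [p["label"] for p in pts]
    return [l for l in ("dog", "truck") if l in labels]

pts = [{"label": "dog"}, {"label": "truck"}]
```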
  • the time windows of different lengths are used for object perception for objects with different properties.
  • the determination of the length of time windows corresponding to objects with different attributes is introduced.
  • the length of the time window is predetermined. There may be various methods for pre-setting the length of the time window, which is not limited in this embodiment of the present application. Next, several specific examples are given.
  • the length of the time window may be determined as follows: after the software and hardware parameters of the radar are determined, multiple experiments are performed with the radar on different target objects in different target areas, and the time windows corresponding to the different target objects are determined based on the multiple experiments; the length of each time window corresponds to its target object.
  • the length of the time window may be determined based on a predetermined algorithm.
  • There may be multiple predetermined algorithms, which are not limited in this embodiment of the present application. An example of a specific algorithm is given below:
  • FIG. 7 is a schematic diagram of the radar scanning its surrounding environment according to an embodiment of the present application.
  • 701 is the radar
  • 702 is the target object located in the target area of the radar
  • point A and point B are the spatial positions corresponding to two adjacent point cloud data obtained by the radar while scanning its target area; d represents the distance from the target object to the radar
  • h represents the size of the target object
  • r represents the point cloud angular resolution of the radar.
  • pre-recording and analysis can be performed on the change rule of the point cloud angular resolution of the radar in the target area. Suppose it is recorded that the radar scans the target area X times within the duration of acquiring one point cloud frame. Then, the point cloud angular resolution of the radar in the target area, as a function of the number of scans x, can be recorded as r = r(x), x = 1, 2, …, X, where X is a positive integer.
  • the effective perception capability of the radar is closely related to the angular resolution: the smaller the angular resolution, the farther the effective perception distance is; or, at the same distance, the smaller the angular resolution, the radar can be A smaller target object is perceived.
  • the angular resolution required to perceive the target object should satisfy r(x) ≤ arctan(h/d), where h is the size of the target object and d is its distance from the radar.
  • Let Δt denote the time interval between two scans of the radar in the target area. Then, point cloud data in the target area can be acquired at shortest once every interval t, where t is the length of the time window mentioned above; t follows from Δt and the smallest number of scans at which the angular resolution satisfies the above condition.
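  • As a hedged sketch of the algorithm above: assuming the perception condition is r(x) ≤ arctan(h/d) (the exact inequality is not reproduced in this text) and assuming a hypothetical resolution profile r(x), the shortest window length is the smallest scan count whose resolution satisfies the condition, times the scan interval Δt:

```python
import math

def shortest_window(r_of_x, h, d, dt, max_scans):
    """Return the shortest time-window length t = x * dt, where x is the
    smallest scan count at which the angular resolution r(x) is fine
    enough to perceive an object of size h at distance d.

    The condition r(x) <= atan(h / d) is an assumption; the embodiment
    states only that the required resolution depends on h and d.
    """
    required = math.atan(h / d)
    for x in range(1, max_scans + 1):
        if r_of_x(x) <= required:
            return x * dt
    return None  # target not resolvable within one point cloud frame

# Hypothetical profile: resolution improves linearly with scan count.
t = shortest_window(lambda x: 0.2 / x, h=0.5, d=5.0, dt=0.01, max_scans=10)
```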
  • one or more target perception objects may be pre-determined, and then the length of the time window corresponding to each of the one or more target perception objects may be determined based on the above algorithm, which may be any suitable algorithm.
  • the length of the time window can be determined in other ways besides the preset method.
  • the length of the time window may be determined by: determining the length of the time window according to historical target perception results.
  • the historical target perception result includes at least one of the following: the target perception result obtained based on the point cloud frame before the current time window, or the target obtained based on the point cloud data in the time window before the current time window Perceived results.
  • the target perception object to be sensed may not be determined in advance; instead, the length of the current time window may be adaptively determined based on the historical target perception results obtained before the current time window. For example, if the target perception result of the previous point cloud frame shows a moving car 5 m in front of the radar, the length of the time window corresponding to the car can be determined based on the above algorithm, and super frame rate sensing is performed on the car at the highest frequency allowed. This method does not depend on pre-setting, has wider applicability and flexibility, and can be applied to complex application environments.
  • determining the length of the time window according to historical target perception results includes:
  • Step 801 according to the historical target perception result, determine the target perception object
  • Step 802 Determine the length of the time window based on the target perception object.
  • the target sensing object may be determined based on the historical target sensing result.
  • the determination of the target-aware object may be implemented with reference to related technologies. For example, the objects included in the historical target perception results may be determined, and then the objects are screened based on certain conditions to determine the target perception objects.
  • the certain condition may be that the size of the object satisfies a certain size, the depth of the object satisfies a certain depth, the object is a certain kind of object, etc.; the length of the time window is then determined based on the determined target perception object, according to the various embodiments described above.
  • the length of the time window is determined based on the target perception object.
  • the length of the time window corresponding to the target perception object may be looked up in a preset mapping relationship between perception objects and time window lengths, and used as the length of the current time window.
  • the length of the time window can also be determined in other ways.
  • the length of the time window is determined based on the target perception object, as shown in FIG. 9 , including:
  • Step 901 based on the target perception object, determine the motion speed of the target perception object and/or the depth of the target perception object and/or the target point cloud angular resolution;
  • Step 902 Determine the length of the time window based on the motion speed and/or the depth and/or the angular resolution of the target point cloud.
  • various attribute information of the target sensing object can be obtained. For example, based on the target perception object determined by one or more historical target perception results, the distance of the target perception object from the radar (ie the depth of the target perception object) and the size of the target perception object can be obtained. For another example, based on a target perception object determined based on multiple historical target perception results, the movement speed of the target perception object may be determined.
  • For the depth of the target perception object, the length of the time window corresponding to that depth can be found in the preset mapping relationship between depth and time window length and used as the length of the current time window; or, based on the depth and the height value of the target perception object, the algorithm described above or other algorithms can be applied to determine the length of the current time window.
  • the length of the time window corresponding to the motion speed can be found from the preset mapping relationship between the motion speed and the time window length as the length of the current time window.
  • the algorithm described above or other algorithms may be applied to determine the length of the current time window.
  • the various embodiments of determining the length of the time window based on the motion speed and/or the depth and/or the angular resolution of the target point cloud are only illustrative, Certainly, the length of the time window may also be determined in other manners based on the information, which is not limited in this embodiment of the present application.
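  • Steps 901 and 902 can be sketched with simple lookup rules (all mapping values below are illustrative assumptions, not values from the embodiment):

```python
def window_length_from_attrs(depth=None, speed=None):
    """Pick the current time-window length from a target object's depth
    and/or motion speed using preset mappings; when both are given,
    take the shorter window so the more demanding requirement wins."""
    candidates = []
    if depth is not None:
        # closer objects get shorter windows (seconds, assumed values)
        candidates.append(0.01 if depth < 10.0 else 0.05)
    if speed is not None:
        # faster objects get shorter windows (m/s threshold assumed)
        candidates.append(0.01 if speed > 5.0 else 0.05)
    return min(candidates) if candidates else None

near_fast = window_length_from_attrs(depth=5.0, speed=8.0)
far_slow = window_length_from_attrs(depth=20.0, speed=0.0)
```

  Taking the minimum over the candidate lengths reflects the idea that a fast or close object should dominate the choice of perception frequency.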
  • time windows of different lengths may be set in consideration of different movement speeds. For example, when there are fast-moving pedestrians and stationary pedestrians in the target area, the fast-moving pedestrians have a greater impact on the movable platform carrying the radar, and target perception should be carried out for them in a shorter time window, so that they are perceived in real time and can be responded to quickly and in time.
  • Similarly, when the depths corresponding to multiple target perception objects (that is, the distances of the target perception objects relative to the radar) differ, time windows of different lengths can be set in consideration of the different depths. For example, when the target area contains a car only 5 m from the radar and a car 10 m from the radar, the car closer to the radar has a greater impact on the movable platform on which the radar is mounted, and target perception should be carried out for it in a shorter time window, so that it is perceived in real time and can be responded to quickly and in time.
  • In the above, the target perception objects with different speeds and depths are exemplified as the same kind of object; of course, the target perception objects with different speeds and depths may also be different kinds of objects, which is not limited in this embodiment of the present application.
  • In the process of acquiring the point cloud data constituting one frame of point cloud, after the target point cloud data in at least one time window within the point cloud frame are extracted, target perception can be performed directly on the point cloud data within each time window in real time, and the target perception result of the time window is output; the flow of outputting the target perception result is shown in FIG. 10.
  • FIG. 10 only takes the extraction of point cloud data within one time window and its super frame rate perception as an example for description.
  • point cloud data in multiple different time windows may be extracted simultaneously, and the different time windows may overlap, so as to obtain multiple target perception results including different target perception objects.
  • the point cloud frame contains the point cloud data obtained by the radar scanning its field of view area. Compared with the point cloud data of a time window, the point cloud data density corresponding to the point cloud frame is higher, so more detailed target perception results can be obtained over a larger space. Therefore, in some embodiments, as shown in FIG. 11, the method may further include:
  • Step 1101 obtaining the perception result of the point cloud frame
  • Step 1102 output the perception result of the point cloud frame in association with a plurality of first perception results.
  • each of the first perception results is a target perception result acquired based on point cloud data of a time window in the process of acquiring point cloud data constituting the point cloud frame.
  • the flowchart of the above embodiment is shown in FIG. 12 .
  • the plurality of first perception results are output in the order in which they are acquired, and in this process, the perception result of the point cloud frame is also output, so that the perception result of the point cloud frame and the plurality of first perception results are output in correspondence with each other.
  • other associated output manners may also be used, which are not limited in this embodiment of the present application.
  • the perception result of the point cloud frame is output in association with the plurality of first perception results, so that the more detailed target perception result obtained from the point cloud frame and the more real-time target perception results given by the plurality of first perception results are output together, satisfying both the spatial sensitivity requirements and the temporal sensitivity requirements of perception.
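  • One way to read this associated output (first perception results in acquisition order, each tagged with the frame-level result they belong to; the record fields are purely illustrative):

```python
def associate_outputs(frame_result, first_results):
    """Output the per-window (first) perception results in the order they
    were acquired, each paired with the point cloud frame result, so
    temporal detail and spatial detail are delivered together."""
    ordered = sorted(first_results, key=lambda r: r["t"])
    return [{"window_result": r, "frame_result": frame_result} for r in ordered]

outs = associate_outputs(
    frame_result={"frame": 7, "objects": ["dog", "truck"]},
    first_results=[{"t": 0.02, "obj": "truck"}, {"t": 0.01, "obj": "dog"}],
)
```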
  • In some embodiments, before the extracting of the target point cloud data in at least one time window within the point cloud frame, as shown in FIG. 13, the method further includes:
  • Step 1301 receive a trigger instruction
  • Step 1302 under the trigger of the trigger instruction, output target perception mode selection information, where the target perception mode at least includes: a super frame rate perception mode, and/or, a normal frame rate perception mode;
  • Step 1303 when it is determined to be in the super frame rate perception mode, extract the target point cloud data in at least one time window and output the acquired target perception result; when it is determined to be in the normal frame rate perception mode, perform target perception on the point cloud frame and output the acquired target perception result.
  • In some embodiments, the target point cloud data in at least one time window may be extracted and the acquired target perception result output, while target perception is also performed on the point cloud frame and its target perception result output.
  • the sensing mode is determined based on the received trigger instruction, and then the target sensing result is output accordingly, which can facilitate interaction with users and enhance user experience.
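  • Steps 1301 to 1303 amount to a simple dispatch on the selected mode (the mode names and placeholder handlers here are hypothetical):

```python
def handle_trigger(mode, window_data, frame_data):
    """On a trigger instruction, run super-frame-rate perception on the
    per-window data, normal-frame-rate perception on the whole frame,
    or both, depending on the selected target perception mode."""
    results = {}
    if mode in ("super", "both"):
        results["super"] = f"{len(window_data)} window points perceived"
    if mode in ("normal", "both"):
        results["normal"] = f"{len(frame_data)} frame points perceived"
    return results

r = handle_trigger("both", window_data=[1, 2], frame_data=[1, 2, 3, 4])
```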
  • the method may further include:
  • Step 1401 acquiring multiple target perception results of the same time window
  • Step 1402 based on the acquired multiple target perception results, obtain the motion trajectory of the target perception object corresponding to the same time window;
  • Step 1403 outputting the motion track.
  • the motion trajectory of the target sensing object is determined and output based on the target sensing results of multiple identical time windows, so that the user or monitoring personnel can respond accordingly based on the motion trajectory.
  • the method may further include:
  • Step 1501 based on the motion track, obtain the predicted motion track of the target perception object
  • Step 1502 outputting the predicted motion trajectory.
  • Step 1501, obtaining the predicted motion trajectory of the target perception object based on the motion trajectory, may be implemented with reference to related technologies, or by an algorithm independently developed by a developer, which is not limited in this embodiment of the present application. It can be seen from the above embodiments that determining the predicted motion trajectory of the target perception object based on the acquired target perception results enables relatively reliable predictions for the target objects around the radar, so that more timely responses can be made to them, improving the sensitivity and safety of the system.
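  • A minimal sketch of steps 1401 through 1502 under a constant-velocity assumption (the embodiment leaves the prediction algorithm open, so this model is an illustrative choice, not the patented method):

```python
def predict_trajectory(track, n_future):
    """Given a motion trajectory as successive (x, y) positions of the
    target perception object from the same time window across frames,
    extrapolate n_future positions by assuming the last observed
    velocity stays constant."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, n_future + 1)]

track = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]  # observed motion trajectory
pred = predict_trajectory(track, n_future=2)  # predicted motion trajectory
```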
  • the target area in each of the above embodiments may be the entire area of the radar field of view, or may be a partial area of the radar field of view.
  • the central area of the field of view of the acquired point cloud frame is usually the area that needs to be monitored. Therefore, in some embodiments, the target area is a central area in the field of view of the point cloud frame.
  • the target area may also be another area set based on application requirements, which is not limited in this embodiment of the present application.
  • the radar scans the target area multiple times, which may be achieved by controlling the rotational speed of at least one component in the scanning module of the radar to reach at least a preset threshold in the target area.
  • the threshold may be determined based on a predetermined target perception object, or may be determined based on a target perception object determined in historical frames, and the embodiment of the present application does not limit the determination of the threshold.
  • the scanning module includes a double-prism scanning assembly composed of a first prism and a second prism, and the threshold is determined by the rotational speed corresponding to the first prism and the second prism each rotating two full turns within the time window; the rotational speeds of the first prism and the second prism are equal in magnitude and opposite in direction.
  • the schematic diagram of the double-prism scanning assembly composed of the first prism and the second prism can be shown in FIG. 16 , wherein 1601 is the first prism, and 1602 is the second prism.
  • the rotational speeds of the first prism and the second prism are set as w1 and -w1, respectively.
  • For example, when the output frame rate of the point cloud frames of the radar is 10 Hz, super frame rate scanning can be achieved for N > 1: when the first prism and the second prism deflect light identically, the target area is scanned 2N times within the duration of a single point cloud frame (as shown in FIG. 17A); when there is a slight difference in the deflection ability of the first prism and the second prism, the target area is scanned N times within the duration of a single point cloud frame (as shown in FIG. 17B).
  • the scanning module includes a three-prism scanning assembly composed of a third prism, a fourth prism and a fifth prism, and the threshold is determined by the rotational speed corresponding to the fifth prism rotating two full turns within the time window; the rotational speeds of the third prism and the fourth prism are equal in magnitude and opposite in direction.
  • The schematic diagram of the three-prism scanning assembly composed of the third prism, the fourth prism and the fifth prism can be shown in FIG. 18, wherein 1801 to 1803 are the third prism, the fourth prism and the fifth prism, respectively.
  • The above embodiment is described with a specific example: when two of the prisms adopt the constant-velocity reverse strategy (taking the fourth prism and the fifth prism as an example), the rotational speeds of the third prism, the fourth prism and the fifth prism are set as w1, w2 and -w2, respectively.
  • the scanning module includes a scanning component composed of a sixth prism and a mirror
  • the threshold is determined jointly by the rotational speed corresponding to the sixth prism rotating two full turns within the time window and the rotational speed corresponding to the mirror scanning a preset scanning range twice within the time window.
  • the rotational speeds of the sixth prism and the reflecting mirror are respectively w1 and w2, and both w1 and w2 are greater than or equal to the rotational speed r corresponding to one rotation of the driving component (such as a motor, etc.) of the scanning assembly.
  • If the rotational speeds of the sixth prism and the mirror are simultaneously increased to H times their respective original values, where H is a positive integer greater than or equal to 2, then the entire target area is scanned H times within the duration of one point cloud frame.
  • the scanning module includes a galvanometer
  • the threshold value is determined by the rotation speed corresponding to the galvanometer scanning twice a preset scanning range within the time window.
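  • The relation between the rotational-speed multiplier and the number of scans per frame can be checked with a one-line calculation (assuming, as in the embodiments above, that the scan count scales linearly with the speed multiplier):

```python
def scans_per_frame(base_scans, speed_multiplier):
    """Number of scans of the target area within one point cloud frame
    when the scanning components run at speed_multiplier times their
    original speed; linear scaling is assumed per the embodiments."""
    return base_scans * speed_multiplier

# e.g. an assembly scanning the area twice per frame at base speed,
# sped up H = 3 times, scans it 6 times per frame.
n = scans_per_frame(base_scans=2, speed_multiplier=3)
```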
  • To sum up, in the target perception method provided by the embodiments of the present application, in the process of acquiring the point cloud data constituting one frame of point cloud, the target point cloud data within at least one time window of the point cloud frame are extracted, the target perception result of each time window is obtained based on the target point cloud data extracted in that window, and the target perception result of the time window is output. The extracted target point cloud data correspond to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • an embodiment of the present application further provides a target sensing device, the schematic diagram of which is shown in FIG. 19 .
  • the target sensing device 1901 includes: a processor 1902, a memory 1903, and a computer program stored in the memory 1903 and executable on the processor 1902; the processor 1902 implements the following steps when executing the program:
  • the target point cloud data in at least one time window within the point cloud frame is extracted, and the target point cloud data corresponds to the target area in the field of view of the point cloud frame;
  • a target perception result for the time window is output.
  • the point cloud frame includes point cloud data obtained by the radar scanning the target area for multiple times, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • the target point cloud data of the time window includes point cloud data obtained by the radar scanning the target area at least once.
  • the duration of the point cloud frame includes multiple consecutive time windows
  • the extracting of the target point cloud data in at least one time window within the point cloud frame includes: separately extracting the target point cloud data within each of the multiple consecutive time windows.
  • two time windows of different lengths overlap.
  • the duration of the point cloud frame includes at least two time windows of different lengths, wherein the data extraction rules corresponding to the time windows of different lengths are different.
  • the target point cloud data extracted in time windows of different lengths corresponds to point cloud data in different depth ranges in the target area; or, the target point cloud data extracted in time windows of different lengths corresponds to the Point cloud data in different orientations in the target area.
  • the data extraction rules corresponding to time windows of different lengths are different, and the data extraction rules include: for time windows of different lengths, extracting target point cloud data that meets a preset condition, the preset condition corresponding to the length of the time window; or, for time windows of different lengths, extracting all the point cloud data in each time window.
  • the preset condition includes at least one of the following: the target point cloud data is located at a preset depth, or the coordinates of the target point cloud data are located at preset coordinates.
  • the data extraction rules include: for a first time window, extracting target point cloud data at a first depth, and for a second time window whose length is greater than that of the first time window, extracting target point cloud data at a second depth, the first depth being less than the second depth.
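A minimal sketch of the depth-dependent extraction rule described above: a short window keeps near-depth points and a longer window keeps far-depth points. The window lengths and depth thresholds are made-up illustrative values, not taken from the application.

```python
RULES = [
    {"window_s": 0.02, "min_depth_m": 0.0,  "max_depth_m": 30.0},   # first (short) window
    {"window_s": 0.10, "min_depth_m": 30.0, "max_depth_m": 200.0},  # second (long) window
]

def extract_for_rule(points, rule):
    """Keep only points whose depth falls in the rule's depth range."""
    return [p for p in points
            if rule["min_depth_m"] <= p["depth_m"] < rule["max_depth_m"]]
```

Near targets thus get perceived from the short, frequent windows, while far targets accumulate over the longer window.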
  • when the data extraction rule is to extract all point cloud data in each time window for time windows of different lengths, obtaining the target perception result of the time window based on the point cloud data extracted in each time window includes: acquiring a target perception result that includes the target perception object located at the preset depth.
  • acquiring the target perception results of the time windows based on the point cloud data extracted in each time window includes: acquiring target perception results that include different types of target perception objects based on time windows of different lengths.
  • acquiring target perception results containing different types of target perception objects includes: acquiring the target perception results according to multiple preset different target perception methods, wherein each of the target perception methods corresponds to a time window of one length and is used to identify one preset target object; or, acquiring the target perception results according to a preset same type of target perception method, wherein the same type of target perception method is used to identify multiple preset target objects.
  • the multiple different target perception methods include: performing target perception based on multiple different neural network models, each of which is used to identify one preset target object; the same type of target perception method includes: performing target perception based on the same neural network model, wherein the same neural network model is used to recognize multiple preset target objects.
  • the length of the time window is preset; or,
  • the length of the time window is determined in the following manner: determining the length of the time window according to a historical target perception result, wherein the historical target perception result includes at least one of the following: a target perception result acquired based on a point cloud frame before the current time window.
  • determining the length of the time window according to the historical target perception result includes: determining a target perception object according to the historical target perception result; and determining the length of the time window based on the target perception object.
  • determining the length of the time window based on the target perception object includes: determining, based on the target perception object, the movement speed of the target perception object and/or the depth of the target perception object and/or the target point cloud angular resolution; and determining the length of the time window based on the movement speed and/or the depth and/or the target point cloud angular resolution.
  • determining the length of the time window based on the movement speed and/or the depth and/or the target angular resolution includes: determining a time window of a first length for a target perception object with a first movement speed, and determining a time window of a second length for a target perception object with a second movement speed, the first movement speed being greater than the second movement speed and the first length being smaller than the second length; and/or, determining a time window of a third length for a target perception object at a third depth, and determining a time window of a fourth length for a target perception object at a fourth depth, the third depth being smaller than the fourth depth and the third length being smaller than the fourth length.
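The rule above (faster-moving or closer target perception objects get shorter time windows) can be sketched as a simple heuristic. The base length, reference speed, and reference depth are illustrative assumptions, not values from the application.

```python
def window_length_s(speed_mps=None, depth_m=None,
                    base_s=0.1, speed_ref=10.0, depth_ref=50.0, min_s=0.01):
    """Shrink the base window for fast or near target perception objects."""
    scale = 1.0
    if speed_mps is not None:
        # above the reference speed, scale inversely with speed
        scale = min(scale, speed_ref / max(speed_mps, speed_ref))
    if depth_m is not None:
        # below the reference depth, scale proportionally with depth
        scale = min(scale, min(depth_m, depth_ref) / depth_ref)
    return max(base_s * scale, min_s)
```

A fast target at 20 m/s thus gets a shorter window (and a higher perception frequency) than a slow one, and a target 10 m away gets a shorter window than one 100 m away.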
  • the method further includes: acquiring a perception result of the point cloud frame; and associating and outputting the perception result of the point cloud frame with multiple first perception results, wherein each of the first perception results is a target perception result acquired based on the point cloud data of one time window in the process of acquiring the point cloud data constituting the point cloud frame.
  • before extracting the target point cloud data in at least one time window within the frame of the point cloud frame, the method further includes: receiving a trigger instruction; outputting target perception mode selection information under the trigger of the trigger instruction, the target perception modes at least including a super frame rate perception mode and/or a normal frame rate perception mode; when the super frame rate perception mode is determined, extracting the target point cloud data in at least one time window and outputting the acquired target perception result; and when the normal frame rate perception mode is determined, performing target perception on the point cloud frame and outputting the acquired target perception result.
  • the method further includes: acquiring target perception results of multiple time windows; based on the acquired multiple target perception results, acquiring the motion trajectory of the same target perception object across the multiple time windows; and outputting the motion trajectory.
  • the method further includes: acquiring a predicted motion trajectory of the target sensing object based on the motion trajectory; and outputting the predicted motion trajectory.
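The trajectory and prediction steps above can be sketched with a constant-velocity assumption in place of a real tracker. The function names and the first/last-difference velocity estimate are illustrative choices, not the application's method.

```python
def velocity(track):
    """track: list of (t, x, y) per-window positions; first/last difference."""
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    dt = t1 - t0
    return (x1 - x0) / dt, (y1 - y0) / dt

def predict(track, t_future):
    """Extrapolate the last observed position with the estimated velocity."""
    vx, vy = velocity(track)
    t1, x1, y1 = track[-1]
    return x1 + vx * (t_future - t1), y1 + vy * (t_future - t1)
```

Because the per-window perception results arrive faster than whole frames, the track is refreshed (and the prediction corrected) several times per frame.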
  • the target area is a central area in the field of view of the point cloud frame.
  • the radar scans the target area multiple times, which is achieved by controlling the rotational speed of at least one component in the scanning module of the radar to reach at least a preset threshold in the target area.
  • the scanning module includes a double-prism scanning component composed of a first prism and a second prism, and the threshold is determined by the rotational speed corresponding to two rotations of the first prism and the second prism within the time window; the rotational speeds of the first prism and the second prism are equal in value and opposite in direction; or,
  • the scanning module includes a triangular prism scanning component composed of a third prism, a fourth prism and a fifth prism, and the threshold is determined by the rotational speed corresponding to two rotations of the fifth prism within the time window; the rotational speeds of the third prism and the fourth prism are equal in value and opposite in direction; or,
  • the scanning module includes a scanning component composed of a sixth prism and a reflecting mirror, and the threshold is jointly determined by the rotational speed corresponding to two rotations of the sixth prism within the time window and the rotational speed corresponding to the reflecting mirror scanning a preset scanning range twice within the time window; or,
  • the scanning module includes a galvanometer, and the threshold is determined by the rotational speed corresponding to the galvanometer scanning a preset scanning range twice within the time window.
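The speed thresholds above share one arithmetic core: for a scan element (prism, mirror, or galvanometer) to cover its preset scanning range twice within one time window, its scan rate must reach at least two full scans per window length. A minimal sketch, with an illustrative function name:

```python
def min_scan_rate_hz(window_s, scans_per_window=2):
    """Minimum full-range scans per second for the given time window."""
    return scans_per_window / window_s
```

For example, a 50 ms time window requires at least 40 full scans per second of the element covering the target area.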
  • with the target sensing device provided by the embodiments of the present application, in the process of acquiring the point cloud data constituting one frame of point cloud frame, the target point cloud data in at least one time window within the frame can be extracted, the target perception result of the time window can be obtained based on the target point cloud data extracted in each time window, and finally the target perception result of the time window can be output. The extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • an embodiment of the present application also provides a detection system, the detection system including: a light source for emitting a light pulse sequence; a scanning module for changing the optical path of the light pulse sequence so as to scan the field of view; a detection module for detecting the light beam of the light pulse sequence reflected by an object to obtain point cloud data, wherein each point cloud point data in the point cloud data is used to indicate the distance and/or orientation of the object corresponding to the point cloud point; an output module for continuously outputting point cloud frames, each point cloud frame including multiple point cloud point data; and a perception module for performing the following operations: in the process of acquiring the point cloud data constituting one frame of point cloud frame, extracting the target point cloud data in at least one time window within the frame, the target point cloud data corresponding to the target area in the field of view of the point cloud frame; acquiring the target perception result of the time window based on the target point cloud data extracted in each time window; and outputting the target perception result of the time window; wherein the point cloud frame includes point cloud data acquired by the scanning module scanning the target area multiple times, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame.
  • the sensing module may also be used to execute the steps in the methods of the foregoing embodiments, and details are not described herein again in this application.
  • with the detection system provided by the embodiments of the present application, in the process of acquiring the point cloud data constituting one frame of point cloud frame, the target point cloud data in at least one time window within the frame can be extracted, the target perception result of the time window can be obtained based on the target point cloud data extracted in each time window, and finally the target perception result of the time window can be output. The extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • an embodiment of the present application also provides a movable platform, where the movable platform includes a radar and the target sensing device described in each of the foregoing embodiments.
  • the movable platform may be a smart car, a robot, an unmanned aerial vehicle, etc., which is not limited in this embodiment of the present application.
  • with the movable platform provided by the embodiments of the present application, in the process of acquiring the point cloud data constituting one frame of point cloud frame, the target point cloud data in at least one time window within the frame can be extracted, the target perception result of the time window can be obtained based on the target point cloud data extracted in each time window, and finally the target perception result of the time window can be output. The extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  • the present application also provides a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, any one of the foregoing method steps is implemented.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • the above-mentioned apparatus can execute the methods provided by all the foregoing embodiments of the present application, and has corresponding functional modules and beneficial effects for executing the above-mentioned methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A target sensing method, comprising: in the process of acquiring point cloud data constituting a frame of point cloud frame, extracting target point cloud data in at least one time window within the frame, the target point cloud data corresponding to a target area in a field of view of the point cloud frame (301); acquiring a target perception result of the time window on the basis of the target point cloud data extracted in each time window (302); and outputting the target perception result of the time window (303), wherein the point cloud frame comprises point cloud data acquired by a radar performing multiple scans on the target area, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame. Further provided are a target sensing device, a detection system, a movable platform, and a computer-readable storage medium.

Description

Target perception method and device, detection system, movable platform and storage medium

Technical Field
The present application relates to the field of intelligent perception, and in particular, to a target perception method and device, a detection system, a movable platform and a computer-readable storage medium.
Background Art
At present, for movable platforms equipped with radar, such as smart cars, robots and so on, target perception is usually performed based on acquired point cloud frames, so as to determine the environmental conditions around the movable platform and provide guidance information for the motion control of the movable platform.
However, in the related art, the maximum frequency of target perception is determined by the frequency of the point cloud frames. For example, for a movable platform equipped with a scanning lidar whose point cloud frame acquisition frequency is 10 Hz, the target perception frequency that the movable platform can achieve is at most 10 Hz. The environment around the movable platform may contain objects with different attributes, such as different movement speeds or different distances from the radar. Because of these different attributes, different objects often have different perception frequency requirements: a fast-moving object requires a faster perception frequency, and a closer object also requires a faster perception frequency. The limited target perception frequency in the related art may therefore prevent such objects from being perceived in time, resulting in insufficient target perception sensitivity and potential safety hazards.
SUMMARY OF THE INVENTION
In order to overcome the problem in the related art that the limited target perception frequency leads to insufficient target perception sensitivity and thus a risk of safety hazards, the embodiments of the present application provide a target perception method and device, a detection system, a movable platform and a computer-readable storage medium.
According to a first aspect of the embodiments of the present application, a target perception method is provided, the method comprising: in the process of acquiring the point cloud data constituting one frame of point cloud frame, extracting the target point cloud data in at least one time window within the frame, the target point cloud data corresponding to the target area in the field of view of the point cloud frame; acquiring the target perception result of the time window based on the target point cloud data extracted in each time window; and outputting the target perception result of the time window; wherein the point cloud frame includes point cloud data acquired by a radar scanning the target area multiple times, and the extraction frequency of the point cloud data within the target duration is at least twice the acquisition frequency of the point cloud frame.
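The first-aspect method can be pictured as a streaming loop: points arrive with timestamps while the frame is still being acquired, and whenever a time window elapses, perception runs on the target-area points collected in that window. This is a hedged sketch; `in_target_area` and `perceive` are illustrative callables, not APIs from the application.

```python
def super_frame_rate_perception(stream, window_s, in_target_area, perceive):
    """Yield one perception result per elapsed time window."""
    buf, window_end = [], None
    for t, point in stream:
        if window_end is None:
            window_end = t + window_s
        while t >= window_end:           # close the current window
            yield perceive(buf)
            buf, window_end = [], window_end + window_s
        if in_target_area(point):
            buf.append(point)
    if buf:                              # flush the final partial window
        yield perceive(buf)
```

With a window shorter than half the frame period, results for the target area are emitted at least twice per frame, without waiting for the full frame to complete.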
According to a second aspect of the embodiments of the present application, a target perception device is provided, the device including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method described in the first aspect of the present application when executing the program.
According to a third aspect of the embodiments of the present application, a detection system is provided, the detection system including: a light source for emitting a light pulse sequence; a scanning module for changing the optical path of the light pulse sequence so as to scan the field of view; a detection module for detecting the light beam of the light pulse sequence reflected by an object to obtain point cloud data, wherein each point cloud point data in the point cloud data is used to indicate the distance and/or orientation of the object corresponding to the point cloud point; an output module for continuously outputting point cloud frames, each point cloud frame including multiple point cloud point data; and a perception module for performing the following operations: in the process of acquiring the point cloud data constituting one frame of point cloud frame, extracting the target point cloud data in at least one time window within the frame, the target point cloud data corresponding to the target area in the field of view of the point cloud frame; acquiring the target perception result of the time window based on the target point cloud data extracted in each time window; and outputting the target perception result of the time window; wherein the point cloud frame includes point cloud data acquired by the scanning module scanning the target area multiple times, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame.
According to a fourth aspect of the embodiments of the present application, a movable platform is provided, the movable platform including a radar and the target perception device described in the second aspect of the embodiments of the present application.
According to a fifth aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which computer instructions are stored, and when the computer instructions are executed, the method described in the first aspect of the embodiments of the present application is implemented.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
Based on the target perception method provided by the embodiments of the present application, in the process of acquiring the point cloud data constituting one frame of point cloud frame, the target point cloud data in at least one time window within the frame is extracted, the target perception result of the time window is acquired based on the target point cloud data extracted in each time window, and finally the target perception result of the time window is output. Since, in the target perception method provided by the embodiments of the present application, the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame, the method can achieve super frame rate perception of the target area relative to point-cloud-frame-based target perception, thereby perceiving the surrounding environment faster and improving the real-time performance, sensitivity and safety of target perception.
It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present specification.
Description of the Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the drawings used in the description of the embodiments. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application scenario of a target perception method according to an exemplary embodiment of the present application.
FIG. 2 is a schematic structural diagram of a radar according to an exemplary embodiment of the present application.
FIG. 3 is a flowchart of a target perception method according to an exemplary embodiment of the present application.
FIG. 4A is a schematic diagram of a result of scanning a target area by a radar according to an exemplary embodiment of the present application.
FIG. 4B is a schematic diagram of another result of scanning a target area by a radar according to an exemplary embodiment of the present application.
FIG. 5A is a schematic diagram of a first process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
FIG. 5B is a schematic diagram of a second process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
FIG. 5C is a schematic diagram of a third process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
FIG. 5D is a schematic diagram of a fourth process of collecting point cloud data in different time windows according to an exemplary embodiment of the present application.
FIG. 6 is a flowchart of acquiring the target perception result of a time window based on the point cloud data extracted in each time window according to an exemplary embodiment of the present application.
FIG. 7 is a schematic diagram of a radar scanning its surrounding environment according to an exemplary embodiment of the present application.
FIG. 8 is a flowchart of determining the length of a time window according to a historical target perception result according to an exemplary embodiment of the present application.
FIG. 9 is a flowchart of determining the length of a time window based on the target perception object according to an exemplary embodiment of the present application.
FIG. 10 is a flowchart of a first kind of target perception result output according to an exemplary embodiment of the present application.
FIG. 11 is a flowchart of the associated output of target perception results according to an exemplary embodiment of the present application.
FIG. 12 is a flowchart of a second kind of target perception result output according to an exemplary embodiment of the present application.
FIG. 13 is a flowchart of target perception model selection according to an exemplary embodiment of the present application.
FIG. 14 is a flowchart of the output of a motion trajectory according to an exemplary embodiment of the present application.
FIG. 15 is a flowchart of the output of a predicted motion trajectory according to an exemplary embodiment of the present application.
FIG. 16 is a schematic diagram of a double-prism scanning component according to an exemplary embodiment of the present application.
FIG. 17A is a schematic diagram of a result of scanning a target area based on a double-prism scanning component according to an exemplary embodiment of the present application.
FIG. 17B is a schematic diagram of another result of scanning a target area based on a double-prism scanning component according to an exemplary embodiment of the present application.
FIG. 18 is a schematic diagram of a triangular prism scanning component according to an exemplary embodiment of the present application.
FIG. 19 is a schematic structural diagram of a target perception device according to an exemplary embodiment of the present application.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as recited in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "said" and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this application to describe various kinds of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "upon," "when," or "in response to determining."
At present, a movable platform equipped with a radar, such as a smart car or a robot, usually performs target perception based on acquired point cloud frames in order to determine the environmental conditions around the movable platform and to provide guidance information for its motion control. FIG. 1 shows a schematic diagram of an application scenario: a smart car 102 equipped with a lidar 101 can use the lidar 101 to acquire point cloud frames, perform target perception on those frames to obtain a perception result of the surrounding environment, and thereby guide the operation of the smart car.
To facilitate understanding, the acquisition of point cloud frames by a radar and the point-cloud-frame-based target perception techniques of the related art are first introduced.
In some embodiments, as shown in FIG. 2, the radar may include a transmitter 201, a collimating element 202, a scanning module 203, and a detector 204. The transmitter 201 may be used to emit light pulses; in one example, it may include at least one light-emitting chip that emits laser beams at certain time intervals. The collimating element 202 may be used to collimate the light pulses emitted by the transmitter, and may specifically be a collimating lens or another element capable of collimating a light beam. The scanning module 203 may be used to change the propagation direction of the collimated beam so that the beam is directed to different points. In one embodiment, the scanning module may include at least one optical element that reflects, refracts, or diffracts the beam, such as a prism, a mirror, or a galvanometer, thereby changing the propagation path of the beam. The optical element may be rotated by a driver; in this way, as the transmitter continuously emits light pulses, different pulses exit in different directions and reach different positions, so that the radar scans a certain area. When an object is present in the scanned area, the beam is reflected by the object back to the radar and detected by the detector 204. In this way, the radar can collect point cloud data containing information about the surrounding environment. Of course, those skilled in the art should understand that the above embodiment is only an exemplary description of the radar; the radar may also have other structures, which are not limited in the embodiments of the present application.
The environmental information obtainable from a single isolated point cloud datum is rather limited, and processing each point cloud datum as soon as it is acquired would place extremely high demands on the computing speed of the system. Therefore, the multiple point cloud data acquired as the radar scans its field of view (FOV) are usually stored first. A common practice is to output the point cloud accumulated over a certain period of time as one point cloud frame. After point cloud frames are acquired, target perception can be performed based on the point cloud data in one or more frames, so as to obtain the target objects contained in the radar's surrounding environment and information related to those target objects.
From the above introduction it can be seen that, since the related art adopts a point-cloud-frame-based target perception method, the target perception frequency is limited by the acquisition frequency of the point cloud frames (i.e., the frame rate). For example, for a radar system whose point cloud frame rate is 10 Hz, the target perception frequency can reach at most 10 Hz.
If the environment to be perceived contains only static target objects or relatively slow-moving ones, a low target perception frequency may be acceptable. In practice, however, the environment to be perceived usually contains fast-moving target objects, and perceiving such objects is all the more important. Taking the smart car as an example, perceiving fast-moving target objects in the environment and responding to them in time is a key issue in ensuring safe driving, and is also an important factor currently restricting the wide application of smart cars.
To overcome the problem in the related art that the limited target perception frequency results in insufficient perception sensitivity and thus a risk of potential safety hazards, an embodiment of the present application provides a target sensing method. As shown in FIG. 3, the method includes:
Step 301: in the process of acquiring the point cloud data constituting one point cloud frame, extracting target point cloud data within at least one time window inside the frame, the target point cloud data corresponding to a target area in the field of view of the point cloud frame;
Step 302: obtaining, for each time window, a target perception result based on the target point cloud data extracted within that window;
Step 303: outputting the target perception result of the time window.
Here, the point cloud frame contains point cloud data acquired by the radar scanning the target area multiple times, and the extraction frequency of the point cloud data in the time windows is at least twice the acquisition frequency of the point cloud frame.
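By way of a non-limiting illustration, steps 301 to 303 can be sketched as follows. The `Point` structure, the rectangular target-area test, and the placeholder `perceive` routine are assumptions introduced here for the example only, not part of the disclosed method.

```python
from dataclasses import dataclass

@dataclass
class Point:
    t: float   # acquisition timestamp in seconds
    x: float
    y: float
    z: float

def in_target_area(p: Point) -> bool:
    # Hypothetical target area: a rectangular region ahead of the radar.
    return 0.0 < p.x < 50.0 and -5.0 < p.y < 5.0

def perceive(points):
    # Placeholder for a real detector; here it just counts the points.
    return {"num_points": len(points)}

def super_frame_rate_perception(frame_points, frame_start, window_len):
    """Steps 301-303: extract target points per window, perceive, output."""
    results = []
    frame_end = max(p.t for p in frame_points)
    t0 = frame_start
    while t0 < frame_end:
        window = [p for p in frame_points
                  if t0 <= p.t < t0 + window_len and in_target_area(p)]
        results.append(perceive(window))   # one result per time window
        t0 += window_len
    return results
```

With a window length of half the frame period, one frame of data yields two perception results instead of one.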
When the surrounding environment needs to be perceived, in order to be able to perceive target objects of small size, the radar scans the area of particular interest within its field of view (i.e., the target area) multiple times, so as to acquire a point cloud frame in which the density of the point cloud data reaches a certain value, thereby realizing focused monitoring of the target area. FIG. 4A and FIG. 4B are schematic diagrams of the results of scanning the target area in the field of view (the area inside the rectangular frame) by a radar with a point cloud frame rate of 10 Hz over time ranges of 50 ms and 100 ms, respectively. As the number of scans increases, point cloud data of higher density can be acquired, so that smaller objects in the surrounding environment can be perceived and information about target objects in the surrounding environment can be obtained more comprehensively and at a higher spatial resolution.
When the point cloud frame contains point cloud data acquired by the radar scanning the target area multiple times, the inventors of the present application found that, in addition to performing target perception on the basis of the point cloud frame, the target point cloud data within certain time windows of the frame can first be extracted. There may be multiple such time windows, and the target point cloud data of each time window contains the point cloud data acquired by at least one scan of the target area by the radar. Target perception is then performed on the target point cloud data extracted for each time window, so that at least two target perception results for the target area are obtained in the course of acquiring a single point cloud frame, i.e., super-frame-rate perception, which has the beneficial effects of improving the perception sensitivity, real-time performance, and safety of the radar.
This is described in detail below with reference to FIGS. 5A to 5D, which respectively show the collection of point cloud data over different time windows. As described above, in the related art, point cloud frames are usually acquired first, such as the first, second, and third point cloud frames shown in FIGS. 5A to 5D, and target perception is then performed on each frame. In the target sensing method provided by the above embodiments of the present application, instead of waiting until all the point cloud data of a point cloud frame has been acquired before performing target perception on it, the target point cloud data within at least one time window of the frame is extracted. Taking FIG. 5A as an example, during the acquisition of the first point cloud frame, the target point cloud data of a time window T1 is extracted, and target perception is performed on the extracted data. When the extraction frequency of the point cloud data in the time window T1 is at least twice the acquisition frequency of the point cloud frame, that is, when the acquisition time of one point cloud frame is at least twice the time window T1, two or more target perception results can be obtained during the acquisition of a single point cloud frame. The acquisition frequency of these target perception results is greater than the frame rate, thus realizing super-frame-rate perception.
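The frequency relationship can be checked with simple arithmetic, using the 10 Hz frame rate mentioned above and an assumed 50 ms window length:

```python
frame_rate_hz = 10.0                  # point cloud frame rate from the example
frame_period_s = 1.0 / frame_rate_hz  # 100 ms to accumulate one frame
window_len_s = 0.05                   # assumed time window T1 = 50 ms

windows_per_frame = frame_period_s / window_len_s   # 2 windows per frame
perception_rate_hz = 1.0 / window_len_s             # 20 Hz perception output
```

Any window length of 50 ms or less therefore yields at least two perception results per frame, i.e., a perception frequency of at least twice the frame rate.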
In step 301, target point cloud data within at least one time window inside the point cloud frame is extracted. The at least one time window may be multiple time windows of the same length; for example, as shown in FIG. 5A, within the duration of one point cloud frame, the time windows used to extract the different sets of target point cloud data all have the duration T1. It may also be multiple time windows of different lengths; for example, as shown in FIG. 5B, within the duration of one point cloud frame, the extraction windows may include windows of several different durations such as T1, T2, and T3. Of course, target point cloud data may also be extracted both in multiple windows of the same length and in multiple windows of different lengths. For example, as shown in FIG. 5C, within one point cloud frame, extraction is performed with identical windows of duration T1 and, synchronously, with identical windows of duration T2; the point cloud data extracted by the windows of duration T1 and by the windows of duration T2 overlaps, so that the point cloud data of the frame is extracted with two parallel sets of time windows. Of course, other ways of extracting the target point cloud data are also possible, which are not limited in the embodiments of the present application. Here, time windows of different lengths are used for target perception of objects with different attributes, where the attributes include at least one of: the type of the object, the size of the object, the distance of the object from the radar, the movement speed of the object, and so on. The correspondence between the length of the time window and the attributes of the object is described later.
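The window layouts of FIGS. 5A to 5C can be sketched as follows, with durations in integer milliseconds; the specific durations are illustrative assumptions.

```python
def windows(frame_start_ms, frame_end_ms, lengths_ms):
    """Tile one frame with time windows whose lengths cycle through lengths_ms."""
    out, t, i = [], frame_start_ms, 0
    while t < frame_end_ms:
        n = lengths_ms[i % len(lengths_ms)]
        out.append((t, min(t + n, frame_end_ms)))
        t += n
        i += 1
    return out

# One 100 ms frame (10 Hz frame rate):
a = windows(0, 100, [25])           # equal windows of duration T1 (cf. FIG. 5A)
b = windows(0, 100, [20, 30, 50])   # windows of different lengths (cf. FIG. 5B)
c = windows(0, 100, [50])           # a second, parallel schedule of T2 windows;
                                    # run alongside `a`, its extractions overlap
                                    # those of the T1 windows (cf. FIG. 5C)
```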
In addition, the at least one time window described above is a time window within one point cloud frame. For time windows within different point cloud frames, the window lengths may be the same or different, which is not limited in the embodiments of the present application.
It can be seen from the above embodiment that, since the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data in the time windows is at least twice the acquisition frequency of the point cloud frame, the target sensing method provided by the embodiments of the present application can, compared with point-cloud-frame-based target perception, realize super-frame-rate perception of the target area. That is, multiple target perception results are obtained in the course of acquiring a single point cloud frame, which enables faster perception of the surrounding environment and improves the real-time performance, sensitivity, and safety of target perception.
As introduced above, in order to perceive target objects of small size, the radar scans the key area within its field of view multiple times so as to acquire point cloud frames in which the density of the contained point cloud data reaches a certain value, thereby realizing focused monitoring of the target area. The area scanned multiple times differs depending on the scanning assembly of the radar. In some embodiments, the radar can scan a local area of its field of view multiple times, for example when its scanning assembly contains a galvanometer and/or a mirror; in other embodiments, the radar cannot select the area to be scanned multiple times and can only scan its entire field of view multiple times, for example when its scanning assembly contains a rotating prism. Therefore, the target area may be either the entire field of view of the radar or a local area of the field of view, which is not limited in the embodiments of the present application.
Regardless of whether the target area corresponds to the entire field of view of the radar or a local area thereof, in some embodiments the target point cloud data of a time window contains the point cloud data acquired by at least one scan of the target area by the radar.
The specific number of times the radar scans the target area within the time window is not limited in the embodiments of the present application and can be determined by those skilled in the art according to actual needs. For example, for application scenarios with relatively low requirements on the precision of the target perception results, such as when the objects to be recognized are relatively large, the requirement on point cloud density is not very high, and a target perception result meeting the requirements can be obtained from the point cloud data of a single scan within the time window. For application scenarios with relatively high precision requirements, such as when the objects to be recognized are relatively small, the requirement on point cloud density is somewhat higher, and multiple scans are needed to acquire point cloud data of sufficient density, on the basis of which a target perception result meeting the requirements is obtained. In addition, the number of scans of the target area within the time window may be determined based on multiple experiments, or pre-calculated based on a physical model, or, of course, determined in other ways, which is likewise not limited in the embodiments of the present application. Several specific embodiments are given below by way of illustration.
It can be seen from the above embodiment that when the target point cloud data of a time window contains the point cloud data acquired by at least one scan of the target area by the radar, target perception can be achieved for the target area. Moreover, the target perception results differ depending on the number of scans. When the target point cloud data of the time window contains point cloud data from fewer scans of the target area, the perception results obtained from that data have lower precision but a higher perception frequency; when it contains point cloud data from more scans, the perception frequency is relatively lower, but target perception results of higher precision can be obtained.
In some embodiments, the duration of the point cloud frame includes multiple time windows, and in step 301, extracting the target point cloud data within at least one time window inside the point cloud frame includes: extracting the target point cloud data within each of multiple consecutive time windows, for example as shown in FIGS. 5A to 5C. Likewise, the time windows may be multiple windows of the same length or multiple windows of different lengths; of course, target point cloud data may also be extracted both in multiple windows of the same length and in multiple windows of different lengths, which is not limited in the embodiments of the present application.
In some embodiments, in step 301, extracting the target point cloud data within at least one time window inside the point cloud frame includes: extracting the target point cloud data within each of multiple non-consecutive time windows. The time windows may be multiple windows of the same length (as shown in FIG. 5D) or multiple windows of different lengths; of course, target point cloud data may also be extracted both in multiple windows of the same length and in multiple windows of different lengths, which is not limited in the embodiments of the present application.
It can be seen from the above embodiments that when the target point cloud data within each of multiple consecutive time windows is extracted and used for target perception, the target area can be monitored continuously; when the target point cloud data within each of multiple non-consecutive time windows is extracted and used for target perception, a certain amount of computing resources can be saved. Those skilled in the art can choose the continuity of the time windows according to the actual application, so as to adapt to different application requirements.
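The distinction between consecutive and non-consecutive windows amounts to whether a gap is left between successive extraction windows. A sketch, again with illustrative millisecond values:

```python
def window_starts(frame_start_ms, frame_end_ms, window_len_ms, gap_ms=0):
    """Start times of the extraction windows inside one frame.

    gap_ms=0 gives consecutive windows that cover the whole frame;
    gap_ms>0 gives non-consecutive windows that skip the data in between,
    saving the computation that would be spent on the skipped ranges."""
    starts, t = [], frame_start_ms
    while t + window_len_ms <= frame_end_ms:
        starts.append(t)
        t += window_len_ms + gap_ms
    return starts

consecutive = window_starts(0, 100, 20)                 # five windows, no gaps
non_consecutive = window_starts(0, 100, 20, gap_ms=20)  # three windows with gaps
```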
In the target sensing method provided by the embodiments of the present application, as described above, the time windows may have different lengths, and time windows of different lengths are used for target perception of objects with different attributes.
In some embodiments, two time windows of different lengths may not overlap each other, for example as shown in FIG. 5B. When two time windows of different lengths do not overlap, the point cloud data within a given time range is used for super-frame-rate perception of objects with only one attribute. Suppose, for example, that there are only two time windows, where the time window T1 is used to perceive a dog 5 m from the radar and the time window T2 is used to perceive a person 5 m from the radar. When T1 and T2 do not overlap, the target point cloud data acquired within a given time range can only be used to perceive the dog at 5 m, or to perceive the person at 5 m.
In some embodiments, two time windows of different lengths may overlap, for example as shown in FIG. 5C. When two time windows of different lengths overlap, the point cloud data within the overlapping time range is in effect used for super-frame-rate perception of objects with at least two attributes. Continuing the example in which there are only two time windows, where T1 is used to perceive a dog 5 m from the radar and T2 is used to perceive a person 5 m from the radar: when T1 and T2 overlap, the target point cloud data acquired within the overlapping time range is used both to perceive the dog at 5 m and to perceive the person at 5 m.
It can be seen from the above embodiments that when two time windows of different lengths overlap, the point cloud data acquired within the same time range can simultaneously be used to perceive multiple objects with different attributes. This makes full use of the acquired point cloud data, improves its utilization, and yields richer target perception results.
Those skilled in the art should understand that the point cloud data acquired by the radar contains information such as the three-dimensional coordinates, and/or color, and/or reflectivity of the target objects in the radar's surrounding environment. After the detector of the radar detects the return light signals from the target objects to acquire point cloud data, the radar may forward the detected raw signals directly to another control unit for data processing without processing the point cloud data itself, or the radar may first perform certain data processing on the point cloud data to obtain the depth information or coordinate information corresponding to it.
In the case where the point cloud data is not processed by the radar and the detected raw signals are forwarded directly to another processing unit, the raw signals within the time window can be extracted directly for data processing and target perception. In the case where the radar first processes the point cloud data to obtain the corresponding depth information or coordinate information, the point cloud data can first be extracted according to certain data extraction rules based on its depth or coordinate information, and target perception is then performed on the extracted target point cloud data.
Therefore, in some embodiments, the duration of the point cloud frame includes at least two time windows of different lengths, where the data extraction rules corresponding to time windows of different lengths are different.
The data extraction rules corresponding to time windows of different lengths may be uniform rules preset by developers according to the needs of the application scenario. Alternatively, an initial data extraction rule may be set in the initial stage of target perception and then automatically adjusted during subsequent target perception, based on the actual perception performance corresponding to each time window, including the accuracy and speed of target perception, to arrive at a rule suited to the corresponding window length. Of course, other data extraction rules are also possible, which are not specifically limited in the embodiments of the present application.
In some embodiments, the target point cloud data extracted in time windows of different lengths corresponds to point cloud data within different depth ranges of the target area.
In some cases, the radar first performs certain data processing on the point cloud data to obtain the corresponding depth information or coordinate information. In such cases, for time windows of different lengths, point cloud data in the target area that satisfies different conditions can be extracted, and target perception is performed on the extracted data.
Taking the smart car application scenario as an example again: for a smart car, super-frame-rate perception of nearby targets is more important than super-frame-rate perception of distant ones, because a moving smart car needs to respond to nearby target objects more urgently. Therefore, when the target sensing method provided by the embodiments of the present application is applied for super-frame-rate perception, the point cloud data can first be filtered and extracted so that point cloud data in different depth ranges is extracted for time windows of different lengths, the depth range corresponding to the length of the time window.
In some embodiments, the target point cloud data extracted in time windows of different lengths corresponds to point cloud data in different orientations within the target area.
Again taking the smart car as an example: super-frame-rate perception of a target directly ahead is more important than super-frame-rate perception of targets in other directions, because targets in other directions have less influence on the car's travel than targets directly ahead of it. Therefore, when the target sensing method provided by the embodiments of the present application is applied for super-frame-rate perception, the point cloud data can first be filtered and extracted so that point cloud data in different orientations is extracted for time windows of different lengths, the orientation corresponding to the length of the time window.
通过上述实施例可以看到,对不同长度的时间窗口,提取所述目标区域中不同 深度范围或者不同方位内的的点云数据,然后基于不同深度范围或者不同方位内的点云数据进行超帧率感知,既能够实现对所述目标区域中的重点深度范围或者重点方位的目标感知,又能够节省一定的用于进行超帧率感知的计算资源。It can be seen from the above embodiments that, for time windows of different lengths, point cloud data in different depth ranges or in different orientations in the target area are extracted, and then superframes are performed based on the point cloud data in different depth ranges or in different orientations. The rate perception can not only realize the target perception of the key depth range or key orientation in the target area, but also save a certain amount of computing resources for super frame rate perception.
For time windows of different lengths, target point cloud data can be extracted according to different data extraction rules. Specific embodiments are given below. Those skilled in the art should understand that the following embodiments are merely illustrative; other data extraction rules may also be used for time windows of different lengths, and the embodiments of the present application place no limitation on this.
In some embodiments, the data extraction rule includes: for time windows of different lengths, extracting target point cloud data that satisfies a preset condition, the preset condition corresponding to the length of the time window; or, for time windows of different lengths, extracting all point cloud data within each time window.
When the radar first performs some processing on the point cloud data so that each point carries depth information, coordinate information, or the like, then for the time windows of different lengths used to perceive objects with different attributes, it can first be determined whether the point cloud data within each time window satisfies the preset condition corresponding to that window length. If so, the point cloud data is extracted as target point cloud data for super-frame-rate perception; if not, it is not extracted. Take again the example of time window T1 being used to perceive a puppy 5 m from the radar. The multiple point cloud data acquired within time window T1 may correspond to different depths. If the target to be perceived is fixed — for example, specifically the puppy 5 m from the radar — then among the point cloud data acquired within time window T1, only the points at a depth of 5 m are useful for target perception; the rest are redundant. Therefore, only the point cloud data at a depth of 5 m may be extracted as the target point cloud data, on which target perception is then performed.
In some embodiments, the preset condition may be that the target point cloud data lies at a preset depth, or that its coordinates lie at preset coordinates; of course, other preset conditions are also possible, and the embodiments of the present application place no limitation on this.
Alternatively, instead of checking whether the point cloud data within each time window satisfies the preset condition corresponding to that window length, target perception may be performed directly on all point cloud data within each time window. This likewise achieves super-frame-rate perception of the target object corresponding to the time window.
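The preset-condition extraction described above can be sketched as follows. This is a minimal illustration only: the point format, the tolerance value, and the function name are assumptions, not from the present application.

```python
# Hypothetical sketch: from the points collected in one time window, keep
# only those whose depth matches the preset condition for that window
# (here, a target depth within a small tolerance).

def extract_target_points(points, preset_depth, tolerance=0.2):
    """Keep only points whose depth is close to the preset depth (meters)."""
    return [p for p in points if abs(p["depth"] - preset_depth) <= tolerance]

# Points gathered within time window T1, each with a depth in meters.
window_t1_points = [
    {"depth": 5.0}, {"depth": 5.1}, {"depth": 12.3}, {"depth": 4.9},
]

# Only the points near 5 m are kept as target point cloud data; the point
# at 12.3 m is treated as redundant for this window.
target_points = extract_target_points(window_t1_points, preset_depth=5.0)
print(len(target_points))  # 3
```

The same structure would apply for a coordinate-based preset condition, with the predicate inside the comprehension swapped accordingly.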
Time windows of different lengths are used to perceive target objects with different attributes. A target object close to the radar usually needs to be perceived at a higher frequency, while a target object farther from the radar can tolerate a lower perception frequency than a closer one. Accordingly, when extracting target point cloud data, for a shorter time window, point cloud data at a smaller depth can be extracted as the target point cloud data, so that closer target objects are perceived at a higher frequency; for a longer time window, point cloud data at a larger depth can be extracted as the target point cloud data, so that somewhat more distant target objects are perceived at a slightly lower frequency.
Therefore, in some embodiments, the data extraction rule includes: for a first time window, extracting target point cloud data at a first depth; and for a second time window whose length is greater than that of the first time window, extracting target point cloud data at a second depth, the first depth being smaller than the second depth.
As the above embodiments show, target point cloud data at a smaller depth is extracted for a shorter time window to achieve super-frame-rate perception of close target objects, and target point cloud data at a larger depth is extracted for a longer time window to achieve super-frame-rate perception of distant target objects. This satisfies real-world perception requirements for target objects at different distances, allowing the movable platform carrying the radar to obtain the situation of target objects at different distances and respond in time, improving the operational safety of the movable platform.
When the data extraction rule is to extract all point cloud data within each of the time windows of different lengths, step 302 — obtaining the target perception result of each time window based on the point cloud data extracted within it — may consist of performing target perception directly on all extracted point cloud data to obtain the super-frame-rate perception result corresponding to that time window.
In addition, in some embodiments, step 302 — obtaining the target perception result of each time window based on the point cloud data extracted within it — may, as shown in FIG. 6, include:
Step 601: acquiring the depths or coordinates of all the point cloud data, and determining the point cloud data that satisfies a preset depth or preset coordinates;
Step 602: according to the point cloud data satisfying the preset depth or coordinates, acquiring a target perception result containing the target perception object located at the preset depth.
Through the above embodiments, even if all point cloud data within the time window is extracted in step 301, the extracted point cloud data can still be screened once more before target perception: the point cloud data satisfying the preset depth or coordinates is selected as the target point cloud data on which target perception is performed. This makes it possible to obtain a target perception result at a given depth while saving some computing resources in the target perception process.
As described above, time windows of different lengths are used to perform target perception on objects with different attributes, the attributes including at least one of: the kind of the object, the size of the object, the distance of the object from the radar, the motion speed of the object, and so on.
In some embodiments, step 302 — obtaining the target perception result of each time window based on the point cloud data extracted within it — includes: based on time windows of different lengths, obtaining target perception results containing different kinds of target perception objects. For example, based on a time window of length T1, a target perception result containing a puppy is obtained; based on a time window of length T2, a target perception result containing a truck is obtained.
There are many possible ways to obtain target perception results containing different kinds of target perception objects, and the embodiments of the present application place no limitation on this. Several embodiments are given below.
In some embodiments, obtaining target perception results containing different kinds of target perception objects based on time windows of different lengths includes: obtaining the target perception results according to multiple different preset target perception methods, where each target perception method corresponds to a time window of one length and is used to recognize one preset kind of target object.
Different kinds of objects have different characteristics; a puppy and a truck, for example, differ in theirs. Taking the characteristics of different kinds of objects as prior knowledge, there exist correspondingly target perception methods adapted to each of those kinds. In the above embodiment, multiple different target perception methods corresponding to different kinds of objects can be used to perform target perception on the target point cloud data of time windows of different lengths. For example, suppose it is determined in advance that the target perception objects to be recognized are a puppy and a truck; the puppy corresponds to a time window of length T1 and the truck to a time window of length T2, with T1 not equal to T2; furthermore, it is preset that a first target perception method is used to perceive the puppy and a second target perception method to perceive the truck. Applying the target perception method provided by the embodiments of the present application, in step 301 the target point cloud data within time windows T1 and T2 of the point cloud frame is extracted; in step 302, based on the first target perception method, target perception is performed on the target point cloud data extracted within time window T1 to obtain a first target perception result containing the puppy, and based on the second target perception method, target perception is performed on the target point cloud data extracted within time window T2 to obtain a second target perception result containing the truck.
In some embodiments, the multiple different target perception methods may perform target perception based on multiple different neural network models, each neural network model being used to recognize one preset kind of target object. Of course, each of the target perception methods may also be an algorithm other than a neural network model — for example, a traditional feature-based recognition algorithm — used to recognize one preset kind of target object; the embodiments of the present application place no limitation on this.
As the above embodiments show, since each target perception method corresponds to a time window of one length and is used to recognize one preset kind of target object, the accuracy of target perception performed by such a method for each window length is relatively high.
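The pairing of each window length with its own perception method can be sketched as follows. The window lengths, the detector logic, and all names are hypothetical placeholders standing in for the first and second target perception methods; they are not from the present application.

```python
# Hypothetical per-window dispatch: each preset time-window length is
# bound to the perception method that recognizes its kind of target.

def first_perception_method(target_points):
    """Placeholder detector for the puppy (short window T1)."""
    return "puppy" if target_points else None

def second_perception_method(target_points):
    """Placeholder detector for the truck (longer window T2)."""
    return "truck" if target_points else None

T1, T2 = 0.01, 0.05  # assumed window lengths in seconds, T1 != T2
METHOD_BY_WINDOW = {
    T1: first_perception_method,
    T2: second_perception_method,
}

def perceive(window_length, target_points):
    """Run the perception method preset for this window length."""
    return METHOD_BY_WINDOW[window_length](target_points)

print(perceive(T1, [(0.1, 0.2, 5.0)]))   # perception result for T1
print(perceive(T2, [(1.0, 0.5, 20.0)]))  # perception result for T2
```

In practice each entry would hold a trained model or feature-based recognizer rather than a toy function, but the window-to-method binding would have the same shape.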
Of course, some existing target perception methods can perform target perception on multiple preset kinds of target objects. Therefore, in some embodiments, step 302 — obtaining target perception results containing different kinds of target perception objects based on time windows of different lengths — includes: obtaining the target perception results according to a single preset target perception method, where that method is used to recognize multiple preset kinds of target objects.
For example, suppose it is determined in advance that the target perception objects to be recognized are a puppy and a truck; the puppy corresponds to a time window of length T1 and the truck to a time window of length T2, with T1 not equal to T2; and a third target perception method can be used to perceive both the puppy and the truck. Applying the target perception method described in the above embodiments of the present application, in step 302 target perception can be performed, based on the third target perception method, on the target point cloud data extracted within time window T1 and within time window T2, obtaining respectively a first target perception result containing the puppy and a second target perception result containing the truck.
In some embodiments, the single target perception method may perform target perception based on a single neural network model used to recognize multiple preset kinds of target objects. Of course, it may also be an algorithm other than a neural network model that recognizes multiple preset kinds of target objects; the embodiments of the present application place no limitation on this.
As the above embodiments show, since one target perception method corresponds to time windows of multiple lengths and is used to recognize multiple preset kinds of target objects, such a method occupies less storage space for target perception at each window length and has broader applicability.
As described above, the time windows of different lengths are used to perform target perception on objects with different attributes. The following describes how the lengths of the time windows corresponding to objects with different attributes are determined.
In some embodiments, the length of the time window is preset. There are many possible methods for presetting the length of the time window, and the embodiments of the present application place no limitation on this. Several specific examples are given below.
In some embodiments, the length of the time window may be determined, after the software and hardware parameters of the radar are fixed, by using the radar to run multiple experiments on different target objects in different target areas; the time windows corresponding to the different target objects are determined from these experiments, the length of each window corresponding to its target object.
In some embodiments, the length of the time window may be determined based on a predefined algorithm. Many algorithms are possible, and the embodiments of the present application place no limitation on this. An embodiment of one specific algorithm is given below:
FIG. 7 is a schematic diagram of the radar in an embodiment of the present application scanning its surrounding environment. In it, 701 is the radar, 702 is a target object located in the radar's target area, points A and B are the spatial positions in the target area corresponding to two adjacent point cloud data acquired by the radar during scanning, d denotes the distance from the target object to the radar, h denotes the size of the target object, and r denotes the point cloud angular resolution of the radar.
Once the software and hardware parameters of the radar are fixed, the variation of the radar's point cloud resolution in the target area can be recorded and analyzed in advance. Suppose the radar scans the target area X times within the duration of acquiring one point cloud frame; then the point cloud angular resolution r of the radar in the target area, as a function of the number of scans x, can be written as:
r = f(x),  x = 1, 2, …, X       (1)
where X is a positive integer. When the radar scans the target area repeatedly, the more times the area is scanned, the smaller the angular resolution becomes — that is, target objects of smaller size can be perceived from the point cloud data acquired by the radar. The radar's effective perception capability is closely related to its angular resolution: the smaller the angular resolution, the farther the effective perception distance; equivalently, at the same distance, the radar can perceive smaller target objects at a smaller angular resolution.
Taking the vertical angular resolution as an example, suppose that at distance d in the target area there is a target object of height h. The angular resolution required to perceive this target object should then satisfy:
r ≤ arctan(h / d)       (2)
Combining this with the relationship above between the angular resolution r and the number of scans x, the highest frequency v_max at which this target object can be perceived within the duration of one point cloud frame is:
v_max = X / N,  N = f⁻¹(arctan(h / d))       (3)
Therefore, for this target object, super-frame-rate perception at a maximum frequency of v_max can in theory be achieved.
In addition, because the radar's scanning is not continuous, let Δt denote the time interval between two scans of the target area by the radar. In practice, the point cloud data within the target area can then be taken at most once every duration t, where t is the length of the time window described above, which can be expressed as:
t = Δt · ⌈N⌉       (4)
where ⌈N⌉ denotes rounding N up to the nearest integer.
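Under the assumption that the resolution curve takes the simple form f(x) = r0 / x (the form of f is left unspecified in the description, and all numeric values below are invented for illustration), the window-length calculation can be evaluated numerically:

```python
import math

# Hypothetical numeric sketch of the window-length algorithm. Assumption:
# the angular resolution after x scans is f(x) = r0 / x; r0, X, delta_t,
# h, and d are invented example values.
r0 = 0.03          # angular resolution of a single scan, in radians
X = 10             # scans of the target area per point cloud frame
delta_t = 0.01     # interval between two scans of the target area, in s
h, d = 1.0, 100.0  # target height (m) and distance from the radar (m)

required = math.atan(h / d)   # resolution needed to perceive the target
N = r0 / required             # scans needed: smallest x with f(x) <= required
t = delta_t * math.ceil(N)    # shortest usable time-window length
v_max = X / N                 # perceptions possible per frame duration

print(math.ceil(N))  # 4: four scans are needed, so t = 4 * delta_t
```

A smaller or nearer target raises the required resolution, increasing N and therefore lengthening the shortest usable time window.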
When applying the target perception method provided by the embodiments of the present application, one or more target perception objects may be determined in advance, and the lengths of the time windows corresponding to these objects may then be determined based on the above algorithm. Of course, those skilled in the art should understand that this algorithm embodiment is merely illustrative, not exhaustive; other algorithms may also be used to determine the length of the time window, and the embodiments of the present application place no limitation on this.
Of course, the length of the time window can also be determined in ways other than presetting. In some embodiments, the length of the time window may be determined according to historical target perception results, where the historical target perception results include at least one of: target perception results obtained from point cloud frames preceding the current time window, or target perception results obtained from point cloud data within time windows preceding the current time window.
Based on the above embodiment, the target perception objects need not be determined in advance; instead, the length of the current time window is determined adaptively from the historical target perception results obtained before the current time window. For example, if the target perception result of the previous point cloud frame shows a moving car 5 m in front of the radar, the length of the time window corresponding to that car can be determined based on the above algorithm, and super-frame-rate perception of the car can be performed at the highest frequency it allows. This approach does not depend on presetting, has broader applicability and flexibility, and can be applied in complex environments.
In some embodiments, determining the length of the time window according to the historical target perception results may, as shown in FIG. 8, include:
Step 801: determining a target perception object according to the historical target perception results;
Step 802: determining the length of the time window based on the target perception object.
After the historical target perception results are obtained, much information can be extracted from them — for example, whether objects exist in the environment around the radar, whether those objects include target perception objects requiring special attention, whether any of them are moving, and so on. In the above embodiment, the target perception object can first be determined from the historical target perception results; this determination can be implemented with reference to the related art. For example, the objects contained in the historical target perception results may be identified and then screened against certain conditions to determine the target perception objects. The conditions may be that an object's size meets a certain threshold, that its depth meets a certain depth, that it is an object of a certain kind, and so on. The length of the time window is then determined, based on the determined target perception object, according to the various embodiments described above.
For step 802, determining the length of the time window based on the target perception object, multiple implementations also exist. For example, based on the kind of the target perception object, the length of the time window corresponding to it may be looked up in a preset mapping from perception objects to time window lengths and used as the length of the current time window. Of course, the length of the time window can also be determined in other ways.
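A preset mapping from object kind to window length, as described for step 802, might be sketched as follows. The kinds, lengths, and fallback value are illustrative assumptions.

```python
# Hypothetical preset mapping from the kind of the target perception
# object to the length of its time window, in seconds.
WINDOW_BY_KIND = {
    "pedestrian": 0.02,
    "car": 0.01,
    "truck": 0.05,
}
DEFAULT_WINDOW = 0.05  # assumed fallback for kinds not in the mapping

def window_for(kind):
    """Look up the current time-window length for a perceived object kind."""
    return WINDOW_BY_KIND.get(kind, DEFAULT_WINDOW)

print(window_for("car"))      # 0.01
print(window_for("bicycle"))  # 0.05 (unknown kind: fallback)
```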
In some embodiments, determining the length of the time window based on the target perception object may, as shown in FIG. 9, include:
Step 901: based on the target perception object, determining the motion speed of the target perception object and/or the depth of the target perception object and/or the target point cloud angular resolution;
Step 902: determining the length of the time window based on the motion speed and/or the depth and/or the target point cloud angular resolution.
In the above embodiment, much attribute information can be obtained from the target perception object determined from the historical target perception results. For example, from a target perception object determined from one or more historical target perception results, the distance of the object from the radar (i.e., the depth of the target perception object) and its size can be obtained; from a target perception object determined from multiple historical target perception results, its motion speed can be determined.
Once the depth of the target perception object is determined, the length of the time window corresponding to that depth can be looked up in a preset mapping from depths to time window lengths and used as the length of the current time window; or, based on a fixed height value, the algorithm described above or another algorithm can be applied to determine the length of the current time window. Once the motion speed of the target perception object is determined, the length of the time window corresponding to that speed can be looked up in a preset mapping from motion speeds to time window lengths and used as the length of the current time window. Once the target point cloud angular resolution corresponding to the target perception object is determined, the algorithm described above or another algorithm can be applied to determine the length of the current time window.
Of course, those skilled in the art should understand that these embodiments of determining the length of the time window based on the motion speed and/or the depth and/or the target point cloud angular resolution are merely illustrative; the length of the time window can also be determined from this information in other ways, and the embodiments of the present application place no limitation on this.
在一些实施例中,步骤902,基于所述运动速度和\或所述深度和\或所述目标角分辨率,确定所述时间窗口的长度,可以包括:对第一运动速度的目标感知对象确定第一长度的时间窗口,对第二运动速度的目标感知对象确定第二长度的时间窗口,所述第一运动速度大于所述第二运动速度,所述第一长度小于所述第二长度。In some embodiments, step 902, determining the length of the time window based on the motion speed and/or the depth and/or the target angular resolution, may include: perceiving the object at the first motion speed A time window of a first length is determined, and a time window of a second length is determined for the target perception object of the second movement speed, the first movement speed is greater than the second movement speed, and the first length is smaller than the second length .
In the above embodiment, after the movement speeds of multiple target perception objects are obtained based on historical target perception results, time windows of different lengths may be set according to the differences in movement speed. For example, when the target area contains both a fast-running pedestrian and a stationary pedestrian, the fast-running pedestrian has a greater impact on the movable platform that carries the radar, so target perception should be performed with a shorter time window, so that this target perception object is perceived in real time and can be responded to promptly.

In some embodiments, step 902, determining the length of the time window based on the movement speed and/or the depth and/or the target angular resolution, may include: determining a time window of a third length for a target perception object at a third depth, and determining a time window of a fourth length for a target perception object at a fourth depth, where the third depth is smaller than the fourth depth and the third length is smaller than the fourth length.

In the above embodiment, after the depths of multiple target perception objects (that is, their distances from the radar) are obtained based on historical target perception results, time windows of different lengths may be set according to the differences in depth. For example, when the target area contains a car only 5 m from the radar and another car 10 m from the radar, the closer car has a greater impact on the movable platform that carries the radar, so target perception should be performed with a shorter time window, so that this target perception object is perceived in real time and can be responded to promptly.

In the above two embodiments, the target perception objects with different speeds and depths were described as being of the same kind by way of example. Those skilled in the art should understand that the target perception objects with different speeds and depths may also be of different kinds, which is not limited in the embodiments of the present application.
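The speed- and depth-based window selection described above can be sketched as follows. This is a minimal illustration only: the function name, the threshold values, and the window lengths are assumptions for demonstration and do not appear in the application; faster or closer objects simply receive shorter windows.

```python
def choose_window_length(speed_mps, depth_m,
                         min_len_s=0.01, max_len_s=0.1):
    """Pick a time-window length for one perceived object.

    Faster or closer objects get shorter windows (more frequent
    perception); slower or farther ones get longer windows.
    All thresholds below are illustrative, not from the patent.
    """
    if speed_mps > 5.0 or depth_m < 8.0:    # e.g. running pedestrian, near car
        return min_len_s
    if speed_mps > 1.0 or depth_m < 20.0:   # moderate speed or distance
        return (min_len_s + max_len_s) / 2
    return max_len_s                        # slow and far: longest window
```

A faster object at the same depth, or a closer object at the same speed, therefore yields a strictly shorter window, matching the first/second and third/fourth length relations above.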
In some embodiments, in the process of acquiring the point cloud data constituting one point cloud frame, after the target point cloud data within at least one time window of the frame is extracted, target perception may be performed directly, in real time, on the point cloud data within each time window, and the target perception result of that time window is output; a flowchart of this output process is shown in FIG. 10. Those skilled in the art should understand that FIG. 10 only takes the extraction of point cloud data within a single time window and super frame rate perception as an example. Of course, as described above, point cloud data within multiple different time windows may also be extracted simultaneously, and the different time windows may overlap, so as to obtain multiple target perception results containing different target perception objects.

As described above, a point cloud frame contains the point cloud data acquired by the radar scanning its field of view. Compared with the point cloud data of a time window, the point cloud data corresponding to the point cloud frame is denser and can yield more spatially detailed target perception results. Therefore, in some embodiments, as shown in FIG. 11, the method may further include:
Step 1101: acquiring the perception result of the point cloud frame;

Step 1102: outputting the perception result of the point cloud frame in association with a plurality of first perception results.

Each of the first perception results is a target perception result acquired, in the process of acquiring the point cloud data constituting the point cloud frame, based on the point cloud data of one time window.
A flowchart of the above embodiment is shown in FIG. 12. The perception result of the point cloud frame may be output in association with the multiple first perception results in various ways. For example, the multiple first perception results may be output one by one in the order in which they were acquired, while the perception result of the point cloud frame is output continuously throughout this process, thereby achieving the corresponding output of the point cloud frame's perception result and the multiple first perception results. Of course, other association output manners may also be used, which are not limited in the embodiments of the present application.

It can be seen from the above embodiment that outputting the perception result of the point cloud frame in association with the multiple first perception results makes it possible to simultaneously output the more detailed target perception result obtained from the point cloud frame and the more real-time target perception results obtained from the multiple first perception results, satisfying both the spatial sensitivity requirement and the temporal sensitivity requirement of perception.
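One possible form of the associated output, emitting each per-window (first) perception result in acquisition order together with the frame-level result it belongs to, can be sketched as follows. The record structure and names are illustrative assumptions, not part of the application:

```python
def associate_outputs(frame_result, first_results):
    """Yield one record per first (per-window) perception result,
    in acquisition order, each carrying the frame-level result it
    is associated with. Structure and field names are illustrative."""
    for idx, window_result in enumerate(first_results):
        yield {
            "frame_result": frame_result,    # detailed, frame-level result
            "window_index": idx,             # acquisition order of the window
            "window_result": window_result,  # real-time, per-window result
        }
```

Each emitted record thus pairs the spatially detailed frame result with one temporally fine-grained window result.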
In some embodiments, before the target point cloud data within at least one time window of the point cloud frame is extracted, as shown in FIG. 13, the method further includes:

Step 1301: receiving a trigger instruction;

Step 1302: outputting, under the trigger of the trigger instruction, target perception mode selection information, where the target perception modes at least include: a super frame rate perception mode, and/or a normal frame rate perception mode;

Step 1303: when the super frame rate perception mode is determined, extracting the target point cloud data within at least one time window and outputting the acquired target perception result; when the normal frame rate perception mode is determined, performing target perception on the point cloud frame and outputting the acquired target perception result.

In some examples, in the super frame rate perception mode, the target point cloud data within at least one time window is extracted and the acquired target perception result is output, and target perception is also performed on the point cloud frame and the acquired target perception result is output.

It can be seen from the above embodiment that determining the perception mode based on the received trigger instruction before super frame rate perception is performed, and then outputting the target perception result accordingly, facilitates interaction with the user and enhances the user experience.
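The mode dispatch of steps 1301 to 1303 can be sketched as below. This is a minimal illustration under stated assumptions: `perceive_fn` stands in for a real detector (here it merely counts points), and the mode constants are names chosen for the sketch, not identifiers from the application.

```python
SUPER_FRAME_RATE = "super_frame_rate"
NORMAL_FRAME_RATE = "normal_frame_rate"

def perceive(mode, frame_points, window_points_list, perceive_fn=len):
    """Dispatch on the selected perception mode.

    In super-frame-rate mode, perception runs once per time window
    (one result per window); in normal-frame-rate mode it runs once
    on the whole point cloud frame (a single result)."""
    if mode == SUPER_FRAME_RATE:
        return [perceive_fn(w) for w in window_points_list]
    if mode == NORMAL_FRAME_RATE:
        return [perceive_fn(frame_points)]
    raise ValueError(f"unknown perception mode: {mode}")
```

In the variant where super frame rate mode also outputs the frame-level result, the first branch would additionally append `perceive_fn(frame_points)`.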
In some embodiments, as shown in FIG. 14, the method may further include:

Step 1401: acquiring the target perception results of multiple identical time windows;

Step 1402: acquiring, based on the acquired multiple target perception results, the motion trajectory of the target perception object corresponding to the multiple identical time windows;

Step 1403: outputting the motion trajectory.

Through the above embodiment, the motion trajectory of the target perception object is determined and output based on the target perception results of multiple identical time windows, enabling the user or monitoring personnel to respond accordingly based on the motion trajectory.
In some embodiments, as shown in FIG. 15, the method may further include:

Step 1501: acquiring, based on the motion trajectory, a predicted motion trajectory of the target perception object;

Step 1502: outputting the predicted motion trajectory.

Step 1501, acquiring the predicted motion trajectory of the target perception object based on the motion trajectory, may be implemented with reference to related technologies, or by an algorithm independently developed by a developer, which is not limited in the embodiments of the present application. It can be seen from the above embodiment that determining the predicted motion trajectory of the target perception object based on the acquired target perception results makes it possible to make relatively reliable predictions about the target objects around the radar, and thus to respond to those target objects more promptly, improving the sensitivity and safety of the system.
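As one example of the "related technologies" step 1501 defers to, a constant-velocity extrapolation over the per-window track can be sketched as follows. The track representation, (t, x, y) tuples with one observation per identical time window, is an assumption made for the sketch:

```python
def predict_next_position(track, dt):
    """Constant-velocity extrapolation over a track of (t, x, y)
    observations (one per identical time window). A simple stand-in
    for a real trajectory predictor; not the patent's own algorithm.
    """
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)   # velocity from the last two observations
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt, y1 + vy * dt)
```

Shorter time windows give more closely spaced observations, so even this simple predictor benefits directly from super frame rate perception.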
The target area in each of the above embodiments may be the entire field of view of the radar, or a part of it. For smart cars, smart robots, and the like, the central area of the field of view of the acquired point cloud frame is usually the area that needs to be monitored most closely. Therefore, in some embodiments, the target area is the central area of the field of view of the point cloud frame. Of course, the target area may also be another area set according to application requirements, which is not limited in the embodiments of the present application.

In some embodiments, scanning the target area multiple times by the radar may be achieved by controlling the rotational speed of at least one component of the radar's scanning module within the target area to reach at least a preset threshold. The threshold may be determined based on a predetermined target perception object, or based on a target perception object determined from historical frames; the embodiments of the present application do not limit how the threshold is determined.
In some embodiments, the scanning module includes a double-prism scanning assembly composed of a first prism and a second prism, and the threshold is determined by the rotational speed corresponding to the first prism and the second prism each rotating two full turns within the time window; the rotational speeds of the first prism and the second prism are equal in magnitude and opposite in direction.

A schematic diagram of the double-prism scanning assembly composed of the first prism and the second prism may be as shown in FIG. 16, where 1601 is the first prism and 1602 is the second prism. The above embodiment is illustrated with a specific example: when the equal-speed counter-rotation strategy is adopted, the rotational speeds of the first prism and the second prism are w1 and -w1, respectively. When the output frame rate of the radar's point cloud frames is 10 Hz, set w1 = N*r, where r is the rotational speed corresponding to one full turn of the driving component (such as a motor) of the scanning assembly, and N is an integer greater than or equal to 1. When N > 1, super frame rate scanning is achieved. When the first prism and the second prism deflect light equally, the target area is scanned 2N times within the duration of a single point cloud frame (as shown in FIG. 17A); when the first prism and the second prism differ slightly in their ability to deflect light, the target area is scanned N times within the duration of a single point cloud frame (as shown in FIG. 17B).
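The arithmetic of this biprism example can be made concrete as follows. The function names are illustrative; the relations themselves (w1 = N*r, and 2N or N scans per frame depending on whether the two prisms deflect light equally) are those stated above for FIG. 17A/17B.

```python
def prism_speed(N, r):
    """w1 = N*r: prism speed as an integer multiple N >= 1 of the
    drive speed r (one drive turn per point cloud frame period)."""
    if N < 1:
        raise ValueError("N must be an integer >= 1")
    return N * r

def biprism_scan_count(N, equal_deflection=True):
    """Scans of the target area per point cloud frame for the
    double-prism assembly at speeds +N*r / -N*r: 2N scans when both
    prisms deflect light equally (FIG. 17A), N scans when their
    deflection differs slightly (FIG. 17B)."""
    if N < 1:
        raise ValueError("N must be an integer >= 1")
    return 2 * N if equal_deflection else N
```

For example, at a 10 Hz frame rate with N = 3 and equal deflection, the target area is scanned 6 times per frame, i.e. an effective 60 Hz perception rate for that area.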
In some embodiments, the scanning module includes a triple-prism scanning assembly composed of a third prism, a fourth prism, and a fifth prism, and the threshold is determined by the rotational speed corresponding to the fifth prism rotating two full turns within the time window; the rotational speeds of the third prism and the fourth prism are equal in magnitude and opposite in direction.

A schematic diagram of the triple-prism scanning assembly composed of the third prism, the fourth prism, and the fifth prism may be as shown in FIG. 18, where 1801 to 1803 are the third prism, the fourth prism, and the fifth prism, respectively. The above embodiment is illustrated with a specific example: when two of the prisms adopt the equal-speed counter-rotation strategy (taking the fourth prism and the fifth prism rotating at equal speeds in opposite directions as an example), the rotational speeds of the third prism, the fourth prism, and the fifth prism are w1, w2, and -w2, respectively. When the output frame rate of the radar's point cloud frames is 10 Hz, set w1 = M1*r + dw1 and w2 = M2*r + dw2, where r is the rotational speed corresponding to one full turn of the driving component (such as a motor) of the scanning assembly, dw1 and dw2 are both integers between 0 and r, and M1 and M2 are both integers greater than 1. When M1 > 1, super frame rate perception is achieved.
In some embodiments, the scanning module includes a scanning assembly composed of a sixth prism and a reflecting mirror, and the threshold is jointly determined by the rotational speed corresponding to the sixth prism rotating two full turns within the time window and the rotational speed corresponding to the mirror scanning the preset scanning range twice within the time window. For example, the rotational speeds of the sixth prism and the mirror are w1 and w2, respectively, both greater than or equal to the rotational speed r corresponding to one full turn of the driving component (such as a motor) of the scanning assembly. To achieve super frame rate scanning, the rotational speeds of the sixth prism and the mirror are simultaneously raised to H times their respective original speeds, where H is a positive integer greater than or equal to 2; the entire target area is then scanned H times within the duration of one point cloud frame.

In some embodiments, the scanning module includes a galvanometer, and the threshold is determined by the rotational speed corresponding to the galvanometer scanning the preset scanning range twice within the time window.

Of course, those skilled in the art should understand that the above embodiments are merely exemplary implementations of the radar scanning the target area multiple times. A corresponding implementation of multiple scans may also be determined based on the system structure of the radar, which is not limited in the embodiments of the present application.
It can be seen from the above embodiments that, with the target perception method provided by the embodiments of the present application, in the process of acquiring the point cloud data constituting one point cloud frame, the target point cloud data within at least one time window of the frame is extracted, the target perception result of each time window is acquired based on the target point cloud data extracted within that window, and the target perception result of the time window is then output. Because the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame, the target perception method provided by the embodiments of the present application can, compared with target perception based on whole point cloud frames, achieve super frame rate perception of the target area, perceive the surrounding environment faster, and improve the real-time performance, sensitivity, and safety of target perception.
Corresponding to the target perception methods provided by the above embodiments, an embodiment of the present application further provides a target perception device, a schematic structural diagram of which is shown in FIG. 19. The target perception device 1901 includes a processor 1902, a memory 1903, and a computer program stored in the memory 1903 and executable on the processor 1902, where the processor 1902 implements the following steps when executing the program:

in the process of acquiring the point cloud data constituting one point cloud frame, extracting the target point cloud data within at least one time window of the frame, the target point cloud data corresponding to the target area in the field of view of the point cloud frame;

acquiring the target perception result of each time window based on the target point cloud data extracted within that time window;

outputting the target perception result of the time window.

The point cloud frame contains the point cloud data acquired by the radar scanning the target area multiple times, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame.
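The core windowed-extraction step can be sketched as follows. This is a minimal illustration under stated assumptions: points are (t, x, y, z) tuples, `in_target_area` is a caller-supplied predicate, and all names are chosen for the sketch rather than taken from the application. The guard enforces the stated constraint that extraction runs at at least twice the frame acquisition frequency.

```python
def window_starts(frame_period_s, window_len_s):
    """Start times of consecutive extraction windows inside one frame.

    At least two windows must fit per frame, since the window
    extraction frequency must be at least twice the frame frequency."""
    if window_len_s > frame_period_s / 2:
        raise ValueError("extraction must run at >= 2x the frame rate")
    n = int(frame_period_s // window_len_s)
    return [i * window_len_s for i in range(n)]

def extract_window(points, t_start, t_end, in_target_area):
    """Target point cloud data of one window: points whose timestamp
    falls in [t_start, t_end) and whose position lies in the target
    area of the field of view."""
    return [p for p in points
            if t_start <= p[0] < t_end and in_target_area(p)]
```

Perception then runs on each extracted window as it completes, without waiting for the full frame to accumulate.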
Optionally, the target point cloud data of the time window contains the point cloud data acquired by the radar scanning the target area at least once.

Optionally, the duration of the point cloud frame includes multiple consecutive time windows, and extracting the target point cloud data within at least one time window of the point cloud frame includes: extracting the target point cloud data within each of the multiple consecutive time windows.

Optionally, two time windows of different lengths overlap.

Optionally, the duration of the point cloud frame includes at least two time windows of different lengths, where time windows of different lengths correspond to different data extraction rules.

Optionally, the target point cloud data extracted within time windows of different lengths corresponds to point cloud data within different depth ranges in the target area; or, the target point cloud data extracted within time windows of different lengths corresponds to point cloud data in different orientations in the target area.

Optionally, time windows of different lengths correspond to different data extraction rules, and the data extraction rules include: for time windows of different lengths, extracting target point cloud data that satisfies a preset condition, the preset condition corresponding to the length of the time window; or, for time windows of different lengths, extracting all the point cloud data within each time window.

Optionally, the preset condition includes at least one of the following: the target point cloud data is located at a preset depth, or the coordinates of the target point cloud data are located at preset coordinates.

Optionally, the data extraction rules include: for a first time window, extracting target point cloud data at a first depth, and for a second time window whose length is greater than that of the first time window, extracting target point cloud data at a second depth, where the first depth is smaller than the second depth.
Optionally, the data extraction rule is to extract all the point cloud data within each of the time windows of different lengths, and acquiring the target perception result of each time window based on the point cloud data extracted within that window includes:

acquiring the depths or coordinates of all the point cloud data, and determining the point cloud data that satisfies a preset depth or preset coordinates;

acquiring, based on the point cloud data that satisfies the preset depth or coordinates, a target perception result containing the target perception object located at the preset depth.

Optionally, acquiring the target perception result of each time window based on the point cloud data extracted within that window includes: acquiring, based on time windows of different lengths, target perception results containing different kinds of target perception objects.

Optionally, acquiring, based on time windows of different lengths, target perception results containing different kinds of target perception objects includes:

acquiring the target perception results according to multiple preset different target perception methods, where each target perception method corresponds to a time window of one length and is used to recognize one preset kind of target object; or,

acquiring the target perception results according to a single preset target perception method, where the single target perception method is used to recognize multiple preset kinds of target objects.

Optionally, the multiple different target perception methods include: performing target perception based on multiple different neural network models, each of which is used to recognize one preset kind of target object; and the single target perception method includes: performing target perception based on one neural network model, which is used to recognize multiple preset kinds of target objects.
Optionally, the length of the time window is preset; or,

the length of the time window is determined as follows: determining the length of the time window according to historical target perception results, where the historical target perception results include at least one of the following: a target perception result acquired based on a point cloud frame preceding the current time window, or a target perception result acquired based on the point cloud data within a time window preceding the current time window.

Optionally, determining the length of the time window according to the historical target perception results includes: determining the target perception object according to the historical target perception results; and determining the length of the time window based on the target perception object.

Optionally, determining the length of the time window based on the target perception object includes: determining, based on the target perception object, the movement speed of the target perception object and/or the depth of the target perception object and/or the target point cloud angular resolution; and determining the length of the time window based on the movement speed and/or the depth and/or the target point cloud angular resolution.

Optionally, determining the length of the time window based on the movement speed and/or the depth and/or the target angular resolution includes: determining a time window of a first length for a target perception object with a first movement speed, and a time window of a second length for a target perception object with a second movement speed, where the first movement speed is greater than the second movement speed and the first length is smaller than the second length; and/or, determining a time window of a third length for a target perception object at a third depth, and a time window of a fourth length for a target perception object at a fourth depth, where the third depth is smaller than the fourth depth and the third length is smaller than the fourth length.
Optionally, the method further includes: acquiring the perception result of the point cloud frame; and outputting the perception result of the point cloud frame in association with multiple first perception results, where each first perception result is a target perception result acquired, in the process of acquiring the point cloud data constituting the point cloud frame, based on the point cloud data of one time window.

Optionally, before the target point cloud data within at least one time window of the point cloud frame is extracted, the method further includes: receiving a trigger instruction; outputting, under the trigger of the trigger instruction, target perception mode selection information, where the target perception modes at least include: a super frame rate perception mode, and/or a normal frame rate perception mode; when the super frame rate perception mode is determined, extracting the target point cloud data within at least one time window and outputting the acquired target perception result; and when the normal frame rate perception mode is determined, performing target perception on the point cloud frame and outputting the acquired target perception result.

Optionally, the method further includes: acquiring the target perception results of multiple identical time windows; acquiring, based on the acquired multiple target perception results, the motion trajectory of the target perception object corresponding to the multiple identical time windows; and outputting the motion trajectory.

Optionally, the method further includes: acquiring, based on the motion trajectory, a predicted motion trajectory of the target perception object; and outputting the predicted motion trajectory.

Optionally, the target area is the central area of the field of view of the point cloud frame.
Optionally, scanning the target area multiple times by the radar is achieved by controlling the rotational speed of at least one component of the radar's scanning module within the target area to reach at least a preset threshold.

Optionally, the scanning module includes a double-prism scanning assembly composed of a first prism and a second prism, and the threshold is determined by the rotational speed corresponding to the first prism and the second prism each rotating two full turns within the time window, where the rotational speeds of the first prism and the second prism are equal in magnitude and opposite in direction; or,

the scanning module includes a triple-prism scanning assembly composed of a third prism, a fourth prism, and a fifth prism, and the threshold is determined by the rotational speed corresponding to the fifth prism rotating two full turns within the time window, where the rotational speeds of the third prism and the fourth prism are equal in magnitude and opposite in direction; or,

the scanning module includes a scanning assembly composed of a sixth prism and a reflecting mirror, and the threshold is jointly determined by the rotational speed corresponding to the sixth prism rotating two full turns within the time window and the rotational speed corresponding to the mirror scanning the preset scanning range twice within the time window; or,

the scanning module includes a galvanometer, and the threshold is determined by the rotational speed corresponding to the galvanometer scanning the preset scanning range twice within the time window.
For the steps implemented when the processor 1902 executes the program, reference may be made to the foregoing descriptions of the embodiments of the target perception method, which are not repeated here.

It can be seen from the above embodiments that the target perception device provided by the embodiments of the present application can, in the process of acquiring the point cloud data constituting one point cloud frame, extract the target point cloud data within at least one time window of the frame, acquire the target perception result of each time window based on the target point cloud data extracted within that window, and finally output the target perception result of the time window. Because the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame, the device can, compared with target perception based on whole point cloud frames, achieve super frame rate perception of the target area, perceive the surrounding environment faster, and improve the real-time performance, sensitivity, and safety of target perception.
此外，本申请实施例还提供了一种探测系统，所述探测系统包括：光源，用于出射光脉冲序列；扫描模块，用于改变所述光脉冲序列的光路，以对视场进行扫描；探测模块，用于对所述光脉冲序列经物体反射的光束进行检测，得到点云数据，其中所述点云数据中的每个点云点数据用于指示所述点云点对应的物体的距离和/或方位；输出模块，用于连续输出点云帧，每帧点云帧包括多个点云点数据；感知模块，用于执行以下操作：在获取构成一帧点云帧的点云数据的过程中，对所述点云帧的帧内至少一个时间窗口内的目标点云数据进行提取，所述目标点云数据对应所述点云帧的视场中的目标区域；分别基于每个时间窗口内所提取的目标点云数据获取所述时间窗口的目标感知结果；输出所述时间窗口的目标感知结果；其中，所述点云帧包含所述扫描模块对所述目标区域多次扫描而获取的点云数据，所述时间窗口内的点云数据的提取频率至少是所述点云帧的获取频率的二倍。所述感知模块，还可以用于执行前述各实施例的方法中的步骤，本申请在此不再赘述。In addition, an embodiment of the present application further provides a detection system, which includes: a light source for emitting a light pulse sequence; a scanning module for changing the optical path of the light pulse sequence so as to scan a field of view; a detection module for detecting the light beams of the light pulse sequence reflected by objects to obtain point cloud data, where each point cloud point datum in the point cloud data indicates the distance and/or orientation of the object corresponding to that point cloud point; an output module for continuously outputting point cloud frames, each point cloud frame including multiple point cloud point data; and a perception module configured to perform the following operations: in the process of acquiring the point cloud data constituting one point cloud frame, extracting the target point cloud data within at least one time window of the point cloud frame, the target point cloud data corresponding to a target area in the field of view of the point cloud frame; acquiring the target perception result of each time window based on the target point cloud data extracted within that window; and outputting the target perception result of the time window. The point cloud frame contains point cloud data acquired by the scanning module scanning the target area multiple times, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame. The perception module may also be used to execute the steps of the methods of the foregoing embodiments, which are not described again here.
通过上述各个实施例可以看到，基于本申请实施例所提供的探测系统，能够在获取构成一帧点云帧的点云数据的过程中，对所述点云帧的帧内至少一个时间窗口内的目标点云数据进行提取，并分别基于每个时间窗口内所提取的目标点云数据获取所述时间窗口的目标感知结果，最后输出所述时间窗口的目标感知结果。由于在本申请实施例所提供的目标感知方法中，所提取的目标点云数据对应所述点云帧的视场中的目标区域，且所述时间窗口内的点云数据的提取频率至少是所述点云帧的获取频率的二倍，因此，基于本申请实施例所提供的目标感知方法，相对于基于点云帧的目标感知方法，能够对目标区域实现超帧率感知，进而能够对周围环境实现更快的感知，提高目标感知的实时性、灵敏度以及安全性。It can be seen from the above embodiments that the detection system provided by the embodiments of the present application can, in the process of acquiring the point cloud data constituting one point cloud frame, extract the target point cloud data within at least one time window of the point cloud frame, acquire the target perception result of each time window based on the target point cloud data extracted within that window, and finally output the target perception result of the time window. In the target perception method provided by the embodiments of the present application, the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame. Therefore, compared with target perception based on whole point cloud frames, the method can realize super-frame-rate perception of the target area, perceive the surrounding environment faster, and improve the real-time performance, sensitivity, and safety of target perception.
此外,本申请实施例还提供了一种可移动平台,所述可移动平台包括雷达以及 前文各个实施例所述的目标感知装置。所述可移动平台可以是智能汽车、机器人、无人机等等,本申请实施例对此不做限制。In addition, an embodiment of the present application also provides a movable platform, where the movable platform includes a radar and the target sensing device described in each of the foregoing embodiments. The movable platform may be a smart car, a robot, an unmanned aerial vehicle, etc., which is not limited in this embodiment of the present application.
基于所述可移动平台,能够实现前文各个实施例所述的任一方法步骤,相关内容本申请实施例在此不再赘述。Based on the movable platform, any of the method steps described in the foregoing embodiments can be implemented, and the related content of the embodiments of the present application will not be repeated here.
通过上述各个实施例可以看到，基于本申请实施例所提供的可移动平台，能够在获取构成一帧点云帧的点云数据的过程中，对所述点云帧的帧内至少一个时间窗口内的目标点云数据进行提取，并分别基于每个时间窗口内所提取的目标点云数据获取所述时间窗口的目标感知结果，最后输出所述时间窗口的目标感知结果。由于在本申请实施例所提供的目标感知方法中，所提取的目标点云数据对应所述点云帧的视场中的目标区域，且所述时间窗口内的点云数据的提取频率至少是所述点云帧的获取频率的二倍，因此，基于本申请实施例所提供的目标感知方法，相对于基于点云帧的目标感知方法，能够对目标区域实现超帧率感知，进而能够对周围环境实现更快的感知，提高目标感知的实时性、灵敏度以及安全性。It can be seen from the above embodiments that the movable platform provided by the embodiments of the present application can, in the process of acquiring the point cloud data constituting one point cloud frame, extract the target point cloud data within at least one time window of the point cloud frame, acquire the target perception result of each time window based on the target point cloud data extracted within that window, and finally output the target perception result of the time window. In the target perception method provided by the embodiments of the present application, the extracted target point cloud data corresponds to the target area in the field of view of the point cloud frame, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame. Therefore, compared with target perception based on whole point cloud frames, the method can realize super-frame-rate perception of the target area, perceive the surrounding environment faster, and improve the real-time performance, sensitivity, and safety of target perception.
此外,本申请还提供了一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时实现前文所述的任一方法步骤。In addition, the present application also provides a computer-readable storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, any one of the foregoing method steps is implemented.
所述计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子（非穷举的列表）包括：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器（RAM）、只读存储器（ROM）、可擦式可编程只读存储器（EPROM或闪存）、光纤、便携式紧凑磁盘只读存储器（CDROM）、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer disk, a hard disk, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device.
所述计算机可读存储介质的信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
计算机可读介质上包含的程序代码可以用任何适当的介质传输，包括但不限于无线、电线、光缆、RF等等，或者上述的任意合适的组合。可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码，所述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言—诸如"C"语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络，包括局域网（LAN）或广域网（WAN），连接到用户计算机，或者，可以连接到外部计算机（例如利用因特网服务提供商来通过因特网连接）。Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing. Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
上述装置可执行本申请前述所有实施例所提供的方法,具备执行上述方法相应的功能模块和有益效果。未在本实施例中详尽描述的技术细节,可参见本申请前述所有实施例所提供的方法。The above-mentioned apparatus can execute the methods provided by all the foregoing embodiments of the present application, and has corresponding functional modules and beneficial effects for executing the above-mentioned methods. For technical details not described in detail in this embodiment, reference may be made to the methods provided in all the foregoing embodiments of this application.
本领域技术人员在考虑申请及实践这里申请的发明后，将容易想到本申请的其它实施方案。本申请旨在涵盖本申请的任何变型、用途或者适应性变化，这些变型、用途或者适应性变化遵循本申请的一般性原理并包括本申请未申请的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的，本申请的真正范围和精神由下面的权利要求指出。Other embodiments of the present application will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present application that follow its general principles and include common knowledge or customary technical means in the art not disclosed in the present application. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the application being indicated by the following claims.
应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本申请的范围仅由所附的权利要求来限制。It is to be understood that the present application is not limited to the precise structures described above and shown in the accompanying drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
以上所述仅为本申请的较佳实施例而已，并不用以限制本申请，凡在本申请的精神和原则之内，所做的任何修改、等同替换、改进等，均应包含在本申请保护的范围之内。The above descriptions are only preferred embodiments of the present application and are not intended to limit it. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall fall within the scope of protection of the present application.

Claims (28)

  1. 一种目标感知方法,其特征在于,包括:A target perception method, comprising:
    在获取构成一帧点云帧的点云数据的过程中，对所述点云帧的帧内至少一个时间窗口内的目标点云数据进行提取，所述目标点云数据对应所述点云帧的视场中的目标区域；In the process of acquiring the point cloud data constituting one point cloud frame, extracting target point cloud data within at least one time window of the point cloud frame, the target point cloud data corresponding to a target area in the field of view of the point cloud frame;
    分别基于每个时间窗口内所提取的目标点云数据获取所述时间窗口的目标感知结果;Obtaining the target perception result of the time window based on the target point cloud data extracted in each time window;
    输出所述时间窗口的目标感知结果;output the target perception result of the time window;
    其中,所述点云帧包含雷达对所述目标区域多次扫描而获取的点云数据,所述时间窗口内的点云数据的提取频率至少是所述点云帧的获取频率的二倍。The point cloud frame includes point cloud data obtained by the radar scanning the target area for multiple times, and the extraction frequency of the point cloud data in the time window is at least twice the acquisition frequency of the point cloud frame.
  2. 根据权利要求1所述的方法,其特征在于,所述时间窗口的目标点云数据包含所述雷达对所述目标区域至少一次扫描而获取的点云数据。The method according to claim 1, wherein the target point cloud data of the time window includes point cloud data obtained by the radar scanning the target area at least once.
  3. 根据权利要求1所述的方法，其特征在于，所述点云帧的时长包括多个连续的时间窗口，所述对所述点云帧的帧内至少一个时间窗口内的目标点云数据进行提取，包括：The method according to claim 1, wherein the duration of the point cloud frame comprises a plurality of consecutive time windows, and extracting the target point cloud data within at least one time window of the point cloud frame comprises:
    分别对所述多个连续的时间窗口中的每个时间窗口内的目标点云数据进行提取。Extracting target point cloud data in each time window of the multiple consecutive time windows.
  4. 根据权利要求1所述的方法,其特征在于,两个不同长度的所述时间窗口存在交叠。The method of claim 1, wherein the time windows of two different lengths overlap.
  5. 根据权利要求1所述的方法,其特征在于,所述点云帧的时长包括至少两个不同长度的时间窗口,其中,不同长度的时间窗口对应的数据提取规则不同。The method according to claim 1, wherein the duration of the point cloud frame includes at least two time windows of different lengths, wherein the data extraction rules corresponding to the time windows of different lengths are different.
  6. 根据权利要求5所述的方法,其特征在于,不同长度的时间窗口内所提取的目标点云数据对应所述目标区域中不同深度范围内的点云数据;The method according to claim 5, wherein the target point cloud data extracted in time windows of different lengths corresponds to point cloud data in different depth ranges in the target area;
    或者,or,
    不同长度的时间窗口内所提取的目标点云数据对应所述目标区域中不同方位内的点云数据。The target point cloud data extracted in time windows of different lengths correspond to point cloud data in different directions in the target area.
  7. 根据权利要求4所述的方法,其特征在于,不同长度的时间窗口对应的数据提取规则不同,所述数据提取规则包括:The method according to claim 4, wherein the data extraction rules corresponding to time windows of different lengths are different, and the data extraction rules include:
    对不同长度的时间窗口,提取满足预设条件的目标点云数据,所述预设条件与所述时间窗口的长度对应;For time windows of different lengths, extract target point cloud data that satisfies preset conditions, where the preset conditions correspond to the lengths of the time windows;
    或者,or,
    对不同长度的时间窗口,提取所述每个时间窗口内的全部点云数据。For time windows of different lengths, extract all point cloud data within each time window.
  8. 根据权利要求7所述的方法，其特征在于，所述预设条件至少包括以下之一：所述目标点云数据位于预设的深度，或者，所述目标点云数据的坐标位于预设的坐标。The method according to claim 7, wherein the preset condition comprises at least one of the following: the target point cloud data is located at a preset depth, or the coordinates of the target point cloud data are located at preset coordinates.
  9. 根据权利要求8所述的方法,其特征在于,所述数据提取规则包括:The method according to claim 8, wherein the data extraction rule comprises:
    对第一时间窗口，提取第一深度的目标点云数据，对长度大于所述第一时间窗口的第二时间窗口，提取第二深度的目标点云数据，所述第一深度小于所述第二深度。For a first time window, extracting target point cloud data at a first depth, and for a second time window longer than the first time window, extracting target point cloud data at a second depth, the first depth being smaller than the second depth.
  10. 根据权利要求7所述的方法，其特征在于，所述数据提取规则为对不同长度的时间窗口提取所述每个时间窗口内的全部点云数据，分别基于每个时间窗口内所提取的点云数据获取所述时间窗口的目标感知结果，包括：The method according to claim 7, wherein the data extraction rule is to extract, for time windows of different lengths, all point cloud data within each time window, and acquiring the target perception result of the time window based on the point cloud data extracted within each time window comprises:
    获取所述全部点云数据的深度或坐标,确定满足预设的深度或坐标的点云数据;Acquire the depth or coordinates of all the point cloud data, and determine the point cloud data that satisfies the preset depth or coordinates;
    根据所述满足预设的深度或坐标的点云数据,获取包含位于预设深度的目标感知对象的目标感知结果。According to the point cloud data satisfying the preset depth or coordinates, a target sensing result including the target sensing object located at the preset depth is acquired.
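The depth-conditioned extraction of claims 8 to 10 can be sketched as follows. This is a hedged illustration only: the point representation, the window lengths, and the depth thresholds in `RULES` are assumptions chosen for the example, not values from the patent; the patent only requires that a shorter (first) window corresponds to a nearer depth than a longer (second) window.

```python
def filter_by_depth(points, max_depth):
    """Keep point cloud data whose depth satisfies the preset condition."""
    return [p for p in points if p["depth"] <= max_depth]

# Illustrative rule table: a short window watches a near depth band so
# close objects are re-perceived fastest; a longer window covers a
# farther band (window lengths in seconds, depths in meters, assumed).
RULES = {
    0.01: 20.0,   # 10 ms window -> objects within 20 m
    0.05: 100.0,  # 50 ms window -> objects within 100 m
}

def extract_for_window(points, window_length):
    """Apply the data extraction rule that corresponds to this window length."""
    return filter_by_depth(points, RULES[window_length])
```

A perception step would then run only on the filtered subset, which keeps the per-window workload small enough for super-frame-rate output.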
  11. 根据权利要求1所述的方法,其特征在于,所述分别基于每个时间窗口内所提取的点云数据获取所述时间窗口的目标感知结果,包括:The method according to claim 1, wherein the acquiring the target perception result of the time window based on the point cloud data extracted in each time window respectively comprises:
    基于不同长度的时间窗口,获取包含不同种类目标感知对象的目标感知结果。Based on time windows of different lengths, target sensing results containing different types of target sensing objects are obtained.
  12. 根据权利要求11所述的方法,其特征在于,基于不同长度的时间窗口,获取包含不同种类目标感知对象的目标感知结果,包括:The method according to claim 11, wherein, based on time windows of different lengths, acquiring target perception results containing different types of target perception objects, comprising:
    根据预先设定的多个不同目标感知方法,获取所述目标感知结果,其中,每个所述目标感知方法与一个长度的时间窗口对应,用于识别一种预设的目标对象;Obtain the target perception result according to a plurality of preset different target perception methods, wherein each of the target perception methods corresponds to a time window of a length and is used to identify a preset target object;
    或者,or,
    根据预先设定的同种目标感知方法,获取所述目标感知结果,其中,所述同种目标感知方法用于识别多种预设的目标对象。The target perception result is acquired according to a preset same type of target perception method, wherein the same type of target perception method is used to identify multiple preset target objects.
  13. 根据权利要求12所述的方法,其特征在于,所述多个不同目标感知方法包括:基于多个不同的神经网络模型进行目标感知,每个所述神经网络模型用于识别一种预设的目标对象;The method according to claim 12, wherein the multiple different target perception methods include: performing target perception based on multiple different neural network models, each of the neural network models being used to identify a preset target;
    所述同种目标感知方法包括:基于同一个神经网络模型进行目标感知,所述同一个神经网络模型用于识别多种预设的目标对象。The same kind of target perception method includes: performing target perception based on the same neural network model, and the same neural network model is used to identify multiple preset target objects.
  14. 根据权利要求1所述的方法,其特征在于,所述时间窗口的长度预先设定;The method according to claim 1, wherein the length of the time window is preset;
    或者,or,
    所述时间窗口的长度通过以下方式确定:The length of the time window is determined by:
    根据历史目标感知结果,确定所述时间窗口的长度;Determine the length of the time window according to the historical target perception result;
    其中，所述历史目标感知结果，至少包括以下之一：基于当前时间窗口之前的点云帧所获取的目标感知结果，或者，基于当前时间窗口之前的时间窗口内的点云数据所获取的目标感知结果。wherein the historical target perception result includes at least one of the following: a target perception result acquired based on a point cloud frame before the current time window, or a target perception result acquired based on point cloud data within a time window before the current time window.
  15. 根据权利要求14所述的方法,其特征在于,根据历史目标感知结果,确定所述时间窗口的长度,包括:The method according to claim 14, wherein determining the length of the time window according to historical target perception results, comprising:
    根据所述历史目标感知结果,确定目标感知对象;According to the historical target perception result, determine the target perception object;
    基于所述目标感知对象,确定所述时间窗口的长度。Based on the target perception object, the length of the time window is determined.
  16. 根据权利要求15所述的方法,其特征在于,基于所述目标感知对象,确定所述时间窗口的长度,包括:The method according to claim 15, wherein determining the length of the time window based on the target sensing object comprises:
    基于所述目标感知对象,确定所述目标感知对象的运动速度和\或所述目标感知对象的深度和\或目标点云角分辨率;Based on the target perception object, determine the movement speed of the target perception object and/or the depth of the target perception object and/or the angular resolution of the target point cloud;
    基于所述运动速度和\或所述深度和\或所述目标点云角分辨率,确定所述时间窗口的长度。The length of the time window is determined based on the motion speed and/or the depth and/or the target point cloud angular resolution.
  17. 根据权利要求16所述的方法,其特征在于,基于所述运动速度和\或所述深度和\或所述目标角分辨率,确定所述时间窗口的长度,包括:The method according to claim 16, wherein, determining the length of the time window based on the motion speed and/or the depth and/or the target angular resolution, comprising:
    对第一运动速度的目标感知对象确定第一长度的时间窗口，对第二运动速度的目标感知对象确定第二长度的时间窗口，所述第一运动速度大于所述第二运动速度，所述第一长度小于所述第二长度；Determining a time window of a first length for a target perception object at a first movement speed, and determining a time window of a second length for a target perception object at a second movement speed, the first movement speed being greater than the second movement speed and the first length being smaller than the second length;
    和\或,and / or,
    对第三深度的目标感知对象确定第三长度的时间窗口,对第四深度的目标感知对象确定第四长度的时间窗口,所述第三深度小于所述第四深度,所述第三长度小于所述第四长度。A time window of a third length is determined for the target perception object at a third depth, and a time window of a fourth length is determined for the target perception object at a fourth depth, where the third depth is less than the fourth depth, and the third length is less than the fourth length.
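The monotonic relationship required by claims 16 and 17 (faster targets and nearer targets get shorter windows) can be sketched as a simple scaling rule. The function below is an assumption-laden illustration: the base length and the reference constants `speed_ref` and `depth_ref` are invented for the example; the patent specifies only the ordering of the resulting lengths, not a formula.

```python
def window_length(speed, depth, base=0.1, speed_ref=10.0, depth_ref=50.0):
    """Return a time-window length (s) that shrinks for faster and
    nearer target perception objects, as claims 16-17 require.
    speed in m/s, depth in m; scaling constants are illustrative."""
    speed_factor = speed_ref / max(speed, speed_ref)   # faster -> shorter
    depth_factor = min(depth, depth_ref) / depth_ref   # nearer -> shorter
    return base * speed_factor * depth_factor
```

Any rule with the same monotonicity would satisfy the claim wording; this one simply makes the orderings easy to verify.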
  18. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    获取所述点云帧的感知结果;obtaining the perception result of the point cloud frame;
    将所述点云帧的感知结果与多个第一感知结果关联输出;Correlate and output the perception results of the point cloud frame with a plurality of first perception results;
    其中,每个所述第一感知结果为在获取构成所述点云帧的点云数据的过程中,基于一个时间窗口的点云数据所获取的目标感知结果。Wherein, each of the first perception results is a target perception result acquired based on point cloud data of a time window in the process of acquiring point cloud data constituting the point cloud frame.
  19. 根据权利要求1所述的方法,其特征在于,在对所述点云帧的帧内至少一个时间窗口内的目标点云数据进行提取之前,所述方法还包括:The method according to claim 1, wherein before extracting the target point cloud data in at least one time window in the frame of the point cloud frame, the method further comprises:
    接收触发指令;receive trigger instructions;
    在所述触发指令的触发下,输出目标感知模式选择信息,所述目标感知模式至少包括:超帧率感知模式,和\或,普通帧率感知模式;Under the triggering of the trigger instruction, output target perception mode selection information, and the target perception mode at least includes: a super frame rate perception mode, and/or, a normal frame rate perception mode;
    当确定是超帧率感知模式时，对至少一个时间窗口内的目标点云数据进行提取并输出所获取的目标感知结果；当确定是普通帧率感知模式时，对所述点云帧进行目标感知并输出所获取的目标感知结果。When the super frame rate perception mode is determined, extracting the target point cloud data within at least one time window and outputting the acquired target perception result; when the normal frame rate perception mode is determined, performing target perception on the point cloud frame and outputting the acquired target perception result.
  20. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    获取多个相同时间窗口的目标感知结果;Obtain multiple target perception results of the same time window;
    基于所获取的多个目标感知结果,获取与所述多个相同时间窗口对应的目标感知对象的运动轨迹;Based on the obtained multiple target sensing results, obtain the motion trajectories of the target sensing objects corresponding to the multiple identical time windows;
    输出所述运动轨迹。The motion trajectory is output.
  21. 根据权利要求20所述的方法,其特征在于,所述方法还包括:The method of claim 20, wherein the method further comprises:
    基于所述运动轨迹,获取所述目标感知对象的预测运动轨迹;Based on the motion trajectory, obtain the predicted motion trajectory of the target perception object;
    输出所述预测运动轨迹。The predicted motion trajectory is output.
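Claims 20 and 21 chain per-window results into a motion trajectory and then a predicted trajectory. The sketch below uses constant-velocity extrapolation as one simple predictor; the patent does not prescribe a prediction model, and the `(t, x, y)` tuple format is an assumption for the example.

```python
def trajectory(results):
    """Collect per-window target positions into a time-ordered motion
    trajectory. Each result is (t, x, y): perception time and position."""
    return sorted(results)

def predict(track, horizon):
    """Predict a future position by constant-velocity extrapolation from
    the last two trajectory points -- one simple illustrative choice."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (t1 + horizon, x1 + vx * horizon, y1 + vy * horizon)
```

Because the windows arrive at super frame rate, the trajectory is sampled more densely than a per-frame tracker would allow, which is what makes short-horizon prediction useful here.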
  22. 根据权利要求1所述的方法,其特征在于,所述目标区域为所述点云帧的视场中的中心区域。The method according to claim 1, wherein the target area is a central area in the field of view of the point cloud frame.
  23. 根据权利要求1所述的方法，其特征在于，所述雷达对所述目标区域多次扫描，通过控制所述雷达的扫描模块中的至少一个组件在所述目标区域内的转速至少达到预设的阈值来实现。The method according to claim 1, wherein the radar scanning the target area multiple times is achieved by controlling the rotation speed of at least one component in the scanning module of the radar within the target area to reach at least a preset threshold.
  24. 根据权利要求23所述的方法，其特征在于，所述扫描模块包含由第一棱镜和第二棱镜构成的双棱镜扫描式组件，所述阈值由所述第一棱镜和所述第二棱镜在所述时间窗口内旋转两圈对应的转速确定；其中，所述第一棱镜和所述第二棱镜的转速数值相等，方向相反；The method according to claim 23, wherein the scanning module comprises a double-prism scanning assembly composed of a first prism and a second prism, and the threshold is determined by the rotation speeds at which the first prism and the second prism each complete two rotations within the time window; the rotation speeds of the first prism and the second prism are equal in magnitude and opposite in direction;
    或者,or,
    所述扫描模块包含由第三棱镜、第四棱镜和第五棱镜构成的三棱镜扫描式组件，所述阈值由所述第五棱镜在获取所述时间窗口内旋转两圈对应的转速确定；其中，所述第三棱镜和所述第四棱镜的转速数值相等，方向相反；The scanning module comprises a triple-prism scanning assembly composed of a third prism, a fourth prism, and a fifth prism, and the threshold is determined by the rotation speed at which the fifth prism completes two rotations within the time window; the rotation speeds of the third prism and the fourth prism are equal in magnitude and opposite in direction;
    或者,or,
    所述扫描模块包含由第六棱镜和反射镜构成的扫描组件，所述阈值由所述第六棱镜在所述时间窗口内旋转两圈对应的转速，以及所述反射镜在所述时间窗口内扫描两次预设的扫描范围对应的转速共同确定；The scanning module comprises a scanning assembly composed of a sixth prism and a reflecting mirror, and the threshold is jointly determined by the rotation speed at which the sixth prism completes two rotations within the time window and the rotation speed at which the reflecting mirror sweeps a preset scanning range twice within the time window;
    或者,or,
    所述扫描模块包含振镜，所述阈值由所述振镜在所述时间窗口内扫描两次预设的扫描范围对应的转速确定。The scanning module comprises a galvanometer, and the threshold is determined by the rotation speed at which the galvanometer sweeps a preset scanning range twice within the time window.
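The speed thresholds of claim 24 all encode the same condition: the scanning element must cover its scan path twice per time window, so that the window contains at least two scans of the target area. A minimal arithmetic sketch, with units (rev/s and deg/s) assumed for the example:

```python
def prism_speed_threshold(window_length):
    """Minimum prism speed: two full revolutions per time window.
    window_length in seconds; result in revolutions per second."""
    return 2.0 / window_length

def galvo_speed_threshold(window_length, scan_range_deg):
    """Minimum galvanometer speed: sweep the preset scan range twice
    per time window; result in degrees per second (assumed units)."""
    return 2.0 * scan_range_deg / window_length
```

For a 50 ms window a prism must therefore spin at no less than 40 rev/s, which is how the threshold ties the mechanical scan rate to the super-frame-rate extraction frequency.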
  25. 一种目标感知装置，其特征在于，包括：存储器和处理器及存储在存储器上并可在处理器上运行的计算机程序，所述处理器执行所述程序时实现权利要求1至24任一所述的方法。A target sensing apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method of any one of claims 1 to 24.
  26. 一种探测系统,其特征在于,包括:A detection system, characterized in that it includes:
    光源,用于出射光脉冲序列;a light source for emitting a sequence of light pulses;
    扫描模块,用于改变所述光脉冲序列的光路,以对视场进行扫描;a scanning module for changing the light path of the light pulse sequence to scan the field of view;
    探测模块，用于对所述光脉冲序列经物体反射的光束进行检测，得到点云数据，其中所述点云数据中的每个点云点数据用于指示所述点云点对应的物体的距离和/或方位；a detection module for detecting the light beams of the light pulse sequence reflected by objects to obtain point cloud data, wherein each point cloud point datum in the point cloud data indicates the distance and/or orientation of the object corresponding to that point cloud point;
    输出模块,用于连续输出点云帧,每帧点云帧包括多个点云点数据;The output module is used to continuously output point cloud frames, each frame of point cloud frame includes multiple point cloud point data;
    感知模块,用于执行以下操作:Perception module to do the following:
    在获取构成一帧点云帧的点云数据的过程中，对所述点云帧的帧内至少一个时间窗口内的目标点云数据进行提取，所述目标点云数据对应所述点云帧的视场中的目标区域；In the process of acquiring the point cloud data constituting one point cloud frame, extracting target point cloud data within at least one time window of the point cloud frame, the target point cloud data corresponding to a target area in the field of view of the point cloud frame;
    分别基于每个时间窗口内所提取的目标点云数据获取所述时间窗口的目标感知结果;Obtaining the target perception result of the time window based on the target point cloud data extracted in each time window;
    输出所述时间窗口的目标感知结果;output the target perception result of the time window;
    其中，所述点云帧包含所述扫描模块对所述目标区域多次扫描而获取的点云数据，所述时间窗口内的点云数据的提取频率至少是所述点云帧的获取频率的二倍。wherein the point cloud frame contains point cloud data acquired by the scanning module scanning the target area multiple times, and the extraction frequency of the point cloud data within the time window is at least twice the acquisition frequency of the point cloud frame.
  27. 一种可移动平台,其特征在于,所述可移动平台包括雷达以及如权利要求25所述的目标感知装置。A movable platform, characterized in that, the movable platform includes a radar and the target sensing device as claimed in claim 25 .
  28. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有计算机指令,所述计算机指令被执行时实现权利要求1至24任一项所述方法的步骤。A computer-readable storage medium, characterized in that, computer instructions are stored on the computer-readable storage medium, and when the computer instructions are executed, the steps of the method according to any one of claims 1 to 24 are implemented.
PCT/CN2021/087327 2021-04-14 2021-04-14 Target sensing method and device, detection system, movable platform and storage medium WO2022217522A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087327 WO2022217522A1 (en) 2021-04-14 2021-04-14 Target sensing method and device, detection system, movable platform and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/087327 WO2022217522A1 (en) 2021-04-14 2021-04-14 Target sensing method and device, detection system, movable platform and storage medium

Publications (1)

Publication Number Publication Date
WO2022217522A1 true WO2022217522A1 (en) 2022-10-20

Family

ID=83640004

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/087327 WO2022217522A1 (en) 2021-04-14 2021-04-14 Target sensing method and device, detection system, movable platform and storage medium

Country Status (1)

Country Link
WO (1) WO2022217522A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558838A (en) * 2018-11-29 2019-04-02 北京经纬恒润科技有限公司 A kind of object identification method and system
CN109558854A (en) * 2018-12-05 2019-04-02 百度在线网络技术(北京)有限公司 Method for barrier perception, device, electronic equipment and storage medium
US10345437B1 (en) * 2018-08-06 2019-07-09 Luminar Technologies, Inc. Detecting distortion using other sensors
CN111060923A (en) * 2019-11-26 2020-04-24 武汉乐庭软件技术有限公司 Multi-laser-radar automobile driving obstacle detection method and system
CN111190183A (en) * 2018-11-13 2020-05-22 通用汽车环球科技运作有限责任公司 Sliding window integration scheme for target detection in radar systems
CN112578406A (en) * 2021-02-25 2021-03-30 北京主线科技有限公司 Vehicle environment information sensing method and device

Similar Documents

Publication Publication Date Title
KR102614323B1 (en) Create a 3D map of a scene using passive and active measurements
US11821988B2 (en) Ladar system with intelligent selection of shot patterns based on field of view data
US11620835B2 (en) Obstacle recognition method and apparatus, storage medium, and electronic device
US11609329B2 (en) Camera-gated lidar system
CN109598066B (en) Effect evaluation method, apparatus, device and storage medium for prediction module
JP6696697B2 (en) Information processing device, vehicle, information processing method, and program
JP7239703B2 (en) Object classification using extraterritorial context
CN105100780B (en) Optical safety monitoring with selective pixel array analysis
CN110713087B (en) Elevator door state detection method and device
WO2020243962A1 (en) Object detection method, electronic device and mobile platform
CN106934347B (en) Obstacle identification method and device, computer equipment and readable medium
US20220179094A1 (en) Systems and methods for implementing a tracking camera system onboard an autonomous vehicle
US10860034B1 (en) Barrier detection
US11450120B2 (en) Object detection in point clouds
WO2022198637A1 (en) Point cloud noise filtering method and system, and movable platform
KR20220110034A (en) A method of generating an intensity information with extended expression range by reflecting a geometric characteristic of object and a LiDAR device that performs the method
US11994589B2 (en) Vapor detection in lidar point cloud
WO2022217522A1 (en) Target sensing method and device, detection system, movable platform and storage medium
US11774596B2 (en) Streaming object detection within sensor data
JP2020154913A (en) Object detection device and method, traffic support server, computer program and sensor device
TWI843116B (en) Moving object detection method, device, electronic device and storage medium
US20240062386A1 (en) High throughput point cloud processing
Alajmi et al. Increasing Robot’s Proximity Awareness Using LIDAR Technology
WO2022188279A1 (en) Detection method and apparatus, and laser radar
US20220268938A1 (en) Systems and methods for bounding box refinement

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21936411

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in the European phase

Ref document number: 21936411

Country of ref document: EP

Kind code of ref document: A1