CN115965944B - Target information detection method, device, driving device and medium - Google Patents
- Publication number
- CN115965944B (Application CN202310222986.6A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- target information
- moment
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a target information detection method, device, driving device and medium. The method comprises: inputting perception data of the current moment into a perception detection model for feature extraction to obtain a feature map of the current moment; acquiring feature maps of historical moments from a cache queue, and aligning the feature maps of the historical moments with the feature map of the current moment to obtain a multi-time-frame feature map; and inputting the multi-time-frame feature map into the perception detection model for detection to obtain a detection result of the target information. Because feature extraction is performed only on the perception data of the current moment, a large amount of redundant computation is avoided, the time spent on feature extraction is shortened, and the efficiency of target information detection is improved.
Description
Technical Field
The invention relates to the technical field of data processing, and in particular provides a target information detection method, device, driving device and medium.
Background
Introducing temporal capability into perception models, either to solve temporal tasks (e.g., trajectory prediction) or to improve single-frame perception (e.g., using consecutive frames to refine target detection), is a trend in current algorithm development.
The temporal module in a perception model typically uses a sliding-window approach: in one inference, features must be extracted for the current frame and for the historical frames within the window. In the sliding window at the next moment, features must be extracted again for the historical frames. Each sliding window therefore repeats many redundant computations, so the time overhead is excessive and the efficiency of target information detection is reduced.
Disclosure of Invention
The present invention has been made to overcome the above drawbacks, and provides a target information detection method, device, driving device, and medium that solve, or at least partially solve, the technical problem that each sliding window repeats many redundant computations, resulting in excessive time overhead and reduced efficiency of target information detection.
In a first aspect, the present invention provides a method for detecting target information, the method comprising:
inputting the perception data at the current moment into a perception detection model for feature extraction to obtain a feature map at the current moment;
acquiring feature maps of historical moments from a cache queue, and aligning the feature maps of the historical moments with the feature map of the current moment to obtain a multi-time-frame feature map;
and inputting the multi-time frame feature map into the perception detection model for detection to obtain a detection result of the target information.
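The three claimed steps can be sketched as a small inference loop. This is an illustrative sketch only: `extract_features`, `align`, and `detect` are hypothetical placeholders for the perception model's backbone, the alignment step, and the detection heads, and the `TemporalDetector` name is not from the patent.

```python
from collections import deque

class TemporalDetector:
    def __init__(self, window=5):
        # The cache holds feature maps of the last `window - 1` historical
        # frames; the oldest map is evicted automatically.
        self.cache = deque(maxlen=window - 1)

    def step(self, frame, extract_features, align, detect):
        # Step 1: extract features only for the current frame.
        current = extract_features(frame)
        # Step 2: align cached historical feature maps to the current frame.
        multi_frame = [align(h, current) for h in self.cache] + [current]
        # Step 3: run detection on the aligned multi-time-frame feature map.
        result = detect(multi_frame)
        # Store the current feature map for reuse at later moments.
        self.cache.append(current)
        return result
```

Because the cache holds extracted feature maps rather than raw frames, each call to `step` runs the backbone exactly once, which is the source of the claimed reduction in redundant computation.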
Further, the above target information detection method further includes:
storing the feature map of the current moment into the cache queue.
Further, in the above target information detection method, storing the feature map of the current moment into the cache queue includes:
detecting whether the number of non-idle-state elements in the cache queue has reached the maximum number of elements in the cache queue, wherein a non-idle-state element is an element in which a feature map of a historical moment is stored;
if the number of non-idle-state elements in the cache queue has reached the maximum number of elements, deleting the feature map of the historical moment in the first element, sequentially shifting the feature maps of the historical moments in the remaining elements toward the first element to fill the vacancy, and then storing the feature map of the current moment into the last element of the cache queue;
if the number of non-idle-state elements in the cache queue has not reached the maximum number of elements, storing the feature map of the current moment into the first idle-state element, wherein an idle-state element is an element in which no feature map of a historical moment is stored.
Further, in the above target information detection method, storing the feature map of the current moment into the first idle-state element includes:
inserting the feature map of the current moment at the last element of the cache queue, and moving it toward the first element until it reaches the first idle-state element.
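The queue-management scheme above can be sketched with a fixed-size Python list in which `None` marks an idle-state element. This is a hypothetical sketch; the patent does not prescribe a concrete data structure, and the function name is illustrative.

```python
def store_feature_map(queue, fmap):
    """Store the current feature map into a fixed-size cache queue.

    `None` marks an idle-state element. Illustrative sketch of the scheme
    described above, not an implementation prescribed by the patent.
    """
    if None not in queue:
        # All elements are non-idle: delete the feature map of the earliest
        # historical moment, shift the remaining maps toward the first
        # element, and store the new map in the last element.
        queue.pop(0)
        queue.append(fmap)
    else:
        # Queue not full: conceptually insert at the last element and move
        # toward the front; the net effect is landing in the first idle slot.
        queue[queue.index(None)] = fmap
    return queue
```

A usage example: starting from `[None, None, None]`, three insertions fill the queue front-to-back, and a fourth evicts the oldest feature map.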
Further, in the above target information detection method, aligning the feature maps of the historical moments with the feature map of the current moment to obtain a multi-time-frame feature map includes:
acquiring current pose information of the driving device based on the perception detection model, and acquiring historical pose information of the driving device from the cache queue;
determining a pose transformation relation of the driving device according to the historical pose information and the current pose information;
and aligning the feature maps of the historical moments with the feature map of the current moment based on the pose transformation relation to obtain the multi-time-frame feature map.
Further, in the above target information detection method, aligning the feature maps of the historical moments with the feature map of the current moment based on the pose transformation relation to obtain a multi-time-frame feature map includes:
determining a mapping relation between the feature map of the current moment and the feature maps of the historical moments based on the pose transformation relation;
and mapping the feature maps of the historical moments onto the feature map of the current moment based on the mapping relation to complete the alignment, so as to obtain the multi-time-frame feature map.
Further, in the above target information detection method, determining the mapping relation between the feature map of the current moment and the feature maps of the historical moments based on the pose transformation relation includes:
performing grid division on the feature map of the current moment to obtain a grid feature map of the current moment;
and determining the mapping relation between the feature map of the current moment and the feature maps of the historical moments based on the grid feature map of the current moment and the pose transformation relation.
In a second aspect, the invention provides a target information detection device comprising a processor and a storage device, the storage device being adapted to store a plurality of program codes, wherein the program codes are adapted to be loaded and executed by the processor to perform the target information detection method according to any one of the above.
In a third aspect, the invention provides a driving device including the target information detection device described above.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing a plurality of program codes, wherein the program codes are adapted to be loaded and executed by a processor to perform the target information detection method according to any one of the above.
The technical solution provided by the invention has one or more of the following beneficial effects:
In the technical solution of the invention, the feature maps of historical moments are stored in a cache queue. The perception data of the current moment is input into the perception detection model for feature extraction; after the feature map of the current moment is obtained, the feature maps of the historical moments are obtained directly from the cache queue and aligned with the feature map of the current moment, and the resulting multi-time-frame feature map is used as input to the perception detection model to output the detection result of the target information. Since feature extraction is performed only on the perception data of the current moment, a large amount of redundant computation is avoided, the time spent on feature extraction is shortened, and the efficiency of target information detection is improved.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. As will be readily appreciated by those skilled in the art: the drawings are for illustrative purposes only and are not intended to limit the scope of the present invention. Moreover, like numerals in the figures are used to designate like parts, wherein:
FIG. 1 is a flow chart illustrating main steps of a method for detecting target information according to an embodiment of the present invention;
FIG. 2 is a flow chart of the alignment of feature maps at different times;
FIG. 3 is a graph comparing trace predictions for different time sequence lengths;
FIG. 4 is a flow chart illustrating main steps of a method for detecting target information according to an embodiment of the present invention;
fig. 5 is a main structural block diagram of a target information detection apparatus according to an embodiment of the present invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, the terms "module" and "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, and memory, or software components such as program code, or a combination of software and hardware. The processor may be a central processing unit, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions, and may be implemented in software, hardware, or a combination of both. Non-transitory computer-readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, and random access memory. The term "A and/or B" denotes all possible combinations of A and B, namely A alone, B alone, or A and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone, or A and B. The singular forms "a", "an" and "the" include plural referents.
In a detection process that uses a temporal module in a perception model, the temporal module typically uses a sliding-window approach: in one inference, features must be extracted for the current frame and for the historical frames within the window. In the sliding window at the next moment, features must be extracted again for the historical frames. Each sliding window therefore repeats many redundant computations, so the time overhead is excessive and the efficiency of target information detection is reduced.
Therefore, in order to solve the technical problems, the invention provides the following technical scheme:
referring to fig. 1, fig. 1 is a schematic flow chart of main steps of a method for detecting target information according to an embodiment of the present invention. As shown in fig. 1, the method for detecting target information in the embodiment of the present invention mainly includes the following steps 101 to 103.
Step 101, inputting the perception data of the current moment into a perception detection model for feature extraction to obtain a feature map of the current moment.
In a specific implementation, sensors such as cameras and lidar can be used to collect data on the current environment in real time as perception data; for example, the perception data may include images, lidar point clouds, and the like. After the perception data of the current moment is acquired, it can be input into a temporal module of a pre-trained perception detection model for feature extraction to obtain the feature map of the current moment.
Step 102, acquiring feature maps of historical moments from a cache queue, and aligning the feature maps of the historical moments with the feature map of the current moment to obtain a multi-time-frame feature map.
In a specific implementation, a cache queue may be preset; each time the temporal module of the perception detection model extracts a feature map, that feature map can be stored into the cache queue as a feature map of a historical moment. Thus, after the perception data of the current moment is obtained, only the feature map of the current moment needs to be extracted from it; the feature maps of the historical moments are obtained directly from the cache queue and aligned with the feature map of the current moment to obtain the multi-time-frame feature map, so that all feature maps are synchronized and the detection accuracy of the target information is improved. The feature maps of N-1 historical moments can be retrieved according to the time-series length N specified by the temporal module of the perception detection model.
In a specific implementation, the feature maps of the historical moments can be aligned with the feature map of the current moment with reference to the flowchart shown in fig. 2 to obtain the multi-time-frame feature map. Fig. 2 is a schematic flowchart of aligning feature maps at different moments. As shown in fig. 2, the process may include the following steps 201 to 203:
Step 201, acquiring current pose information of the driving device based on the perception detection model, and acquiring historical pose information of the driving device from the cache queue.
In a specific implementation, when a feature map of a historical moment is stored into the cache queue, the historical pose information of the driving device can be stored together with it. Thus, after the perception data of the current moment is obtained, the perception detection model can be used to obtain the current pose information of the driving device, and the historical pose information of the driving device can be obtained from the cache queue.
Step 202, determining the pose transformation relation of the driving device according to the historical pose information and the current pose information.
In a specific implementation, the pose change of the driving device from the historical moment to the current moment can be calculated from the historical pose information and the current pose information, and the pose transformation relation of the driving device obtained from this pose change.
Step 203, aligning the feature maps of the historical moments with the feature map of the current moment based on the pose transformation relation to obtain the multi-time-frame feature map.
In a specific implementation, after the pose transformation relation of the driving device is obtained, the mapping relation between the feature map of the current moment and the feature maps of the historical moments, that is, which pixel position in a historical feature map corresponds to each pixel position in the current feature map, can be determined based on the pose transformation relation. Then, based on this mapping relation, the feature maps of the historical moments are mapped onto the feature map of the current moment to complete the alignment, so as to obtain the multi-time-frame feature map. Specifically, based on the mapping relation, the coordinates of each pixel of a historical feature map are projected onto the feature map of the current moment, so that all feature maps are aggregated together.
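For a ground-plane (bird's-eye-view) feature map, the pixel-to-pixel mapping above can be sketched with 2D rigid poses. This is a hedged sketch under the assumption that each pose is a hypothetical `(x, y, yaw)` tuple in a common world frame; a position in the current feature map is mapped into the historical feature map by passing through world coordinates.

```python
import math

def to_world(pose, p):
    """Map a point from the ego frame of pose = (x, y, yaw) to the world frame."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return (x + c * p[0] - s * p[1], y + s * p[0] + c * p[1])

def to_ego(pose, w):
    """Map a world-frame point into the ego frame of pose = (x, y, yaw)."""
    x, y, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    dx, dy = w[0] - x, w[1] - y
    return (c * dx + s * dy, -s * dx + c * dy)

def map_current_to_history(p_cur, pose_cur, pose_hist):
    """Position in the historical feature map corresponding to the point
    p_cur in the current feature map: current ego -> world -> historical ego.
    Illustrative; the patent does not fix a pose parameterization."""
    return to_ego(pose_hist, to_world(pose_cur, p_cur))
```

For example, if the vehicle has moved 1 m forward between the historical and current moments, the current origin maps to the point one metre ahead in the historical frame, which is where the historical feature map must be sampled.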
In a specific implementation, the feature map of the current moment can be divided into grids to obtain a grid feature map of the current moment, and the mapping relation between the feature map of the current moment and the feature maps of the historical moments can be determined based on the grid feature map of the current moment and the pose transformation relation. Specifically, grids containing point-cloud data can be selected as valid grids, and the mapping relation between the features in each valid grid and the feature maps of the historical moments determined in turn, until all valid grids have been traversed and the mapping relation between the feature map of the current moment and the feature maps of the historical moments is obtained.
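The valid-grid selection above can be sketched as follows: only grid cells that contain at least one point-cloud return take part in computing the mapping. The cell size, grid extent, and function name are illustrative assumptions, not taken from the patent.

```python
def valid_grid_cells(points_xy, x0, y0, cell, nx, ny):
    """Return the set of (ix, iy) indices of grid cells that contain at
    least one point-cloud return; only these 'valid' cells need a mapping
    to the historical feature maps. Illustrative sketch only."""
    cells = set()
    for x, y in points_xy:
        ix = int((x - x0) // cell)  # column index of the cell holding x
        iy = int((y - y0) // cell)  # row index of the cell holding y
        if 0 <= ix < nx and 0 <= iy < ny:
            cells.add((ix, iy))
    return cells
```

Restricting the mapping computation to occupied cells is one plausible way to avoid projecting empty regions of the grid feature map.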
Step 103, inputting the multi-time-frame feature map into the perception detection model for detection to obtain the detection result of the target information.
In a specific implementation, the obtained multi-time-frame feature map can be input back into the perception detection model for detection to obtain the detection result of the target information. Detection results for different kinds of target information can be obtained through different detection-head networks in the perception detection model. For example, the detection results may include target objects such as obstacles and signboards; semantic/instance segmentation results; drivable-area detection results; trajectory prediction results; intent prediction results; and the like.
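Running several detection heads over the shared multi-time-frame feature map can be sketched as follows; the head names and the dictionary-based interface are illustrative assumptions, not part of the patent.

```python
def run_detection_heads(multi_frame_features, heads):
    """Apply each detection head to the shared multi-time-frame features
    and collect the per-task results. Each head is any callable taking the
    features; names are hypothetical examples of the tasks listed above."""
    return {name: head(multi_frame_features) for name, head in heads.items()}
```

Sharing one aligned feature map across heads is what lets a single backbone pass serve obstacle detection, segmentation, drivable-area detection, and trajectory or intent prediction simultaneously.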
In a specific implementation, since feature extraction is performed only on the perception data of the current moment, the load on the temporal module of the perception detection model is reduced, so feature maps covering a relatively long time period can be used and the prediction results can be more accurate. For example, fig. 3 compares trajectory predictions for different time-series lengths. As shown in fig. 3, in a vehicle-cornering scenario, when the time-series length is 3 frames the predicted vehicle travel track A is a straight line inconsistent with the lane line, whereas when the time-series length is 5 frames the predicted track A is a curve consistent with the lane line.
According to the above target information detection method, the feature maps of historical moments are stored in a cache queue; the perception data of the current moment is input into the perception detection model for feature extraction, and after the feature map of the current moment is obtained, the feature maps of the historical moments are obtained directly from the cache queue and aligned with the feature map of the current moment. The resulting multi-time-frame feature map is used as input to the perception detection model to output the detection result of the target information. Since feature extraction is performed only on the perception data of the current moment, a large amount of redundant computation is avoided, the time spent on feature extraction is shortened, and the efficiency of target information detection is improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating main steps of a method for detecting target information according to an embodiment of the present invention. As shown in fig. 4, the method for detecting target information in the embodiment of the present invention mainly includes the following steps 401 to 406.
Step 401, inputting the perception data of the current moment into a perception detection model for feature extraction to obtain a feature map of the current moment.
In a specific implementation, after the feature map of the current moment is obtained, it also needs to be stored into the cache queue. However, the number of elements in the cache queue is usually limited, so it can be detected whether the number of non-idle-state elements in the cache queue has reached the maximum number of elements in the queue. An element in which a feature map of a historical moment is stored is a non-idle-state element; an element in which no such feature map is stored is an idle-state element. Illustratively, if the cache queue contains L elements, step 405 is performed when the number of non-idle-state elements is also L, and step 406 is performed when the number of non-idle-state elements is M, where M is less than L.
Step 405, deleting the feature map of the historical moment in the first element, sequentially shifting the feature maps of the historical moments in the remaining elements toward the first element to fill the vacancy, and storing the feature map of the current moment into the last element of the cache queue.
In a specific implementation, if the number of non-idle-state elements in the cache queue has reached the maximum number of elements, the feature map of the historical moment in the first element is deleted, the feature maps of the historical moments in the remaining elements are sequentially shifted toward the first element to fill the vacancy, and the feature map of the current moment is stored into the last element of the cache queue. That is, the feature map of the earliest historical moment is deleted, the later feature maps are moved forward in turn so that the last element becomes an idle-state element, and the feature map of the current moment is then stored into that last element.
Step 406, storing the feature map of the current moment into the first idle-state element.
In a specific implementation, if the number of non-idle-state elements in the cache queue has not reached the maximum number of elements, the feature map of the current moment can be stored into the first idle-state element. Specifically, the feature map of the current moment is inserted at the last element of the cache queue and moved toward the first element until it reaches the first idle-state element. That is, for the L elements, if the number of non-idle-state elements is N, the feature map of the current moment is inserted at the last element and then moved to the (N+1)-th element.
It should be noted that, although the foregoing embodiments describe the steps in a specific order, it will be understood by those skilled in the art that, in order to achieve the effects of the present invention, the steps are not necessarily performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of the present invention.
It will be appreciated by those skilled in the art that all or part of the above-described methods may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include any entity or device capable of carrying the computer program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, as well as electrical carrier signals, telecommunications signals, and software distribution media. It should be noted that the content included in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable storage media do not include electrical carrier signals and telecommunications signals.
Further, the invention also provides a target information detection device.
Referring to fig. 5, fig. 5 is a main block diagram of a target information detecting apparatus according to an embodiment of the present invention. As shown in fig. 5, the apparatus for detecting target information in the embodiment of the present invention may include a processor 51 and a storage device 52.
The storage device 52 may be configured to store a program for executing the target information detection method of the above method embodiment, and the processor 51 may be configured to execute the program in the storage device 52, including but not limited to the program for executing the target information detection method of the above method embodiment. For convenience of explanation, only the portions relevant to the embodiments of the present invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiments. The target information detection device may be a control device formed of various electronic devices.
In one implementation, there may be a plurality of storage devices 52 and a plurality of processors 51. The program for executing the target information detection method of the above method embodiment may be divided into a plurality of subprograms, each of which may be loaded and executed by a processor 51 to perform a different step of the method. Specifically, the subprograms may be stored in different storage devices 52, and each processor 51 may be configured to execute the programs in one or more storage devices 52, so that the processors 51, each performing different steps, jointly implement the target information detection method of the above method embodiment.
The plurality of processors 51 may be processors disposed on the same device, for example, the device may be a high-performance device composed of a plurality of processors, and the plurality of processors 51 may be processors configured on the high-performance device. The plurality of processors 51 may be processors disposed on different devices, for example, the devices may be a server cluster, and the plurality of processors 51 may be processors on different servers in the server cluster.
Further, the invention also provides a driving device, which may include the target information detection device of the above embodiment.
Further, the invention also provides a computer-readable storage medium. In one embodiment, the computer-readable storage medium may be configured to store a program for executing the target information detection method of the above method embodiment, the program being loadable and executable by a processor to implement the method. For convenience of explanation, only the portions relevant to the embodiments of the present invention are shown; for specific technical details not disclosed, please refer to the method part of the embodiments. The computer-readable storage medium may be a storage device formed of various electronic devices; optionally, the computer-readable storage medium in the embodiments of the present invention is a non-transitory computer-readable storage medium.
Further, it should be understood that, since the respective modules are merely set to illustrate the functional units of the apparatus of the present invention, the physical devices corresponding to the modules may be the processor itself, or a part of software in the processor, a part of hardware, or a part of a combination of software and hardware. Accordingly, the number of individual modules in the figures is merely illustrative.
Those skilled in the art will appreciate that the various modules in the apparatus may be adaptively split or combined. Such splitting or combining of specific modules does not cause the technical solution to deviate from the principle of the present invention, and therefore, the technical solution after splitting or combining falls within the protection scope of the present invention.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will fall within the scope of the present invention.
Claims (10)
1. A method for detecting target information, comprising:
inputting the perception data at the current moment into a perception detection model for feature extraction to obtain a feature map at the current moment;
acquiring feature maps of historical moments from a cache queue, and aligning the feature maps of the historical moments with the feature map of the current moment to obtain a multi-time-frame feature map;
and inputting the multi-time frame feature map into the perception detection model for detection to obtain a detection result of the target information.
2. The method for detecting target information according to claim 1, further comprising:
storing the feature map of the current moment into the cache queue.
3. The method for detecting target information according to claim 2, wherein storing the feature map at the current moment into the cache queue comprises:
detecting whether the number of non-idle-state elements in the cache queue has reached the total number of elements in the cache queue; wherein a non-idle-state element is an element in which a feature map at a historical moment is stored;
if the number of non-idle-state elements in the cache queue has reached the total number of elements in the cache queue, deleting the feature map at the historical moment stored in the first element, sequentially shifting the feature maps at historical moments in the remaining elements toward the first element to fill the vacancy, and then storing the feature map at the current moment into the last element of the cache queue;
if the number of non-idle-state elements in the cache queue has not reached the total number of elements in the cache queue, storing the feature map at the current moment into the first idle-state element; wherein an idle-state element is an element in which no feature map at a historical moment is stored.
4. The method for detecting target information according to claim 3, wherein storing the feature map at the current moment into the first idle-state element comprises:
inserting the feature map at the current moment from the last element of the cache queue and shifting it toward the first element until it reaches the first idle-state element.
5. The method for detecting target information according to claim 1, wherein aligning the feature map at the historical moment with the feature map at the current moment to obtain the multi-time frame feature map comprises:
acquiring current pose information of a driving device based on the perception detection model, and acquiring historical pose information of the driving device from the cache queue;
determining a pose transformation relation of the driving device according to the historical pose information and the current pose information;
and aligning the feature map at the historical moment with the feature map at the current moment based on the pose transformation relation to obtain the multi-time frame feature map.
6. The method according to claim 5, wherein aligning the feature map at the historical moment with the feature map at the current moment based on the pose transformation relation to obtain the multi-time frame feature map comprises:
determining a mapping relation between the feature map at the current moment and the feature map at the historical moment based on the pose transformation relation;
and mapping the feature map at the historical moment to the feature map at the current moment based on the mapping relation to complete the alignment, so as to obtain the multi-time frame feature map.
7. The method according to claim 6, wherein determining the mapping relation between the feature map at the current moment and the feature map at the historical moment based on the pose transformation relation comprises:
performing grid division on the feature map at the current moment to obtain a grid feature map at the current moment;
and determining the mapping relation between the feature map at the current moment and the feature map at the historical moment based on the grid feature map at the current moment and the pose transformation relation.
8. A detection apparatus for target information, comprising a processor and a storage device, wherein the storage device is adapted to store a plurality of program codes, and the program codes are adapted to be loaded and executed by the processor to perform the method for detecting target information according to any one of claims 1 to 7.
9. A driving device, comprising the detection apparatus for target information according to claim 8.
10. A computer-readable storage medium storing a plurality of program codes, wherein the program codes are adapted to be loaded and run by a processor to perform the method for detecting target information according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310222986.6A CN115965944B (en) | 2023-03-09 | 2023-03-09 | Target information detection method, device, driving device and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115965944A CN115965944A (en) | 2023-04-14 |
CN115965944B true CN115965944B (en) | 2023-05-09 |
Family
ID=85888659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310222986.6A Active CN115965944B (en) | 2023-03-09 | 2023-03-09 | Target information detection method, device, driving device and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115965944B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179311A (en) * | 2019-12-23 | 2020-05-19 | 全球能源互联网研究院有限公司 | Multi-target tracking method and device and electronic equipment |
CN111797751A (en) * | 2020-06-29 | 2020-10-20 | 中国第一汽车股份有限公司 | Pedestrian trajectory prediction method, device, equipment and medium |
CN112016469A (en) * | 2020-08-28 | 2020-12-01 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and readable storage medium |
CN113743607A (en) * | 2021-09-15 | 2021-12-03 | 京东科技信息技术有限公司 | Training method of anomaly detection model, anomaly detection method and device |
CN114494314A (en) * | 2021-12-27 | 2022-05-13 | 南京大学 | Timing boundary detection method and timing sensor |
CN114723779A (en) * | 2021-01-06 | 2022-07-08 | 广州汽车集团股份有限公司 | Vehicle positioning method and device and computer readable storage medium |
CN114998433A (en) * | 2022-05-31 | 2022-09-02 | Oppo广东移动通信有限公司 | Pose calculation method and device, storage medium and electronic equipment |
CN115565154A (en) * | 2022-09-19 | 2023-01-03 | 九识(苏州)智能科技有限公司 | Feasible region prediction method, device, system and storage medium |
CN115588175A (en) * | 2022-10-21 | 2023-01-10 | 北京易航远智科技有限公司 | Aerial view characteristic generation method based on vehicle-mounted all-around image |
CN115597591A (en) * | 2022-09-15 | 2023-01-13 | 山东新一代信息产业技术研究院有限公司 | Robot repositioning method and system based on multi-line laser radar |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110517293A (en) * | 2019-08-29 | 2019-11-29 | 京东方科技集团股份有限公司 | Method for tracking target, device, system and computer readable storage medium |
CN111091591B (en) * | 2019-12-23 | 2023-09-26 | 阿波罗智联(北京)科技有限公司 | Collision detection method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||