CN115641567B - Target object detection method and device for vehicle, vehicle and medium - Google Patents


Info

Publication number
CN115641567B
CN115641567B (Application CN202211661642.7A)
Authority
CN
China
Prior art keywords
target
candidate
laser signal
candidate object
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211661642.7A
Other languages
Chinese (zh)
Other versions
CN115641567A (en)
Inventor
张琼 (Zhang Qiong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202211661642.7A
Publication of CN115641567A
Application granted
Publication of CN115641567B
Legal status: Active (current)
Anticipated expiration

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The present disclosure proposes a target object detection method, apparatus, vehicle, and storage medium for a vehicle. The method includes: acquiring a target laser signal in a scene and a scene image of the scene, where the target laser signal is a laser signal obtained by at least one candidate object in the scene scattering a laser pulse, and the scene image includes a local image of the at least one candidate object; identifying a target object from the at least one candidate object according to the target laser signal and the scene image; and detecting information of the target object according to the target laser signal and the scene image.

Description

Target object detection method and device for vehicle, vehicle and medium
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a target object detection method and apparatus for a vehicle, a vehicle, and a medium.
Background
In the technical field of automatic driving, detecting target objects (such as other vehicles, pedestrians, and obstacles) in a vehicle driving scene plays an important role in obstacle avoidance, road condition identification, route planning, and other tasks, and is an important prerequisite for ensuring safe driving of the vehicle.
In the related art, a target object in a driving scene of a vehicle is generally detected based on a laser signal acquired by a laser signal acquisition device.
In this approach, the acquisition range of the laser signal acquisition device is small, so distant target objects in the vehicle driving scene cannot be detected, and the applicability of the target object detection method in the vehicle driving scene is therefore poor.
Disclosure of Invention
The present disclosure is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, the present disclosure provides a target object detection method and apparatus for a vehicle, a vehicle, and a storage medium. By combining a target laser signal with a scene image, the detection capability for distant target objects is effectively improved, so that information of distant target objects in a vehicle driving scene can be detected more accurately, and the applicability of the target object detection method in the vehicle driving scene is effectively improved.
The method for detecting the target object for the vehicle provided by the embodiment of the first aspect of the disclosure comprises the following steps: acquiring a target laser signal in a scene and a scene image of the scene, wherein the target laser signal is a laser signal obtained by scattering a laser pulse by at least one candidate object in the scene, and the scene image comprises: a local image of at least one candidate object; identifying and obtaining a target object from at least one candidate object according to the target laser signal and the scene image; and detecting information of the target object according to the target laser signal and the scene image.
The target object detection method for a vehicle provided by the embodiment of the present disclosure acquires a target laser signal in a scene and a scene image of the scene, where the target laser signal is a laser signal obtained by at least one candidate object in the scene scattering a laser pulse, and the scene image includes a local image of the at least one candidate object; identifies a target object from the at least one candidate object according to the target laser signal and the scene image; and detects information of the target object according to the target laser signal and the scene image. By combining the target laser signal and the scene image, the detection capability for distant target objects is effectively improved, so that information of distant target objects in the vehicle driving scene can be detected more accurately, and the applicability of the target object detection method in the vehicle driving scene is effectively improved.
A second aspect of the present disclosure provides a target object detection apparatus for a vehicle, including: the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a target laser signal in a scene and a scene image of the scene, the target laser signal is a laser signal obtained by scattering a laser pulse by at least one candidate object in the scene, and the scene image comprises: a local image of at least one candidate object; the identification module is used for identifying and obtaining a target object from at least one candidate object according to the target laser signal and the scene image; and the detection module is used for detecting the information of the target object according to the target laser signal and the scene image.
The target object detection apparatus for a vehicle according to the embodiment of the second aspect of the present disclosure acquires a target laser signal in a scene and a scene image of the scene, where the target laser signal is a laser signal obtained by at least one candidate object in the scene scattering a laser pulse, and the scene image includes a local image of the at least one candidate object; identifies a target object from the at least one candidate object according to the target laser signal and the scene image; and detects information of the target object according to the target laser signal and the scene image.
A third aspect of the present disclosure provides a vehicle, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the program to implement the target object detection method for a vehicle as set forth in the embodiment of the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a target object detection method for a vehicle as set forth in the first aspect of the present disclosure.
A fifth aspect of the present disclosure provides a computer program product, wherein when instructions in the computer program product are executed by a processor, the method for detecting a target object of a vehicle as set forth in the first aspect of the present disclosure is performed.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a target object detection method for a vehicle according to an embodiment of the disclosure;
fig. 2 is a schematic flowchart of a target object detection method for a vehicle according to another embodiment of the disclosure;
fig. 3 is a schematic flowchart of a target object detection method for a vehicle according to another embodiment of the disclosure;
fig. 4 is a schematic structural diagram of a target object detection apparatus for a vehicle according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a target object detection apparatus for a vehicle according to another embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of an exemplary vehicle suitable for use to implement embodiments of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of illustrating the present disclosure and should not be construed as limiting the same. Rather, the embodiments of the disclosure include all changes, modifications and equivalents coming within the spirit and terms of the claims appended thereto.
Fig. 1 is a schematic flowchart of a target object detection method for a vehicle according to an embodiment of the present disclosure.
The embodiments of the present disclosure are described by taking as an example the case where the target object detection method for a vehicle is configured in a target object detection apparatus for a vehicle.
The target object detection method for a vehicle in the embodiments of the present disclosure may be configured in a target object detection apparatus for a vehicle, which may be provided in a server, or may also be provided in a vehicle, and the embodiments of the present disclosure do not limit this.
It should be noted that the execution subject of the embodiments of the present disclosure may be, in terms of hardware, for example, a Central Processing Unit (CPU) in a server or an electronic device, and, in terms of software, for example, a server or a related background service in the vehicle, which is not limited.
As shown in fig. 1, the target object detection method for a vehicle includes:
s101: acquiring a target laser signal in a scene and a scene image of the scene, wherein the target laser signal is a laser signal obtained by scattering a laser pulse by at least one candidate object in the scene, and the scene image comprises: a partial image of at least one candidate object.
The target object detection method for the vehicle described in the embodiments of the present disclosure may be used to detect a target object in a driving scene where the vehicle is located.
The target laser signal in the scene and the scene image of the scene may be acquired for candidate objects in the driving scene of the vehicle during the driving process of the vehicle, which is not limited.
The candidate object may be other vehicles, pedestrians, obstacles, signs, and the like in the driving scene of the vehicle, which is not limited to this.
The target laser signal may be laser point cloud data acquired by a laser radar (point cloud camera) for candidate objects in the driving scene of the vehicle during driving; correspondingly, the scene image may be an image acquired for candidate objects in the driving scene by a camera component such as a camera, which is not limited.
That is, in the embodiment of the present disclosure, the collecting the target laser signal in the scene may be based on a laser signal collecting device (for example, a laser radar, which is not limited to this) configured in advance in the vehicle, and the laser signal obtained by scattering the laser pulse by at least one candidate object in the scene is collected.
In the embodiment of the present disclosure, capturing a scene image in a scene may be based on a camera device (for example, a camera, which is not limited thereto) configured in advance in a vehicle, capturing a candidate object in a driving scene of the vehicle, and taking the captured image as the scene image, which is not limited thereto.
In the embodiment of the present disclosure, the scene image includes a local image of the at least one candidate object; that is, the scene image acquired in the driving scene of the vehicle includes at least one local image corresponding to a candidate object, which is not limited.
In the embodiment of the present disclosure, the number of target laser signals is plural, and the number of frames of the scene image is plural.
Alternatively, in some embodiments, when acquiring the target laser signals in the scene and the scene images of the scene, each target laser signal is acquired at the same time as a corresponding frame of the scene image, so that each target laser signal has a corresponding scene image frame.
That is to say, in the embodiment of the present disclosure, acquiring the target laser signal in the scene and the scene image of the scene may be performed during the driving of the vehicle by using a laser radar and a camera device configured in the vehicle to synchronously acquire, for candidate objects in the driving scene, the corresponding target laser signal and scene image. In other words, a corresponding frame of the scene image is acquired at the same time as each target laser signal, which ensures that for any target laser signal there is a scene image with the same time information. In the subsequent execution of the target object detection method for the vehicle, this supports processing a scene image and a target laser signal that share the same acquisition time.
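The synchronous acquisition described above can be sketched as pairing each laser frame with the image frame closest to it in time; the frame structures, field names, and skew tolerance below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class LaserFrame:
    timestamp: float  # acquisition time in seconds
    points: list      # laser point cloud (placeholder)

@dataclass
class ImageFrame:
    timestamp: float
    pixels: object    # image data (placeholder)

def pair_frames(laser_frames, image_frames, max_skew=0.01):
    """Pair each target laser signal with the scene image frame acquired
    at (nearly) the same time; laser frames with no image frame within
    max_skew seconds are dropped."""
    pairs = []
    for lf in laser_frames:
        best = min(image_frames, key=lambda im: abs(im.timestamp - lf.timestamp))
        if abs(best.timestamp - lf.timestamp) <= max_skew:
            pairs.append((lf, best))
    return pairs
```

A downstream fusion step would then process each `(laser, image)` pair that shares the same acquisition time.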
S102: and identifying the target object from at least one candidate object according to the target laser signal and the scene image.
After the target laser signal in the scene and the scene image of the scene are collected, the target object can be identified and obtained from at least one candidate object according to the target laser signal and the scene image.
The target object may be a candidate object determined from the candidate objects to be processed by the target object detection method for a vehicle; for example, the target object may be another vehicle in the driving scene of the vehicle, which is not limited herein.
That is, the target detection method for a vehicle in the embodiment of the present disclosure may detect a target object determined from candidate objects, which is not limited to this.
It can be understood that, in the embodiment of the present disclosure, since the acquisition ranges of the laser signal acquisition device and the camera device are different, that is, the acquisition range of the camera device is generally larger than the acquisition range of the laser acquisition device, the candidate object described by the target laser signal may be different from the candidate object described in the scene image.
Therefore, identifying the target object from the at least one candidate object according to the target laser signal and the scene image may be determining, as the target object, a candidate object that appears in both the target laser signal and the scene image, which is not limited in this respect.
S103: and detecting the information of the target object according to the target laser signal and the scene image.
The embodiment of the disclosure may detect information of a target object according to a target laser signal and a scene image after the target object is identified from at least one candidate object according to the target laser signal and the scene image.
The information of the target object refers to information related to the target object, and may specifically be speed information of the target object, trajectory information of the target object, feature information of the target object, and the like, which is not limited herein.
In some embodiments, detecting the information of the target object according to the target laser signal and the scene image may be performing distance measurement and speed measurement, according to the target laser signal and the scene image, for a distant target object that is present in the scene image but has no corresponding target laser signal at the same timestamp, that is, determining the distance and speed of a visual target (a distant target) in a region that the laser acquisition device cannot sense, and taking the determined distance and speed detection results as the information of the target object, which is not limited herein.
In the embodiment of the present disclosure, a target laser signal in a scene and a scene image of the scene are acquired, where the target laser signal is a laser signal obtained by at least one candidate object in the scene scattering a laser pulse, and the scene image includes a local image of the at least one candidate object; a target object is identified from the at least one candidate object according to the target laser signal and the scene image; and information of the target object is detected according to the target laser signal and the scene image. By combining the target laser signal and the scene image, the detection capability for distant target objects is effectively improved, so that information of distant target objects in the vehicle driving scene can be detected more accurately, and the applicability of the target object detection method in the vehicle driving scene is effectively improved.
Fig. 2 is a schematic flowchart of a target object detection method for a vehicle according to another embodiment of the present disclosure.
As shown in fig. 2, the target object detection method for a vehicle includes:
s201: target laser signals in a scene and a scene image of the scene are acquired.
For the description of S201, reference may be made to the above embodiments, which are not described herein again.
S202: and detecting to obtain a first candidate object according to at least one target laser signal, wherein the first candidate object has a corresponding detection position, and the detection position is the position of the first candidate object in the scene, which is detected based on the target laser signal.
After acquiring the target laser signals in the scene, the embodiments of the present disclosure may detect a corresponding candidate object from the target laser signals according to at least one target laser signal, where the candidate object may be referred to as a first candidate object.
The position of the candidate object in the vehicle driving scene may be referred to as a detection position, and the detection position may be a pose of the candidate object in the vehicle driving scene, a spatial coordinate of the candidate object in the vehicle driving scene, or the like, which is not limited in this respect.
In the embodiment of the present disclosure, the first candidate object is obtained by detecting according to at least one target laser signal, which may be inputting the target laser signal into a pre-trained laser target detection model, and performing target detection on the target laser signal by using the pre-trained laser target detection model, so as to obtain the first candidate object by identifying from the target laser signal, which is not limited herein.
S203: at least one second candidate object is identified from a scene image acquired corresponding to the target laser signal, wherein the second candidate object has a corresponding identification position, the identification position is the position of the second candidate object in the scene identified based on the scene image, and a first set condition is met between the identification position and the detection position.
The scene image acquired corresponding to the target laser signal is the scene image acquired at the same time as the acquisition time of the target laser signal, and is not limited thereto.
The second candidate object has a corresponding recognition position, the recognition position is the position of the second candidate object in the vehicle driving scene obtained through scene image recognition, a first setting condition is satisfied between the recognition position and the detection position, and the first setting condition may be that the distance between the recognition position and the detection position is smaller than a distance threshold, which is not limited.
In the embodiment of the present disclosure, at least one second candidate object is identified from the scene image acquired corresponding to the target laser signal, where the scene image may be input into a pre-trained target detection model (for example, a YOLOV5 model, which is not limited thereto), and the pre-trained target detection model performs target detection on the scene image, so as to identify the second candidate object from the scene image, which is not limited thereto.
S204: a target object is determined from at least one second candidate object based on the first candidate object.
After detecting the first candidate object according to the at least one target laser signal and identifying the at least one second candidate object from the scene image, the embodiment of the present disclosure may determine the target object from the at least one second candidate object according to the first candidate object.
It can be understood that, in the embodiment of the present disclosure, since the acquisition ranges of the laser signal acquisition device and the camera device are different, that is, the acquisition range of the camera device is generally larger than the acquisition range of the laser signal acquisition device, the number of the second candidate objects detected in the scene image may be greater than the number of the first candidate objects detected by the target laser signal.
Therefore, determining the target object from the at least one second candidate object according to the first candidate object may be performed by matching the first candidate object against the second candidate objects and, when a second candidate object has no matching candidate object among the first candidate objects, taking that second candidate object as the target object, which is not limited.
Alternatively, in some embodiments, the target object may be determined from the at least one second candidate object based on the first candidate object by determining similarity information between the first candidate object and each second candidate object, determining, from the at least one piece of similarity information, the similarity information satisfying a second set condition, and then taking the second candidate object to which that similarity information belongs as the target object. In this way, the target object can be accurately determined from the second candidate objects based on the second set condition.
The similarity information between the first candidate object and each second candidate object may be a cosine similarity between the first candidate object and each second candidate object, or an euclidean distance between the first candidate object and each second candidate object, which is not limited herein.
The similarity information satisfying the second setting condition may be that the similarity value between the first candidate object and each second candidate object is greater than or equal to a similarity threshold, which is not limited herein.
That is to say, in the embodiment of the present disclosure, the similarity value between the first candidate object and each second candidate object may be determined, the similarity value may be compared with a predetermined similarity threshold, when the similarity value is greater than or equal to the similarity threshold, it is determined that the similarity information satisfies the second set condition, and then the second candidate object to which the similarity information satisfying the second set condition belongs may be taken as the target object.
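As a minimal sketch of the similarity-based matching above, assuming each candidate object is represented by a feature vector (an illustrative assumption; the disclosure does not fix the feature representation), cosine similarity compared against a threshold selects the matching second candidate:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors, one of the
    similarity measures mentioned in the disclosure."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_target(first_feature, second_features, threshold=0.9):
    """Return the indices of second candidates whose similarity to the
    first candidate satisfies the second set condition (>= threshold)."""
    return [i for i, f in enumerate(second_features)
            if cosine_similarity(first_feature, f) >= threshold]
```

The threshold value 0.9 is a placeholder; the disclosure only requires the similarity value to meet or exceed a predetermined similarity threshold.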
S205: and forming candidate track information of the first candidate object according to at least one detection position corresponding to at least one first candidate object obtained by detecting the target laser signal.
The moving track of the first candidate object in the driving scene of the vehicle may be referred to as candidate track information.
In some embodiments, forming the candidate track information of the first candidate object according to the at least one detection position corresponding to the at least one first candidate object detected from the target laser signal may be performed by obtaining the detection positions corresponding to the first candidate object at different times, connecting the multiple detection positions in sequence according to the time information corresponding to each detection position, and taking the connecting line of the multiple detection positions as the candidate track information, which is not limited in this regard.
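The track-forming step above can be sketched as ordering the detection positions by their time information; the tuple layout is an illustrative assumption:

```python
def form_candidate_track(detections):
    """detections: list of (timestamp, (x, y)) detection positions of the
    same first candidate object at different times. Connecting the
    positions in time order yields the candidate track information."""
    ordered = sorted(detections, key=lambda d: d[0])
    return [pos for _, pos in ordered]
```

The returned position sequence corresponds to the "connecting line of the multiple detection positions" described in the disclosure.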
S206: and predicting to obtain target track information of the target object according to the scene image and the candidate track information.
After determining the candidate trajectory information of the first candidate object, the embodiment of the present disclosure may determine the moving trajectory of the target object according to the scene image and the candidate trajectory information, and use the determined moving trajectory of the target object as the target trajectory information.
It can be understood that, due to the characteristics of the laser acquisition device and the camera device, when the same target object is tracked based on both the target laser signal acquired by the laser acquisition device and the scene image acquired by the camera device, the scene image can track the target object to a longer distance than the target laser signal can.
Therefore, in some embodiments, predicting the target track information of the target object according to the scene image and the candidate track information may be predicting the track of a target object that is visible in the scene image but invisible in the target laser signal; that is, the target track information is obtained by predicting a plurality of track positions of a target object that is visible in the scene image but invisible in the corresponding target laser signal. The subsequent target object detection method for the vehicle may then be triggered and executed based on the target track information, as described in the subsequent embodiments.
S207: and detecting the information of the target object according to the target track information.
In an embodiment of the present disclosure, the target track information includes a plurality of timestamps and a target track position point corresponding to each timestamp; that is, the target track is formed by connecting the target track position points corresponding to the respective timestamps.
Alternatively, in some embodiments, detecting the information of the target object according to the target track information may be performed as follows: a distance value between the target object and the vehicle is calculated according to the target track position point corresponding to each timestamp; the time interval between different timestamps is determined; the distance difference between the distance values corresponding to the different timestamps is determined; the speed value of the target object is calculated according to the distance difference and the time interval; and the distance value, the distance difference, and the speed value are taken as the information of the target object. In this way, the speed value, distance value, and distance difference of a target object that is invisible in the target laser signal can be accurately determined, which effectively improves the comprehensiveness of the information of the target object and thus provides a more comprehensive information reference for the automatic driving process of the vehicle.
The distance value between the target object and the vehicle may be used to quantitatively describe a distance between the target object and the vehicle in a driving scene of the vehicle, and the speed value may be used to describe a moving speed of the target object, which is not limited herein.
That is to say, in the embodiment of the present disclosure, a corresponding scene model may be pre-constructed for the vehicle driving scene. The spatial distance between the vehicle and the target track position point of the target object at each timestamp is determined as the distance value between the target object and the vehicle at that timestamp; the interval duration between different timestamps and the distance difference between the corresponding distance values are determined; the ratio of the distance difference to the interval duration is taken as the speed value of the target object; and the determined distance value, distance difference, and speed value are taken as the information of the target object.
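The distance-and-speed calculation above can be sketched as follows, assuming the vehicle sits at the origin of the scene model and track points are 2D coordinates (both illustrative assumptions):

```python
import math

def distance_to_vehicle(point, vehicle=(0.0, 0.0)):
    """Distance value between a target track position point and the vehicle."""
    return math.hypot(point[0] - vehicle[0], point[1] - vehicle[1])

def estimate_speed(track):
    """track: list of (timestamp, position) target track points in time order.
    Returns (speed value, distance difference) computed from the first and
    last points: the distance difference divided by the interval duration."""
    (t1, p1), (t2, p2) = track[0], track[-1]
    d1, d2 = distance_to_vehicle(p1), distance_to_vehicle(p2)
    diff = d2 - d1
    return diff / (t2 - t1), diff
```

For example, a target object 30 m ahead that is 40 m ahead two seconds later has a distance difference of 10 m and a speed value of 5 m/s relative to the vehicle.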
In the embodiment of the present disclosure, a target laser signal in a scene and a scene image of the scene are acquired; a first candidate object is detected according to at least one target laser signal, where the first candidate object has a corresponding detection position, and the detection position is the position of the first candidate object in the scene detected based on the target laser signal; at least one second candidate object is identified from the scene image acquired corresponding to the target laser signal; the target object is determined from the at least one second candidate object according to the first candidate object; candidate track information of the first candidate object is then formed according to the at least one detection position corresponding to the at least one first candidate object; and the information of the target object is detected according to the target track information. This effectively improves the comprehensiveness of the information of the target object and thus provides a more comprehensive information reference for the automatic driving process of the vehicle.
Fig. 3 is a flowchart illustrating a target object detection method for a vehicle according to another embodiment of the present disclosure.
As shown in fig. 3, the target object detection method for a vehicle includes:
s301: target laser signals in a scene and a scene image of the scene are acquired.
S302: and identifying the target object from at least one candidate object according to the target laser signal and the scene image.
S303: and detecting at least one first candidate object according to each target laser signal, wherein the first candidate object has a corresponding detection position, and the detection position is the position of the corresponding first candidate object in the scene, which is detected based on the target laser signal.
S304: and forming candidate track information of the first candidate object according to the at least one detection position.
For the description of S301 to S304, reference may be made to the above embodiments, and details are not repeated herein.
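As a minimal sketch of steps S303 and S304 above, the following Python accumulates the detection positions produced for each target laser signal frame into per-candidate track information. The `detect_fn` callable and the `(candidate_id, position)` detection format are hypothetical stand-ins for the actual laser-signal detector, which the disclosure does not specify:

```python
def build_candidate_tracks(laser_frames, detect_fn):
    """Form candidate track information by accumulating, per candidate id,
    the detection positions found in each target laser signal frame.

    laser_frames: iterable of (timestamp, frame) pairs.
    detect_fn: hypothetical detector mapping one laser frame to a list of
    (candidate_id, position) detections.
    """
    tracks = {}  # candidate_id -> list of (timestamp, detection position)
    for timestamp, frame in laser_frames:
        for cid, position in detect_fn(frame):
            tracks.setdefault(cid, []).append((timestamp, position))
    return tracks
```

Each value in the returned mapping is one first candidate object's candidate track information, i.e. its sequence of detection positions over time.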
S305: and determining a second candidate object which is the same as the first candidate object, wherein the second candidate object is identified from the scene image acquired corresponding to the target laser signal.
That is, the embodiments of the present disclosure may determine a second candidate object that is the same as the first candidate object after detecting the first candidate object from the target laser signal and detecting the second candidate object from the scene image.
S306: and determining the relative position relation between the second candidate object and the target object according to the correspondingly acquired scene image.
Since the target object is visible in the scene image but not in the target laser signal, the relative position relationship may be used to describe the relative position between the second candidate object and the target object. The relative position relationship may be, for example, the spatial distance between the second candidate object and the target object in the same spatial dimension, which is not limited herein.
S307: and predicting to obtain the target track information of the target object according to the candidate track information and the relative position relation of the first candidate object.
According to the embodiment of the disclosure, after the relative position relationship between the second candidate object and the target object is determined according to the correspondingly acquired scene image, the target track information of the target object can be obtained through prediction according to the candidate track information and the relative position relationship of the first candidate object.
Optionally, in some embodiments, predicting the target track information of the target object according to the candidate track information and the relative position relationship of the first candidate object may include: forming trajectory map data from at least one piece of candidate track information, where the trajectory map data includes the candidate track position point of the first candidate object at each timestamp; determining the target track position point of the target object at each timestamp according to the candidate track position points and the relative position relationship; and generating the target track information from the plurality of timestamps and the target track position point corresponding to each timestamp.
Wherein the time stamp is used for describing the acquisition time of the scene image.
Wherein the trajectory map data includes: the first candidate object is at the candidate trajectory location point of each timestamp.
That is, in the embodiment of the present disclosure, the candidate track position point of the first candidate object in the target laser signal at each timestamp may be determined from the candidate track information of the first candidate object, and the line connecting these candidate track position points may serve as the trajectory map data. Then, according to the relative position relationship and the candidate track position points, the position, at each corresponding timestamp, of the target object that is invisible in the target laser signal but visible in the scene image may be determined as the target track position point, and the line connecting the target track position points at the respective timestamps may serve as the target track information.
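The derivation of the target track from the candidate track can be sketched as below. Modeling the relative position relationship as a constant planar offset `(dx, dy)` measured from the scene images is an assumption made only for this sketch; the disclosure leaves the exact form of the relationship open:

```python
def predict_target_track(candidate_track, relative_offset):
    """Derive the target object's track position points from the first
    candidate object's candidate track position points plus the relative
    position relationship (here assumed to be a constant (dx, dy) offset).

    candidate_track: list of (timestamp, (x, y)) candidate track points.
    Returns the target track points at the same timestamps.
    """
    dx, dy = relative_offset
    # Shift each candidate track point by the observed relative offset to
    # obtain the laser-invisible target's position at that timestamp.
    return [(t, (x + dx, y + dy)) for t, (x, y) in candidate_track]
```

Connecting the returned points in timestamp order yields the target track information described above.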
S308: and detecting the information of the target object according to the target track information.
For the description of S308, reference may be made to the above embodiments, which are not described herein again.
In the disclosed embodiment, a target laser signal in a scene and a scene image of the scene are acquired, and the target object is identified from at least one candidate object according to the target laser signal and the scene image. At least one first candidate object is detected from each target laser signal, the first candidate object having a corresponding detection position, namely the position of the corresponding first candidate object in the scene detected based on the target laser signal, and candidate track information of the first candidate object is formed from the at least one detection position. A second candidate object identical to the first candidate object is then determined, the second candidate object being identified from the scene image acquired in correspondence with the target laser signal, and the relative position relationship between the second candidate object and the target object is determined from the correspondingly acquired scene image. The target track information of the target object is then predicted from the candidate track information and the relative position relationship of the first candidate object, and the information of the target object is detected according to the target track information. By combining the target laser signal and the scene image in this way, the detection capability for the target object is effectively improved, so that the information of the target object in the vehicle driving scene can be detected more accurately.
Fig. 4 is a schematic structural diagram of a target object detection apparatus for a vehicle according to an embodiment of the present disclosure.
As shown in fig. 4, in some embodiments, a target object detection apparatus 40 for a vehicle of an example of the present disclosure includes:
an acquiring module 401, configured to acquire a target laser signal in a scene and a scene image of the scene, where the target laser signal is a laser signal obtained by scattering a laser pulse by at least one candidate object in the scene, and the scene image includes: a local image of at least one candidate object;
an identifying module 402, configured to identify a target object from at least one candidate object according to a target laser signal and a scene image; and
and a detection module 403, configured to detect information of the target object according to the target laser signal and the scene image.
In some embodiments of the present disclosure, as shown in fig. 5, which is a schematic structural diagram of a target object detection apparatus for a vehicle according to an embodiment of the present disclosure, the acquisition module 401 is further configured to:
collecting each target laser signal;
and collecting corresponding frame scene images while collecting each target laser signal.
In some embodiments of the present disclosure, the identifying module 402 includes:
the first detection sub-module 4021 is configured to detect and obtain a first candidate object according to at least one target laser signal, where the first candidate object has a corresponding detection position, and the detection position is a position of the first candidate object in a scene detected based on the target laser signal;
the recognition sub-module 4022 is configured to recognize at least one second candidate object from a scene image acquired corresponding to the target laser signal, where the second candidate object has a corresponding recognition position, the recognition position is a position of the second candidate object in the scene recognized based on the scene image, and a first setting condition is satisfied between the recognition position and the detection position; and
the determining sub-module 4023 is configured to determine a target object from the at least one second candidate object according to the first candidate object.
In some embodiments of the present disclosure, the determining sub-module 4023 is further configured to:
determining similarity information between the first candidate object and each second candidate object;
determining similarity information satisfying a second set condition from among the at least one piece of similarity information;
and taking the second candidate object to which the similarity information meeting the second set condition belongs as the target object.
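The matching performed by the determining sub-module can be sketched as follows. Cosine similarity over feature vectors and a fixed threshold are hypothetical stand-ins for the similarity information and the "second set condition", neither of which the disclosure pins down:

```python
import math

def pick_target(first_obj_feature, second_candidates, threshold=0.8):
    """Select the target object from the second candidate objects: score
    each second candidate against the first candidate object's feature
    and keep the best match satisfying the set condition.

    second_candidates: list of (object_id, feature_vector) pairs.
    The feature format and the cosine/threshold choice are assumptions.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    scored = [(cosine(first_obj_feature, feat), obj)
              for obj, feat in second_candidates]
    best_score, best_obj = max(scored, key=lambda s: s[0])
    # Second set condition (assumed): similarity must exceed a threshold.
    return best_obj if best_score >= threshold else None
```

Returning `None` when no candidate satisfies the condition is one possible design choice; the disclosure only requires that the selected second candidate object's similarity information satisfy the second set condition.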
In some embodiments of the present disclosure, the detection module 403 includes:
a forming sub-module 4031, configured to form candidate trajectory information of the first candidate object according to at least one detection position corresponding to at least one first candidate object obtained by detecting the target laser signal;
the prediction sub-module 4032 is used for predicting target track information of a target object according to the scene image and the candidate track information;
and the second detection submodule 4033 is configured to detect information of the target object according to the target track information.
In some embodiments of the present disclosure, the prediction sub-module 4032 is further configured to:
determining a second candidate object which is the same as the first candidate object, wherein the second candidate object is identified from a scene image acquired corresponding to the target laser signal;
determining the relative position relation between the second candidate object and the target object according to the correspondingly acquired scene image;
and predicting to obtain the target track information of the target object according to the candidate track information and the relative position relation of the first candidate object.
In some embodiments of the present disclosure, the prediction sub-module 4032 is further configured to:
forming track map data according to at least one candidate track information, wherein the track map data comprises: a candidate trajectory location point of the first candidate object at each timestamp;
determining a target track position point of the target object at each time stamp according to the relative position relation and the candidate track position points;
and generating target track information according to the plurality of time stamps and the target track position point corresponding to each time stamp.
In some embodiments of the present disclosure, the target trajectory information includes: a plurality of timestamps, and a target trajectory location point corresponding to each timestamp;
the second detection submodule 4033 is further configured to:
measuring and calculating a distance value between a target object and a vehicle according to the target track position point corresponding to each different timestamp;
determining interval duration between different timestamps, and determining a distance difference value between distance values corresponding to each different timestamp;
measuring and calculating the speed value of the target object according to the distance difference and the interval duration;
the distance value, the distance difference value, and the velocity value are taken as information of the target object.
It should be noted that the foregoing explanation of the embodiment of the target object detection method for a vehicle also applies to the target object detection apparatus for a vehicle of this embodiment, and details are not repeated here.
In this embodiment, a target laser signal in a scene and a scene image of the scene are collected, where the target laser signal is a laser signal obtained by scattering of a laser pulse by at least one candidate object in the scene, and the scene image includes a local image of the at least one candidate object. The target object is identified from the at least one candidate object according to the target laser signal and the scene image, and the information of the target object is detected according to the target laser signal and the scene image. By combining the target laser signal and the scene image, the capability of detecting the target object at a long distance is effectively improved, so that the information of a distant target object in the vehicle driving scene can be detected more accurately, which effectively improves the applicability of the target object detection method in the vehicle driving scene.
To achieve some of the above embodiments, the present disclosure also proposes a vehicle, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the target object detection method for a vehicle as proposed in the foregoing embodiments of the present disclosure.
In order to achieve some of the embodiments described above, the present disclosure also proposes a non-transitory computer-readable storage medium on which a computer program is stored, which when executed by a processor implements a target object detection method for a vehicle as proposed in the previous embodiments of the present disclosure.
In order to achieve some of the embodiments described above, the present disclosure also proposes a computer program product, wherein when instructions in the computer program product are executed by a processor, the target object detection method for a vehicle as proposed in the previous embodiments of the present disclosure is performed.
FIG. 6 illustrates a block diagram of an exemplary vehicle suitable for use to implement embodiments of the present disclosure. The vehicle 12 shown in fig. 6 is only one example and should not impose any limitations on the functionality or scope of use of the disclosed embodiments.
As shown in FIG. 6, the vehicle 12 is embodied in the form of a general purpose computing device. The components of the vehicle 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. These architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, to name a few.
The vehicle 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by the vehicle 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The vehicle 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive").
Although not shown in FIG. 6, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a compact disc read-only memory (CD-ROM), a digital versatile disc read-only memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
The vehicle 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the vehicle 12, and/or with any devices (e.g., network card, modem, etc.) that enable the vehicle 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the vehicle 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the vehicle 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the vehicle 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, implementing the target object detection method for a vehicle mentioned in the foregoing embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It should be noted that, in the description of the present disclosure, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following technologies, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present disclosure, and that changes, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present disclosure.

Claims (10)

1. A target object detection method for a vehicle, the method comprising:
acquiring a target laser signal in a scene and a scene image of the scene, wherein the number of the target laser signals is multiple, the number of frames of the scene image is multiple, the target laser signal is a laser signal obtained by scattering of a laser pulse by at least one candidate object in the scene, and the scene image comprises: a local image of the at least one candidate object;
detecting a first candidate object according to at least one target laser signal, wherein the first candidate object has a corresponding detection position, and the detection position is the position of the first candidate object in the scene, which is detected based on the target laser signal;
identifying at least one second candidate object from the scene image acquired corresponding to the target laser signal, wherein the second candidate object has a corresponding identification position, the identification position is the position of the second candidate object in the scene identified based on the scene image, and a first set condition is met between the identification position and the detection position; and
determining the target object from the at least one second candidate object according to the first candidate object;
forming candidate track information of at least one first candidate object according to at least one detection position corresponding to the at least one first candidate object obtained by the target laser signal detection;
predicting to obtain target track information of the target object according to the scene image and the candidate track information;
detecting the information of the target object according to the target track information;
the predicting target track information of the target object according to the scene image and the candidate track information includes: determining a second candidate object which is the same as the first candidate object, wherein the second candidate object is identified from the scene image which is acquired corresponding to the target laser signal; determining a relative position relation between the second candidate object and the target object according to the correspondingly acquired scene image;
forming trajectory map data according to at least one piece of candidate trajectory information, wherein the trajectory map data comprises: a candidate trajectory location point of the first candidate object at each timestamp;
determining a target track position point of the target object at each timestamp according to the relative position relation and the candidate track position points; and generating the target track information according to the plurality of timestamps and the target track position point corresponding to each timestamp.
2. The method of claim 1, wherein said acquiring a target laser signal in a scene and a scene image of the scene, comprises:
collecting each target laser signal;
and acquiring a scene image corresponding to a corresponding frame while acquiring each target laser signal.
3. The method of claim 1, wherein said determining the target object from the at least one second candidate object based on the first candidate object comprises:
determining similarity information between the first candidate object and each of the second candidate objects;
determining similarity information satisfying a second set condition from among at least one of the similarity information;
and taking the second candidate object to which the similarity information meeting the second set condition belongs as the target object.
4. The method of claim 1, wherein the target track information comprises: a plurality of the timestamps, and a target track location point corresponding to each timestamp;
wherein the detecting information of the target object according to the target track information includes:
measuring and calculating a distance value between the target object and the vehicle according to the target track position point corresponding to each timestamp;
determining interval duration between different timestamps, and determining a distance difference value between distance values corresponding to each timestamp;
measuring and calculating the speed value of the target object according to the distance difference and the interval duration;
and taking the distance value, the distance difference value and the speed value as the information of the target object.
5. A target object detection apparatus for a vehicle, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a target laser signal in a scene and a scene image of the scene, where the number of the target laser signals is multiple, and the number of frames of the scene image is multiple, where the target laser signal is a laser signal obtained by scattering a laser pulse by at least one candidate object in the scene, and the scene image includes: a local image of the at least one candidate object;
the identification module is used for identifying and obtaining a target object from the at least one candidate object according to the target laser signal and the scene image; and
the detection module is used for detecting the information of the target object according to the target laser signal and the scene image;
wherein, the identification module specifically includes:
a first detection sub-module, configured to detect, according to at least one target laser signal, a first candidate object, where the first candidate object has a corresponding detection position, and the detection position is a position of the first candidate object in the scene, where the position is detected based on the target laser signal;
the identification submodule is used for identifying at least one second candidate object from the scene image acquired corresponding to the target laser signal, wherein the second candidate object has a corresponding identification position, the identification position is the position of the second candidate object in the scene identified based on the scene image, and a first set condition is met between the identification position and the detection position; and
a determining sub-module for determining the target object from the at least one second candidate object based on the first candidate object;
the detection module comprises:
the forming submodule is used for forming candidate track information of at least one first candidate object according to at least one detection position corresponding to the at least one first candidate object obtained by the target laser signal detection;
the prediction sub-module is used for predicting and obtaining target track information of the target object according to the scene image and the candidate track information;
the second detection submodule is used for detecting the information of the target object according to the target track information;
wherein the prediction sub-module is further configured to:
determining a second candidate object which is the same as the first candidate object, wherein the second candidate object is identified from the scene image acquired corresponding to the target laser signal;
determining a relative position relationship between the second candidate object and the target object according to the correspondingly acquired scene image;
forming trajectory map data according to at least one piece of candidate trajectory information, wherein the trajectory map data comprises: a candidate trajectory location point of the first candidate object at each timestamp;
determining a target track position point of the target object at each timestamp according to the relative position relation and the candidate track position points;
and generating the target track information according to the plurality of timestamps and the target track position point corresponding to each timestamp.
6. The apparatus of claim 5, wherein the acquisition module is further configured to:
collecting each target laser signal;
and collecting a scene image corresponding to a corresponding frame while collecting each target laser signal.
7. The apparatus of claim 5, wherein the determination submodule is further operable to:
determining similarity information between the first candidate object and each of the second candidate objects;
determining similarity information satisfying a second set condition from among at least one of the similarity information;
and taking the second candidate object to which the similarity information meeting the second set condition belongs as the target object.
8. The apparatus of claim 5, wherein the target track information comprises: a plurality of the timestamps, and a target track location point corresponding to each of the timestamps;
wherein the second detection submodule is further configured to:
calculating a distance value between the target object and the vehicle according to the target track position point corresponding to each timestamp;
determining an interval duration between different timestamps, and determining a distance difference value between the distance values corresponding to the timestamps;
calculating a speed value of the target object according to the distance difference value and the interval duration;
and taking the distance value, the distance difference value and the speed value as the information of the target object.
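The distance/speed computation in claim 8 amounts to range and range-rate estimation over consecutive timestamps. A minimal sketch, assuming the track points are 2D positions in the vehicle frame with the ego vehicle at the origin; `object_kinematics` and its parameters are illustrative names, not from the patent.

```python
import math


def object_kinematics(track, ego_position=(0.0, 0.0)):
    """Compute per-timestamp distance values and per-interval speed values.

    track: {timestamp: (x, y)} target track points in the vehicle frame
    Returns (distances, speeds) where
      distances: {timestamp: distance to ego}
      speeds:    [(timestamp, distance_difference, speed_value), ...]
                 with speed = distance difference / interval duration.
    """
    timestamps = sorted(track)
    # Distance value at each timestamp: Euclidean distance to the vehicle.
    distances = {ts: math.dist(track[ts], ego_position) for ts in timestamps}
    speeds = []
    for t0, t1 in zip(timestamps, timestamps[1:]):
        interval = t1 - t0                      # interval duration
        diff = distances[t1] - distances[t0]    # distance difference value
        speeds.append((t1, diff, diff / interval))
    return distances, speeds


# Example: the object moves radially away from the vehicle.
dists, spds = object_kinematics({0.0: (3.0, 4.0), 1.0: (6.0, 8.0)})
```

Note that dividing the distance difference by the interval duration yields the radial (closing/opening) speed, not the full velocity vector; that matches the quantities the claim enumerates.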
9. A vehicle, characterized by comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the target object detection method for a vehicle according to any one of claims 1-4.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, implements the target object detection method for a vehicle according to any one of claims 1-4.
CN202211661642.7A 2022-12-23 2022-12-23 Target object detection method and device for vehicle, vehicle and medium Active CN115641567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211661642.7A CN115641567B (en) 2022-12-23 2022-12-23 Target object detection method and device for vehicle, vehicle and medium


Publications (2)

Publication Number Publication Date
CN115641567A CN115641567A (en) 2023-01-24
CN115641567B true CN115641567B (en) 2023-04-11

Family

ID=84949801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211661642.7A Active CN115641567B (en) 2022-12-23 2022-12-23 Target object detection method and device for vehicle, vehicle and medium

Country Status (1)

Country Link
CN (1) CN115641567B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838125A (en) * 2021-09-17 2021-12-24 中国第一汽车股份有限公司 Target position determining method and device, electronic equipment and storage medium
CN115082857A (en) * 2022-06-24 2022-09-20 深圳市镭神智能系统有限公司 Target object detection method, device, equipment and storage medium
CN115187941A (en) * 2022-06-20 2022-10-14 中国电信股份有限公司 Target detection positioning method, system, equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345510A (en) * 2018-09-07 2019-02-15 百度在线网络技术(北京)有限公司 Object detecting method, device, equipment, storage medium and vehicle
CN110059608B (en) * 2019-04-11 2021-07-06 腾讯科技(深圳)有限公司 Object detection method and device, electronic equipment and storage medium
CN112396630A (en) * 2019-08-15 2021-02-23 纳恩博(北京)科技有限公司 Method and device for determining state of target object, storage medium and electronic device
CN113743171A (en) * 2020-05-30 2021-12-03 华为技术有限公司 Target detection method and device
CN114648549A (en) * 2022-03-04 2022-06-21 长安大学 Traffic scene target detection and positioning method fusing vision and laser radar
CN115147587A (en) * 2022-06-01 2022-10-04 杭州海康机器人技术有限公司 Obstacle detection method and device and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xu Shiwei et al. Design and implementation of the laser range-gating imaging synchronization control system. Proceedings of 2011 International Conference on Electronics and Optoelectronics. 2011, pp. 237-241. *
Zhang Niaona et al. Intelligent vehicle obstacle recognition method based on lidar and camera fusion. Science Technology and Engineering. 2022, Vol. 20, No. 4, pp. 1461-1465. *

Also Published As

Publication number Publication date
CN115641567A (en) 2023-01-24

Similar Documents

Publication Publication Date Title
CN109188457B (en) Object detection frame generation method, device, equipment, storage medium and vehicle
US10043389B2 (en) Vehicular information systems and methods
US9083856B2 (en) Vehicle speed measurement method and system utilizing a single image capturing unit
CN112669349A (en) Passenger flow statistical method, electronic equipment and storage medium
CN111091529A (en) People counting method and system
CN109684986A Vehicle analysis method and system based on vehicle detection and tracking
CN110111018B (en) Method, device, electronic equipment and storage medium for evaluating vehicle sensing capability
US20210398425A1 (en) Vehicular information systems and methods
CN111274852A (en) Target object key point detection method and device
CN115641567B (en) Target object detection method and device for vehicle, vehicle and medium
CN115578386B (en) Parking image generation method and device, electronic equipment and storage medium
CN115841660A (en) Distance prediction method, device, equipment, storage medium and vehicle
US9183448B2 (en) Approaching-object detector, approaching object detecting method, and recording medium storing its program
CN115713750A (en) Lane line detection method and device, electronic equipment and storage medium
CN112356845B (en) Method, device and equipment for predicting motion state of target and vehicle
WO2022179016A1 (en) Lane detection method and apparatus, device, and storage medium
CN109583511B (en) Speed fusion method and device
CN109740518B (en) Method and device for determining object in video
Nguyen et al. An algorithm using YOLOv4 and DeepSORT for tracking vehicle speed on highway
CN109581380B (en) Vehicle position detection method and device based on radar and computer equipment
US20220207891A1 (en) Moving object and obstacle detection portable device using a millimeter wave radar and camera
Pan et al. A Motion Status Discrimination Method Based on Velocity Estimation Embedded in Multi-object Tracking
WO2020073272A1 (en) Snapshot image to train an event detector
CN117676123A (en) Driving-assisted camera perception performance test method, device, apparatus and medium
WO2020073270A1 (en) Snapshot image of traffic scenario

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant