CN116363631A - Three-dimensional target detection method and device and vehicle - Google Patents

Three-dimensional target detection method and device and vehicle

Info

Publication number
CN116363631A
CN116363631A (application CN202310565718.4A; granted as CN116363631B)
Authority
CN
China
Prior art keywords: information, target vehicle, detection, detection frame, dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310565718.4A
Other languages
Chinese (zh)
Other versions
CN116363631B (en)
Inventor
万韶华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd
Priority to CN202310565718.4A
Publication of CN116363631A
Application granted
Publication of CN116363631B
Legal status: Active

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V10/00 - Arrangements for image or video recognition or understanding
                    • G06V10/20 - Image preprocessing
                        • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
                        • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
                • G06V20/00 - Scenes; Scene-specific elements
                    • G06V20/50 - Context or environment of the image
                        • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
                            • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
                                • G06V20/586 - Recognition of moving objects or obstacles of parking space
                            • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
                • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
                    • G06V2201/07 - Target detection
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
                • Y02T10/00 - Road transport of goods or passengers
                    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
                        • Y02T10/40 - Engine management systems

Abstract

The disclosure relates to a three-dimensional target detection method, a three-dimensional target detection device, and a vehicle, in the technical field of automatic driving and intelligent perception. The method includes: acquiring a vehicle surrounding image to be processed; inputting the vehicle surrounding image into a preset target detection model and acquiring three-dimensional detection frame information of at least one target vehicle output by the target detection model; performing at least one of ground-line detection, parking-space detection, and occlusion-rate detection on the vehicle surrounding image to obtain auxiliary detection information of the at least one target vehicle; and, for each target vehicle, correcting the three-dimensional detection frame information of the target vehicle according to its auxiliary detection information to obtain corrected three-dimensional detection frame information. Because the three-dimensional detection frame information is corrected using the auxiliary detection information of the target vehicle, the influence of noise jitter on its accuracy is reduced, and the accuracy and stability of three-dimensional target detection are improved.

Description

Three-dimensional target detection method and device and vehicle
Technical Field
The disclosure relates to the technical field of automatic driving and intelligent perception, in particular to a three-dimensional target detection method and device and a vehicle.
Background
Existing three-dimensional target detection methods mainly input a vehicle surrounding image into a preset target detection model and obtain the three-dimensional detection frame information of multiple target vehicles output by the model. In this scheme, the vehicle surrounding image is acquired by an image sensor on the vehicle, and its noise jitters between frames; because the target detection model is sensitive to noise jitter, the three-dimensional detection frame information it detects has low accuracy and poor stability.
Disclosure of Invention
The disclosure provides a three-dimensional target detection method, a three-dimensional target detection device and a vehicle.
According to a first aspect of embodiments of the present disclosure, there is provided a three-dimensional object detection method, the method including: acquiring a vehicle surrounding image to be processed; inputting the vehicle surrounding image into a preset target detection model, and acquiring three-dimensional detection frame information of at least one target vehicle output by the target detection model; performing at least one of ground-line detection, parking-space detection, and occlusion-rate detection on the vehicle surrounding image to obtain auxiliary detection information of at least one target vehicle; and, for each target vehicle, correcting the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the present disclosure, the auxiliary detection information of the target vehicle includes ground-line information of the target vehicle, and correcting the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information to obtain corrected three-dimensional detection frame information includes: acquiring detection frame position information in the three-dimensional detection frame information of the target vehicle; and correcting the detection frame position information according to the ground-line information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the disclosure, correcting the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the ground-line information of the target vehicle includes: determining a detection frame area of the target vehicle according to the detection frame position information of the target vehicle; when the ground-line information contains a first ground point that is not located in the detection frame area, determining position offset information of the detection frame area according to the position information of the first ground point; and correcting the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the position offset information to obtain corrected detection frame position information of the target vehicle.
In one embodiment of the present disclosure, the auxiliary detection information of the target vehicle includes parking-space information of the target vehicle, and correcting the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information to obtain corrected three-dimensional detection frame information includes: acquiring detection frame direction information and detection frame position information in the three-dimensional detection frame information of the target vehicle; and correcting the detection frame direction information and the detection frame position information according to the parking-space information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the present disclosure, correcting the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the parking-space information of the target vehicle includes: determining parking-space direction information according to the parking-space information of the target vehicle; when the parking-space direction information is inconsistent with the detection frame direction information, determining direction offset information between the parking-space direction information and the detection frame direction information; and correcting the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the direction offset information to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the present disclosure, the auxiliary detection information of the target vehicle includes an occlusion rate of the target vehicle, and correcting the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information includes: acquiring confidence information in the three-dimensional detection frame information of the target vehicle; and, when the occlusion rate of the target vehicle is smaller than a preset occlusion-rate threshold, reducing the confidence information in the three-dimensional detection frame information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
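The confidence correction based on the shielding (occlusion) rate described above can be sketched as follows. This is only a minimal illustration: the disclosure states that the confidence is reduced when the rate falls below a preset threshold, but the concrete threshold and the multiplicative reduction factor used here are assumptions.

```python
def correct_confidence(confidence: float, occlusion_rate: float,
                       occlusion_threshold: float = 0.3,
                       reduction_factor: float = 0.5) -> float:
    """Reduce detection confidence when the occlusion metric falls below a threshold.

    The disclosure only states that the confidence is reduced; the multiplicative
    reduction_factor and the default threshold here are assumed concrete choices.
    """
    if occlusion_rate < occlusion_threshold:
        return confidence * reduction_factor
    return confidence
```

A box whose visible ground line is very short relative to its footprint would thus keep its position but be reported with lower confidence.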
In one embodiment of the disclosure, the scene corresponding to the vehicle surrounding image is a parking-lot scene. When the target vehicle is in a moving state, its auxiliary detection information includes at least one of ground-line information and the occlusion rate; when the target vehicle is in a stationary state, its auxiliary detection information includes at least one of parking-space information and the occlusion rate.
According to a second aspect of embodiments of the present disclosure, there is also provided a three-dimensional object detection apparatus, the apparatus including: a first acquisition module for acquiring a vehicle surrounding image to be processed; a second acquisition module for inputting the vehicle surrounding image into a preset target detection model and acquiring three-dimensional detection frame information of at least one target vehicle output by the target detection model; a detection module for performing at least one of ground-line detection, parking-space detection, and occlusion-rate detection on the vehicle surrounding image to obtain auxiliary detection information of at least one target vehicle; and a correction processing module for correcting, for each target vehicle, the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
According to a third aspect of embodiments of the present disclosure, there is also provided a vehicle including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the steps of the three-dimensional object detection method described above.
According to a fourth aspect of embodiments of the present disclosure, there is also provided a non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor, cause the processor to perform the three-dimensional object detection method described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
A vehicle surrounding image to be processed is acquired; the vehicle surrounding image is input into a preset target detection model, and three-dimensional detection frame information of at least one target vehicle output by the target detection model is acquired; at least one of ground-line detection, parking-space detection, and occlusion-rate detection is performed on the vehicle surrounding image to obtain auxiliary detection information of at least one target vehicle; and, for each target vehicle, the three-dimensional detection frame information of the target vehicle is corrected according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information. Because the three-dimensional detection frame information of the target vehicle is corrected using its auxiliary detection information, the influence of noise jitter on the accuracy of the three-dimensional detection frame information is reduced, improving both the accuracy of the three-dimensional detection frame information and the stability of three-dimensional target detection.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a flow chart of a three-dimensional object detection method according to one embodiment of the present disclosure;
FIG. 2 is a schematic view of ground line information for a vehicle surrounding image;
FIG. 3 is a flow chart of a three-dimensional object detection method according to another embodiment of the present disclosure;
FIG. 4 is a schematic structural view of a three-dimensional object detection device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of a vehicle according to an exemplary embodiment of the present disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Existing three-dimensional target detection methods mainly input a vehicle surrounding image into a preset target detection model and obtain the three-dimensional detection frame information of multiple target vehicles output by the model. In this scheme, the vehicle surrounding image is acquired by an image sensor on the vehicle, and its noise jitters between frames; because the target detection model is sensitive to noise jitter, the three-dimensional detection frame information it detects has low accuracy and poor stability.
Fig. 1 is a flow chart of a three-dimensional object detection method according to an embodiment of the present disclosure. It should be noted that the three-dimensional object detection method of the present embodiment may be applied to a three-dimensional object detection apparatus, which may be configured in an electronic device of a vehicle or in an electronic device in communication with a vehicle, so that the electronic device can perform the three-dimensional object detection function.
The electronic device may be any device with computing capability, for example an image capture device, a personal computer (PC), a mobile terminal, or a server; the mobile terminal may be, for example, a vehicle-mounted device, a mobile phone, a tablet computer, a personal digital assistant, or a wearable device equipped with an operating system and a touch screen and/or display screen. In the following embodiments, an electronic device is described as the execution body by way of example.
As shown in fig. 1, the method comprises the steps of:
step 101, acquiring a vehicle surrounding image to be processed.
In the embodiment of the disclosure, the vehicle surrounding image may be a two-dimensional image acquired by an image sensor on the vehicle, and the number of vehicle surrounding images may be one or more. For example, if the acquisition area is a narrow angular range in front of the vehicle, a single image sensor may capture an image of the area ahead at the vehicle's current position, yielding one vehicle surrounding image. If the acquisition area is a wide angular range, for example the full angular range around the vehicle, multiple image sensors facing different directions may capture images at the current position, yielding multiple vehicle surrounding images.
Step 102, inputting the vehicle periphery image into a preset target detection model, and acquiring three-dimensional detection frame information of at least one target vehicle output by the target detection model.
In the embodiment of the disclosure, the preset target detection model may be obtained by training according to a large number of sample images and three-dimensional detection frame information of the target in the sample images.
Wherein the three-dimensional detection frame information of the target vehicle may include at least one of: detection frame position information, detection frame direction information, detection frame category information, and confidence information. Wherein, when the three-dimensional detection frame information is the three-dimensional detection frame information of the vehicle, the detection frame type information is the vehicle type.
The detection frame position information may include the length, width, and height of the detection frame and the three-dimensional coordinate information of its center point. Unless otherwise noted, the three-dimensional coordinate information and position information mentioned in the present disclosure refer to coordinates in the same three-dimensional coordinate system; this is not repeated below. The three-dimensional coordinate system, for example a world coordinate system or a vehicle body coordinate system, may be chosen according to actual needs.
In one example, the number of confidence information may be one, and may be the confidence of the three-dimensional detection frame information. In another example, the number of confidence information may be two, one of which is the confidence of the detection frame position information and the other of which is the confidence of the detection frame category information.
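The three-dimensional detection frame information described above can be pictured as a simple record type. The sketch below is illustrative only; the field names and the choice of a single confidence value are assumptions, not the disclosure's required layout.

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    """One target vehicle's three-dimensional detection frame information (a sketch)."""
    center: tuple[float, float, float]  # center point in the shared 3-D coordinate system
    length: float                       # detection frame length
    width: float                        # detection frame width
    height: float                       # detection frame height
    yaw: float                          # detection frame direction (heading angle, radians)
    category: str                       # detection frame category, e.g. "vehicle"
    confidence: float                   # confidence of the detection frame information
```

A variant with two confidence fields (one for position, one for category) would match the second example in the text.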
Step 103, performing at least one of ground-line detection, parking-space detection, and occlusion-rate detection on the vehicle surrounding image to obtain auxiliary detection information of at least one target vehicle.
In the disclosed embodiments, the ground-line detection algorithm may be, for example, a free-space detection algorithm. Using a free-space detection algorithm, multiple ground-line segments in the vehicle surrounding image and the target type corresponding to each segment can be obtained, i.e., the ground-line segments of all targets in the image. The targets include, for example, vehicles, posts, traffic cones, curbs, and walls. Taking a vehicle as an example, the ground-line information of the vehicle may include the coordinate information of each point in the vehicle's ground-line segment, expressed in the three-dimensional coordinate system. Fig. 2 is a schematic diagram of the ground-line information of a vehicle surrounding image; it shows the ground lines formed by connecting the ground-line segments of multiple targets, but does not show the target type corresponding to each segment.
In the embodiment of the disclosure, performing parking-space detection on the vehicle surrounding image yields the parking-space corner point information of each vehicle in the image. The parking-space corner point information refers to the coordinate information of each corner of a parking space, expressed in the three-dimensional coordinate system.
In the embodiment of the disclosure, one algorithm for occlusion-rate detection may, for example, first determine the ground-line segment of each target in the vehicle surrounding image using a free-space detection algorithm, and then determine the occlusion rate of each target from the length of the target's ground-line segment and the total length of the boundary of the target's ground projection. For example, for a vehicle, the ratio of the length of the vehicle's ground-line segment to the total length of the boundary of the vehicle's ground projection is taken as the occlusion rate of the vehicle.
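Following the ratio just described, the shielding (occlusion) metric can be computed from the visible ground-line segment and the ground-projection boundary, both given as polylines of ground-plane points. This is a sketch under assumed representations; the disclosure does not specify how the two curves are stored.

```python
import math

def polyline_length(points):
    """Total length of a polyline given as a list of (x, y) ground-plane points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def occlusion_metric(ground_line_points, projection_boundary_points):
    """Ratio of the visible ground-line length to the total length of the
    target's ground-projection boundary, as described in the text above."""
    boundary_len = polyline_length(projection_boundary_points)
    if boundary_len == 0:
        return 0.0
    return polyline_length(ground_line_points) / boundary_len
```

A heavily occluded vehicle has only a short visible ground-line segment, so this ratio is small; the confidence correction described elsewhere in the disclosure triggers when the metric falls below a threshold.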
In the embodiment of the disclosure, different detections can be performed on the vehicle surrounding image in different scenes to obtain the auxiliary detection information of the target vehicle. When the scene corresponding to the vehicle surrounding image is a parking-lot scene, at least one of ground-line detection, parking-space detection, and occlusion-rate detection may be performed on the image. When the scene is a road-driving scene, at least one of ground-line detection and occlusion-rate detection may be performed.
The scene corresponding to the vehicle surrounding image is the scene in which the vehicle that captured the image is located.
For a specific scene, different target vehicles have different auxiliary detection information in the detection results obtained from the vehicle surrounding image. Taking a parking-lot scene as an example, for a vehicle in a moving state, for example a vehicle outside a parking space or a vehicle entering or leaving a space, ground-line information and occlusion-rate information can be detected. For a vehicle in a stationary state, for example a vehicle located in a parking space, parking-space information and occlusion-rate information can be detected. That is, different auxiliary detection information may be detected for different target vehicles in the same vehicle surrounding image.
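The scene- and state-dependent choice of auxiliary detections above can be expressed as a small dispatch function. The scene and state labels are illustrative assumptions; the disclosure describes the parking-lot rules explicitly and the road-driving case by omission of parking-space (garage position) detection.

```python
def applicable_auxiliary_info(scene: str, moving: bool) -> set[str]:
    """Which auxiliary detections apply to a target vehicle.

    Encodes the parking-lot rules stated in the disclosure; label strings
    such as "parking_lot" and "ground_line" are assumed names.
    """
    if scene == "parking_lot":
        if moving:                       # e.g. a vehicle entering or leaving a space
            return {"ground_line", "occlusion_rate"}
        return {"parking_space", "occlusion_rate"}   # vehicle parked in a space
    # road-driving scene: parking-space detection does not apply
    return {"ground_line", "occlusion_rate"}
```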
Step 104, for each target vehicle, correcting the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
In the embodiment of the disclosure, when the auxiliary detection information of a target vehicle includes multiple pieces of information, the three-dimensional detection frame information of the target vehicle may be corrected using each piece of information in turn; the corrected three-dimensional detection frame information of the target vehicle is obtained after all pieces have been applied.
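The sequential correction just described can be sketched as a small driver loop: each available piece of auxiliary information is fed to its corrector in turn. The corrector mapping and its ordering are assumptions; the disclosure only states that the corrections are applied one after another.

```python
def correct_box(box, auxiliary_info, correctors):
    """Apply each available auxiliary correction to a detection box in turn.

    `auxiliary_info` maps an info key (e.g. "ground_line") to that piece of
    information; `correctors` maps the same keys to functions
    (box, info) -> corrected box. Keys and ordering are assumed conventions.
    """
    for key, corrector in correctors.items():
        if key in auxiliary_info:
            box = corrector(box, auxiliary_info[key])
    return box
```

For instance, a box could first be shifted by a ground-line corrector and then have its confidence scaled by an occlusion corrector.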
In the three-dimensional target detection method of the embodiment of the disclosure, a vehicle surrounding image to be processed is acquired; the vehicle surrounding image is input into a preset target detection model, and three-dimensional detection frame information of at least one target vehicle output by the model is acquired; at least one of ground-line detection, parking-space detection, and occlusion-rate detection is performed on the vehicle surrounding image to obtain auxiliary detection information of at least one target vehicle; and, for each target vehicle, the three-dimensional detection frame information of the target vehicle is corrected according to its auxiliary detection information to obtain corrected three-dimensional detection frame information. Because the three-dimensional detection frame information is corrected using the auxiliary detection information of the target vehicle, the influence of noise jitter on its accuracy is reduced, improving both the accuracy of the three-dimensional detection frame information and the stability of three-dimensional target detection.
Fig. 3 is a flow chart of a three-dimensional object detection method according to another embodiment of the present disclosure. It should be noted that the three-dimensional object detection method of the present embodiment may be applied to a three-dimensional object detection apparatus, which may be configured in an electronic device of a vehicle or in an electronic device in communication with a vehicle, so that the electronic device can perform the three-dimensional object detection function.
The electronic device may be any device with computing capability, for example an image capture device, a personal computer (PC), a mobile terminal, or a server; the mobile terminal may be, for example, a vehicle-mounted device, a mobile phone, a tablet computer, a personal digital assistant, or a wearable device equipped with an operating system and a touch screen and/or display screen. In the following embodiments, an electronic device is described as the execution body by way of example.
As shown in fig. 3, the method comprises the steps of:
in step 301, a vehicle surrounding image to be processed is acquired.
Step 302, inputting the vehicle periphery image into a preset target detection model, and obtaining three-dimensional detection frame information of at least one target vehicle output by the target detection model.
Step 303, performing at least one of ground-line detection, parking-space detection, and occlusion-rate detection on the vehicle surrounding image to obtain auxiliary detection information of at least one target vehicle.
Step 304, for each target vehicle, information included in the auxiliary detection information of the target vehicle is acquired.
In step 305, when the auxiliary detection information of the target vehicle includes the ground-line information of the target vehicle, the detection frame position information in the three-dimensional detection frame information of the target vehicle is acquired.
Step 306, correcting the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the ground-line information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
In the embodiment of the present disclosure, the electronic device may perform step 306 by, for example, determining a detection frame area of the target vehicle according to the detection frame position information of the target vehicle; when the ground-line information contains a first ground point that is not located in the detection frame area, determining position offset information of the detection frame area according to the position information of the first ground point; and correcting the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the position offset information to obtain corrected detection frame position information of the target vehicle.
The detection frame position information may include: length information, width information, height information, and three-dimensional coordinate information of the center point of the detection frame. The detection frame region determined according to the detection frame position information is a three-dimensional region in the three-dimensional coordinate system.
The electronic device may determine the position offset information of the detection frame region according to the position information of the first ground point by, for example, determining the offset between the first ground point and the point in the detection frame region closest to the first ground point, and taking this offset as the position offset information of the detection frame region. When there are a plurality of first ground points, the largest offset may be selected from the plurality of offsets thus obtained as the position offset information of the detection frame region.
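The ground-line correction of steps 305 and 306 can be sketched as follows. This is a minimal illustration assuming an axis-aligned 3D box and Euclidean offsets; the patent does not fix a box parameterization, and the function and argument names are illustrative, not from the source.

```python
import numpy as np

def correct_box_by_ground_line(center, size, ground_points):
    """Shift a 3D detection box so that stray ground points fall inside it.

    center: (3,) box center; size: (3,) length/width/height.
    ground_points: (N, 3) ground-contact points from ground-line detection.
    An axis-aligned box is assumed here for simplicity.
    """
    center = np.asarray(center, dtype=float)
    half = np.asarray(size, dtype=float) / 2.0
    lo, hi = center - half, center + half

    best_offset = np.zeros(3)
    best_norm = 0.0
    for p in np.asarray(ground_points, dtype=float):
        # Closest point of the box to p: clamp p to the box bounds.
        closest = np.clip(p, lo, hi)
        offset = p - closest          # zero if p already lies inside the box
        norm = np.linalg.norm(offset)
        if norm > best_norm:          # keep the largest offset, as in the text
            best_norm, best_offset = norm, offset
    return center + best_offset       # corrected box center
```

For a box centered at the origin with size 4 x 2 x 2, a ground point at (3, 0, 0) lies one unit outside the box, so the center is shifted by (1, 0, 0); ground points already inside the box contribute no offset.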
In step 307, in the case where the auxiliary detection information of the target vehicle includes parking slot information of the target vehicle, detection frame direction information and detection frame position information in the three-dimensional detection frame information of the target vehicle are acquired.
In step 308, the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle are corrected according to the parking slot information of the target vehicle, so as to obtain corrected three-dimensional detection frame information of the target vehicle.
In the embodiment of the present disclosure, the electronic device may perform step 308 by, for example: determining slot direction information according to the parking slot information of the target vehicle; in the case where the slot direction information is inconsistent with the detection frame direction information, determining direction offset information between the slot direction information and the detection frame direction information; and correcting the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the direction offset information to obtain corrected three-dimensional detection frame information of the target vehicle.
The electronic device may correct the detection frame direction information and the detection frame position information according to the direction offset information by, for example, first correcting the detection frame direction information in the three-dimensional detection frame information of the target vehicle according to the direction offset information, and then determining the corrected detection frame position information according to the corrected detection frame direction information and the detection frame position information before correction.
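The slot-direction step can be sketched as follows, assuming the slot is given as four corner points in the ground plane and the detection frame direction as a yaw angle. The 5-degree consistency tolerance and all names are illustrative assumptions; the patent specifies neither a slot representation nor a tolerance.

```python
import numpy as np

def correct_box_by_slot(box_yaw, box_center, slot_corners,
                        angle_tol=np.deg2rad(5)):
    """Snap a detected box's heading to the parking slot's long axis.

    slot_corners: (4, 2) slot corner coordinates in drawing order.
    The slot direction is taken along the slot's longer edge.
    """
    corners = np.asarray(slot_corners, dtype=float)
    e1 = corners[1] - corners[0]
    e2 = corners[2] - corners[1]
    long_edge = e1 if np.linalg.norm(e1) >= np.linalg.norm(e2) else e2
    slot_yaw = np.arctan2(long_edge[1], long_edge[0])

    # Direction offset, wrapped to [-pi/2, pi/2) so that a slot axis and its
    # opposite direction count as the same orientation.
    offset = (slot_yaw - box_yaw + np.pi / 2) % np.pi - np.pi / 2
    if abs(offset) <= angle_tol:
        # Directions are consistent; no correction is applied.
        return box_yaw, np.asarray(box_center, dtype=float)
    corrected_yaw = box_yaw + offset  # aligned with the slot axis modulo pi
    # The patent also corrects the position from the corrected direction, but
    # leaves the rule unspecified; this sketch keeps the center unchanged.
    return corrected_yaw, np.asarray(box_center, dtype=float)
```

For a rectangular slot whose long edge runs along the x-axis, a box detected with a 0.3 rad heading is snapped back to 0 rad, while a 0.05 rad heading is within the tolerance and left untouched.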
In step 309, in the case where the auxiliary detection information of the target vehicle includes the occlusion rate of the target vehicle, confidence information in the three-dimensional detection frame information of the target vehicle is acquired.
In step 310, in the case where the occlusion rate of the target vehicle is smaller than a preset occlusion rate threshold, the confidence information in the three-dimensional detection frame information of the target vehicle is reduced to obtain corrected three-dimensional detection frame information of the target vehicle.
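The confidence reduction of steps 309 and 310 can be sketched as a simple thresholded scaling. Following the text, the reduction is applied when the occlusion rate is smaller than the preset threshold; the threshold value and the multiplicative scale factor are illustrative assumptions, since the patent fixes neither.

```python
def correct_confidence(confidence, occlusion_rate, threshold=0.5, scale=0.8):
    """Reduce a detection's confidence based on its occlusion rate.

    threshold and scale are assumed values for illustration only.
    """
    if occlusion_rate < threshold:
        return confidence * scale  # simple multiplicative reduction
    return confidence
```

For example, with the assumed defaults a detection with confidence 0.9 and occlusion rate 0.2 is reduced to 0.72, while one with occlusion rate 0.7 keeps its confidence.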
In the embodiment of the present disclosure, the auxiliary detection information of the target vehicle may include a plurality of kinds of detection information. In one example, the auxiliary detection information of the target vehicle may include ground line information and parking slot information. In another example, it may include ground line information and the occlusion rate. In another example, it may include parking slot information and the occlusion rate. In yet another example, it may include ground line information, parking slot information, and the occlusion rate.
When the auxiliary detection information of the target vehicle includes a plurality of kinds of detection information, the three-dimensional detection frame information of the target vehicle may be corrected using each kind of detection information in sequence. For example, in the case where the auxiliary detection information includes ground line information and the occlusion rate, the three-dimensional detection frame information of the target vehicle may first be corrected using the ground line information, and the resulting three-dimensional detection frame information may then be corrected again using the occlusion rate.
For another example, in the case where the auxiliary detection information includes parking slot information and the occlusion rate, the three-dimensional detection frame information of the target vehicle may first be corrected using the parking slot information, and the resulting three-dimensional detection frame information may then be corrected again using the occlusion rate.
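The sequential chaining described above can be sketched as a single dispatcher that applies whichever corrections are available. Everything here — the dictionary keys, the precomputed per-step results, and the threshold and scale values — is an illustrative assumption; the patent only specifies that the available corrections are applied one after another.

```python
def apply_corrections(box, aux, occ_threshold=0.5, occ_scale=0.8):
    """Chain the available corrections in sequence, as the text describes.

    box: dict with 'center' (x, y, z), 'yaw', and 'confidence'.
    aux: dict that may contain 'ground_offset' (result of the ground-line
         step), 'slot_yaw' (result of the parking-slot step), and
         'occlusion_rate'. All keys and defaults are assumed for illustration.
    """
    box = dict(box)  # do not mutate the caller's detection
    if 'ground_offset' in aux:
        dx, dy, dz = aux['ground_offset']
        x, y, z = box['center']
        box['center'] = (x + dx, y + dy, z + dz)   # ground-line correction
    if 'slot_yaw' in aux:
        box['yaw'] = aux['slot_yaw']               # parking-slot correction
    if 'occlusion_rate' in aux and aux['occlusion_rate'] < occ_threshold:
        box['confidence'] *= occ_scale             # confidence reduction
    return box
```

A detection with auxiliary information containing only a ground-line offset and an occlusion rate is corrected by exactly those two steps, leaving its heading untouched.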
It should be noted that, for details of steps 301 to 303, reference may be made to steps 101 to 103 in the embodiment shown in fig. 1, and detailed description thereof will not be provided here.
In the three-dimensional target detection method of the embodiment of the present disclosure, a vehicle surrounding image to be processed is acquired; the vehicle surrounding image is input into a preset target detection model, and three-dimensional detection frame information of at least one target vehicle output by the target detection model is acquired; at least one of ground line detection, parking slot detection and occlusion rate detection is performed on the vehicle surrounding image to obtain auxiliary detection information of the at least one target vehicle; and, for each target vehicle, the information included in the auxiliary detection information of the target vehicle is determined. In the case where the auxiliary detection information of the target vehicle includes ground line information of the target vehicle, detection frame position information in the three-dimensional detection frame information of the target vehicle is acquired and corrected according to the ground line information to obtain corrected three-dimensional detection frame information; in the case where the auxiliary detection information includes parking slot information of the target vehicle, detection frame direction information and detection frame position information in the three-dimensional detection frame information are acquired and corrected according to the parking slot information; and in the case where the auxiliary detection information includes the occlusion rate of the target vehicle, confidence information in the three-dimensional detection frame information is acquired and, when the occlusion rate is smaller than a preset occlusion rate threshold, reduced to obtain corrected three-dimensional detection frame information. In this way, the three-dimensional detection frame information of the target vehicle can be corrected in combination with the auxiliary detection information of the target vehicle, the influence of noise jitter on the accuracy of the three-dimensional detection frame information is avoided, the accuracy of the three-dimensional detection frame information is improved, and the stability of three-dimensional target detection is improved.
Fig. 4 is a schematic structural diagram of a three-dimensional object detection device according to an embodiment of the present disclosure.
As shown in fig. 4, the three-dimensional object detection apparatus may include: a first acquisition module 401, a second acquisition module 402, a detection module 403, and a correction processing module 404.
Wherein, the first acquisition module 401 is configured to acquire a vehicle surrounding image to be processed;
a second obtaining module 402, configured to input the vehicle surrounding image into a preset target detection model, and obtain three-dimensional detection frame information of at least one target vehicle output by the target detection model;
the detection module 403 is configured to perform at least one of ground line detection, parking slot detection and occlusion rate detection on the vehicle surrounding image to obtain auxiliary detection information of at least one target vehicle;
and the correction processing module 404 is configured to perform correction processing on the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle for each target vehicle, so as to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the present disclosure, the auxiliary detection information of the target vehicle includes ground line information of the target vehicle; the correction processing module 404 is specifically configured to acquire detection frame position information in the three-dimensional detection frame information of the target vehicle, and to correct the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the ground line information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the present disclosure, the correction processing module 404 is further configured to determine a detection frame area of the target vehicle according to the detection frame position information of the target vehicle; in the case where a first ground point that is not located in the detection frame area exists in the ground line information, determine position offset information of the detection frame area according to the position information of the first ground point; and correct the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the position offset information to obtain corrected detection frame position information of the target vehicle.
In one embodiment of the present disclosure, the auxiliary detection information of the target vehicle includes parking slot information of the target vehicle; the correction processing module 404 is specifically configured to acquire detection frame direction information and detection frame position information in the three-dimensional detection frame information of the target vehicle, and to correct the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the parking slot information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the present disclosure, the correction processing module 404 is further configured to determine slot direction information according to the parking slot information of the target vehicle; in the case where the slot direction information is inconsistent with the detection frame direction information, determine direction offset information between the slot direction information and the detection frame direction information; and correct the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the direction offset information to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the present disclosure, the auxiliary detection information of the target vehicle includes the occlusion rate of the target vehicle; the correction processing module 404 is specifically configured to acquire confidence information in the three-dimensional detection frame information of the target vehicle, and, in the case where the occlusion rate of the target vehicle is smaller than a preset occlusion rate threshold, reduce the confidence information in the three-dimensional detection frame information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
In one embodiment of the present disclosure, the scene corresponding to the vehicle surrounding image is a parking lot scene. In the case where the target vehicle is in a moving state, the auxiliary detection information of the target vehicle includes at least one of: ground line information and the occlusion rate. In the case where the target vehicle is in a stationary state, the auxiliary detection information of the target vehicle includes at least one of: parking slot information and the occlusion rate.
In the three-dimensional object detection device of the embodiment of the present disclosure, a vehicle surrounding image to be processed is acquired; the vehicle surrounding image is input into a preset target detection model, and three-dimensional detection frame information of at least one target vehicle output by the target detection model is acquired; at least one of ground line detection, parking slot detection and occlusion rate detection is performed on the vehicle surrounding image to obtain auxiliary detection information of the at least one target vehicle; and, for each target vehicle, the three-dimensional detection frame information of the target vehicle is corrected according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information. In this way, the three-dimensional detection frame information of the target vehicle can be corrected in combination with the auxiliary detection information, the influence of noise jitter on the accuracy of the three-dimensional detection frame information is avoided, the accuracy of the three-dimensional detection frame information is improved, and the stability of three-dimensional object detection is improved.
According to a third aspect of embodiments of the present disclosure, there is also provided a vehicle including: a processor; and a memory for storing instructions executable by the processor, wherein the processor is configured to implement the three-dimensional object detection method described above.
In order to implement the above-described embodiments, the present disclosure also proposes a storage medium.
Wherein the instructions in the storage medium, when executed by the processor, enable the processor to perform the three-dimensional object detection method as described above.
To achieve the above embodiments, the present disclosure also provides a computer program product.
Wherein the computer program product, when executed by a processor of an electronic device, enables the electronic device to perform the method as above.
Fig. 5 is a block diagram of a vehicle 500 according to an exemplary embodiment of the present disclosure. For example, the vehicle 500 may be a hybrid vehicle, or may be a non-hybrid vehicle, an electric vehicle, a fuel cell vehicle, or other type of vehicle. The vehicle 500 may be an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle.
Referring to fig. 5, a vehicle 500 may include various subsystems, such as an infotainment system 510, a perception system 520, a decision control system 530, a drive system 540, and a computing platform 550. Vehicle 500 may also include more or fewer subsystems, and each subsystem may include multiple components. In addition, interconnections between each subsystem and between each component of the vehicle 500 may be achieved by wired or wireless means.
In some embodiments, the infotainment system 510 may include a communication system, an entertainment system, a navigation system, and the like.
The sensing system 520 may include several sensors for sensing information of the environment surrounding the vehicle 500. For example, the sensing system 520 may include a global positioning system (which may be a GPS system, a beidou system, or other positioning system), an inertial measurement unit (inertial measurement unit, IMU), a lidar, millimeter wave radar, an ultrasonic radar, and a camera device.
Decision control system 530 may include a computing system, a vehicle controller, a steering system, a throttle, and a braking system.
The drive system 540 may include components that provide powered movement of the vehicle 500. In one embodiment, the drive system 540 may include an engine, an energy source, a transmission, and wheels. The engine may be one or a combination of an internal combustion engine, an electric motor, an air compression engine. The engine is capable of converting energy provided by the energy source into mechanical energy.
Some or all of the functions of the vehicle 500 are controlled by the computing platform 550. The computing platform 550 may include at least one processor 551 and memory 552, and the processor 551 may execute instructions 553 stored in the memory 552.
The processor 551 may be any conventional processor, such as a commercially available CPU. The processor may also include, for example, a graphics processing unit (GPU), a field-programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof.
The memory 552 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In addition to instructions 553, memory 552 may store data such as road maps, route information, vehicle position, direction, speed, and the like. The data stored by memory 552 may be used by computing platform 550.
In an embodiment of the present disclosure, the processor 551 may execute instructions 553 to complete all or part of the steps of the three-dimensional object detection method described above.
Furthermore, the word "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as advantageous over other aspects or designs; rather, use of the word is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations: if X employs A, X employs B, or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon reading and understanding this specification and the annexed drawings. The present disclosure includes all such modifications and alterations and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes," "including," "has," "having," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of three-dimensional object detection, the method comprising:
acquiring a vehicle surrounding image to be processed;
inputting the vehicle surrounding image into a preset target detection model, and acquiring three-dimensional detection frame information of at least one target vehicle output by the target detection model;
performing at least one of ground line detection, parking slot detection and occlusion rate detection on the vehicle surrounding image to obtain auxiliary detection information of the at least one target vehicle; and
for each target vehicle, performing correction processing on the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
2. The method according to claim 1, wherein the auxiliary detection information of the target vehicle includes ground line information of the target vehicle, and performing correction processing on the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle comprises:
acquiring detection frame position information in the three-dimensional detection frame information of the target vehicle; and
correcting the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the ground line information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
3. The method according to claim 2, wherein correcting the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the ground line information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle comprises:
determining a detection frame area of the target vehicle according to the detection frame position information of the target vehicle;
in the case where a first ground point that is not located in the detection frame area exists in the ground line information, determining position offset information of the detection frame area according to position information of the first ground point; and
correcting the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the position offset information to obtain corrected detection frame position information of the target vehicle.
4. The method according to claim 1, wherein the auxiliary detection information of the target vehicle includes parking slot information of the target vehicle, and performing correction processing on the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle comprises:
acquiring detection frame direction information and detection frame position information in the three-dimensional detection frame information of the target vehicle; and
correcting the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the parking slot information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
5. The method according to claim 4, wherein correcting the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the parking slot information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle comprises:
determining slot direction information according to the parking slot information of the target vehicle;
in the case where the slot direction information is inconsistent with the detection frame direction information, determining direction offset information between the slot direction information and the detection frame direction information; and
correcting the detection frame direction information and the detection frame position information in the three-dimensional detection frame information of the target vehicle according to the direction offset information to obtain corrected three-dimensional detection frame information of the target vehicle.
6. The method according to claim 1, wherein the auxiliary detection information of the target vehicle includes an occlusion rate of the target vehicle, and performing correction processing on the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle comprises:
acquiring confidence information in the three-dimensional detection frame information of the target vehicle; and
in the case where the occlusion rate of the target vehicle is smaller than a preset occlusion rate threshold, reducing the confidence information in the three-dimensional detection frame information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
7. The method according to any one of claims 1 to 6, wherein the scene corresponding to the vehicle surrounding image is a parking lot scene;
in the case where the target vehicle is in a moving state, the auxiliary detection information of the target vehicle includes at least one of: ground line information and the occlusion rate; and
in the case where the target vehicle is in a stationary state, the auxiliary detection information of the target vehicle includes at least one of: parking slot information and the occlusion rate.
8. A three-dimensional object detection device, the device comprising:
the first acquisition module is used for acquiring a vehicle surrounding image to be processed;
the second acquisition module is used for inputting the vehicle surrounding image into a preset target detection model and acquiring three-dimensional detection frame information of at least one target vehicle output by the target detection model;
the detection module is used for performing at least one of ground line detection, parking slot detection and occlusion rate detection on the vehicle surrounding image to obtain auxiliary detection information of at least one target vehicle; and
the correction processing module is used for, for each target vehicle, correcting the three-dimensional detection frame information of the target vehicle according to the auxiliary detection information of the target vehicle to obtain corrected three-dimensional detection frame information of the target vehicle.
9. A vehicle, characterized by comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to:
the steps of implementing the three-dimensional object detection method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by a processor, cause the processor to perform the three-dimensional object detection method of any one of claims 1 to 7.
CN202310565718.4A 2023-05-19 2023-05-19 Three-dimensional target detection method and device and vehicle Active CN116363631B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310565718.4A CN116363631B (en) 2023-05-19 2023-05-19 Three-dimensional target detection method and device and vehicle


Publications (2)

Publication Number Publication Date
CN116363631A true CN116363631A (en) 2023-06-30
CN116363631B CN116363631B (en) 2023-09-05

Family

ID=86939393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310565718.4A Active CN116363631B (en) 2023-05-19 2023-05-19 Three-dimensional target detection method and device and vehicle

Country Status (1)

Country Link
CN (1) CN116363631B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993066A (en) * 2019-03-06 2019-07-09 开易(北京)科技有限公司 Vehicle positioning method and system towards sideline
DE102018203590A1 (en) * 2018-03-09 2019-09-12 Conti Temic Microelectronic Gmbh Surroundview system with adapted projection surface
CN111507145A (en) * 2019-01-31 2020-08-07 上海欧菲智能车联科技有限公司 Method, system and device for detecting barrier at storage position of embedded vehicle-mounted all-round looking system
CN111553282A (en) * 2020-04-29 2020-08-18 北京百度网讯科技有限公司 Method and device for detecting vehicle
CN112036385A (en) * 2020-11-04 2020-12-04 天津天瞳威势电子科技有限公司 Library position correction method and device, electronic equipment and readable storage medium
CN113420682A (en) * 2021-06-28 2021-09-21 阿波罗智联(北京)科技有限公司 Target detection method and device in vehicle-road cooperation and road side equipment
CN114387498A (en) * 2021-12-31 2022-04-22 北京旷视科技有限公司 Target detection method and device, electronic equipment and storage medium
CN114511834A (en) * 2020-11-17 2022-05-17 阿里巴巴集团控股有限公司 Method and device for determining prompt information, electronic equipment and storage medium
CN114972941A (en) * 2022-05-11 2022-08-30 燕山大学 Decision fusion method and device for three-dimensional detection of shielded vehicle and electronic equipment
CN115984151A (en) * 2022-12-14 2023-04-18 北京奇艺世纪科技有限公司 Method and device for correcting shielding frame, electronic equipment and readable storage medium
CN115984796A (en) * 2022-12-31 2023-04-18 武汉光庭信息技术股份有限公司 Image annotation method and system

Also Published As

Publication number Publication date
CN116363631B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
AU2018286594A1 (en) Methods and systems for color point cloud generation
US20200355513A1 (en) Systems and methods for updating a high-definition map
CN113916243A (en) Vehicle positioning method, device, equipment and storage medium for target scene area
CN111213153A (en) Target object motion state detection method, device and storage medium
CN113240813B (en) Three-dimensional point cloud information determining method and device
US20240046563A1 (en) Neural radiance field for vehicle
CN116363631B (en) Three-dimensional target detection method and device and vehicle
CN115223015B (en) Model training method, image processing method, device and vehicle
CN115718304A (en) Target object detection method, target object detection device, vehicle and storage medium
CN116168362A (en) Pre-training method and device for vehicle perception model, electronic equipment and vehicle
CN114694107A (en) Image processing method and device, electronic equipment and storage medium
CN116659529B (en) Data detection method, device, vehicle and storage medium
CN112150553A (en) Calibration method and device for vehicle-mounted camera
CN116740681B (en) Target detection method, device, vehicle and storage medium
CN115471513B (en) Point cloud segmentation method and device
CN115661798B (en) Method and device for determining target area, vehicle and storage medium
CN114842458B (en) Obstacle detection method, obstacle detection device, vehicle, and storage medium
CN116626670B (en) Automatic driving model generation method and device, vehicle and storage medium
CN116385825B (en) Model joint training method and device and vehicle
CN115900771B (en) Information determination method, device, vehicle and storage medium
CN116563812B (en) Target detection method, target detection device, storage medium and vehicle
CN117128976B (en) Method and device for acquiring road center line, vehicle and storage medium
CN116503482B (en) Vehicle position acquisition method and device and electronic equipment
CN118050010A (en) Positioning method, device, vehicle, storage medium and program product for vehicle
CN116758504A (en) Image processing method, device, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant