CN117002527A - Vehicle control method and device, vehicle and storage medium - Google Patents

Vehicle control method and device, vehicle and storage medium

Info

Publication number
CN117002527A
Authority
CN
China
Prior art keywords
target
traffic light
vehicle
environment image
lane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310611302.1A
Other languages
Chinese (zh)
Inventor
李�昊
李志伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202310611302.1A priority Critical patent/CN117002527A/en
Publication of CN117002527A publication Critical patent/CN117002527A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W 60/001 Planning or execution of driving tasks
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/0098 Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W 60/001 Planning or execution of driving tasks
    • B60W 60/0015 Planning or execution of driving tasks specially adapted for safety
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 2050/0001 Details of the control system
    • B60W 2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2552/00 Input parameters relating to infrastructure
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2555/00 Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
    • B60W 2555/60 Traffic rules, e.g. speed limits or right of way

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to a vehicle control method and apparatus, a vehicle, and a storage medium. The method includes: acquiring a target environment image of the surroundings of a vehicle, the target environment image including an image of a target traffic light device; when it is determined from the target environment image that the distance between the vehicle and the target traffic light device is less than or equal to a preset distance threshold, determining target traffic indication information of the target traffic light device and target lane indication information of a target lane from the target environment image, the target lane being the lane in which the vehicle travels; and controlling the vehicle to travel according to the target traffic indication information and the target lane indication information.

Description

Vehicle control method and device, vehicle and storage medium
Technical Field
The present disclosure relates to the technical field of autonomous driving, and in particular to a vehicle control method, a vehicle control apparatus, a vehicle, and a storage medium.
Background
In autonomous driving, ensuring that an autonomous vehicle can pass through an intersection safely has required relying on a high-precision map to obtain perception information about the environment of the current road: for example, lane indication information for the road and traffic light indication information on the road are obtained from the high-precision map, and the vehicle is controlled to travel on the road according to that information.
However, high-precision maps in the related art have drawbacks such as long acquisition cycles, high production costs, limited coverage, and usage constraints. When a high-precision map is used as the information source for external environment perception, perception information can therefore be missing or inaccurate, causing the autonomous vehicle to plan incorrectly and increasing the risk of accidents.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a vehicle control method and apparatus, a vehicle, and a storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a vehicle control method, the method comprising:
acquiring a target environment image of the surroundings of a vehicle, the target environment image including an image of a target traffic light device;
when it is determined, according to the target environment image, that the distance between the vehicle and the target traffic light device is less than or equal to a preset distance threshold, determining target traffic indication information of the target traffic light device and target lane indication information of a target lane from the target environment image, the target lane being the lane in which the vehicle travels; and
controlling the vehicle to travel according to the target traffic indication information and the target lane indication information.
Optionally, determining the target traffic indication information of the target traffic light device and the target lane indication information of the target lane from the target environment image includes:
acquiring a historical environment image, the historical environment image being an environment image of at least one frame captured before the vehicle acquires the target environment image;
determining a historical traffic light device and historical lane identification information from the historical environment image;
determining a candidate traffic light device and candidate lane identification information from the target environment image;
when the candidate traffic light device is determined to be the same as the historical traffic light device, taking the candidate traffic light device as the target traffic light device and acquiring the target traffic indication information of the target traffic light device; and
when the candidate lane identification information is determined to be the same as the historical lane identification information, taking the candidate lane identification information as the target lane identification information.
Optionally, determining the distance between the vehicle and the target traffic light device includes:
acquiring a first distance between the target traffic light device and the vehicle;
determining position information of the target traffic light device according to a first historical distance and the first distance, the first historical distance being the distance between the historical traffic light device in the historical environment image and the vehicle; and
determining a target traffic light distance between the target traffic light device and the vehicle according to the position information of the target traffic light device.
Optionally, determining the target traffic light distance between the target traffic light device and the vehicle according to the position information of the target traffic light device includes:
acquiring a current-frame point cloud image of the surroundings of the vehicle;
determining a first point cloud set corresponding to the target traffic light device according to the position information of the target traffic light device;
determining target position information of the target traffic light device according to the first point cloud set; and
determining the target distance between the target traffic light device and the vehicle according to the target position information of the target traffic light device.
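The point-cloud refinement steps above can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the selection radius, the use of a centroid as the refined position, and a shared vehicle-centered coordinate frame are hypothetical choices, not details given in the disclosure.

```python
import math

def refine_light_distance(point_cloud, coarse_position, radius=1.0,
                          vehicle_position=(0.0, 0.0, 0.0)):
    """Refine the traffic light distance from a point cloud.

    Points within `radius` of the coarse (image-derived) position form the
    "first point cloud set"; their centroid is taken as the target position,
    and its distance to the vehicle is returned (None if no points match).
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    subset = [p for p in point_cloud if dist(p, coarse_position) <= radius]
    if not subset:
        return None  # no lidar returns near the estimated light position
    centroid = tuple(sum(c) / len(subset) for c in zip(*subset))
    return dist(centroid, vehicle_position)
```

For instance, three returns clustered around an estimated position of (10, 0, 5) m would yield a refined distance of about 11.2 m from the vehicle origin.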
Optionally, determining that the candidate traffic light device is the same as the historical traffic light device includes:
acquiring at least one region of interest of the target environment image, each region of interest being a to-be-processed region on the target environment image corresponding to a candidate traffic light device determined from the image;
re-projecting the historical traffic light device onto the target environment image; and
when a re-projected point of the historical traffic light device falls within one of the regions of interest of the target environment image, determining that a first candidate traffic light device is the same as the historical traffic light device, the first candidate traffic light device being the candidate traffic light device corresponding to the region of interest containing the re-projected point.
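The matching condition above amounts to a point-in-box test. A minimal sketch, assuming regions of interest are axis-aligned boxes (x_min, y_min, x_max, y_max) in pixel coordinates; the disclosure does not fix a representation.

```python
def match_reprojection(reprojected_point, rois):
    """Return the index of the region of interest containing the
    re-projected historical traffic light, or None if no region does.
    A match means the corresponding candidate device is treated as the
    same device as the historical one."""
    px, py = reprojected_point
    for i, (x0, y0, x1, y1) in enumerate(rois):
        if x0 <= px <= x1 and y0 <= py <= y1:
            return i
    return None
```

A caller would then take the candidate device associated with the returned region as the target traffic light device.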
Optionally, determining the lane identification information of the target lane from the target environment image includes:
acquiring a second region of interest corresponding to a target lane indication mark in the target environment image, the second region of interest being the to-be-processed region on the image corresponding to the target lane indication mark; and
determining the lane identification information of the target lane from the second region of interest.
Optionally, acquiring the target traffic indication information of the target traffic light device includes:
acquiring a first region of interest corresponding to the target traffic light device in the target environment image, the first region of interest being the to-be-processed region on the image corresponding to the target traffic light device;
determining a plurality of related target traffic light devices from a preset region of interest through an indication information acquisition model, the preset region of interest including the first region of interest; and
determining the target traffic indication information corresponding to the plurality of target traffic light devices.
Optionally, determining the plurality of related target traffic light devices from the preset region of interest through the indication information acquisition model includes:
acquiring the preset region of interest; and
taking the first region of interest as the input of the indication information acquisition model to obtain the target traffic indication information, corresponding to the target traffic light device, output by the model.
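The claim above amounts to cropping the first region of interest and passing the crop to the model. A hedged sketch: the nested-list image representation and the `model` callable are stand-ins; the disclosure does not specify the model's architecture or interface.

```python
def read_indication(image, roi, model):
    """Crop the region of interest (x0, y0, x1, y1) from an H x W image
    (nested lists, row-major) and return the model's classification of
    the crop, e.g. a light state such as "red" or "green"."""
    x0, y0, x1, y1 = roi
    crop = [row[x0:x1] for row in image[y0:y1]]
    return model(crop)
```

In practice the model would be a trained classifier; here any callable over the cropped region stands in for it.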
According to a second aspect of embodiments of the present disclosure, there is provided a vehicle control apparatus, including:
an acquisition module configured to acquire a target environment image of the surroundings of a vehicle, the target environment image including an image of a target traffic light device;
a determining module configured to determine, when it is determined according to the target environment image that the distance between the vehicle and the target traffic light device is less than or equal to a preset distance threshold, target traffic indication information of the target traffic light device and target lane indication information of a target lane from the target environment image, the target lane being the lane in which the vehicle travels; and
a control module configured to control the vehicle to travel according to the target traffic indication information and the target lane indication information.
Optionally, the determining module includes:
a first acquisition sub-module configured to acquire a historical environment image, the historical environment image being an environment image of at least one frame captured before the vehicle acquires the target environment image;
a first determination sub-module configured to determine a historical traffic light device and historical lane identification information from the historical environment image;
a second determination sub-module configured to determine a candidate traffic light device and candidate lane identification information from the target environment image;
a third determination sub-module configured to, when the candidate traffic light device is determined to be the same as the historical traffic light device, take the candidate traffic light device as the target traffic light device and acquire the target traffic indication information of the target traffic light device; and
a fourth determination sub-module configured to take the candidate lane identification information as the target lane identification information when the candidate lane identification information is determined to be the same as the historical lane identification information.
Optionally, the determining module includes:
a second acquisition sub-module configured to acquire a first distance between the target traffic light device and the vehicle;
a fifth determination sub-module configured to determine position information of the target traffic light device according to a first historical distance and the first distance, the first historical distance being the distance between the historical traffic light device in the historical environment image and the vehicle; and
a sixth determination sub-module configured to determine a target traffic light distance between the target traffic light device and the vehicle according to the position information of the target traffic light device.
Optionally, the sixth determination sub-module is configured to: acquire a current-frame point cloud image of the surroundings of the vehicle; determine a first point cloud set corresponding to the target traffic light device according to the position information of the target traffic light device; determine target position information of the target traffic light device according to the first point cloud set; and determine the target distance between the target traffic light device and the vehicle according to the target position information.
Optionally, the third determination sub-module is configured to: acquire at least one region of interest of the target environment image, each region of interest being a to-be-processed region on the image corresponding to a candidate traffic light device determined from the image; re-project the historical traffic light device onto the target environment image; and, when a re-projected point of the historical traffic light device falls within one of the regions of interest, determine that a first candidate traffic light device is the same as the historical traffic light device, the first candidate traffic light device being the candidate traffic light device corresponding to the region of interest containing the re-projected point.
Optionally, the second determination sub-module is configured to: acquire a second region of interest corresponding to a target lane indication mark in the target environment image, the second region of interest being the to-be-processed region on the image corresponding to the target lane indication mark; and determine the lane identification information of the target lane from the second region of interest.
Optionally, the determining module includes:
a third acquisition sub-module configured to acquire a first region of interest corresponding to the target traffic light device in the target environment image, the first region of interest being the to-be-processed region on the image corresponding to the target traffic light device;
a seventh determination sub-module configured to determine a plurality of related target traffic light devices from a preset region of interest through an indication information acquisition model, the preset region of interest including the first region of interest; and
an eighth determination sub-module configured to determine the target traffic indication information corresponding to the plurality of target traffic light devices.
Optionally, the seventh determination sub-module is configured to: acquire the preset region of interest; and take the first region of interest as the input of the indication information acquisition model to obtain the target traffic indication information, corresponding to the target traffic light device, output by the model.
According to a third aspect of embodiments of the present disclosure, there is provided a vehicle comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the vehicle control method provided by the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the vehicle control method provided by the first aspect of the present disclosure.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
A target environment image of the surroundings of the vehicle is acquired, the target environment image including an image of a target traffic light device; when it is determined from the target environment image that the distance between the vehicle and the target traffic light device is less than or equal to a preset distance threshold, target traffic indication information of the target traffic light device and target lane indication information of a target lane (the lane in which the vehicle travels) are determined from the target environment image; and the vehicle is controlled to travel according to the target traffic indication information and the target lane indication information.
In this way, by determining the target traffic indication information of the target traffic light device and the target lane indication information of the target lane from a target environment image acquired in real time, the perceived environment information around the vehicle can be obtained directly, without depending on the map information of a high-precision map, and the vehicle can be controlled to travel according to that perceived information during autonomous driving. This avoids the missing or inaccurate perception caused by the limited coverage and usage constraints of high-precision maps, improves the accuracy of autonomous driving planning, and reduces the risk of accidents.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a method of vehicle control, according to an exemplary embodiment.
FIG. 2 is a flowchart illustrating another method of vehicle control, according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating another method of vehicle control, according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating another method of vehicle control, according to an exemplary embodiment.
Fig. 5 is a block diagram of an apparatus for vehicle control according to an exemplary embodiment.
Fig. 6 is a block diagram of a determining module according to the embodiment shown in Fig. 5.
Fig. 7 is a block diagram of a determining module according to the embodiment shown in Fig. 5.
Fig. 8 is a block diagram of a determining module according to the embodiment shown in Fig. 5.
Fig. 9 is a block diagram of a vehicle, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations set forth in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information, or data in the present application are performed in compliance with the applicable data protection laws and policies of the relevant jurisdiction and with the authorization of the owner of the corresponding device.
Before the specific embodiments of the present disclosure are described in detail, an application scenario of the present disclosure is first described. Currently, in autonomous driving, ensuring that an autonomous vehicle can pass through an intersection safely requires relying on a high-precision map to obtain perception information about the environment of the current road: for example, lane indication information for the road and traffic light indication information on the road are obtained from the high-precision map, and the vehicle is controlled to travel accordingly. However, high-precision maps have drawbacks such as long acquisition cycles, high production costs, limited coverage, and usage constraints, so using a high-precision map as the information source for external environment perception can leave perception information missing or inaccurate, causing the autonomous vehicle to plan incorrectly and increasing the risk of accidents.
To overcome these technical problems in the related art, the present disclosure provides a vehicle control method and apparatus, a vehicle, and a storage medium. By determining the target traffic indication information of the target traffic light device and the target lane indication information of the target lane from a target environment image acquired in real time, the perceived environment information around the vehicle can be obtained directly, without depending on the map information of a high-precision map, and the vehicle can be controlled to travel according to that perceived information during autonomous driving. This avoids the missing or inaccurate perception caused by the limited coverage and usage constraints of high-precision maps, improves the accuracy of autonomous driving planning, and reduces the risk of accidents.
The present disclosure is described below in connection with specific embodiments.
FIG. 1 is a flowchart illustrating a vehicle control method according to an exemplary embodiment. As shown in FIG. 1, the method may include the following steps.
In step S11, a target environment image around the vehicle is acquired.
The target environment image includes an image of a target traffic light device. Different target environment images may be captured by an image capture apparatus from different viewing angles: a capture entity equipped with the image capture apparatus can capture the target environment around the vehicle from different angles. Specifically, the capture entity may be the vehicle itself, and the image capture apparatus may be, for example, a lidar or a binocular camera.
For example, the image capture apparatus may be mounted on the vehicle. As the vehicle travels through the target environment, the apparatus may capture a plurality of target environment images from different angles, and these images may include objects such as the target traffic light device and lane lines.
For example, where the target environment is an intersection, a target environment image may be captured facing from west to east at a position a predetermined distance from the intersection.
In step S12, when it is determined from the target environment image that the distance between the vehicle and the target traffic light device is less than or equal to the preset distance threshold, target traffic indication information of the target traffic light device and target lane indication information of the target lane are determined from the target environment image.
The target lane is a lane where the vehicle travels.
In this step, a pre-trained traffic light detection model may be used to detect whether a traffic light device exists in the current target environment image. If a traffic light device is determined to exist in the image and to be the target traffic light device, the model may then identify and estimate the distance between the target traffic light device and the vehicle. When that distance is determined to be less than or equal to the preset distance threshold, the target traffic indication information of the target traffic light device and the target lane indication information of the target lane are determined from the target environment image.
The traffic light device in this embodiment may take various forms: for example, a common suspended traffic light, a movable temporary traffic light, or a fixed upright traffic light.
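The gating and control logic of steps S12 and S13 can be sketched as a single decision function. The threshold value, light states, lane labels, and action names below are all hypothetical placeholders; the disclosure only requires that perception is gated on a preset distance threshold.

```python
DISTANCE_THRESHOLD_M = 80.0  # preset distance threshold (assumed value)

def plan_action(light_distance_m, traffic_indication, lane_indication):
    """Decide a driving action from the detected light and lane markings."""
    if light_distance_m > DISTANCE_THRESHOLD_M:
        return "cruise"  # light too far away: do not act on it yet
    if traffic_indication == "red":
        return "stop"
    if traffic_indication == "green" and lane_indication == "straight":
        return "proceed"
    return "slow"  # e.g. yellow light, or a turn lane needing a turn plan
```

A real planner would of course output a trajectory rather than a label; the point is only that perception results are consumed after the distance gate.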
For example, after the target environment image around the vehicle is acquired, it may first be input into the pre-trained traffic light detection model; if a target traffic light device is determined to exist in the image, the distance between the device and the vehicle output by the model is obtained. Specifically, the image capture apparatus may capture multiple frames of target environment images at different moments while the vehicle travels, and the detection model then outputs multiple distances between the target traffic light device and the vehicle.
Next, after the multiple distances between the target traffic light device and the vehicle are determined, the position of the target traffic light device may be determined from those distances by triangulation. The position of the device is then used as the input of the indication information acquisition model to obtain the target traffic indication information output by the model, and the target lane indication information of the target lane is obtained from the target environment image.
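The triangulation step can be illustrated with a linearized 2D trilateration: given the vehicle's position and the measured light distance at three moments, solving two linear equations recovers the light's position. This is one textbook way to realize the step; the disclosure does not specify the exact algorithm.

```python
def trilaterate_2d(p1, d1, p2, d2, p3, d3):
    """Estimate the (x, y) position of the traffic light from three vehicle
    positions p1..p3 and the distance measured at each. Subtracting the
    circle equations pairwise yields a 2x2 linear system solved by Cramer's
    rule; returns None if the vehicle positions are collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With noisy real measurements one would instead solve the overdetermined system by least squares over many frames; the closed form above is the noise-free special case.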
Considering that the traffic light devices contained in the target environment image acquired at a first moment may differ from those in the image acquired at a second moment while the vehicle travels, a pre-trained traffic light tracking model may be used: each frame of the target environment image is input into the tracking model, and the target traffic light device is output when the traffic light device in the current frame is determined to be identical to the traffic light device in the previous frame.
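Frame-to-frame identity of a traffic light can be approximated by the overlap of its detection boxes, a common association heuristic; the disclosure leaves the tracking model unspecified, and the IoU threshold below is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def same_light(prev_box, curr_box, threshold=0.3):
    """Treat the current detection as the same traffic light device as the
    previous frame's detection when their boxes overlap sufficiently."""
    return iou(prev_box, curr_box) >= threshold
```

A learned tracker would also use appearance features; box overlap alone is the minimal baseline for slow inter-frame motion.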
In step S13, the vehicle is controlled to travel based on the target traffic indication information and the target lane indication information.
A target environment image around a vehicle is acquired, the target environment image including an image of a target traffic light device. In the case where it is determined from the target environment image that the distance between the vehicle and the target traffic light device is less than or equal to a preset distance threshold, target traffic indication information of the target traffic light device and target lane indication information of a target lane are determined from the target environment image, the target lane being the lane in which the vehicle is driving. The vehicle is then controlled to drive according to the target traffic indication information and the target lane indication information. In this way, by determining the target traffic indication information of the target traffic light device and the target lane indication information of the target lane from a target environment image acquired in real time, the environment information perceived around the vehicle can be obtained directly, without relying on the map information of a high-precision map, and the vehicle is controlled to drive according to the perceived target traffic indication information and target lane indication information during automatic driving. This avoids the small coverage area and the missing or inaccurate perception information caused by usage constraints when only the map information of a high-precision map is used, improves the accuracy of automatic driving vehicle planning, and reduces the risk of accidents.
In some embodiments, fig. 2 is a flowchart illustrating another method of vehicle control according to an exemplary embodiment, and as shown in fig. 2, the above-mentioned step S12 may include the following steps.
In step S121, a history environment image is acquired.
The historical environment image is an environment image of at least one frame before the vehicle acquires the target environment image.
In step S122, the historical traffic light apparatus and the historical lane identification information are determined from the historical environmental image.
The historical traffic light device includes traffic light devices determined in the historical environment image, and the historical lane identification information includes lane line information determined in the historical environment image and arrow indication information between the lane lines on both sides of the road.
In this step, the historical environment image may be input into a pre-trained device detection model to obtain the traffic light devices detected by the device detection model and the arrow indication information between the lane lines on both sides of the road.
Specifically, the device detection model may adopt an existing RCNN (Regions with CNN features, region-based convolutional neural network) series detection model or a YOLO (You Only Look Once: Unified, Real-Time Object Detection) series detection model, and the specific detection process is not repeated here.
In some embodiments, the historical traffic light device and the historical lane identification information may also be target traffic light device and target lane identification information determined in an environmental image of at least one frame preceding the target environmental image.
In step S123, candidate traffic light apparatuses and candidate lane identification information are determined from the target environment image.
In this step, the target environment image may be input into the pre-trained device detection model to obtain the traffic light devices detected by the device detection model and the arrow indication information between the lane lines on both sides of the road; the detected traffic light devices are taken as the candidate traffic light devices, and the detected arrow indication information between the lane lines on both sides of the road is taken as the candidate lane identification information.
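As an illustrative sketch only (the detection output format and all label names here are assumptions, not the actual interface of the RCNN/YOLO model), the split of raw detections into candidate traffic light devices and candidate lane identification information may look like:

```python
def split_detections(detections):
    """Separate raw detections from the device detection model into
    candidate traffic light devices and candidate lane identification
    information (lane lines and arrow markings between them).

    `detections` is assumed to be a list of dicts with a `label` key and
    a `box` key (x_min, y_min, x_max, y_max) -- a simplified stand-in
    for a real detector's output format."""
    lights = [d for d in detections if d["label"] == "traffic_light"]
    lanes = [d for d in detections if d["label"] in ("lane_line", "arrow")]
    return lights, lanes
```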
In step S124, in the case where it is determined that the candidate traffic light apparatus and the history traffic light apparatus are the same, the candidate traffic light apparatus is regarded as the target traffic light apparatus, and the target traffic indication information of the target traffic light apparatus is acquired.
In this step, the target environment image may be input into a pre-trained traffic light tracking model; the output target traffic light device is obtained if it is determined that the candidate traffic light device in the target environment image of the current frame is the same as the historical traffic light device in the historical environment image, and the target traffic indication information of the target traffic light device is then acquired.
In some embodiments, at least one region of interest of the target environment image may be acquired first, where each region of interest is a to-be-processed region on the target environment image corresponding to at least one of the candidate traffic light devices determined from the environment image. The historical traffic light device may then be re-projected onto the target environment image. Next, in the case where it is determined that a re-projected historical traffic light device exists within at least one region of interest of the target environment image, a first candidate traffic light device is determined to be the same as the historical traffic light device, where the first candidate traffic light device is the candidate traffic light device corresponding to the region of interest in which the re-projected point of the historical traffic light device lies.
For example, in the case where a plurality of candidate traffic light devices are determined on the target environment image, flag information may be established for each candidate traffic light device. For instance, in the case where the target environment image is determined to include three candidate traffic light devices, the flags landmark1, landmark2, and landmark3 may be established for the three candidate traffic light devices respectively, where each flag may include traffic light attribute information, such as traffic light shape information (e.g., circle, square, pedestrian) and traffic light arrow type information (e.g., left-turn arrow, right-turn arrow, U-turn arrow). Then, a target detection model in the related art may be used to detect the first region of interest corresponding to each candidate traffic light device on the target environment image; for example, the first regions of interest corresponding to landmark1, landmark2, and landmark3 — bounding box 1, bounding box 2, and bounding box 3 — may be determined through an RCNN detection model. Next, the historical traffic light device may be re-projected onto the target environment image, and in the case where one of the three bounding boxes is determined to contain the re-projected historical traffic light device, the candidate traffic light device corresponding to that bounding box is taken as the target traffic light device.
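Under the assumptions above (the flag names, pixel coordinates, and box format are illustrative only, not part of the claimed method), the check of whether a re-projected historical traffic light device falls inside a candidate device's bounding box may be sketched as:

```python
def match_reprojected(candidates, reprojected_points):
    """Match candidate traffic light devices in the current frame to
    historical devices by re-projection.

    `candidates` maps a flag such as "landmark1" to its region of interest
    (x_min, y_min, x_max, y_max); `reprojected_points` maps a historical
    device id to its re-projected (x, y) pixel in the current frame. A
    candidate whose bounding box contains a re-projected point is treated
    as the same physical device and becomes a target traffic light device."""
    matches = {}
    for cand_id, (x0, y0, x1, y1) in candidates.items():
        for hist_id, (px, py) in reprojected_points.items():
            if x0 <= px <= x1 and y0 <= py <= y1:
                matches[cand_id] = hist_id
    return matches
```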
In some embodiments, the target traffic indication information of the target traffic light device may be obtained by:
s1, acquiring a first region of interest corresponding to the target traffic light equipment in the target environment image.
The first region of interest is used for representing a region to be processed corresponding to the target traffic light device on the target environment image.
S2, determining a plurality of related target traffic light devices from a preset region of interest through an indication information acquisition model.
Wherein the preset region of interest includes the first region of interest.
For example, the preset region of interest may be acquired first; and then, the first region of interest can be used as the input of the indication information acquisition model to obtain the target traffic indication information corresponding to the target traffic light equipment output by the indication information acquisition model.
S3, determining the target traffic indication information corresponding to the plurality of target traffic light devices.
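To make steps S1 to S3 concrete, the following toy stand-in for the indication information acquisition model infers the traffic indication from the cropped first region of interest. It is a deliberately naive colour heuristic under stated assumptions (RGB pixel tuples, fixed thresholds), not the actual model:

```python
def classify_light_color(patch):
    """Toy stand-in for the indication information acquisition model:
    infer the target traffic indication from the first region of interest
    by comparing average red and green channel intensity.

    `patch` is assumed to be a non-empty list of (r, g, b) pixels cropped
    from the region of interest; the 1.2 dominance ratio is an arbitrary
    illustrative threshold."""
    r = sum(p[0] for p in patch) / len(patch)
    g = sum(p[1] for p in patch) / len(patch)
    if r > g * 1.2:
        return "stop"      # red clearly dominant
    if g > r * 1.2:
        return "go"        # green clearly dominant
    return "caution"       # ambiguous, e.g. yellow
```

A learned classifier over the region of interest would replace this heuristic in practice; the point is only the shape of the interface: region in, indication out.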
In step S125, in the case where it is determined that the candidate lane identification information and the history lane identification information are the same, the candidate lane identification information is taken as the target lane identification information.
In this step, the target environment image may be input into a pre-trained lane identification tracking model, and the output target lane identification information is obtained in the case where it is determined that the candidate lane identification information in the target environment image of the current frame is the same as the historical lane identification information in the historical environment image.
It should be understood that the specific algorithms of the traffic light tracking model and the lane identification tracking model may be selected according to actual requirements, as long as the tracking algorithm is able to track the target. For example, the tracking algorithm includes, but is not limited to, a mean shift algorithm, a target tracking algorithm based on Kalman filtering, and a target tracking algorithm based on particle filtering; the embodiment of the present application is not limited thereto.
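As one hedged illustration of such a tracking algorithm, a minimal Kalman filter (constant-position model, reduced to scalars; all noise parameters are assumed values) can smooth one tracked coordinate, such as the centre of a traffic light bounding box, across frames:

```python
class ScalarKalman:
    """Minimal scalar Kalman filter with a constant-position model --
    a sketch of the kind of filter a Kalman-filter-based target tracking
    algorithm builds on, one instance per tracked coordinate."""

    def __init__(self, x0, q=1e-3, r=0.25):
        self.x = x0     # state estimate (e.g. box-centre coordinate)
        self.p = 1.0    # estimate variance
        self.q = q      # process noise variance (assumed)
        self.r = r      # measurement noise variance (assumed)

    def predict(self):
        # Constant-position model: state unchanged, uncertainty grows.
        self.p += self.q
        return self.x

    def update(self, z):
        # Blend prediction and measurement by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

A production tracker would use a constant-velocity state and full covariance matrices, but the predict/update structure is the same.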
By adopting the above technical solution, the historical environment image is acquired so that the candidate traffic light device and the candidate lane identification information determined in the current frame can be tracked; when they are determined to be the same as the historical traffic light device and the historical lane identification information determined in the historical environment image, the accuracy of detecting the traffic light device and the lane identification information can be improved.
Alternatively, fig. 3 is a flowchart illustrating another method of vehicle control according to an exemplary embodiment, and as shown in fig. 3, the following steps may be included in step S12.
In step S126, a first distance of the target traffic light apparatus from the vehicle is acquired.
In step S127, the location information of the target traffic light device is determined based on the first historical distance and the first distance.
The first historical distance is the distance information between the historical traffic light device and the vehicle in the historical environment image, and the position information of the target traffic light device may include a three-dimensional position of the target traffic light device in the coordinate system of the currently driving vehicle; the coordinate system may be a world coordinate system, which is not limited in this disclosure.
In this step, the first historical distance between the target traffic light device and the vehicle may first be obtained; for example, the historical environment image may be input into the trained traffic light device detection model to obtain the first historical distance detected by the model. Then, with the first historical distance and the first distance acquired, the position information of the target traffic light device may be determined by triangulation.
In step S128, a target traffic light distance between the target traffic light device and the vehicle is determined based on the location information of the target traffic light device.
In some embodiments, a current frame point cloud image around a vehicle may be acquired first; then, according to the position information of the target traffic light equipment, determining a first point cloud set corresponding to the target traffic light equipment; determining target position information of the target traffic light equipment according to the first point cloud set; finally, the target distance between the target traffic light device and the vehicle can be determined according to the target position information of the target traffic light device.
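The point-cloud refinement above can be sketched as follows. The radius, coordinate frame (vehicle at the origin of the point-cloud frame), and data layout are illustrative assumptions:

```python
import math

def refine_light_distance(point_cloud, est_pos, radius=1.0):
    """Refine the target traffic light distance with the current-frame
    point cloud: take the first point cloud set as all points within
    `radius` metres of the triangulated position, use their centroid as
    the target position information, and return its distance to the
    vehicle, assumed to sit at the origin of the point-cloud frame."""
    subset = [p for p in point_cloud if math.dist(p, est_pos) <= radius]
    if not subset:
        # No supporting points: fall back to the triangulated estimate.
        return math.dist((0.0, 0.0, 0.0), est_pos)
    cx = sum(p[0] for p in subset) / len(subset)
    cy = sum(p[1] for p in subset) / len(subset)
    cz = sum(p[2] for p in subset) / len(subset)
    return math.dist((0.0, 0.0, 0.0), (cx, cy, cz))
```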
By adopting the above technical solution, the distances between the traffic light device and the vehicle are acquired through the pre-trained traffic light device detection model, and the position information of the target traffic light device is then determined from these distances by triangulation, so that the position information of the target traffic light device can be determined more accurately.
Fig. 4 is a flowchart illustrating a method of vehicle control according to an exemplary embodiment. As shown in fig. 4, the method may include the following steps.
in step S401, a target environment image around the vehicle is acquired; the target environment image includes an image of a target traffic light device.
In step S402, a history environmental image, which is an environmental image of at least one frame before the vehicle acquires the target environmental image, is acquired.
In step S403, a first distance of the target traffic light apparatus from the vehicle is acquired.
In step S404, the location information of the target traffic light device is determined according to a first historical distance and the first distance, where the first historical distance is the distance information between the historical traffic light device and the vehicle in the historical environment image.
In step S405, a target traffic light distance between the target traffic light apparatus and the vehicle is determined based on the position information of the target traffic light apparatus.
In some embodiments, a current frame point cloud image around a vehicle may be acquired first; then, according to the position information of the target traffic light equipment, determining a first point cloud set corresponding to the target traffic light equipment; secondly, determining target position information of the target traffic light equipment according to the first point cloud set; finally, the target distance between the target traffic light device and the vehicle can be determined according to the target position information of the target traffic light device.
In the case that it is determined that the distance between the vehicle and the target traffic light apparatus is less than or equal to the preset distance threshold, step S406 is performed;
in step S406, the historical traffic light apparatus and the historical lane identification information are determined from the historical environmental image.
In step S407, candidate traffic light apparatuses and candidate lane identification information are determined from the target environment image.
In step S408, in the case where it is determined that the candidate traffic light apparatus and the history traffic light apparatus are the same, the candidate traffic light apparatus is regarded as the target traffic light apparatus, and the target traffic indication information of the target traffic light apparatus is acquired.
Optionally, at least one region of interest of the target environment image may be acquired first, the region of interest comprising a region to be processed for characterizing a correspondence of at least one of the candidate traffic light devices determined from the environment image on the target environment image; re-projecting the historical traffic light device onto the target environmental image; in the event that it is determined that there is a re-projected historical traffic light device within at least one region of interest of the target environmental image, a first candidate traffic light device is determined to be the same as the historical traffic light device, the first candidate traffic light device being used to characterize the candidate traffic light device corresponding to the region of interest where the re-projected point of the historical traffic light device is present.
Alternatively, the target traffic indication information of the target traffic light apparatus may be acquired by:
s1, acquiring a first region of interest corresponding to the target traffic light equipment in the target environment image.
The first region of interest is used for representing a region to be processed corresponding to the target traffic light device on the target environment image.
S2, determining a plurality of related target traffic light devices from a preset region of interest through an indication information acquisition model.
Wherein the preset region of interest includes the first region of interest.
For example, the preset region of interest may be acquired first; and then, the first region of interest can be used as the input of the indication information acquisition model to obtain the target traffic indication information corresponding to the target traffic light equipment output by the indication information acquisition model.
S3, determining the target traffic indication information corresponding to the plurality of target traffic light devices.
In step S409, in the case where it is determined that the candidate lane identification information and the history lane identification information are the same, the candidate lane identification information is taken as the target lane identification information.
Optionally, a second region of interest corresponding to the target lane indication identifier in the environmental image may be acquired first, where the second region of interest is used to characterize a region to be processed corresponding to the target lane indication identifier on the environmental image; the lane identification information of the target road identification object may then be determined from the second region of interest.
In step S410, the vehicle is controlled to travel according to the target traffic indication information and the target lane indication information.
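Steps S401 to S410 above can be sketched as one perception-and-control cycle. Every callable in `ctx` below is a hypothetical stand-in for the corresponding model or module, injected so the sketch stays self-contained; none of these names comes from the actual implementation:

```python
def vehicle_control_step(frame, ctx):
    """One perception-and-control cycle following steps S401 to S410.
    `ctx` bundles stand-ins for the detection, triangulation, tracking,
    and indication acquisition models plus the planner."""
    d = ctx["range_model"](frame)                          # S403: first distance
    pos = ctx["triangulate"](ctx["history_distance"], d)   # S404: position info
    dist = ctx["refine_distance"](pos)                     # S405: target distance
    if dist > ctx["distance_threshold"]:                   # gate before S406
        return None                                        # too far: no traffic-light handling yet
    lights, lanes = ctx["detect"](frame)                   # S407: candidates
    target_light = ctx["match_history"](lights)            # S408: tracking match
    target_lane = ctx["match_lane_history"](lanes)         # S409: lane match
    indication = ctx["acquire_indication"](target_light)   # S408: indication info
    return ctx["plan"](indication, target_lane)            # S410: control
```

This makes the gating explicit: the per-frame detection and tracking work of S406 onward only runs once the refined traffic-light distance drops below the preset threshold.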
A target environment image around a vehicle is acquired, the target environment image including an image of a target traffic light device. In the case where it is determined from the target environment image that the distance between the vehicle and the target traffic light device is less than or equal to a preset distance threshold, target traffic indication information of the target traffic light device and target lane indication information of a target lane are determined from the target environment image, the target lane being the lane in which the vehicle is driving. The vehicle is then controlled to drive according to the target traffic indication information and the target lane indication information. In this way, by determining the target traffic indication information of the target traffic light device and the target lane indication information of the target lane from a target environment image acquired in real time, the environment information perceived around the vehicle can be obtained directly, without relying on the map information of a high-precision map, and the vehicle is controlled to drive according to the perceived target traffic indication information and target lane indication information during automatic driving. This avoids the small coverage area and the missing or inaccurate perception information caused by usage constraints when only the map information of a high-precision map is used, improves the accuracy of automatic driving vehicle planning, and reduces the risk of accidents.
Fig. 5 is a block diagram of an apparatus for vehicle control according to an exemplary embodiment. Referring to fig. 5, the apparatus includes an acquisition module 501, a determination module 502, and a control module 503.
An acquisition module 501 configured to acquire a target environment image around a vehicle; the target environment image includes an image of a target traffic light device;
a determining module 502 configured to determine, from the target environment image, target traffic indication information of the target traffic light apparatus and target lane indication information of a target lane, which is a lane in which the vehicle is traveling, in a case where it is determined that a distance between the vehicle and the target traffic light apparatus is less than or equal to a preset distance threshold value, based on the target environment image;
a control module 503 configured to control the vehicle to travel according to the target traffic indication information and the target lane indication information.
Fig. 6 is a block diagram of a determination module shown in accordance with the embodiment shown in fig. 5. Referring to fig. 6, the determining module 502 includes:
a first acquiring submodule 5021 configured to acquire a history environmental image which is an environmental image of at least one frame before the vehicle acquires the target environmental image;
A first determination submodule 5022 configured to determine historical traffic light devices and historical lane identification information from the historical environmental image;
a second determination submodule 5023 configured to determine candidate traffic light devices and candidate lane identification information from the target environment image;
a third determining submodule 5024 configured to take the candidate traffic light device as the target traffic light device and acquire target traffic indication information of the target traffic light device in the case where the candidate traffic light device and the history traffic light device are determined to be the same;
the fourth determination submodule 5025 is configured to take the candidate lane identification information as the target lane identification information in the case where it is determined that the candidate lane identification information and the history lane identification information are the same.
Fig. 7 is a block diagram of a determination module shown in accordance with the embodiment shown in fig. 5. Referring to fig. 7, the determining module 502 includes:
a second acquisition sub-module 5026 configured to acquire a first distance of the target traffic light device from the vehicle;
a fifth determining submodule 5027 configured to determine the position information of the target traffic light device according to a first historical distance and the first distance, wherein the first historical distance is the distance information of the historical traffic light device and the vehicle in the historical environment image;
A sixth determination submodule 5028 configured to determine a target traffic light distance of the target traffic light device and the vehicle based on the location information of the target traffic light device.
Optionally, the sixth determining submodule 5028 is configured to acquire a current frame point cloud image around the vehicle; determining a first point cloud set corresponding to the target traffic light equipment according to the position information of the target traffic light equipment; determining target position information of the target traffic light equipment according to the first point cloud set; and determining the target distance between the target traffic light equipment and the vehicle according to the target position information of the target traffic light equipment.
Optionally, the third determining submodule 5024 is configured to acquire at least one region of interest of the target environment image, the region of interest being a to-be-processed region on the target environment image corresponding to at least one of the candidate traffic light devices determined from the environment image; re-project the historical traffic light device onto the target environment image; and, in the case where it is determined that a re-projected historical traffic light device exists within at least one region of interest of the target environment image, determine that a first candidate traffic light device is the same as the historical traffic light device, the first candidate traffic light device being the candidate traffic light device corresponding to the region of interest in which the re-projected point of the historical traffic light device lies.
Optionally, the second determining submodule 5023 is configured to obtain a second region of interest corresponding to the target lane indication identifier in the environment image, where the second region of interest is used for characterizing a region to be processed corresponding to the target lane indication identifier on the environment image; the lane identification information of the target road identification object is determined from the second region of interest.
Fig. 8 is a block diagram of a determination module shown in accordance with the embodiment shown in fig. 5. Referring to fig. 8, the determining module 502 includes:
a third obtaining sub-module 5029 configured to obtain a first region of interest corresponding to the target traffic light device in the target environment image, where the first region of interest is used to characterize a region to be processed corresponding to the target traffic light device on the target environment image;
a seventh determining submodule 50210 configured to determine a relevant plurality of the target traffic light devices from a preset region of interest including the first region of interest by means of an indication information acquisition model;
an eighth determination submodule 50211 configured to determine the target traffic indication information for a plurality of the target traffic light devices.
Optionally, the seventh determining submodule 50210 is configured to obtain the preset region of interest; and taking the first region of interest as the input of the indication information acquisition model to obtain the target traffic indication information corresponding to the target traffic light equipment output by the indication information acquisition model.
By adopting the above apparatus, the target traffic indication information of the target traffic light device and the target lane indication information of the target lane are determined from a target environment image acquired in real time, so that the environment information perceived around the vehicle can be obtained directly without relying on the map information of a high-precision map, and the vehicle is controlled to drive according to the perceived target traffic indication information and target lane indication information during automatic driving. This avoids the small coverage area and the missing or inaccurate perception information caused by usage constraints when only the map information of a high-precision map is used, improves the accuracy of automatic driving vehicle planning, and reduces the risk of accidents.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be elaborated here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of vehicle control provided by the present disclosure.
Fig. 9 is a block diagram of a vehicle 900, according to an exemplary embodiment. For example, vehicle 900 may be a hybrid vehicle, but may also be a non-hybrid vehicle, an electric vehicle, a fuel cell vehicle, or other type of vehicle. The vehicle 900 may be an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle.
Referring to fig. 9, a vehicle 900 may include various subsystems, such as an infotainment system 910, a perception system 920, a decision control system 930, a drive system 940, and a computing platform 950. Vehicle 900 may also include more or fewer subsystems, and each subsystem may include multiple components. In addition, interconnections between each subsystem and between each component of the vehicle 900 may be achieved by wired or wireless means.
In some embodiments, the infotainment system 910 may include a communication system, an entertainment system, a navigation system, and the like.
The sensing system 920 may include several sensors for sensing information of the environment surrounding the vehicle 900. For example, the sensing system 920 may include a global positioning system (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU), a lidar, a millimeter-wave radar, an ultrasonic radar, and a camera device.
Decision control system 930 may include a computing system, a vehicle controller, a steering system, a throttle, and a braking system.
The drive system 940 may include components that provide powered movement of the vehicle 900. In one embodiment, the drive system 940 may include an engine, an energy source, a transmission, and wheels. The engine may be one or a combination of an internal combustion engine, an electric motor, an air compression engine. The engine is capable of converting energy provided by the energy source into mechanical energy.
Some or all of the functions of the vehicle 900 are controlled by the computing platform 950. Computing platform 950 may include at least one processor 951 and memory 952, and processor 951 may execute instructions 953 stored in memory 952.
The processor 951 may be any conventional processor, such as a commercially available CPU. The processor may also include, for example, a graphics processing unit (GPU), a field-programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof.
The memory 952 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
In addition to instructions 953, the memory 952 may also store data such as road maps, route information, vehicle position, direction, speed, and the like. The data stored by memory 952 may be used by computing platform 950.
In an embodiment of the present disclosure, processor 951 may execute instructions 953 to perform all or part of the steps of the method of vehicle control described above.
In another exemplary embodiment, a computer program product is also provided. The computer program product comprises a computer program executable by a programmable apparatus, and the computer program has code portions for performing the above-described vehicle control method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A method of vehicle control, comprising:
acquiring a target environment image around a vehicle; the target environment image comprises an image of a target traffic light device;
determining, from the target environment image, target traffic indication information of the target traffic light device and target lane indication information of a target lane in a case where it is determined, according to the target environment image, that a distance between the vehicle and the target traffic light device is less than or equal to a preset distance threshold, wherein the target lane is the lane in which the vehicle travels;
and controlling the vehicle to travel according to the target traffic indication information and the target lane indication information.
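The overall flow of claim 1 can be sketched as follows. This is a minimal, hypothetical illustration only: the function name `decide_action`, the action labels, and the 80 m threshold are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the claimed control flow. All names and the
# threshold value are illustrative assumptions, not the patent's design.
DISTANCE_THRESHOLD_M = 80.0  # preset distance threshold (assumed value)

def decide_action(light_state, lane_marking, distance_m,
                  threshold_m=DISTANCE_THRESHOLD_M):
    """Combine traffic indication and lane indication into a driving action."""
    if distance_m > threshold_m:
        return "cruise"          # too far: keep driving, keep observing
    if light_state == "red":
        return "stop"
    if light_state == "green":
        # the lane marking decides which manoeuvre the green light permits
        return {"straight": "go_straight",
                "left": "turn_left",
                "right": "turn_right"}.get(lane_marking, "go_straight")
    return "slow_down"           # yellow or unknown: be conservative
```

Note how the lane indication only matters once the vehicle is within the preset distance of the light, matching the ordering of the claimed steps.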
2. The method of claim 1, wherein the determining target traffic indication information for the target traffic light device from the target environment image, and target lane indication information for a target lane, comprises:
acquiring a historical environment image, wherein the historical environment image is an environment image of at least one frame acquired by the vehicle before the target environment image;
determining a historical traffic light device and historical lane identification information from the historical environment image;
determining a candidate traffic light device and candidate lane identification information from the target environment image;
in a case where the candidate traffic light device and the historical traffic light device are determined to be the same, taking the candidate traffic light device as the target traffic light device and acquiring the target traffic indication information of the target traffic light device;
and in a case where the candidate lane identification information and the historical lane identification information are determined to be the same, taking the candidate lane identification information as the target lane identification information.
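The cross-frame consistency check of claim 2 can be illustrated with a simple bounding-box comparison. Matching by intersection-over-union (IoU) is an assumption for illustration; the patent does not specify the matching criterion here.

```python
# Illustrative sketch of the temporal-consistency check: a candidate
# detection is accepted as the target only if it matches the detection
# from an earlier frame. IoU matching and the 0.5 threshold are assumptions.
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def confirm_target(candidate_box, history_box, min_iou=0.5):
    """Treat the candidate as the same device as the historical one
    when their image regions overlap sufficiently."""
    return iou(candidate_box, history_box) >= min_iou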
3. The method of claim 2, wherein determining the distance between the vehicle and the target traffic light device comprises:
acquiring a first distance between the target traffic light device and the vehicle;
determining position information of the target traffic light device according to a first historical distance and the first distance, wherein the first historical distance is the distance between the historical traffic light device and the vehicle in the historical environment image;
and determining a target traffic light distance between the target traffic light device and the vehicle according to the position information of the target traffic light device.
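Claim 3 fuses a historical range measurement with the current one before computing the final distance. A minimal sketch under an assumed one-dimensional geometry (positions measured along the road) is below; the simple averaging is an illustrative assumption, not the patent's stated fusion method.

```python
# Minimal 1-D sketch: each frame gives the vehicle's position along the
# road and a measured distance to the light; fusing the two frames'
# estimates of the light's position smooths single-frame noise.
def estimate_light_position(vehicle_pos_prev, dist_prev,
                            vehicle_pos_now, dist_now):
    """Fuse two frames' range measurements into one position estimate."""
    est_prev = vehicle_pos_prev + dist_prev   # light position seen earlier
    est_now = vehicle_pos_now + dist_now      # light position seen now
    return 0.5 * (est_prev + est_now)         # assumed equal-weight fusion

def distance_to_light(vehicle_pos_now, light_pos):
    """Target traffic light distance from the fused position."""
    return light_pos - vehicle_pos_now
```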
4. The method of claim 3, wherein the determining the target traffic light distance between the target traffic light device and the vehicle according to the position information of the target traffic light device comprises:
acquiring a current-frame point cloud image around the vehicle;
determining a first point cloud set corresponding to the target traffic light device according to the position information of the target traffic light device;
determining target position information of the target traffic light device according to the first point cloud set;
and determining the target distance between the target traffic light device and the vehicle according to the target position information of the target traffic light device.
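The point-cloud refinement of claim 4 can be sketched as gating the cloud around the image-based position estimate, averaging the gated points, and taking the range from the vehicle. The 2 m gating radius and the centroid step are illustrative assumptions.

```python
import math

# Hypothetical sketch: gather the lidar returns near the light's
# estimated position (the "first point cloud set"), average them into a
# refined position, then take the range to the vehicle at the origin.
def refine_with_point_cloud(est_pos, cloud, gate_m=2.0):
    """est_pos and cloud points are (x, y, z) tuples in the vehicle frame."""
    near = [p for p in cloud
            if math.dist(p, est_pos) <= gate_m]      # first point cloud set
    if not near:
        return est_pos, math.dist((0.0, 0.0, 0.0), est_pos)
    n = len(near)
    refined = tuple(sum(c) / n for c in zip(*near))  # centroid of the set
    return refined, math.dist((0.0, 0.0, 0.0), refined)
```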
5. The method of claim 2, wherein determining that the candidate traffic light device and the historical traffic light device are the same comprises:
acquiring at least one region of interest of the target environment image, wherein the region of interest is a to-be-processed region on the target environment image corresponding to at least one candidate traffic light device determined from the environment image;
re-projecting the historical traffic light device onto the target environment image;
and in a case where a re-projection point of the historical traffic light device falls within at least one region of interest of the target environment image, determining that a first candidate traffic light device is the same as the historical traffic light device, wherein the first candidate traffic light device is the candidate traffic light device corresponding to the region of interest that contains the re-projection point of the historical traffic light device.
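The re-projection test of claim 5 can be illustrated with a pinhole projection followed by a point-in-rectangle check. The camera intrinsics (fx, fy, cx, cy) below are assumed example values, and the helper names are hypothetical.

```python
# Illustrative sketch: the historical device's 3-D position is projected
# into the current frame through a pinhole model, and the candidate whose
# region of interest contains the projected pixel is declared the same
# device. Intrinsics are assumed example values for a 1280x720 image.
def project(point_xyz, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
    """Pinhole projection; x right, y down, z forward (camera frame)."""
    x, y, z = point_xyz
    return (fx * x / z + cx, fy * y / z + cy)

def match_by_reprojection(history_xyz, candidate_rois):
    """Return the index of the ROI containing the re-projected point, or None."""
    u, v = project(history_xyz)
    for i, (x1, y1, x2, y2) in enumerate(candidate_rois):
        if x1 <= u <= x2 and y1 <= v <= y2:
            return i
    return None
```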
6. The method of claim 2, wherein the determining lane identification information of a target lane from the target environment image comprises:
acquiring a second region of interest corresponding to a target lane indication identifier in the environment image, wherein the second region of interest is a to-be-processed region on the environment image corresponding to the target lane indication identifier;
and determining the lane identification information of the target lane indication identifier from the second region of interest.
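The region-of-interest extraction in claim 6 amounts to cropping the lane-marking area out of the environment image before classification. A toy version over a nested-list "image" is shown; representing the image as a grid of values is a simplification for illustration.

```python
# Minimal sketch: cut the second region of interest out of the
# environment image so that only the lane-marking pixels are handed
# to the downstream classifier. The grid-as-image form is illustrative.
def crop_roi(image, roi):
    """image: list of pixel rows; roi: (x1, y1, x2, y2), end-exclusive."""
    x1, y1, x2, y2 = roi
    return [row[x1:x2] for row in image[y1:y2]]
```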
7. The method of claim 2, wherein the obtaining the target traffic indication information for the target traffic light device comprises:
acquiring a first region of interest corresponding to the target traffic light device in the target environment image, wherein the first region of interest is a to-be-processed region on the target environment image corresponding to the target traffic light device;
determining a plurality of related target traffic light devices from a preset region of interest through an indication information acquisition model, wherein the preset region of interest comprises the first region of interest;
and determining the target traffic indication information corresponding to the plurality of target traffic light devices.
8. The method of claim 7, wherein the determining the plurality of related target traffic light devices from the preset region of interest through the indication information acquisition model comprises:
acquiring the preset region of interest;
and taking the first region of interest as an input of the indication information acquisition model to obtain the target traffic indication information, output by the indication information acquisition model, corresponding to the target traffic light device.
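A toy stand-in for the "indication information acquisition model" of claims 7 and 8 is sketched below: it takes cropped ROI pixels and returns traffic indication information. A real model would be a trained classifier; the dominant-channel rule here is purely an illustrative assumption.

```python
# Hypothetical stand-in for the indication information acquisition model.
# The mean-channel heuristic is an assumption, not the patented model.
def indication_model(roi_pixels):
    """roi_pixels: list of (r, g, b) tuples cropped from the first ROI."""
    n = len(roi_pixels)
    r, g, b = (sum(c) / n for c in zip(*roi_pixels))
    if r > g and r > b:
        return "red"
    if g > r and g > b:
        return "green"
    return "unknown"     # ambiguous colour: defer to other evidence
```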
9. An apparatus for controlling a vehicle, comprising:
an acquisition module configured to acquire a target environment image around a vehicle; the target environment image comprises an image of a target traffic light device;
a determining module configured to determine, from the target environment image, target traffic indication information of the target traffic light device and target lane indication information of a target lane in a case where it is determined, according to the target environment image, that a distance between the vehicle and the target traffic light device is less than or equal to a preset distance threshold, wherein the target lane is the lane in which the vehicle travels;
and a control module configured to control the vehicle to travel according to the target traffic indication information and the target lane indication information.
10. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of the method of any one of claims 1-8 when executing the instructions.
11. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-8.
Application CN202310611302.1A, priority and filing date 2023-05-26: Vehicle control method and device, vehicle and storage medium (status: pending).


Publication: CN117002527A, published 2023-11-07.



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination