CN113933854A - Method for automatically driving vehicle to dynamically acquire drive test data and automatically driving vehicle - Google Patents


Info

Publication number
CN113933854A
CN113933854A (Application CN202111124544.5A)
Authority
CN
China
Prior art keywords
fineness
perception
autonomous vehicle
scene
drive test
Prior art date
2021-09-24
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111124544.5A
Other languages
Chinese (zh)
Inventor
肖健雄 (Xiao Jianxiong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Baodong Technology Co., Ltd.
Original Assignee
Shanghai Baodong Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2021-09-24
Filing date
2021-09-24
Publication date
2022-01-14
Application filed by Shanghai Baodong Technology Co., Ltd.
Priority to CN202111124544.5A
Publication of CN113933854A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G01S 17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S 17/50: Systems of measurement based on relative movement of target
    • G01S 17/58: Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931: Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a method for an autonomous vehicle to dynamically acquire drive test data, including the following steps: in a first perception mode, perceiving the current environment to obtain first fineness scene data; judging whether the perception fineness needs to be improved according to the first fineness scene data; when the perception fineness needs to be improved, switching to a second perception mode; in the second perception mode, perceiving the current environment to obtain second fineness scene data, the perception fineness of the second perception mode being higher than that of the first perception mode; and generating drive test data according to the second fineness scene data. The application also provides an autonomous vehicle and a storage medium.

Description

Method for automatically driving vehicle to dynamically acquire drive test data and automatically driving vehicle
Technical Field
The present application relates to the field of autonomous vehicle technologies, and in particular to a method for an autonomous vehicle to dynamically acquire drive test data, an autonomous vehicle, and a storage medium.
Background
An autonomous vehicle realizes fully automatic, unmanned operation through on-board terminal equipment, such as an electronic control unit (ECU), that precisely controls, computes, and analyzes each part of the vehicle.
Perception, prediction, decision-making, planning, and control are routine operations in autonomous driving. For a vehicle to drive autonomously in an effective and accurate way, the autonomous driving algorithm of the autonomous driving system must be trained on a drive test data set with known annotation data. Conventionally, an autonomous vehicle acquiring drive test data does not consider using different perception fineness for different scenes. Finer annotation data therefore cannot be obtained when training the autonomous driving algorithm, which limits the capability of the algorithm and, in turn, the driving safety of the autonomous vehicle.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method for an autonomous vehicle to dynamically acquire drive test data, an autonomous vehicle, and a storage medium.
In a first aspect, an embodiment of the present application provides a method for an autonomous vehicle to dynamically acquire drive test data, the method comprising:
in a first perception mode, perceiving the current environment to obtain first fineness scene data;
judging whether the perception fineness needs to be improved according to the first fineness scene data;
when the perception fineness needs to be improved, switching to a second perception mode; and
in a second perception mode, perceiving the current environment to obtain second fineness scene data, wherein the perception fineness in the second perception mode is higher than the perception fineness in the first perception mode;
and generating drive test data according to the second fineness scene data.
Further, the number of sensors activated by the autonomous vehicle in the first perception mode is less than the number of sensors activated by the autonomous vehicle in the second perception mode.
Further, the number of algorithm modules activated by the autonomous vehicle in the first perception mode is less than the number of algorithm modules activated by the autonomous vehicle in the second perception mode.
Further, the sensors include a lidar, and the grid cells of the lidar point cloud of the autonomous vehicle in the second perception mode are smaller than the grid cells of the lidar point cloud of the autonomous vehicle in the first perception mode.
Further, the determining whether to improve the perception fineness according to the first fineness scene data specifically includes:
acquiring the distance between an obstacle and the automatic driving vehicle according to the first fineness scene data;
when the distance between the obstacle and the autonomous vehicle is smaller than a first preset value, the perception fineness needs to be increased.
Further, the determining whether to improve the perception fineness according to the first fineness scene data further includes:
identifying the type of a first fineness scene according to the first fineness scene data, wherein the automatic driving vehicle stores a first scene type, and the first scene type is a scene requiring the automatic driving vehicle to improve perception fineness;
when the type of the first fineness scene matches the first scene type, the perception fineness needs to be increased.
In a second aspect, embodiments of the present application provide a computer readable storage medium storing one or more programs which, when executed by a processor, implement a method for dynamically acquiring drive test data for an autonomous vehicle as described above.
In a third aspect, an embodiment of the present application provides an autonomous vehicle, including:
a memory for storing program instructions; and
a processor for executing the program instructions to cause the autonomous vehicle to implement the method for dynamically acquiring drive test data for an autonomous vehicle as described above.
In a fourth aspect, embodiments of the present application provide an autonomous vehicle having a first perception mode and a second perception mode, the autonomous vehicle comprising:
the first perception module is used for perceiving the current environment to obtain first fineness scene data in a first perception mode;
the scene analysis module is used for judging whether the perception fineness needs to be improved or not according to the first fineness scene data;
the switching module is used for switching to a second perception mode when the perception fineness needs to be improved;
the second perception module is used for perceiving the current environment in a second perception mode to obtain second fineness scene data, wherein the perception fineness in the second perception mode is higher than the perception fineness in the first perception mode;
and the drive test data generation module is used for generating drive test data according to the second fineness scene data.
Further, the scene analysis module includes:
the distance calculation unit is used for acquiring the distance between an obstacle and the automatic driving vehicle according to the first fineness scene data;
and the judging unit is used for determining that the perception fineness needs to be increased when the distance between the obstacle and the automatic driving vehicle is smaller than a first preset value.
According to the method for an autonomous vehicle to dynamically acquire drive test data, the autonomous vehicle, and the storage medium above, the autonomous vehicle perceives the current environment in the first perception mode to obtain first fineness scene data, judges whether the perception fineness needs to be improved according to the first fineness scene data, switches to the second perception mode when it does, and perceives the current environment in the second perception mode to obtain second fineness scene data. Perception fineness is thus adjusted dynamically while acquiring drive test data, yielding more refined data and more accurate annotation data, and thereby improving the capability of the on-board autonomous driving algorithm.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic view of an autonomous vehicle according to an embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of a method for dynamically acquiring drive test data by an autonomous vehicle according to an embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a first sub-step of step S102 of a method for dynamically acquiring drive test data by an autonomous vehicle according to an embodiment of the present application.
Fig. 4 is a flowchart illustrating a second sub-step of step S102 of the method for dynamically acquiring drive test data by an autonomous vehicle according to the embodiment of the present application.
Fig. 5 is a schematic view of a scene in which drive test data is actually acquired according to an embodiment of the present application.
Fig. 6 is a schematic view of a scenario in which drive test data is used according to an embodiment of the present application.
FIG. 7 is a schematic view of an autonomous vehicle interior module according to an embodiment of the present application.
FIG. 8 is a schematic diagram of a determination module of an autonomous vehicle according to an embodiment of the present application.
Fig. 9 is a schematic internal structural diagram of an autonomous vehicle according to an embodiment of the present application.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. The drawings illustrate examples of embodiments of the invention; they are for illustration only and are not drawn to scale. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
Please refer to fig. 1 in combination, which is a schematic diagram of an autonomous vehicle according to an embodiment of the present application.
The autonomous vehicle 100 is an intelligent vehicle that can drive unmanned under the control of a computer system, integrating functions such as perception, prediction, decision-making, planning, and control. The autonomous vehicle 100 is equipped with lidar, various sensors, and monitoring devices to acquire its surrounding environment and traffic conditions. The autonomous vehicle 100 automatically transports passengers from one location to another without human manipulation. It may also be a motorcycle, truck, sport utility vehicle (SUV), recreational vehicle (RV), boat, aircraft, or any other transportation device. In the exemplary embodiment, the autonomous vehicle 100 is a so-called Level 4 or Level 5 autonomous system. A Level 4 system is a "highly automated" driving system: it makes autonomous decisions, generally requires no operation by a human driver, and, supported by road information data that can be updated in real time, can handle practical scenarios such as automatic vehicle pick-up and return, automatic platoon cruising, and automatic obstacle avoidance. A Level 5 system is a "fully automated" driving system: it makes autonomous decisions, requires no operation by a human driver at all, and, supported by real-time-updatable road information data, can drive automatically in all weather and all regions, coping with road conditions caused by changes in climate and geography.
Please refer to fig. 2, which is a flowchart of a method for dynamically acquiring drive test data by an autonomous vehicle according to an embodiment of the present disclosure. The autonomous vehicle has a first perception mode and a second perception mode. The first perception mode is used when the autonomous vehicle 100 travels normally on ordinary road sections. The second perception mode is used when the autonomous vehicle 100 is on a particular road section or in a particular scene. The method for the autonomous vehicle 100 to dynamically acquire drive test data includes the following steps.
Step S101: in a first perception mode, perceive the current environment to obtain first fineness scene data. Specifically, the autonomous vehicle 100 is provided with a plurality of sensors, such as cameras, millimeter wave radars, and infrared sensors. In this embodiment, the first perception mode is used when the autonomous vehicle 100 travels normally on ordinary road sections. Accordingly, the autonomous vehicle 100 starts the sensors and algorithm modules required for normal driving on ordinary road sections and perceives its surroundings to obtain corresponding scene data, whose fineness is the first fineness. It can be understood that when the autonomous vehicle perceives with different sensors or algorithm modules, the fineness of the resulting scene data differs. For example, the autonomous vehicle 100 may be provided with both a fine-measurement lidar and a coarse-measurement lidar: on ordinary road sections it only needs to start the coarse-measurement lidar, whereas when parking in a narrow space it needs to start the fine-measurement lidar to increase the perception fineness and obtain more precise data. The scene data includes position information, traveling information, road information, obstacle information, weather, and the like of the autonomous vehicle 100, where the obstacle information includes at least one of the position information, velocity, and acceleration of each obstacle. For example, the scene data includes the own-vehicle information of the autonomous vehicle 100 (GPS positioning information, traveling speed, traveling direction, and so on), road information (road section type, straight road, curve, ramp, and so on), and obstacle information (position, type, speed, acceleration, and so on).
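For illustration only, the scene data enumerated above can be pictured as a simple record. The following is a minimal Python sketch; the class and field names (SceneData, ObstacleInfo, and so on) are assumptions for illustration and do not come from the patent itself.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class ObstacleInfo:
        position: Tuple[float, float]  # (x, y) in the vehicle frame, meters
        obstacle_type: str             # e.g. "pedestrian", "vehicle", "cone"
        speed: float                   # m/s
        acceleration: float            # m/s^2

    @dataclass
    class SceneData:
        fineness: str                      # "first" or "second"
        gps_position: Tuple[float, float]  # own-vehicle GPS positioning
        traveling_speed: float             # m/s
        traveling_direction: float         # heading, degrees
        road_type: str                     # e.g. "straight", "curve", "ramp"
        weather: Optional[str] = None
        obstacles: List[ObstacleInfo] = field(default_factory=list)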
Step S102: judge whether the perception fineness needs to be improved according to the first fineness scene data. Specifically, increasing the perception fineness of the autonomous vehicle 100 increases the perception computation load, so with limited on-board computing power the perception fineness does not need to be high in ordinary cases. When the autonomous vehicle 100 is in a narrow passage with dense crowds on both sides of the vehicle body, however, it needs to increase the perception fineness to obtain finer data and ensure its driving safety. How the autonomous vehicle 100 judges whether the perception fineness needs to be increased based on the first fineness scene data is described in detail below.
Step S103: when the perception fineness needs to be improved, switch to a second perception mode. Specifically, the second perception mode is used when the autonomous vehicle 100 is on a particular road section or in a particular scene, for example, driving on a narrow road or encountering dense pedestrians at close range. The first and second perception modes differ in at least one of the following ways: the number of sensors activated by the autonomous vehicle 100 in the second perception mode is greater than in the first perception mode; the number of algorithm modules activated in the second perception mode is greater than in the first perception mode; or the grid cells of the lidar point cloud in the second perception mode are smaller than in the first perception mode. A sketch of such mode configurations follows.
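The three differences listed above suggest that a perception mode can be described by a small configuration object. The sketch below is a minimal illustration under assumed names; the concrete sensor and algorithm-module names are examples drawn loosely from the description, not a definitive configuration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PerceptionMode:
        name: str
        sensors: tuple            # sensors activated in this mode
        algorithm_modules: tuple  # perception algorithm modules activated
        grid_cell_size_m: float   # lidar point-cloud grid cell size, meters

    FIRST_MODE = PerceptionMode(
        name="first",
        sensors=("coarse_lidar", "camera", "millimeter_wave_radar"),
        algorithm_modules=("object_detection", "lane_tracking"),
        grid_cell_size_m=0.10,  # 10 cm grid, as in the example of step S105
    )

    SECOND_MODE = PerceptionMode(
        name="second",
        sensors=FIRST_MODE.sensors + ("fine_lidar", "infrared_sensor"),
        algorithm_modules=FIRST_MODE.algorithm_modules + ("gesture_recognition",),
        grid_cell_size_m=0.05,  # 5 cm grid for narrow roads
    )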
Step S104: in the second perception mode, perceive the current environment to obtain second fineness scene data, where the perception fineness of the second perception mode is higher than that of the first perception mode. Specifically, the scene data again includes the position information, traveling information, road information, obstacle information, weather, and the like of the autonomous vehicle 100, where the obstacle information includes at least one of the position information, velocity, and acceleration of each obstacle. The second fineness scene data perceived in the second perception mode is finer than the first fineness scene data perceived in the first perception mode.
Step S105: generate drive test data according to the second fineness scene data. Specifically, the autonomous vehicle 100 computes over the second fineness scene data with the on-board autonomous driving algorithm to obtain the drive test data. The drive test data includes the perception, prediction, decision, and planning data of the autonomous vehicle 100 for road obstacles, as well as accurate tracking of each obstacle's trajectory. Ways of increasing the perception fineness include activating more sensors, activating additional algorithm modules, and reducing the grid cell size of the lidar point cloud. For example, the autonomous vehicle 100 may be provided with a fine millimeter wave radar and a coarse millimeter wave radar; it does not need the fine radar when driving on the open road, but needs to turn it on when backing into a garage. As another example, the autonomous vehicle 100 generally does not need gesture recognition, but when the current scene is a traffic police officer directing traffic, the gesture recognition module needs to be enabled. Likewise, when the autonomous vehicle 100 enters a construction road section with temporarily placed cones on both sides, the grid cell size of the lidar point cloud needs to be reduced to obtain finer data: the grid cell size may be 10 cm in ordinary cases, but on a very narrow road the autonomous vehicle 100 can dynamically reduce it to 5 cm and identify obstacle sizes more finely. Acquiring more refined drive test data in this way yields more accurate annotation data when the drive test data is labeled; the more accurate annotation data helps train a stronger on-board autonomous driving algorithm, improving the safety factor of the autonomous vehicle 100.
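The effect of the 10 cm to 5 cm change above can be illustrated with a toy occupancy-grid computation. This is a minimal sketch under assumed names (occupied_cells is not from the patent); it shows only that a smaller grid cell size resolves the same point cloud into more, finer cells.

    import numpy as np

    def occupied_cells(points: np.ndarray, cell_size_m: float) -> set:
        """Quantize an (N, 3) point cloud into 2-D grid cells of the given size."""
        cells = np.floor(points[:, :2] / cell_size_m).astype(int)
        return set(map(tuple, cells))

    rng = np.random.default_rng(0)
    points = rng.uniform(-5.0, 5.0, size=(10_000, 3))  # stand-in for a lidar scan

    coarse = occupied_cells(points, cell_size_m=0.10)  # 10 cm grid, first mode
    fine = occupied_cells(points, cell_size_m=0.05)    # 5 cm grid, second mode
    print(len(coarse), len(fine))  # the 5 cm grid yields many more, finer cells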
By dynamically adjusting the perception fineness as described above, the autonomous vehicle 100 obtains finer drive test data, which in turn improves the accuracy of the annotation data and thereby the capability of the on-board autonomous driving algorithm.
Please refer to fig. 3, which is a flowchart illustrating a first sub-step of step S102 of the method for dynamically acquiring drive test data for an autonomous vehicle according to an embodiment of the present application. Step S102 specifically includes the following steps:
and S1021, acquiring the distance between the obstacle and the automatic driving vehicle according to the first fineness scene data. Specifically, the autonomous vehicle 100 acquires data of an obstacle including position information, length, width, and the like of the obstacle by using a camera, a millimeter wave radar, an infrared sensor, and the like. The autonomous vehicle 100 is provided with a positioning sensor, and acquires current position information of itself by the positioning sensor. The autonomous vehicle 100 calculates the distance between the obstacle and the autonomous vehicle 100 from the current position information of the autonomous vehicle and the position information of the obstacle.
Step S1022: when the distance between the obstacle and the autonomous vehicle is smaller than a first preset value, the perception fineness needs to be improved. Specifically, when the autonomous vehicle 100 determines that the distance between an obstacle and itself is smaller than the first preset value, it increases the perception fineness. As shown in fig. 5, when a dense crowd is in front of the autonomous vehicle 100 at close range, the autonomous vehicle 100 can obtain finer data by activating its more accurate radar or lidar sensors or by reducing the grid cell size of the lidar point cloud.
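Steps S1021 and S1022 reduce to a distance computation followed by a threshold comparison. A minimal sketch follows; the 5 m threshold and the function name are illustrative assumptions, since the patent leaves the first preset value unspecified.

    import math

    FIRST_PRESET_DISTANCE_M = 5.0  # assumed value for illustration only

    def needs_finer_perception(ego_position, obstacle_position) -> bool:
        """True when the obstacle is closer than the first preset value."""
        distance = math.dist(ego_position, obstacle_position)
        return distance < FIRST_PRESET_DISTANCE_M

    # Example: a pedestrian 3.2 m ahead triggers the switch to the second mode.
    print(needs_finer_perception((0.0, 0.0), (3.2, 0.0)))  # True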
Please refer to fig. 4, which is a flowchart illustrating a second sub-step of step S102 of the method for dynamically acquiring drive test data by an autonomous vehicle according to an embodiment of the present application. Step S102 further includes the following steps:
and S1023, identifying the type of a first fineness scene according to the first fineness scene data, wherein the automatic driving vehicle stores a first scene type, and the first scene type is a scene requiring the automatic driving vehicle to improve perception fineness. Specifically, the autonomous vehicle 100 preselects a scene in which a first scene type, for example, a traffic police directing traffic, a blind spot in a short distance of the autonomous vehicle 100 finding an obstacle, and the like, is stored.
Step S1024: when the type of the first fineness scene matches the first scene type, the perception fineness needs to be improved. Specifically, if the autonomous vehicle 100 recognizes that the current scene is a traffic police officer directing traffic, and that scene has been marked in advance as the first scene type, the autonomous vehicle 100 enables an additional gesture recognition algorithm module to recognize the officer's gestures and obtain finer drive test data.
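Steps S1023 and S1024 amount to matching a recognized scene type against a stored set. A minimal sketch under assumed names follows; the stored scene types are examples taken from the description, and the classifier is a stand-in stub.

    FIRST_SCENE_TYPES = {
        "traffic_police_directing_traffic",
        "obstacle_in_near_blind_spot",
    }

    def classify_scene(scene_data: dict) -> str:
        # Stand-in for the on-board scene recognizer, which the patent does
        # not specify; here we just read a precomputed label for illustration.
        return scene_data.get("scene_type", "ordinary_road")

    def perception_fineness_needs_increase(scene_data: dict) -> bool:
        # Step S1024: the recognized type matches a stored first scene type.
        return classify_scene(scene_data) in FIRST_SCENE_TYPES

    print(perception_fineness_needs_increase(
        {"scene_type": "traffic_police_directing_traffic"}))  # True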
Please refer to fig. 6, which is a schematic diagram of a scenario using drive test data according to an embodiment of the present application. The scene comprises the automatic driving vehicle 100, the server 200 and the client 300, wherein the automatic driving vehicle 100 is connected with the server 200 through a wireless network, and the server 200 is connected with the client 300 through a wired or wireless network.
When drive test data is acquired, the autonomous vehicle 100 runs on the road, and sensors on the autonomous vehicle 100, such as lidar and cameras, collect and store the drive test data and transmit it to the server 200. The server 200 invokes an autonomous driving algorithm, processes the drive test data to obtain processed data, and uses the processed data to label the drive test data.
The client 300 receives the drive test data sent by the server 200; when the server 200 cannot label the drive test data accurately, an annotator corrects and labels the identified obstacles at the client 300 through a labeling tool. In some embodiments, the client 300 may be a terminal such as a computer or a tablet computer. The server 200 may be a tower server, a rack server, a blade server, a high-density server, or the like.
Specifically, the process of collecting drive test data on the road is time-ordered, and the drive test data includes the perception, prediction, decision, and planning data of the autonomous vehicle 100 for road obstacles, as well as accurate tracking of each obstacle's trajectory. The autonomous vehicle 100 uploads the drive test data to the server 200 over the wireless network. While the autonomous vehicle 100 must process drive test data in time order when driving in real time, once the server 200 has acquired the data it is no longer bound by that order: earlier data may be labeled using later data. For example, drive test data with a first tag contains a pedestrian standing at the roadside, and drive test data with a second tag contains the tracked trajectory of that pedestrian crossing the road. From the behavior data in the second-tag drive test data it can be seen that the pedestrian in the first-tag data intends to cross the road, so the pedestrian's behavior from the second-tag data is labeled onto the first-tag data. The first-tag drive test data was collected at a first time and the second-tag data at a second time, the first time being earlier than the second.
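Because the server 200 is not bound by collection order, a later observation can annotate an earlier frame. The following is a minimal sketch of that back-labeling idea under assumed structures; DriveTestFrame and back_label are illustrative names, not from the patent.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class DriveTestFrame:
        tag: str           # e.g. "first", "second"
        timestamp: float   # collection time; first < second
        obstacle_id: str
        labels: Dict[str, str] = field(default_factory=dict)

    def back_label(frames: List[DriveTestFrame], obstacle_id: str, behavior: str) -> None:
        """Label earlier frames of an obstacle with behavior observed later."""
        for frame in frames:
            if frame.obstacle_id == obstacle_id:
                frame.labels["intent"] = behavior

    first = DriveTestFrame("first", 1.0, "pedestrian_7")    # standing at the roadside
    second = DriveTestFrame("second", 2.0, "pedestrian_7")  # tracked crossing the road
    back_label([first], "pedestrian_7", "crossing_road")    # later behavior, earlier frame
    print(first.labels)  # {'intent': 'crossing_road'}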
Please refer to fig. 7 in combination, which is a schematic diagram of an interior module of an autonomous vehicle according to an embodiment of the present application. The autonomous vehicle 100 includes a first perception module 101, a scene analysis module 102, a switching module 103, a second perception module 104, and a drive test data generation module 105.
The first perception module 101 is configured to perceive the current environment in the first perception mode to obtain first fineness scene data. Specifically, sensors provided on the autonomous vehicle 100, such as lidar and cameras, collect and store the scene data.
The scene analysis module 102 is configured to judge whether the perception fineness needs to be improved according to the first fineness scene data. Specifically, the scene analysis module 102 makes this judgment based on the first fineness scene data acquired by the first perception module 101.
The switching module 103 is configured to switch to the second perception mode when the perception fineness needs to be improved. Specifically, when the scene analysis module 102 determines that the perception fineness needs to be improved, the switching module 103 switches from the first perception mode to the second perception mode.
The second sensing module 104 is configured to sense a current environment in a second sensing mode to obtain second fineness scene data, where the sensing fineness in the second sensing mode is higher than the sensing fineness in the first sensing mode. Specifically, the second sensing module 104 senses the current environment to obtain second fineness scene data.
The drive test data generation module 105 is configured to generate drive test data according to the second fineness scene data. Specifically, the drive test data generation module 105 generates the drive test data from the second fineness scene data perceived by the second perception module 104. One possible wiring of these modules is sketched below.
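Taken together, the five modules suggest a simple control flow. The sketch below shows one possible wiring with assumed method names (perceive, needs_finer_perception, and so on do not appear in the patent); it is an illustration, not the patent's implementation.

    class AutonomousVehicle:
        def __init__(self, first_perception, scene_analysis, switching,
                     second_perception, drive_test_generator):
            self.first_perception = first_perception          # module 101
            self.scene_analysis = scene_analysis              # module 102
            self.switching = switching                        # module 103
            self.second_perception = second_perception        # module 104
            self.drive_test_generator = drive_test_generator  # module 105

        def acquire_drive_test_data(self):
            scene = self.first_perception.perceive()                # first fineness data
            if not self.scene_analysis.needs_finer_perception(scene):
                return None                                         # stay in the first mode
            self.switching.to_second_mode()                         # step S103
            fine_scene = self.second_perception.perceive()          # second fineness data
            return self.drive_test_generator.generate(fine_scene)   # step S105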
Please refer to fig. 8, which is a schematic diagram of a scene analysis module of an autonomous vehicle according to an embodiment of the present application. The scene analysis module 102 includes a distance calculation unit 1021 and a determination unit 1022.
The distance calculation unit 1021 is configured to obtain the distance between an obstacle and the autonomous vehicle according to the first fineness scene data. Specifically, the distance calculation unit 1021 obtains the distance between the obstacle and the autonomous vehicle 100 from the first fineness scene data acquired by the first perception module 101.
The determination unit 1022 is configured to determine that the perception fineness needs to be increased when the distance between the obstacle and the autonomous vehicle is smaller than a first preset value. Specifically, when the determination unit 1022 determines that the distance between the obstacle and the autonomous vehicle 100 is smaller than the first preset value, the perception fineness of the autonomous vehicle 100 is increased.
Please refer to fig. 9 in combination, which is a schematic diagram of the internal structure of an autonomous vehicle according to an embodiment of the present application. The autonomous vehicle 100 includes a memory 1001, a processor 1002, and a bus 1003. The memory 1001 includes at least one type of readable storage medium, such as flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, a magnetic disk, or an optical disk. In some embodiments the memory 1001 is an internal storage unit of the autonomous vehicle 100, such as its hard disk. In other embodiments the memory 1001 may be an external storage device of the autonomous vehicle 100, such as a plug-in hard drive, Smart Media Card (SMC), Secure Digital (SD) card, or flash card provided on the autonomous vehicle 100. Further, the memory 1001 may include both internal and external storage devices of the autonomous vehicle 100. The memory 1001 can be used not only to store application software installed in the autonomous vehicle 100 and various types of data, but also to temporarily store data that has been or will be output.
The bus 1003 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean there is only one bus or one type of bus.
Further, the autonomous vehicle 100 may also include a display component 1004. The display component 1004 may be an LED (Light Emitting Diode) display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch panel, or the like. The display component 1004, which may also be referred to as a display device or display unit, displays the information processed in the autonomous vehicle 100 and a visual user interface.
Further, autonomous vehicle 100 may also include a communication component 1005, and communication component 1005 may optionally include a wired communication component and/or a wireless communication component (e.g., WI-FI communication component, bluetooth communication component, etc.) that are typically used to establish a communication connection between autonomous vehicle 100 and other devices.
The processor 1002, in some embodiments, may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip configured to execute program codes stored in the memory 1001 or process data.
It is to be understood that fig. 9 only illustrates an autonomous vehicle 100 having components 1001-1005; those skilled in the art will appreciate that the configuration shown in fig. 9 does not limit the autonomous vehicle 100, which may include fewer or more components than illustrated, combine certain components, or arrange the components differently.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the system embodiments described above are merely illustrative: the division into units is only one logical functional division, and other divisions are possible in practice; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied wholly or partly as a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A method for an autonomous vehicle to dynamically acquire drive test data, the method comprising:
under a first perception mode, perceiving a current environment to obtain first fineness scene data;
judging whether the perception fineness needs to be improved or not according to the first fineness scene data;
when the perception fineness needs to be improved, switching to a second perception mode; and
in a second perception mode, perceiving the current environment to obtain second fineness scene data, wherein the perception fineness in the second perception mode is higher than the perception fineness in the first perception mode;
and generating drive test data according to the second fineness scene data.
2. The method for dynamically acquiring drive test data for an autonomous vehicle as recited in claim 1, wherein the number of sensors activated by the autonomous vehicle in the first perception mode is less than the number of sensors activated by the autonomous vehicle in the second perception mode.
3. The method for dynamically acquiring drive test data for an autonomous vehicle as recited in claim 1, wherein the number of algorithm modules activated by the autonomous vehicle in the first perception mode is less than the number of algorithm modules activated by the autonomous vehicle in the second perception mode.
4. The method for dynamically acquiring drive test data by an autonomous vehicle of claim 2, wherein the sensors comprise a lidar, and wherein the grid cells of the lidar point cloud of the autonomous vehicle in the second perception mode are smaller than the grid cells of the lidar point cloud of the autonomous vehicle in the first perception mode.
5. The method for dynamically acquiring drive test data of an autonomous vehicle as claimed in claim 1, wherein said determining whether the perceived fineness needs to be increased based on the first fineness scene data specifically comprises:
acquiring the distance between an obstacle and the automatic driving vehicle according to the first fineness scene data;
when the distance between the obstacle and the autonomous vehicle is smaller than a first preset value, the perception fineness needs to be increased.
6. The method for dynamically acquiring drive test data for an autonomous vehicle as recited in claim 1, wherein said determining if the perceived fineness needs to be increased based on said first fineness scene data further comprises:
identifying the type of a first fineness scene according to the first fineness scene data, wherein the automatic driving vehicle stores a first scene type, and the first scene type is a scene requiring the automatic driving vehicle to improve perception fineness;
when the type of the first fineness scene matches the first scene type, the perception fineness needs to be increased.
7. A computer readable storage medium storing one or more programs which, when executed by a processor, implement the method of dynamically acquiring drive test data for an autonomous vehicle of any of claims 1-6.
8. An autonomous vehicle, comprising:
a memory for storing program instructions; and
a processor for executing the program instructions to cause the autonomous vehicle to implement the method of dynamically acquiring drive test data for an autonomous vehicle as claimed in any of claims 1 to 6.
9. An autonomous vehicle having a first perception mode and a second perception mode, comprising:
the first perception module is used for perceiving the current environment to obtain first fineness scene data in a first perception mode;
the scene analysis module is used for judging whether the perception fineness needs to be improved or not according to the first fineness scene data;
the switching module is used for switching to a second perception mode when the perception fineness needs to be improved;
the second perception module is used for perceiving the current environment in a second perception mode to obtain second fineness scene data, wherein the perception fineness in the second perception mode is higher than the perception fineness in the first perception mode;
and the drive test data generation module is used for generating drive test data according to the second fineness scene data.
10. The autonomous vehicle of claim 9, wherein the scene analysis module comprises:
the distance calculation unit is used for acquiring the distance between an obstacle and the automatic driving vehicle according to the first fineness scene data;
and the judging unit is used for determining that the perception fineness needs to be increased when the distance between the obstacle and the automatic driving vehicle is smaller than a first preset value.
CN202111124544.5A 2021-09-24 2021-09-24 Method for automatically driving vehicle to dynamically acquire drive test data and automatically driving vehicle Pending CN113933854A (en)

Priority Applications (1)

Application Number: CN202111124544.5A
Priority Date / Filing Date: 2021-09-24
Title: Method for automatically driving vehicle to dynamically acquire drive test data and automatically driving vehicle


Publications (1)

Publication Number: CN113933854A
Publication Date: 2022-01-14

Family

ID=79276854

Family Applications (1)

Application Number: CN202111124544.5A
Priority Date / Filing Date: 2021-09-24
Title: Method for automatically driving vehicle to dynamically acquire drive test data and automatically driving vehicle

Country Status (1)

Country: CN
Document: CN113933854A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination