CN112326258A - Method, device and system for detecting automatic driving state and electronic equipment - Google Patents
- Publication number
- CN112326258A (application CN201910695865.7A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
- G01M17/00—Testing of vehicles
- G01M17/007—Wheeled or endless-tracked vehicles
Abstract
The embodiments of the disclosure provide a method, a device, a system, an electronic device and a storage medium for detecting an automatic driving state, relating to the technical field of vehicles. The method includes: controlling an environment simulation device to construct a simulated driving scene based on driving environment information of a vehicle; obtaining first driving information generated by the vehicle while driving on a conveyor belt; controlling the running state of the conveyor belt according to the first driving information; and detecting the automatic driving state of the vehicle according to the first driving information and second driving information corresponding to the driving environment information. The method, device, system, electronic device and storage medium can construct a simulated driving scene for the vehicle, control the running state of the conveyor belt bearing the vehicle, and detect the automatic driving capability based on the driver's actual driving information and the automatic driving information. The simulated driving condition is thereby made more realistic, test flexibility is improved, test space is saved, and the approach is suitable for various test sites.
Description
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method, an apparatus, a system, an electronic device, and a storage medium for detecting an automatic driving state.
Background
In the production of unmanned automobiles, before an unmanned automobile actually goes on the road, it must be detected whether its automatic driving capability reaches the specified qualification index. At present, existing detection systems simply build a detection scene in a designated closed environment to simulate an actual road scene and test the automatic driving capability of the unmanned automobile there. Such a detection scene is simplistic and difficult to change once built, so complex detection scenes cannot be constructed flexibly.
Disclosure of Invention
In order to solve the technical problem, embodiments of the present disclosure provide a method, an apparatus, a system, an electronic device, and a storage medium for detecting an automatic driving state.
According to an aspect of an embodiment of the present disclosure, there is provided a method for detecting an automatic driving state, including: acquiring running environment information of a vehicle, and controlling an environment simulation device of a driving ability testing mechanism to construct a simulated running scene corresponding to the vehicle based on the running environment information; obtaining first driving information generated by the vehicle running on a conveyor belt of the driving ability testing mechanism for the simulated running scene; controlling an operation state of the conveyor belt based on the first driving information so that a relative position of the vehicle and the conveyor belt is kept unchanged; second driving information corresponding to the running environment information is obtained, and an automatic driving state of the vehicle is detected based on the second driving information and the first driving information.
According to another aspect of the embodiments of the present disclosure, there is provided an automatic driving state detection apparatus including: the scene construction module is used for obtaining running environment information of a vehicle and controlling an environment simulation device of a driving ability testing mechanism to construct a simulated running scene corresponding to the vehicle based on the running environment information; the operation control module is used for obtaining first driving information generated by the vehicle running on a conveyor belt of the driving ability testing mechanism aiming at the simulated running scene; controlling an operation state of the conveyor belt based on the first driving information so that a relative position of the vehicle and the conveyor belt is kept unchanged; and the capacity detection module is used for obtaining second driving information corresponding to the running environment information and detecting the automatic driving state of the vehicle based on the second driving information and the first driving information.
According to another aspect of the embodiments of the present disclosure, there is provided a system for detecting an automatic driving state, including: a driving ability testing mechanism and a device for detecting the automatic driving state; wherein the device for detecting the automatic driving state controls the driving ability testing mechanism to execute corresponding operations.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-mentioned method.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; the processor is used for executing the method.
Based on the method, device, system, electronic device and storage medium for detecting the automatic driving state provided by the embodiments of the present disclosure, the real road environment is simulated by constructing a simulated driving scene for the vehicle and controlling the running state of the conveyor belt bearing the vehicle, and the automatic driving capability is detected based on the driver's actual driving information and the automatic driving information. The simulated driving condition is thereby made more realistic, the vehicle can complete various detections of its automatic driving capability through the driving ability testing mechanism, and the flexibility of the detection is improved.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a flow chart of one embodiment of a method of detecting an autonomous driving state of the present disclosure;
FIG. 2 is a flow chart of one embodiment of the present disclosure for obtaining travel environment information for a vehicle;
FIG. 3 is a flow diagram of one embodiment of the present disclosure for obtaining driving environment information based on an image and a point cloud;
FIG. 4 is a flow diagram of one embodiment of the present disclosure for constructing a simulated driving scenario;
FIG. 5 is a flow chart of one embodiment of detecting an autopilot capability of the present disclosure;
FIG. 6 is a flow chart of one embodiment of controlling conveyor belt operation of the present disclosure;
FIG. 7 is a schematic structural diagram of one embodiment of a drivability test mechanism of the present disclosure;
FIG. 8 is a schematic structural diagram illustrating one embodiment of an autopilot capability detection apparatus of the present disclosure;
FIG. 9 is a schematic diagram of a scene building module of an embodiment of the present disclosure;
FIG. 10 is a block diagram of one embodiment of an electronic device of the present disclosure.
Detailed Description
Example embodiments according to the present disclosure will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those skilled in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another; they imply neither any particular technical meaning nor any necessary logical order between the elements.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more than two and "at least one" may refer to one, two or more than two.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" in the present disclosure generally indicates that the associated objects before and after it are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Embodiments of the present disclosure may be implemented in electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with an electronic device such as a terminal device, computer system, or server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks may be performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
In the process of realizing the present disclosure, the inventor finds that, in the process of detecting the automatic driving state of the unmanned automobile, the existing detection system simply builds a detection scene in a specified closed environment, the detection scene is relatively simple and easy, and is not easy to change after being built, the complex detection scene cannot be flexibly built, and the actual traffic condition is difficult to simulate.
The method for detecting the automatic driving state of the present disclosure controls an environment simulation device to construct a simulated driving scene based on the driving environment information of a vehicle, obtains first driving information generated by the vehicle driving on a conveyor belt, controls the running state of the conveyor belt according to the first driving information, and detects the automatic driving state of the vehicle according to the first driving information and second driving information corresponding to the driving environment information. Because a simulated driving scene can be constructed for the vehicle and the running state of the conveyor belt bearing the vehicle can be controlled, the simulated driving condition is more realistic, complex detection scenes can be constructed flexibly, and test flexibility is improved.
Exemplary Method
Fig. 1 is a flowchart of an embodiment of a method for detecting an automatic driving state according to the present disclosure, where the method shown in fig. 1 includes the steps of: S101-S104. The following describes each step.
S101, obtaining running environment information of the vehicle, and controlling an environment simulation device of a driving ability testing mechanism to construct a simulated running scene corresponding to the vehicle based on the running environment information.
In one embodiment, the driving environment information may be collected by a camera device, a laser radar, and the like, and the driving environment information includes various environment element information in a driving scene of the vehicle, such as information of shapes, colors, distances, and the like of lane lines, intersections, roundabouts, obstacles, indicator lights, signs, pedestrians, and the like. An environment simulation device of the driving ability test mechanism constructs a simulated driving scene including various environment elements based on the driving environment information. In one embodiment, the vehicle may be an unmanned vehicle.
S102, first driving information generated by the vehicle running on the conveyor belt of the driving ability testing mechanism according to the simulated running scene is obtained.
In one embodiment, the driving ability testing mechanism is provided with a conveyor belt, the unmanned vehicle runs on the conveyor belt, and an automatic driving system of the unmanned vehicle automatically generates first driving information aiming at the simulated running scene, wherein the first driving information comprises at least one item of control information of an accelerator, a brake, a steering, a gear, light and the like.
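As an illustration only, the first driving information described above could be represented as a small record of control commands. The field names, value ranges, and defaults below are assumptions made for this sketch, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DrivingInfo:
    """One sampled set of control commands (illustrative fields only)."""
    throttle: float = 0.0          # assumed range 0.0 - 1.0
    brake: float = 0.0             # assumed range 0.0 - 1.0
    steering: float = 0.0          # steering angle in degrees, + = right
    gear: str = "D"                # e.g. "P", "R", "N", "D"
    lights: Optional[str] = None   # e.g. "left", "right", "brake", or None

# First driving information: generated by the automatic driving system.
first = DrivingInfo(brake=0.6, gear="D", lights="brake")
# Second driving information: recorded from the human driver (see S104/S501).
second = DrivingInfo(brake=0.55, gear="D", lights="brake")
```

The same structure can hold both information streams, which makes the later matching step (S502) a field-by-field comparison.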
And S103, controlling the running state of the conveyor belt based on the first driving information so as to keep the relative position of the vehicle and the conveyor belt unchanged.
In an embodiment, the operating state of the conveyor belt may be controlled based on the first driving information generated by the automatic driving system, the operating state including a speed, a conveying direction, and the like of the conveyor belt. In one embodiment, the relative position of the vehicle and the conveyor belt is kept constant by controlling the operating state of the conveyor belt during the running of the vehicle on the conveyor belt.
And S104, obtaining second driving information corresponding to the running environment information, and detecting the automatic driving state of the vehicle based on the second driving information and the first driving information.
In one embodiment, the second driving information of the vehicle in the actual driving scene may be obtained while the driver actually drives the vehicle; the second driving information includes the driver's control information for at least one of throttle, brake, steering, gear, lights, and the like. The automatic driving capability of the vehicle includes positioning capability, vehicle control capability, and the like, where the positioning capability reflects how accurately the vehicle can locate itself in the driving environment, and the vehicle control capability reflects how accurately the first driving information is generated from the driving environment information.
In one embodiment, first driving information automatically generated by the automatic driving system of the vehicle for the simulated driving scene corresponding to the actual driving scene is obtained, and the automatic driving state of the vehicle is detected by comparing the second driving information with the first driving information: whether the second driving information matches the first driving information is detected, the automatic driving state can be scored based on the matching result, and the automatic driving capability of the vehicle can thereby be determined.
In the method for detecting the automatic driving state in this embodiment, the environment simulation device is controlled to construct the simulated driving scene based on the driving environment information of the vehicle, the running state of the conveyor belt bearing the vehicle is controlled, and the automatic driving capability is detected according to the driver's actual driving information and the automatic driving information; various complex detection scenes can thus be constructed flexibly, and the flexibility of the test is improved.
Fig. 2 is a flowchart of one embodiment of the present disclosure for obtaining driving environment information of a vehicle, and the method shown in fig. 2 includes the steps of: s201 and S202. The following describes each step.
S201, in the driving process of the vehicle, image data acquired by an image acquisition device and point cloud data acquired by a radar are acquired.
In one embodiment, a camera, a laser radar and the like can be installed on the vehicle, and when the vehicle runs in an actual running scene, image data, three-dimensional laser radar point cloud data and the like are collected through the camera, the laser radar and the like.
S202, obtaining driving environment information based on the image data and the point cloud data.
There are various methods for obtaining the driving environment information based on the image data and the point cloud data, and reference may be made to the following description of the embodiment shown in fig. 3, which is not detailed herein.
By mounting a camera, a laser radar and the like on the vehicle and collecting images, point clouds and other data in the vehicle's actual driving scene, the driving environment information obtained from these data can be used to construct a simulated driving scene that reproduces the vehicle's actual driving scene and actual traffic conditions, so that the test is more authentic and accurate.
Fig. 3 is a flowchart of one embodiment of the present disclosure for obtaining driving environment information based on an image and a point cloud, and the method shown in fig. 3 includes the steps of: S301-S303. The following describes each step.
S301, obtaining a mapping conversion matrix between the coordinate system of the image acquisition device and the coordinate system of the radar.
In one embodiment, the image captured by the camera is a two-dimensional image with pixel coordinates (U, V), and the laser radar captures a three-dimensional point cloud with coordinates (X, Y, Z). A mapping conversion matrix M between the camera coordinate system and the radar coordinate system is established to map a three-dimensional point (X, Y, Z) to a two-dimensional point (U, V). Using the pinhole camera model, the mapping formula is:

Z_c · [U, V, 1]^T = [[f_u, 0, u_0], [0, f_v, v_0], [0, 0, 1]] · [R | t] · [X, Y, Z, 1]^T    (1-1)

In equation (1-1), the matrix containing f_u, f_v, u_0 and v_0 holds the camera parameters: f_u and f_v are the scale factors of the x and y axes of the camera coordinate system (i.e., the effective focal lengths of the camera in the horizontal and vertical directions), u_0 and v_0 give the center point of the image plane, R is the rotation matrix, t is the translation vector, and Z_c is the depth of the point in the camera coordinate system.
S302, obtaining three-dimensional point cloud coordinates corresponding to pixel points in the two-dimensional environment image based on the mapping conversion matrix.
Expanding equation (1-1) with (X_c, Y_c, Z_c)^T = R · (X, Y, Z)^T + t gives:

U = f_u · X_c / Z_c + u_0,   V = f_v · Y_c / Z_c + v_0    (1-2)

Rearranging (1-2) over a set of corresponding image points and point-cloud points yields a linear system in the unknown calibration parameters, written in final matrix form as equation (1-4). Solving (1-4) gives the calibration parameters; the two-dimensional environment image and the three-dimensional point cloud data are then fused to obtain the three-dimensional point cloud coordinates corresponding to the pixel points in the two-dimensional environment image.
S303, the actual running environment information of the vehicle is generated. The vehicle actual running environment information includes: the corresponding relation between the two-dimensional environment image, the pixel points in the two-dimensional environment image and the three-dimensional point cloud coordinate and the like.
In one embodiment, the environment simulation apparatus includes a plurality of telescopic (extendable and retractable) environment simulation blocks and the like. There may be various methods by which the environment simulation apparatus of the driving ability testing mechanism constructs a simulated driving scene corresponding to the vehicle based on the driving environment information.
Determining the mapping conversion matrix between the radar and the image acquisition device is a precondition for fusing the radar point cloud with the image information; based on the mapping conversion matrix, the three-dimensional point cloud coordinates corresponding to the pixel points in the two-dimensional environment image are obtained. By obtaining the conversion relation between the radar and the image acquisition device, the advantages of the two sensors complement each other, and more effective information about the vehicle's actual running environment is obtained.
FIG. 4 is a flow chart of one embodiment of the present disclosure for constructing a simulated driving scenario, the method shown in FIG. 4 comprising the steps of: S401-S404. The following describes each step.
S401, determining display pixel point information corresponding to each environment simulation block in the two-dimensional environment image.
In one embodiment, a two-dimensional environment image is analyzed, display pixel points of environment elements such as lane lines, intersections, roundabout, barriers, indicator lights, signs, pedestrians and the like are obtained from the two-dimensional environment image, each pixel point in the two-dimensional environment image corresponds to one environment simulation block, and display pixel point information of the environment elements corresponding to the environment simulation blocks is determined.
S402, determining the environment simulation color of the environment simulation block based on the display pixel point information, and controlling the environment simulation block to display the corresponding environment simulation color.
In one embodiment, the color of the environmental simulation block display is determined based on the color information of the environmental element. For example, the environment element is an indicator light, the indicator light is a red light, and the environment simulation color of the environment simulation block simulating the red light is determined to be red based on the display pixel point information of the red light.
And S403, obtaining a three-dimensional point cloud coordinate corresponding to the display pixel point information based on the corresponding relation, and determining the relative position information between the environment simulation block and the vehicle according to the three-dimensional point cloud coordinate.
In one embodiment, three-dimensional point cloud coordinates corresponding to display pixel point information are obtained based on the corresponding relation between pixel points in the two-dimensional environment image and the three-dimensional point cloud coordinates, relative position information between the environment simulation block and the vehicle is determined according to the three-dimensional point cloud coordinates, and distance information between environment elements such as lane lines, crossroads, rotary islands, obstacles, indicator lamps, signs and pedestrians and the vehicle is obtained.
S404, the extending length of the environment simulation block is controlled based on the relative position information. The extension length of the environmental simulation block may be determined based on information on the distance between the environmental element and the vehicle.
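Steps S401–S404 can be sketched as follows. The block interface, the maximum extension, and the distance clamp are assumptions made for the illustration; the patent does not specify them:

```python
MAX_EXTENSION_M = 2.0   # hypothetical maximum extension of a simulation block
MAX_DISTANCE_M = 50.0   # distances beyond this are clamped

class SimulationBlock:
    """One telescopic environment simulation block (illustrative interface)."""
    def __init__(self):
        self.color = (0, 0, 0)
        self.extension = 0.0

    def set_color(self, rgb):
        # S402: display the environment simulation color of the pixel.
        self.color = rgb

    def set_extension(self, distance_m):
        # S404: nearer environment elements extend the block further.
        d = min(max(distance_m, 0.0), MAX_DISTANCE_M)
        self.extension = MAX_EXTENSION_M * (1.0 - d / MAX_DISTANCE_M)

block = SimulationBlock()
block.set_color((255, 0, 0))   # e.g. a red indicator light (S401/S402)
block.set_extension(10.0)      # element 10 m from the vehicle (S403/S404)
```

Driving one such block per pixel of each video frame produces the continuously changing scene described below.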
In one embodiment, a video shot by a camera in an actual driving scene can be obtained, and each frame of two-dimensional environment image in the video is obtained. And determining display pixel point information corresponding to the environment simulation block in the two-dimensional environment image. Determining the environment simulation color of the environment simulation block based on the display pixel point information, obtaining a three-dimensional point cloud coordinate corresponding to the display pixel point information based on the corresponding relation between the pixel point in the two-dimensional environment image and the three-dimensional point cloud coordinate, and determining the relative position information between the environment simulation block and the vehicle according to the three-dimensional point cloud coordinate. And controlling the environment simulation block to rapidly stretch and display colors according to the environment simulation colors and the relative position information, and constructing a continuously changing simulated driving scene corresponding to the video shot in the actual driving scene.
By controlling the environment simulation blocks to display the corresponding environment simulation colors according to the two-dimensional environment image, determining the relative position information between each environment simulation block and the vehicle according to the three-dimensional point cloud coordinates, and controlling the extension length of each environment simulation block, the environment simulation device can be controlled to construct a simulated driving scene that reproduces the real road environment. The simulated driving condition is therefore more realistic, the vehicle can complete various detections of automatic driving capability in the driving ability testing mechanism, and the flexibility of the detection is improved.
In one embodiment, the second driving information during actual driving of the vehicle by the driver is obtained, and there may be a plurality of methods for detecting the automatic driving ability of the vehicle based on the second driving information and the first driving information. FIG. 5 is a flow chart of one embodiment of the present disclosure for detecting autopilot capability, the method shown in FIG. 5 comprising the steps of: S501-S503. The following describes each step.
And S501, acquiring second driving information corresponding to the running environment information in the actual running process of the vehicle. The second driving information includes driving information of the driver for at least one of accelerator, brake, steering, gear, light, and the like.
And S502, matching the second driving information with the first driving information to obtain a matching result.
And S503, determining the automatic driving capability of the vehicle based on the matching result.
For example, the driver performs a braking operation in scene A of the actual driving process. If the automatic driving system of the vehicle also performs a braking operation in scene B, the segment of the simulated driving scene corresponding to scene A, it is determined that the second driving information matches the first driving information; if the automatic driving system instead performs an acceleration operation in scene B, it is determined that the second driving information does not match the first driving information. The automatic driving ability is scored according to the matching result between the second driving information and the first driving information, and various scoring rules may be adopted to evaluate the automatic driving ability based on the score.
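The matching and scoring of S501–S503 might be sketched as follows. The categorical action labels and the pass threshold are invented for the example, since the patent does not fix a concrete scoring rule:

```python
def match_rate(first_actions, second_actions):
    """Fraction of scene segments where autopilot and driver chose the same action."""
    assert len(first_actions) == len(second_actions)
    hits = sum(a == b for a, b in zip(first_actions, second_actions))
    return hits / len(first_actions)

# Per-segment actions: first driving information (autopilot) vs. second (driver).
autopilot = ["brake", "accelerate", "steer_left", "brake"]
driver    = ["brake", "accelerate", "steer_left", "accelerate"]

score = match_rate(autopilot, driver)  # 3 of 4 segments agree
passed = score >= 0.8                  # hypothetical qualification index
```

A production scoring rule would likely also compare continuous quantities (brake pressure, steering angle) within tolerances rather than exact categorical labels.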
The actual driving information of the driver is matched with the automatic driving information, and the automatic driving capability is detected based on the matching result, so that the vehicle can finish various detections of the automatic driving capability in a driving capability testing mechanism, and the detection of the unmanned driving capability has higher flexibility and adaptability.
In one embodiment, there may be a plurality of methods for controlling the operating state of the conveyor belt based on the first driving information. FIG. 6 is a flow chart of one embodiment of the present disclosure for controlling conveyor belt operation, the method shown in FIG. 6 comprising the steps of: s601 and S602. The following describes each step.
S601, determining the steering direction of the vehicle based on the first driving information, and controlling the base to rotate according to the steering direction so that the steering angle of the vehicle is the same as the rotation angle of the base.
And S602, determining vehicle speed information based on the first driving information, and controlling the running speed of the conveyor belt according to the vehicle speed information so that the running speed of the conveyor belt is the same as the running speed of the vehicle and the running direction of the conveyor belt is opposite to the running direction of the vehicle.
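A minimal sketch of the control loop in S601 and S602, assuming the first driving information carries the vehicle's speed and steering angle; the class and attribute names are illustrative:

```python
class TestRig:
    """Base + conveyor belt of the driving ability testing mechanism (sketch)."""
    def __init__(self):
        self.base_angle_deg = 0.0
        self.belt_speed_mps = 0.0  # negative = opposite to the vehicle's heading

    def update(self, vehicle_speed_mps, steering_angle_deg):
        # S601: rotate the base by the same angle as the vehicle's steering.
        self.base_angle_deg = steering_angle_deg
        # S602: run the belt at the vehicle's speed in the opposite direction,
        # so the vehicle's position relative to the belt stays unchanged.
        self.belt_speed_mps = -vehicle_speed_mps

rig = TestRig()
rig.update(vehicle_speed_mps=8.0, steering_angle_deg=15.0)
```

Called once per control cycle with the latest first driving information, this keeps the vehicle stationary on the rig while it "drives".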
In one embodiment, as shown in fig. 7, the driving ability testing mechanism includes an environment simulation device and a conveyor belt. The conveyor belt can have various structures and can adjust its speed and direction to match the vehicle: its speed can be adjusted according to the speed of the vehicle, and it can turn according to the steering of the vehicle. The driving ability testing mechanism has an enclosure mechanism 700 of a preset shape, such as a cube or a rectangular parallelepiped. The environment simulation device is disposed around the perimeter of the enclosure mechanism 700, and the conveyor belt is disposed inside it. The driving ability testing mechanism further includes a base 701 arranged at the center of the enclosure mechanism; the conveyor belt is located on the base 701, the unmanned vehicle is placed on the conveyor belt to run, and the base 701 can rotate according to the first driving information.
The environment simulation apparatus includes a plurality of retractable environment simulation blocks 702. The top end of each environment simulation block 702 can display the color of one pixel point, and the blocks can rapidly extend and retract to generate a three-dimensional simulated driving scene, such as the environment simulation block 703 in fig. 7. When the vehicle travels on the conveyor belt, its movement in the front, rear, left, and right directions can be accommodated by controlling the operation of the base 701 and the conveyor belt. According to the collected running environment information, the environment simulation blocks 702 are controlled to generate a three-dimensional simulated driving scene that restores the real information of the road; the vehicle recognizes the simulated driving scene through its radar and camera, and its automatic driving system generates first driving information under the continuously changing scene. The first driving information is compared with the second driving information of the driver to detect the automatic driving state of the vehicle.
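One way to picture driving the block field is the sketch below. The `EnvironmentBlock` class, the one-block-per-pixel grid, and the linear depth-to-extension rule are assumptions used for illustration; the patent only specifies that each block shows one pixel's color and extends according to relative position.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentBlock:
    color: tuple = (0, 0, 0)     # RGB shown at the block's top end
    extension_mm: float = 0.0    # how far the block is extended

def update_blocks(blocks, image, depths, max_extension_mm=500.0, max_depth_m=50.0):
    """Drive one block per pixel: the block displays the pixel's color,
    and nearer scene points (smaller depth) produce longer extensions,
    so the block field forms a coarse 3-D relief of the scene."""
    for (row, col), block in blocks.items():
        block.color = image[row][col]
        depth = min(depths[row][col], max_depth_m)
        block.extension_mm = (1.0 - depth / max_depth_m) * max_extension_mm
```

Running this every frame against the camera image and the per-pixel depths derived from the point cloud would keep the physical scene in step with the recorded driving environment.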
The driving ability testing mechanism in the above embodiment can control the environment simulation device to construct a simulated driving scene based on the driving environment information of the vehicle, control the running state of the conveyor belt according to the first driving information generated by the vehicle driving on the conveyor belt, and detect the automatic driving ability based on the actual driving information of the driver and the automatic driving information. Because the real road environment can be simulated, the simulated driving conditions are more realistic; the vehicle can complete multiple detections of automatic driving ability within the driving ability testing mechanism, the detection of unmanned driving ability can be completed in a limited space without occupying a larger real space, and the flexibility of testing can be improved.
Exemplary devices
In one embodiment, as shown in fig. 8, the present disclosure provides an automatic driving state detection apparatus including: a scene building module 801, an operation control module 802 and a capability detection module 803.
The scene construction module 801 obtains driving environment information of the vehicle, and controls an environment simulation device of the driving ability testing mechanism to construct a simulated driving scene corresponding to the vehicle based on the driving environment information.
The operation control module 802 obtains first driving information generated for a simulated driving scene by a vehicle driving on a conveyor belt of a drivability test mechanism, and controls an operation state of the conveyor belt based on the first driving information so that a relative position of the vehicle and the conveyor belt is kept constant.
The ability detection module 803 obtains second driving information corresponding to the running environment information, and detects an autonomous driving state of the vehicle based on the second driving information and the first driving information.
In one embodiment, as shown in FIG. 9, the scene building module 801 comprises: a driving environment information obtaining module 8011 and a simulated driving scene constructing module 8012. The driving environment information obtaining module 8011 obtains image data acquired by the image acquisition device and point cloud data acquired by the radar in the driving process of the vehicle, and obtains driving environment information based on the image data and the point cloud data.
In one embodiment, the driving environment information obtaining module 8011 obtains a mapping transformation matrix between the image capturing device coordinate system and the radar coordinate system. The driving environment information obtaining module 8011 obtains a three-dimensional point cloud coordinate corresponding to a pixel point in a two-dimensional environment image based on a mapping transformation matrix, and generates vehicle actual driving environment information, where the vehicle actual driving environment information includes: the corresponding relation of the two-dimensional environment image, the pixel points and the three-dimensional point cloud coordinate and the like.
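The pixel-to-point-cloud correspondence rests on the mapping transformation matrix between the two coordinate systems. A hedged sketch of the forward direction is below; the 3×4 projection matrix `P` and the function name `project_point` are illustrative assumptions, and in practice `P` would come from joint extrinsic/intrinsic calibration of the image acquisition device and the radar.

```python
import numpy as np

def project_point(P, xyz):
    """Project a 3-D point in the radar coordinate system to a 2-D
    pixel via the 3x4 mapping transformation matrix P (homogeneous
    coordinates): divide by the third component to get (u, v)."""
    x, y, z = xyz
    u, v, w = P @ np.array([x, y, z, 1.0])
    return u / w, v / w

# Toy projection matrix (unit focal length, no offset, aligned frames),
# chosen only so the arithmetic is easy to follow:
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
u, v = project_point(P, (2.0, 3.0, 10.0))  # (0.2, 0.3)
```

Inverting this correspondence (pixel to nearest projected point) is what lets the module attach a three-dimensional point cloud coordinate to each pixel of the two-dimensional environment image.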
In one embodiment, the environment simulation apparatus includes a plurality of retractable environment simulation blocks. The simulated driving scene construction module 8012 determines display pixel point information corresponding to each environment simulation block in the two-dimensional environment image, determines an environment simulation color of the environment simulation block based on the display pixel point information, and controls the environment simulation block to display the corresponding environment simulation color. In an embodiment, each pixel point in the two-dimensional environment image corresponds to one environment simulation block.
The simulated driving scene construction module 8012 obtains a three-dimensional point cloud coordinate corresponding to the display pixel point information based on the correspondence, determines relative position information between the environment simulation block and the vehicle according to the three-dimensional point cloud coordinate, and controls the extension length of the environment simulation block based on the relative position information.
In one embodiment, the ability detection module 803 obtains second driving information corresponding to the driving environment information during actual driving of the vehicle, and matches the second driving information with the first driving information to obtain a matching result. The capability detection module 803 determines the automatic driving capability of the vehicle based on the matching result.
In one embodiment, the conveyor belt is located on a base. The operation control module 802 determines a steering direction of the vehicle based on the first driving information, and controls the base to rotate according to the steering direction so that a steering angle of the vehicle is the same as a rotation angle of the base. The operation control module 802 determines vehicle speed information based on the first driving information, and controls the operation speed of the conveyor belt according to the vehicle speed information such that the operation speed of the conveyor belt is the same as the traveling speed of the vehicle and the traveling direction of the conveyor belt is opposite to the traveling direction of the vehicle.
FIG. 10 is a block diagram of one embodiment of an electronic device of the present disclosure, as shown in FIG. 10, the electronic device 101 includes one or more processors 1011 and memory 1012.
The processor 1011 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 101 to perform desired functions.
Memory 1012 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory, for example, may include: random Access Memory (RAM) and/or cache memory (cache), etc. The nonvolatile memory, for example, may include: read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 1011 to implement the methods of detecting an autonomous driving state of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 101 may further include: an input device 1013, an output device 1014, etc., which are interconnected by a bus system and/or other form of connection mechanism (not shown). Further, the input device 1013 may include, for example, a keyboard, a mouse, and the like. The output device 1014 can output various kinds of information to the outside. The output devices 1014 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 101 relevant to the present disclosure are shown in fig. 10, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 101 may include any other suitable components, depending on the particular application.
In one embodiment, the present disclosure provides an automatic driving ability detection system, which includes a driving ability testing mechanism and an automatic driving state detection device as described in any one of the above embodiments. The detection device of the automatic driving state controls the driving ability test mechanism to perform corresponding operation.
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method of detecting an autonomous driving state according to various embodiments of the present disclosure described in the "exemplary methods" section of this specification above.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a method of detecting an autonomous driving state according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium may include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The method, the apparatus, the electronic device, and the storage medium for detecting an automatic driving state in the above embodiments control the environment simulation apparatus to construct a simulated driving scene based on the driving environment information of the vehicle, obtain first driving information generated by the vehicle driving on the conveyor belt, control the running state of the conveyor belt according to the first driving information, and detect the automatic driving state of the vehicle according to the second driving information corresponding to the driving environment information and the first driving information. The method can construct a vehicle simulated driving scene, control the running state of the conveyor belt bearing the vehicle, and detect the automatic driving ability based on the actual driving information of the driver and the automatic driving information. Because the real road environment can be simulated, the simulated driving conditions are more realistic; the vehicle can complete multiple detections of automatic driving ability within the driving ability testing mechanism, the detection of unmanned driving ability can be completed in a limited space without occupying a larger real space, test space can be saved, and the method is suitable for various test sites.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects, and the like, will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (15)
1. A method of detecting an autonomous driving state, comprising:
acquiring running environment information of a vehicle, and controlling an environment simulation device of a driving ability testing mechanism to construct a simulated running scene corresponding to the vehicle based on the running environment information;
obtaining first driving information generated by the vehicle running on a conveyor belt of the driving ability testing mechanism for the simulated running scene;
controlling an operation state of the conveyor belt based on the first driving information so that a relative position of the vehicle and the conveyor belt is kept unchanged;
second driving information corresponding to the running environment information is obtained, and an automatic driving state of the vehicle is detected based on the second driving information and the first driving information.
2. The method of claim 1, wherein the obtaining of the driving environment information of the vehicle comprises:
in the driving process of the vehicle, acquiring image data acquired by an image acquisition device and point cloud data acquired by a radar;
and obtaining the driving environment information based on the image data and the point cloud data.
3. The method of claim 2, wherein the obtaining the driving environment information based on the image data and the point cloud data comprises:
obtaining a mapping conversion matrix between a coordinate system of the image acquisition device and a radar coordinate system;
obtaining three-dimensional point cloud coordinates corresponding to pixel points in the two-dimensional environment image based on the mapping conversion matrix;
generating actual running environment information of the vehicle; wherein the vehicle actual running environment information includes: and the corresponding relation among the two-dimensional environment image, the pixel points and the three-dimensional point cloud coordinate.
4. The method of claim 3, the environmental simulation device comprising: a plurality of retractable environmental simulation blocks; wherein the constructing a simulated driving scene corresponding to the vehicle by the environment simulation apparatus that controls the drivability test mechanism based on the driving environment information includes:
determining display pixel point information corresponding to each environment simulation block in the two-dimensional environment image;
determining the environment simulation color of the environment simulation block based on the display pixel point information, and controlling the environment simulation block to display the corresponding environment simulation color;
obtaining the three-dimensional point cloud coordinate corresponding to the display pixel point information based on the corresponding relation, and determining the relative position information between the environment simulation block and the vehicle according to the three-dimensional point cloud coordinate;
controlling an extension length of the environmental simulation block based on the relative position information.
5. The method of claim 4, wherein,
each pixel point in the two-dimensional environment image corresponds to one of the environment simulation blocks.
6. The method of claim 1, wherein the obtaining second driving information corresponding to the driving environment information, and the detecting the automatic driving state of the vehicle based on the second driving information and the first driving information comprise:
obtaining the second driving information corresponding to the driving environment information during actual driving of the vehicle;
matching the second driving information with the first driving information to obtain a matching result;
determining an autopilot capability of the vehicle based on the matching result.
7. The method of claim 1, wherein the conveyor belt is positioned on a base; wherein the controlling the running state of the conveyor belt based on the first driving information includes:
determining a steering direction of the vehicle based on the first driving information, and controlling the base to rotate according to the steering direction so that the steering angle of the vehicle is the same as the rotation angle of the base;
determining vehicle speed information based on the first driving information, and controlling the running speed of the conveyor belt according to the vehicle speed information so that the running speed of the conveyor belt is the same as the running speed of the vehicle and the running direction of the conveyor belt is opposite to the running direction of the vehicle.
8. An automatic driving state detection device comprising:
the scene construction module is used for obtaining running environment information of a vehicle and controlling an environment simulation device of a driving ability testing mechanism to construct a simulated running scene corresponding to the vehicle based on the running environment information;
the operation control module is used for obtaining first driving information generated by the vehicle running on a conveyor belt of the driving ability testing mechanism aiming at the simulated running scene; controlling an operation state of the conveyor belt based on the first driving information so that a relative position of the vehicle and the conveyor belt is kept unchanged;
and the capacity detection module is used for obtaining second driving information corresponding to the running environment information and detecting the automatic driving state of the vehicle based on the second driving information and the first driving information.
9. An automatic driving capability detection system, comprising:
a driving ability testing mechanism and the detection device of the automatic driving state according to claim 8;
wherein the detection device of the automatic driving state controls the driving ability test mechanism to execute corresponding operation.
10. The detection system of claim 9,
the driving ability testing mechanism includes: an environment simulation device and a conveyor belt; the driving ability testing mechanism has an enclosure mechanism of a preset shape; the environment simulation device is arranged around the periphery of the enclosure mechanism, the conveyor belt is arranged inside the enclosure mechanism, and a vehicle to be tested runs on the conveyor belt.
11. The detection system of claim 10,
the driving ability testing mechanism includes: a base; the base is arranged at the center of the enclosure mechanism, and the conveyor belt is located on the base.
12. The detection system of claim 11,
the environment simulation apparatus includes: a plurality of retractable environment simulation blocks.
13. The detection system of claim 11,
the preset shape includes: cube, cuboid.
14. A computer-readable storage medium, the storage medium storing a computer program for performing the method of any of the preceding claims 1-7.
15. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor configured to perform the method of any of the preceding claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910695865.7A CN112326258B (en) | 2019-07-30 | 2019-07-30 | Method, device and system for detecting automatic driving state and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112326258A | 2021-02-05 |
CN112326258B | 2022-11-08 |
Family
ID=74319888
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910695865.7A Active CN112326258B (en) | 2019-07-30 | 2019-07-30 | Method, device and system for detecting automatic driving state and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112326258B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114061941A (en) * | 2021-10-18 | 2022-02-18 | 吉林大学 | Experimental environment adjusting test method and system for new energy vehicle gearbox and test box |
CN114112426A (en) * | 2021-11-08 | 2022-03-01 | 东风汽车集团股份有限公司 | Automatic driving test method, system and device |
CN115077922A (en) * | 2021-03-15 | 2022-09-20 | 北汽福田汽车股份有限公司 | Calibration method, device, medium and equipment of vehicle driving auxiliary system |
CN117330331A (en) * | 2023-10-30 | 2024-01-02 | 南方(韶关)智能网联新能源汽车试验检测中心有限公司 | Intelligent driving test platform system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3130904A1 (en) * | 2014-04-09 | 2017-02-15 | Panasonic Intellectual Property Management Co., Ltd. | Vehicle evaluation device |
CN108627350A (en) * | 2018-03-27 | 2018-10-09 | 北京新能源汽车股份有限公司 | Vehicle testing system and method |
CN109187048A (en) * | 2018-09-14 | 2019-01-11 | 盯盯拍(深圳)云技术有限公司 | Automatic Pilot performance test methods and automatic Pilot performance testing device |
CN109213126A (en) * | 2018-09-17 | 2019-01-15 | 安徽江淮汽车集团股份有限公司 | Autonomous driving vehicle test macro and method |
CN109520744A (en) * | 2018-11-12 | 2019-03-26 | 百度在线网络技术(北京)有限公司 | The driving performance test method and device of automatic driving vehicle |
Non-Patent Citations (2)
Title |
---|
F. Moreno-Navarro: "Encoded asphalt materials for the guidance of autonomous vehicles", Automation in Construction * |
Zhao Xiangmo et al.: "Indoor rapid testing platform for autonomous vehicles based on vehicle-in-the-loop simulation", China Journal of Highway and Transport * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115077922A (en) * | 2021-03-15 | 2022-09-20 | 北汽福田汽车股份有限公司 | Calibration method, device, medium and equipment of vehicle driving auxiliary system |
CN115077922B (en) * | 2021-03-15 | 2024-02-02 | 北汽福田汽车股份有限公司 | Calibration method, device, medium and equipment of vehicle driving auxiliary system |
CN114061941A (en) * | 2021-10-18 | 2022-02-18 | 吉林大学 | Experimental environment adjusting test method and system for new energy vehicle gearbox and test box |
CN114061941B (en) * | 2021-10-18 | 2023-12-19 | 吉林大学 | Experimental environment adjustment test method and system for new energy vehicle gearbox and test box |
CN114112426A (en) * | 2021-11-08 | 2022-03-01 | 东风汽车集团股份有限公司 | Automatic driving test method, system and device |
CN117330331A (en) * | 2023-10-30 | 2024-01-02 | 南方(韶关)智能网联新能源汽车试验检测中心有限公司 | Intelligent driving test platform system |
CN117330331B (en) * | 2023-10-30 | 2024-03-12 | 南方(韶关)智能网联新能源汽车试验检测中心有限公司 | Intelligent driving test platform system |
Also Published As
Publication number | Publication date |
---|---|
CN112326258B (en) | 2022-11-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |