WO2020199057A1 - Self-piloting simulation system, method and device, and storage medium - Google Patents


Info

Publication number
WO2020199057A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
simulated
simulation
environment
Application number
PCT/CN2019/080693
Other languages
French (fr)
Chinese (zh)
Inventor
黎晓键
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980004921.6A (published as CN111316324A)
Priority to PCT/CN2019/080693 (published as WO2020199057A1)
Publication of WO2020199057A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047 - Optimisation of routes or paths, e.g. travelling salesman problem
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 - Geographic models
    • G06T3/02
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras

Definitions

  • This application relates to the field of automatic driving technology, and in particular to an automatic driving simulation system, method, device, and storage medium.
  • With the development of autonomous driving technology, self-piloting automobiles have become a research hotspot.
  • Self-driving cars, also known as driverless cars, computer-driven cars, or wheeled mobile robots, are smart cars that achieve driverless operation through a computer system.
  • Self-driving cars rely on artificial intelligence, visual computing, radar, monitoring devices, and global positioning systems working together so that a computer can operate a motor vehicle automatically and safely without any active human operation.
  • Drivable-area detection is a key technology in automatic driving, because the correctness of its results determines whether the system can plan a good driving route. How to test and verify autonomous driving technology at the simulation level has therefore become a hot research topic.
  • An embodiment of the present application provides an automatic driving simulation system that can simulate a realistic driving scene for testing automatic driving technology.
  • According to a first aspect, an automatic driving simulation system is provided, which includes:
  • a motion module, configured to simulate the driving process of the mobile platform in a simulated driving environment;
  • a camera module, configured to determine the camera parameters of the mobile platform's camera during the simulated driving process;
  • a rendering module, configured to determine, according to the camera parameters, the simulated environment image obtained when the camera of the mobile platform photographs the simulated driving environment during the simulated driving process;
  • a marking module, configured to mark the simulated environment image, where at least the drivable area in the simulated environment image is marked; and
  • the rendering module is further configured to perform image rendering on the marked simulated environment image to obtain a marked image.
  • According to a second aspect, an embodiment of the present application provides an automatic driving simulation method, which includes:
  • Image rendering is performed on the marked simulated environment image to obtain a marked image.
  • According to a third aspect, an embodiment of the present application provides an automatic driving simulation device that includes units for executing the automatic driving simulation method of the second aspect, the device including:
  • an acquiring unit, configured to acquire the camera parameters of the mobile platform's camera during the simulated driving process;
  • a rendering unit, configured to determine, according to the camera parameters, the simulated environment image obtained when the camera of the mobile platform photographs the simulated driving environment during the simulated driving process;
  • the acquiring unit is further configured to acquire a marked simulated environment image; and
  • the rendering unit is further configured to perform image rendering on the marked simulated environment image to obtain a marked image.
  • According to a fourth aspect, an embodiment of the present application provides an automatic driving simulation device including a processor and a memory connected to each other, wherein the memory is configured to store a computer program comprising program instructions,
  • and the processor is configured to call the program instructions to execute the method described in the second aspect.
  • According to a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that includes program instructions which, when executed by a processor, cause the processor to perform the method described in the second aspect.
  • The automatic driving simulation system of the present application can simulate a realistic automatic driving scene.
  • The motion module simulates the driving process of the mobile platform in the simulated driving environment; the camera module and the rendering module then simulate the simulated environment image obtained when the camera photographs the simulated driving environment; finally, the marking module and the rendering module perform drivable-area detection on the simulated environment image, yielding a marked image that assists automatic driving in path planning.
  • In this way, the application can simulate a realistic driving scene and test drivable-area detection technology, providing reliable algorithm verification for that technology.
  • the system can also be used to test more autonomous driving technologies in addition to driving area detection technologies.
  • FIG. 1 is a schematic block diagram of an automatic driving simulation system provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a mark image provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an automatic driving simulation method provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of an automatic driving simulation method provided by another embodiment of the present application.
  • FIG. 5 is a schematic block diagram of an automatic driving simulation device provided by an embodiment of the present application.
  • FIG. 6 is a structural block diagram of an automatic driving simulation device provided by an embodiment of the present application.
  • An embodiment of the present application provides an automatic driving simulation system that can simulate a realistic automatic driving scene: it simulates the driving process of a mobile platform in a simulated driving environment, simulates the camera photographing that environment to obtain a simulated environment image, and finally performs drivable-area detection on the simulated environment image to obtain a marked image for assisting automatic driving in path planning.
  • The drivable area, also called freespace, is the road area on which the mobile platform can travel during automatic driving; it is used in path planning so that obstacles can be avoided.
  • The drivable area can be the entire road surface, or the part of the road surface that contains key information about the road (such as road direction information, road midpoint information, etc.).
  • The drivable area includes structured, semi-structured, and unstructured road surfaces.
  • A structured road surface has a single pavement structure and clear road edges, such as urban main roads, highways, national roads, and provincial roads.
  • A semi-structured road surface has a variety of structures, such as parking lots and squares.
  • An unstructured road surface is a natural surface without a structural layer, such as in undeveloped or uninhabited areas.
  • The above automatic driving simulation system includes a motion module 110, a camera module 120, a rendering module 130, and a marking module 140.
  • The motion module 110 simulates the driving process of the mobile platform in a simulated driving environment.
  • The camera module 120 determines the camera parameters of the mobile platform's camera during this simulated driving process; the camera parameters include at least one of the camera's position information, orientation information, rendering mode, and field-of-view information. The rendering module 130 then determines, from these camera parameters, the simulated environment image obtained when the camera photographs the simulated driving environment.
  • The marking module 140 marks each image element in the simulated environment image; in particular, the drivable area must be marked. Finally, the rendering module 130 renders the marked simulated environment image to highlight each image element, obtaining a marked image that assists the mobile platform in path planning during automatic driving so as to avoid obstacles.
  • The image elements include images of objects such as drivable areas, pedestrians, buildings, greenery, and roads.
  • The position information is the camera's position in the simulated driving environment, and the orientation information is the camera's shooting direction.
  • The rendering mode specifies how the image is adjusted, including resolution changes, stretch/rotation changes, and/or color-level and brightness changes.
  • The field of view is the angular range over which the camera can receive an image; unlike the imaging range (angle of coverage), the field of view describes the image angle that the camera lens can capture.
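As a concrete illustration of the parameters listed above, the sketch below bundles them into a container and derives a pinhole-model focal length from the field of view. All names (`CameraParams`, `focal_length_px`) and the render-mode encoding are assumptions for illustration, not from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class CameraParams:
    """Illustrative container for the camera parameters named above."""
    position: tuple       # (x, y, z) in the simulated driving environment
    orientation: tuple    # shooting direction, e.g. (yaw, pitch, roll) in degrees
    render_mode: str      # image adjustment, here just a target resolution
    fov_deg: float        # horizontal field of view in degrees

def focal_length_px(image_width: int, fov_deg: float) -> float:
    # Standard pinhole relation: f = (W / 2) / tan(FOV / 2).
    return (image_width / 2) / math.tan(math.radians(fov_deg) / 2)

params = CameraParams(position=(0.0, 1.5, 0.0),
                      orientation=(0.0, -10.0, 0.0),
                      render_mode="1920x1080",
                      fov_deg=90.0)
```

With a 90-degree field of view and a 1920-pixel-wide image, the implied focal length is about 960 pixels.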
  • The camera parameters are real data conforming to the real world, so the camera's imaging in the real world can be determined from them. The motion of the mobile platform simulated by the motion module 110 in the simulated driving environment likewise conforms to real-world physical rules, because the motion module 110 includes a physics engine that provides the motion information for the interaction between the mobile platform and the simulated driving environment.
  • The physics engine essentially prescribes the arithmetic rules the motion module 110 follows when simulating the motion of the mobile platform, and the motion it simulates conforms to the physical rules of the real world.
  • The motion information includes, for example, the spatial position information of the mobile platform, which comprises the platform's position information and steering information.
  • A simulated driving environment containing real-world three-dimensional objects is constructed first.
  • The simulated driving environment includes three-dimensional objects such as terrain, vegetation, weather systems, buildings, and roads.
  • Each 3D model is built according to the real-world size ratio of the corresponding object and then refined so that its color and texture, as well as its shape, approach those of the real object, finally yielding a realistic 3D model that can be rotated and displayed at any angle.
  • For example, a building model is constructed at the building's scale, the building's color graphics are overlaid on the model, and lighting and shadows are added, producing an architectural model that closely resembles the real-world building and can be observed from any perspective.
  • The automatic driving simulation system further includes an output module configured to output the marked image.
  • The output mode may be graphic display, network transmission, etc., which is not limited in this embodiment of the present application.
  • Two renderings are performed to obtain the simulated environment image and the marked image, respectively.
  • The first rendering generates the simulated environment image obtained when the camera photographs the simulated driving environment: the camera's position, orientation, and field-of-view information determine its position, orientation, and shooting range in the constructed three-dimensional simulation scene (i.e., the simulated driving environment); the scene is photographed from the camera's perspective to obtain a target image; and the target image is then adjusted as instructed by the rendering mode in the camera parameters (resolution changes, stretch/rotation changes, and/or color-level and brightness changes) to obtain the simulated environment image.
  • The second rendering highlights the marked image elements in the simulated environment image (especially the drivable area) to obtain a marked image that is easy to read and understand; for example, the image elements in the simulated environment image are highlighted with boxes.
  • The motion module 110 simulates the driving process of the mobile platform in the simulated driving environment and generates the platform's spatial position information during that process.
  • The camera module 120 generates the camera parameters of the platform's camera from this spatial position information. Specifically, as the mobile platform drives in the simulated driving environment its spatial position constantly changes, so when the motion module 110 simulates the platform's motion it generates the platform's spatial position information (including position information and steering information) and transmits it to the camera module 120.
  • After obtaining the spatial position information, the camera module 120 generates the camera parameters of the camera on the mobile platform. It may obtain the camera parameters corresponding to the platform's spatial position information directly from a correspondence table in a database. Alternatively, with the camera's rendering mode and field-of-view information preset, the position information and steering information in the camera parameters can be calculated from the platform's spatial position information according to a fixed calculation rule. For example, since the relative position of the mobile platform and the camera is generally fixed, the camera's position can be calculated from the platform's position and that relative offset.
  • The camera generally collects images in the platform's direction of travel, so the camera turns together with the platform, and the platform's steering information can be used directly as the camera's steering information.
  • Different spatial position information can also correspond to different rendering modes and field-of-view information, with the correspondences between spatial position information and the rendering mode and field-of-view information stored in the database.
  • In this way, the embodiment of the present application can simulate the position change of the mobile platform as it drives in the simulated driving environment and determine the camera parameters corresponding to each spatial position, so that the camera captures the images it would capture in a real driving scene.
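The calculation rule described above, a fixed camera offset rotated by the platform's heading with the steering carried over unchanged, can be sketched as follows. The function name and the 2-D simplification are assumptions for illustration:

```python
import math

def camera_pose(platform_xy, platform_heading_rad, mount_offset_xy):
    """Derive the camera's world pose from the platform's spatial position,
    assuming a rigid mount (fixed relative position) and a camera that
    faces the platform's direction of travel."""
    px, py = platform_xy
    ox, oy = mount_offset_xy
    c, s = math.cos(platform_heading_rad), math.sin(platform_heading_rad)
    cam_x = px + c * ox - s * oy   # rotate the mount offset into the world frame
    cam_y = py + s * ox + c * oy
    # The platform's steering information is used directly as the camera's.
    return (cam_x, cam_y), platform_heading_rad
```

For a camera mounted 2 m ahead of a platform at (10, 5) heading along +x, this yields a camera at (12, 5) with the same heading.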
  • The rendering module 130 determines, according to the camera parameters during the simulated driving process, the road front view obtained when the camera of the mobile platform photographs the simulated driving environment, then performs an image transformation on the front view to obtain a road top view, and uses the top view as the simulated environment image. In actual automatic driving the camera generally captures a front view of the road, which is not conducive to extracting the drivable area; transforming the front view into a top view both aids drivable-area detection and is more intuitive.
  • The road front view is the image the camera captures of the simulated driving environment with its lens facing the platform's direction of travel (equivalent to the road a driver sees when looking ahead), and the road top view is the image the camera captures with its lens facing down toward the road (equivalent to a bird's-eye image of the road taken from a helicopter).
  • The image transformation applied to the road front view is an affine transformation (also called a perspective transformation): the front view is transformed into the top view through a transformation matrix, which specifies the rule of transformation between the two views.
  • The transformation matrices of cameras with different camera parameters may differ.
  • The correctness of the transformation matrix affects the correctness of the affine transformation and indirectly affects the correctness of drivable-area detection.
  • Normally the transformation matrix must be determined through calibration experiments. Because this automatic driving simulation system can photograph the simulated driving environment at any angle and position simply by changing the camera parameters, the road front view and road top view can easily be obtained at the same time, the transformation matrix can be computed from them, and the matrix simulated in the system can be compared with the matrix measured in a calibration experiment.
  • The transformation matrix can thus be verified algorithmically to judge its correctness, or adjusted to make it more accurate, so the system provides a test environment for automatic driving technologies such as drivable-area detection.
  • The system can also replace the calibration experiment to obtain the transformation matrices corresponding to different camera parameters, and it can likewise provide a test environment for other automatic driving technologies.
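The transformation matrix between front view and top view can be estimated from four corresponding points, which is the kind of calibration the text says the simulator can replace. The sketch below is a generic direct-linear-transform estimate in plain NumPy; the names are illustrative (OpenCV's `cv2.getPerspectiveTransform` computes the same 3x3 matrix):

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 transformation matrix H (with h33 fixed to 1) that
    maps four front-view points onto four top-view points, via the
    standard direct linear transform."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply H to a single (x, y) point with homogeneous normalisation."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Illustrative correspondences: a unit square mapped to a square of side 2.
H = find_homography([(0, 0), (1, 0), (1, 1), (0, 1)],
                    [(0, 0), (2, 0), (2, 2), (0, 2)])
```

A simulated matrix obtained this way can then be compared term by term with one measured in a calibration experiment.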
  • The marking module 140 marks the category of each image element in the simulated environment image to obtain a template cache containing the position information and category information of the image elements, and the rendering module 130 renders the simulated environment image according to the template cache to obtain the marked image.
  • This embodiment describes how the marking module 140 and the rendering module 130 respectively mark and render the simulated environment image to finally obtain the marked image.
  • The marking module 140 first identifies each image element in the simulated environment image to determine its category and marks the category with a mark character; it then transfers the mark characters to the template cache, i.e., according to each element's position in the simulated environment image, it writes the corresponding mark character at the matching position in the template cache. The rendering module 130 then reads the position information and category information recorded in the template cache, determines the position and category of each image element in the simulated environment image, and renders the image so as to highlight its image elements, for example by marking them with boxes, as shown in FIG. 2.
  • The marking module 140 contains marking rules: when marking the simulated environment image, it first recognizes the category of an image element and then looks up the mark character corresponding to that category in the marking rules. A mark character can be any combination of characters, numbers, etc. that uniquely identifies the element's category, and different categories correspond to different mark characters.
  • The template cache is equivalent to a simplified simulated environment image.
  • The template cache has the same size as the simulated environment image, and each image element in the simulated environment image has a mark character at the corresponding position in the cache, so the cache contains the position information and mark characters of every image element. If the template cache were overlaid on the simulated environment image, the positions of the mark characters would coincide exactly with the image elements.
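A minimal sketch of the template cache described above: an array of the same size as the simulated environment image, holding one mark value per pixel. The 4x4 size, the "unlabelled" value -1, and the specific mark values are assumptions for illustration:

```python
import numpy as np

# The simulated environment image is 4x4 pixels (illustrative); the
# template cache has exactly the same size.
H, W = 4, 4
cache = np.full((H, W), -1, dtype=np.int16)   # -1: not yet marked (assumed)

# Suppose the marking module recognised the bottom two rows as drivable
# area (mark value 0) and one pixel as an obstacle (mark value 200).
cache[2:4, :] = 0
cache[0, 1] = 200
```

Because positions in the cache correspond one-to-one with pixels in the simulated environment image, overlaying the two aligns every mark value with its image element.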
  • When the marking module 140 recognizes image elements in the simulated environment image, it at least recognizes the drivable area. Specifically, the simulated environment image is binarized according to gray value, and edge detection is then applied to extract the lane lines; this detection method can detect multiple lane lines at the same time.
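The binarize-then-edge-detect lane extraction just described can be sketched on a toy grayscale row; both lane markings are recovered in a single pass, matching the claim that multiple lane lines can be detected simultaneously. The threshold 128 and the toy pixel values are assumptions:

```python
import numpy as np

# Toy one-row grayscale road strip: two bright lane markings on dark asphalt.
row = np.array([30, 30, 220, 220, 30, 30, 30, 210, 210, 30], dtype=np.int16)

binary = (row > 128).astype(np.int16)      # binarise by gray value
edges = np.flatnonzero(np.diff(binary))    # edge detection: 0<->1 transitions

# Pair up rising and falling edges: each pair bounds one lane line, so
# several lane lines are picked up in one pass over the row.
lane_spans = list(zip(edges[0::2] + 1, edges[1::2]))
```

On this toy row the two lane markings occupy pixel spans (2, 3) and (7, 8).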
  • The marking module 140 analyzes and recognizes the simulated environment image and assigns a mark value to each image element according to its identified type, where the image elements include obstacle elements and drivable-area elements, and obstacle elements and drivable-area elements have different mark values.
  • Here the mark characters are mark values (numerical values), and the image elements include drivable-area elements and obstacle elements.
  • The mark value of the drivable area is 0, covering roads and road surfaces.
  • The mark value of obstacles is 220, covering buildings, road guardrails, moving cars, pedestrians, etc.
  • For example, if the marking module 140 recognizes that pixels 1 to n in the simulated environment image belong to the drivable area, it writes this result to the template cache, assigning the value 0 to pixels 1 to n there. The rendering module 130 then reads the values of pixels 1 to n from the template cache, determines that pixels 1 to n of the simulated environment image are the drivable area, and, when rendering the image, marks pixels 1 to n as the drivable area, for example by framing them with a box and adding a text label identifying the drivable area.
  • The obstacle image elements can be further divided into static obstacle elements and dynamic obstacle elements, which have different mark values.
  • For example, static obstacles are marked with the value 200, covering buildings and road guardrails, while dynamic obstacles are marked with the value 255, covering moving cars and pedestrians.
  • The rendering module 130 is configured to output the image elements of the simulated environment image in the colors indicated by their corresponding mark values to obtain the marked image.
  • That is, when the rendering module 130 renders the simulated environment image, it outputs each position indicated by the position information in the color indicated by the mark character corresponding to that position: the position of an image element is output in the color indicated by the element's mark value, or a translucent overlay of that color is placed over the element's position.
  • For example, the mark value of the drivable area is 0, that of a static obstacle is 200, and that of a dynamic obstacle is 255.
  • The mark value 0 corresponds to the RGB value (0, 0, 0), i.e., black.
  • The mark value 200 corresponds to the RGB value (200, 200, 200), i.e., gray.
  • The mark value 255 corresponds to the RGB value (255, 255, 255), i.e., white.
  • The embodiment of the present application does not limit the color mode; in different color modes the same mark value may correspond to different colors, but within one color mode mark values and colors correspond one-to-one.
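The mark-value-to-color rule above (0 to black, 200 to gray, 255 to white in this RGB mode) amounts to replicating each mark value across the three color channels; a minimal sketch with an illustrative 2x3 cache:

```python
import numpy as np

# Mark values written by the marking module (illustrative 2x3 template cache).
labels = np.array([[0,   0, 200],
                   [0, 255, 200]], dtype=np.uint8)

# Render: a mark value v is output as the RGB color (v, v, v), so
# 0 -> black, 200 -> gray, 255 -> white.
rgb = np.stack([labels] * 3, axis=-1)
```

In a different color mode the same mark values could index into an arbitrary palette instead; only the one-to-one mapping within a mode matters.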
  • The automatic driving simulation system of the present application can simulate a realistic automatic driving scene.
  • The motion module 110 simulates the driving process of the mobile platform in the simulated driving environment; the camera module 120 and the rendering module 130 then simulate the simulated environment image obtained when the camera photographs the simulated driving environment; finally, the marking module 140 and the rendering module 130 perform drivable-area detection on the simulated environment image, yielding a marked image for assisting automatic driving in path planning.
  • Thus the application can simulate a realistic driving scene and test drivable-area detection technology, providing reliable algorithm verification for that technology.
  • In addition to drivable-area detection, the system can also be used to test other autonomous driving technologies.
  • An embodiment of the present application provides the automatic driving simulation method shown in FIG. 3, which can be implemented by the rendering module 130 of the aforementioned automatic driving simulation system.
  • The simulated driving process is the process of the simulated mobile platform driving in the simulated driving environment, and the simulated driving environment is a three-dimensional simulation environment constructed to imitate the real-world driving environment of the mobile platform.
  • The camera parameters include at least one of the camera's position information, orientation information, rendering mode, and field-of-view information.
  • The position information is the camera's position in the simulated driving environment, and the orientation information is the camera's shooting direction; the rendering mode specifies how the image is adjusted, including resolution changes, stretch/rotation changes, and/or color-level and brightness changes.
  • The field of view, also known as the angular field, is the angular range over which the camera can receive an image; unlike the imaging range (angle of coverage), the field of view describes the image angle that the camera lens can capture.
  • The camera parameters conform to real-world data, so the camera's real-world imaging can be determined from them, and the motion of the mobile platform in the simulated driving environment likewise conforms to real-world physical rules.
  • The motion information for the interaction between the mobile platform and the simulated driving environment is provided by a physics engine.
  • The physics engine essentially prescribes the calculation rules the motion module 110 follows when simulating the platform's motion, and the motion it simulates conforms to the physical rules of the real world.
  • The motion information includes, for example, the spatial position information of the mobile platform, which comprises the platform's position information and steering information.
  • a simulated driving environment containing real-world three-dimensional things is constructed first.
  • the simulated driving environment includes three-dimensional objects such as terrain, vegetation, weather systems, buildings, and roads.
• the 3D model is constructed according to the size ratio of the corresponding 3D object in the real world, and is then beautified so that, in addition to its shape, its color and pattern are closer to the real-world object, finally yielding a realistic 3D model that can be rotated at will and displayed at any angle.
• for example, a 3D model of a building is constructed at the building's scale, the building's color graphics are overlaid on the model, and lighting and shadows are added, producing an architectural model very similar to the real-world building that can be observed from any perspective.
  • obtaining the camera parameters of the mobile platform's camera during the simulation driving process refers to obtaining the camera parameters of the mobile platform's camera during the simulation driving process generated according to the spatial position information of the mobile platform during the simulation driving process.
  • the above-mentioned spatial position information of the mobile platform during the simulated driving refers to the spatial position information generated by the mobile platform driving in the simulated driving environment, including the position information and steering information of the mobile platform.
• the above-mentioned camera parameters of the mobile platform's camera, generated during the simulated driving process from the spatial position information of the mobile platform, refer to the camera parameters corresponding to that spatial position information acquired from a correspondence table in the database; alternatively, the rendering mode and field-of-view information may be preset, while the position information and steering information in the camera parameters are calculated according to the relationship between the spatial position information of the mobile platform and the camera parameters of the camera.
  • the position information of the camera is calculated according to the position of the mobile platform.
• the relative position of the mobile platform and the camera is fixed, so once the position information of the mobile platform is determined, the position information of the camera can be calculated from the position information of the mobile platform and the relative position of the platform and the camera.
• the camera generally collects images in the moving direction of the mobile platform, so the steering of the camera and the mobile platform is consistent; after the steering information of the mobile platform is determined, it can be used as the steering information of the camera.
  • different spatial position information can correspond to different rendering modes and field angle information, and the correspondence between the spatial position information and the rendering mode and field angle information in the camera parameters is stored in the database.
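As a rough illustration of the calculation described above, the camera's position and steering can be derived from the platform's spatial position information because the mounting is rigid. The following sketch assumes a fixed mounting offset, a planar heading-only orientation, and invented names (`camera_pose`, the offset constants), none of which appear in the disclosure:

```python
import math

# Hypothetical fixed mounting offset of the camera in the platform's
# body frame: 1.5 m forward of the platform centre, 1.2 m above it.
CAMERA_OFFSET_FORWARD = 1.5
CAMERA_OFFSET_UP = 1.2

def camera_pose(platform_x, platform_y, platform_z, heading_rad):
    """Derive the camera's position and steering from the platform's pose.

    Because the camera is rigidly mounted, its world position is the
    platform position plus the mounting offset rotated by the platform
    heading, and its steering simply equals the platform heading.
    """
    cam_x = platform_x + CAMERA_OFFSET_FORWARD * math.cos(heading_rad)
    cam_y = platform_y + CAMERA_OFFSET_FORWARD * math.sin(heading_rad)
    cam_z = platform_z + CAMERA_OFFSET_UP
    return (cam_x, cam_y, cam_z), heading_rad

# Platform at (10, 5, 0) facing along +x: the camera sits 1.5 m ahead
# and 1.2 m above it, with the same heading.
pos, yaw = camera_pose(10.0, 5.0, 0.0, 0.0)
```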
  • the camera of the mobile platform during the simulated driving process captures the simulated environment image obtained by the simulated driving environment.
• the position, orientation, and shooting range (field of view) of the camera in the constructed three-dimensional simulation scene (i.e., the simulated driving environment) are respectively determined from the camera parameters.
• the above-mentioned determination, based on the camera parameters during the simulated driving process, of the simulated environment image obtained by the camera shooting the simulated driving environment refers to: determining, according to those camera parameters, the road front view obtained by the camera of the mobile platform shooting the simulated driving environment, then performing an image transformation on the road front view to obtain a road top view, and using the road top view as the simulated environment image.
• the camera generally captures the front view of the road, and the front view is not conducive to extraction of the drivable area, so the road front view can be image-transformed into a road top view, which is not only conducive to drivable-area detection but also more intuitive.
• the road front view is the image the camera obtains of the simulated driving environment when its lens faces the moving direction of the mobile platform (equivalent to the road a driver sees when looking ahead), and the road top view is the image the camera obtains of the simulated driving environment with its lens facing the road from directly above (equivalent to a bird's-eye image of the road taken from a helicopter).
• the above-mentioned image transformation of the road front view refers to the affine transformation (also called perspective transformation) of the road front view, that is, the road front view is transformed into a road top view through a transformation matrix, and the transformation matrix indicates the rules of transformation between the road front view and the road top view.
  • the transformation matrices of cameras with different camera parameters may be different.
  • the correctness of the transformation matrix affects the correctness of the affine transformation and indirectly affects the correctness of the driving area detection.
• the transformation matrix normally needs to be determined through calibration experiments; this automatic driving simulation system can shoot the simulated driving environment at any angle and position by changing the camera parameters of the camera, so the road front view and the road top view can easily be obtained at the same time, the above conversion matrix can then be calculated from them, and the conversion matrix simulated in the system can be compared with the conversion matrix measured in the calibration experiment.
• the conversion matrix is verified algorithmically to judge its correctness, or adjusted appropriately to make it more correct, so this automatic driving simulation system can provide a test environment for automatic driving technologies such as drivable-area detection.
• the automatic driving simulation system can also replace the calibration experiment to obtain the conversion matrices corresponding to different camera parameters; similarly, it is not difficult to see that this automatic driving simulation system can also provide a test environment for other automatic driving technologies.
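The front-view-to-top-view transformation described above can be sketched as follows. This is an illustrative reimplementation, not the system's own code: the point coordinates, helper names (`perspective_matrix`, `apply`), and image sizes are invented for the example. Four road-surface points that form a trapezoid in the front view are mapped to a rectangle in the top view, and the 3x3 transformation matrix is solved from the resulting linear system:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 matrix H with H @ [x, y, 1]^T ~ [u, v, 1]^T
    for four point pairs (x, y) -> (u, v), fixing H[2, 2] = 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply(H, pt):
    """Map a front-view point into the top view via the matrix H."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Illustrative correspondence: the road surface appears as a trapezoid
# in the front view and as a rectangle in the top view.
front = [(200, 300), (440, 300), (40, 480), (600, 480)]
top = [(100, 0), (540, 0), (100, 480), (540, 480)]
H = perspective_matrix(front, top)
```

In the simulation system these point pairs would come from shooting the same road with two sets of camera parameters (front view and top view), after which `H` can be compared against a matrix from a calibration experiment.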
  • a marked simulation environment image is obtained.
  • the marked simulated environment image means that each image element in the simulated environment image is marked, especially the drivable area is marked.
  • the image elements include drivable areas, pedestrians, buildings, greenery, roads, etc.
  • the drivable area is also called Freespace, which is a road on which the mobile platform can travel in automatic driving, and is used to provide path planning for automatic driving to avoid obstacles.
  • the drivable area can be the entire road surface of the road, or a part of the road surface that contains key information of the road (such as road direction information, road midpoint information, etc.). More specifically, the drivable area includes structured pavement, semi-structured pavement, and unstructured pavement.
• structured pavement is pavement with a single structure and road edges, such as urban main roads, highways, national roads, and provincial roads; semi-structured pavement is pavement with various structures, such as parking lots and squares; unstructured pavement is natural ground without a structural layer, such as undeveloped uninhabited areas.
  • image rendering is performed on the marked simulated environment image to obtain a marked image.
• the marked images are used to assist the mobile platform in path planning during automatic driving so as to avoid obstacles; for example, as shown in Figure 2, boxes are used to highlight each image element in the simulated image.
• the aforementioned acquisition of the marked simulation environment image refers to acquiring a template cache containing the position information and category information of the image elements of the simulation environment image; correspondingly, the aforementioned image rendering of the marked simulation environment image to obtain the marked image refers to rendering the simulation environment image according to the template cache.
• the size of the cache template is the same as that of the simulation environment image, and positions on it correspond to the locations of the image elements on the simulation environment image.
• the corresponding position on the cache template is marked with the mark character of the image element, and the mark character represents the category to which the image element belongs.
  • the cache template contains the location information and category information of each image element in the simulation environment image.
• rendering the simulation environment image according to the cache template means reading the position information and category information recorded in the cache template to determine the position and category of each image element in the simulation environment image, while rendering the image so as to highlight those image elements, for example by marking them with boxes, as shown in Figure 2.
  • mark character is used to indicate the category of the image element.
• the mark character can be any combination of characters, numbers, etc., and uniquely determines the category of an image element: image elements of the same category correspond to the same mark character, and image elements of different categories correspond to different mark characters.
  • the cache template is equivalent to a simplified simulation environment image.
• the size of the template cache is the same as that of the simulation environment image, and each image element in the simulation environment image has a mark character at the corresponding position on the cache template; the template therefore contains the position information and mark characters of every image element, and when the cache template and the simulation environment image are superimposed, the positions of the mark characters in the template overlap exactly with the image elements in the image.
• the above-mentioned mark characters are mark values (numerical values), and the image elements in the marked simulated environment image are assigned mark values by category, where the image elements include those of obstacles and those of the drivable area, and the mark values of obstacle image elements and drivable-area image elements are different.
• for example, the mark character of the drivable area (including roads and pavements) is 0, and the mark character of obstacles (including buildings, road guardrails, moving cars, and pedestrians) is 220.
• if pixels 1 to n in the simulated environment image are the drivable area, then pixels 1 to n in the cache template are assigned the value 0; reading the values of pixels 1 to n back from the cache template determines that pixels 1 to n in the simulated environment image are the drivable area, and when the simulated environment image is rendered, pixels 1 to n are marked as drivable, for example by framing them with a box and adding the text label of the drivable area, as shown in Figure 2.
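The pixel-level marking just described can be sketched as follows; the array size, names, and layout are illustrative assumptions, not part of the disclosure. A cache template the same size as the simulated environment image holds a mark value per pixel, and reading it back yields the drivable-area mask used during rendering:

```python
import numpy as np

FREESPACE = 0   # mark value for the drivable area (roads, pavements)
OBSTACLE = 220  # mark value for obstacles (buildings, guardrails, cars, pedestrians)

# A tiny 4x6 cache template, the same size as the simulated environment
# image it describes; the bottom two rows stand in for the road surface.
template = np.full((4, 6), OBSTACLE, dtype=np.uint8)
template[2:, :] = FREESPACE

# Reading the template back tells the renderer which pixels of the
# simulated environment image belong to the drivable area.
drivable_mask = template == FREESPACE
n_drivable = int(drivable_mask.sum())  # 12 of the 24 pixels are drivable
```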
  • the image elements of the obstacles include image elements of static obstacles and image elements of dynamic obstacles, and the image elements of the static obstacles and the image elements of dynamic obstacles have different label values.
• the above obstacles are further divided into static obstacles and dynamic obstacles: static obstacles, including buildings and road guardrails, are marked with the value 200, and dynamic obstacles, including moving cars and pedestrians, are marked with the value 255.
  • the image elements of the simulated environment image are output according to the color indicated by the corresponding label value to obtain the above-mentioned label image.
• the simulation environment image is output, at each position indicated by the position information, in the color indicated by the mark value corresponding to that position; that is, the position of each image element is output in the color indicated by its mark value, or a translucent version of that color is overlaid on the position of the image element.
• for example, the mark value of the drivable area is 0, the mark value of a static obstacle is 200, and the mark value of a dynamic obstacle is 255.
• the mark value 0 corresponds to the RGB value (0, 0, 0), i.e., black
• the mark value 200 corresponds to the RGB value (200, 200, 200), i.e., gray
• the mark value 255 corresponds to the RGB value (255, 255, 255), i.e., white
  • the embodiment of the present application does not limit the color mode. In different color modes, the same label value may correspond to different colors, but in the same color mode, the label value and the color correspond one-to-one.
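Using the mark values and RGB correspondences listed above, the color-output step can be sketched as follows; the function and variable names are illustrative, and the color table matches the example values in the text (0 = drivable, 200 = static obstacle, 255 = dynamic obstacle):

```python
import numpy as np

# Map each mark value to the RGB colour used in the marked image.
LABEL_COLORS = {
    0: (0, 0, 0),          # drivable area -> black
    200: (200, 200, 200),  # static obstacle -> gray
    255: (255, 255, 255),  # dynamic obstacle -> white
}

def render_label_image(template):
    """Turn a 2-D template of mark values into a 3-channel RGB image,
    outputting each pixel in the colour indicated by its mark value."""
    h, w = template.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for value, color in LABEL_COLORS.items():
        out[template == value] = color
    return out

# A 2x2 template: drivable, static obstacle, dynamic obstacle, drivable.
template = np.array([[0, 200], [255, 0]], dtype=np.uint8)
marked = render_label_image(template)
```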
• the embodiment of the present application can simulate, according to the camera parameters, the simulated environment image obtained by the camera of the mobile platform shooting the simulated driving environment, and then render the marked simulated environment image to obtain a marked image that assists path planning in automatic driving. It can be seen that the embodiments of the present application can be used to test drivable-area detection technology and provide reliable algorithm verification for it.
  • the embodiment of the present invention also proposes a more detailed automatic driving simulation method, as shown in FIG. 4.
  • the camera parameters of the camera of the mobile platform during the simulation driving which are generated according to the spatial position information of the mobile platform during the simulation driving, are acquired.
  • the simulated driving process is the process of the simulated mobile platform driving in the simulated driving environment
  • the simulated driving environment is a three-dimensional simulation environment constructed following the driving environment of the mobile platform in the real world.
• the above-mentioned spatial position information is the spatial position information of the mobile platform in the simulated driving environment.
  • the camera parameters include at least one of the camera’s position information, orientation information, rendering mode, and field of view information.
• the position information is the camera's position in the aforementioned simulated driving environment, and the orientation information is the camera's shooting direction
  • the rendering mode is the adjustment method of the image, including resolution change, stretching rotation change and/or color scale brightness change, etc.
• the angle of view is the range of angles over which the camera can receive an image, also known as the field of view; it differs from the imaging range (angle of coverage) in that the field of view describes the image angle the camera lens can capture.
• the above-mentioned camera parameters of the mobile platform's camera, generated during the simulated driving process from the spatial position information of the mobile platform, refer to the camera parameters corresponding to that spatial position information acquired from a correspondence table in the database; alternatively, the rendering mode and field-of-view information may be preset, while the position information and steering information in the camera parameters are calculated according to the relationship between the spatial position information of the mobile platform and the camera parameters of the camera.
  • the position information of the camera is calculated according to the position of the mobile platform.
• the relative position of the mobile platform and the camera is fixed, so once the position information of the mobile platform is determined, the position information of the camera can be calculated from the position information of the mobile platform and the relative position of the platform and the camera.
• the camera generally collects images in the moving direction of the mobile platform, so the steering of the camera and the mobile platform is consistent; after the steering information of the mobile platform is determined, it can be used as the steering information of the camera.
  • different spatial position information can correspond to different rendering modes and field angle information, and the correspondence between the spatial position information and the rendering mode and field angle information in the camera parameters is stored in the database.
• the camera parameters of the above-mentioned camera are consistent with real-world data, so the imaging of the camera in the real world can be determined from these camera parameters, and the movement of the mobile platform in the simulated driving environment likewise conforms to real-world movement.
  • the motion information of the interaction between the mobile platform and the simulated driving environment is provided by the physics engine.
• the physics engine essentially provides the calculation rules that the motion module 110 follows when simulating the motion of the mobile platform, and the motion process simulated by the physics engine conforms to the physical rules of the real world.
  • the movement information includes, for example, the spatial position information of the mobile platform, etc.
  • the spatial position information includes the position information and steering information of the mobile platform.
  • a simulated driving environment containing real-world three-dimensional things is constructed first.
  • the simulated driving environment includes three-dimensional objects such as terrain, vegetation, weather systems, buildings, and roads.
• the 3D model is constructed according to the size ratio of the corresponding 3D object in the real world, and is then beautified so that, in addition to its shape, its color and pattern are closer to the real-world object, finally yielding a realistic 3D model that can be rotated at will and displayed at any angle.
• for example, a 3D model of a building is constructed at the building's scale, the building's color graphics are overlaid on the model, and lighting and shadows are added, producing an architectural model very similar to the real-world building that can be observed from any perspective.
  • the camera of the mobile platform during the simulated driving process captures the road front view obtained by the simulated driving environment.
• the position, orientation, and shooting range (field of view) of the camera in the constructed three-dimensional simulation scene (i.e., the simulated driving environment) are respectively determined from the camera parameters.
  • image transformation is performed on the above-mentioned road front view to obtain a road top view, and the road top view is used as a simulation environment image.
• the image transformation of the road front view refers to the affine transformation (also called perspective transformation) of the road front view, that is, the road front view is transformed into a road top view through a transformation matrix, and the transformation matrix indicates the conversion rules between the road front view and the road top view.
  • the transformation matrices of cameras with different camera parameters may be different.
  • the correctness of the transformation matrix affects the correctness of the affine transformation and indirectly affects the correctness of the driving area detection.
  • the camera usually captures the front view of the road, and the front view of the road is not conducive to the extraction of the drivable area, so the road front view can be image transformed to obtain the road top view.
  • the top view is not only conducive to driving area detection, but also more intuitive.
• the road front view is the image the camera obtains of the simulated driving environment when its lens faces the moving direction of the mobile platform (equivalent to the road a driver sees when looking ahead), and the road top view is the image the camera obtains of the simulated driving environment with its lens facing the road from directly above (equivalent to a bird's-eye image of the road taken from a helicopter).
• the transformation matrix normally needs to be determined through calibration experiments; this automatic driving simulation system can shoot the simulated driving environment at any angle and position by changing the camera parameters of the camera, so the road front view and the road top view can easily be obtained at the same time, the above conversion matrix can then be calculated from them, and the conversion matrix simulated in the system can be compared with the conversion matrix measured in the calibration experiment.
• the conversion matrix is verified algorithmically to judge its correctness, or adjusted appropriately to make it more correct, so this automatic driving simulation system can provide a test environment for automatic driving technologies such as drivable-area detection.
• the automatic driving simulation system can also replace the calibration experiment to obtain the conversion matrices corresponding to different camera parameters; similarly, it is not difficult to see that this automatic driving simulation system can also provide a test environment for other automatic driving technologies.
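The comparison between a simulated conversion matrix and one measured in a calibration experiment might look like the following sketch; the tolerance, matrix values, and function name are invented for illustration. Since a perspective-transformation matrix is defined only up to a scale factor, both matrices are normalised before the element-wise comparison:

```python
import numpy as np

def matrices_equivalent(H_sim, H_calib, tol=1e-3):
    """Check two 3x3 perspective-transform matrices for equivalence.

    Such matrices are defined only up to a scale factor, so normalise
    each by its bottom-right element before comparing element-wise.
    """
    a = H_sim / H_sim[2, 2]
    b = H_calib / H_calib[2, 2]
    return bool(np.allclose(a, b, atol=tol))

H_simulated = np.array([[1.0, 0.2, 5.0],
                        [0.0, 1.5, 3.0],
                        [0.0, 0.001, 1.0]])
H_calibrated = 2.0 * H_simulated  # same transform, different overall scale
```

A mismatch beyond the tolerance would indicate that either the simulated camera parameters or the calibration experiment should be re-examined.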
  • a template cache containing the location information and category information of the image elements of the aforementioned simulation environment image is obtained.
• the size of the cache template is the same as that of the simulation environment image, and positions on it correspond to the locations of the image elements on the simulation environment image.
• the corresponding position on the cache template is marked with the mark character of the image element, and the mark character represents the category to which the image element belongs.
  • the cache template contains the location information and category information of each image element in the simulation environment image.
  • the image elements include images of various objects such as drivable areas, pedestrians, buildings, greenery and roads.
  • mark character is used to indicate the category of the image element.
• the mark character can be any combination of characters, numbers, etc., and uniquely determines the category of an image element: image elements of the same category correspond to the same mark character, and image elements of different categories correspond to different mark characters.
  • the cache template is equivalent to a simplified simulation environment image.
• the size of the template cache is the same as that of the simulation environment image, and each image element in the simulation environment image has a mark character at the corresponding position on the cache template; the template therefore contains the position information and mark characters of every image element, and when the cache template and the simulation environment image are superimposed, the positions of the mark characters in the template overlap exactly with the image elements in the image.
  • the above-mentioned drivable area is also called Freespace, which is a road on which the mobile platform can travel in automatic driving, and is used to provide path planning for automatic driving to avoid obstacles.
  • the drivable area can be the entire road surface of the road, or a part of the road surface that contains key information of the road (such as road direction information, road midpoint information, etc.). More specifically, the drivable area includes structured pavement, semi-structured pavement, and unstructured pavement.
• structured pavement is pavement with a single structure and road edges, such as urban main roads, highways, and national roads.
• semi-structured pavement is pavement with various structures, such as parking lots and squares.
• unstructured pavement is natural ground without a structural layer, such as undeveloped uninhabited areas.
  • the simulation environment image is rendered according to the template cache to obtain a marked image.
• rendering the simulation environment image according to the cache template means reading the position information and category information recorded in the cache template to determine the position and category of each image element in the simulation environment image, while rendering the image so as to highlight those image elements, thereby obtaining a marked image used to assist the mobile platform in path planning to avoid obstacles during automatic driving; for example, the image elements are marked with boxes, as shown in Figure 2.
• the above-mentioned mark characters are mark values (numerical values), and the image elements in the marked simulated environment image are assigned mark values by category, where the image elements include those of obstacles and those of the drivable area, and the mark values of obstacle image elements and drivable-area image elements are different.
• for example, the mark character of the drivable area (including roads and pavements) is 0
• the mark character of obstacles (including buildings, road guardrails, moving cars, and pedestrians) is 220.
• for example, the mark character of the drivable area is 0, and pixels 1 to n in the simulated environment image are the drivable area; pixels 1 to n in the cache template are then assigned the value 0, and reading the values of pixels 1 to n back from the cache template determines that pixels 1 to n in the simulated environment image are the drivable area, so that when the simulated environment image is rendered, pixels 1 to n are marked as drivable, for example by framing them with a box and adding the text label of the drivable area, as shown in Figure 2.
  • the image elements of the obstacles include image elements of static obstacles and image elements of dynamic obstacles, and the image elements of the static obstacles and the image elements of dynamic obstacles have different label values.
• the above obstacles are further divided into static obstacles and dynamic obstacles: static obstacles, including buildings and road guardrails, are marked with the value 200, and dynamic obstacles, including moving cars and pedestrians, are marked with the value 255.
  • the image elements of the simulated environment image are output according to the color indicated by the corresponding label value to obtain the above-mentioned label image.
• the simulation environment image is output, at each position indicated by the position information, in the color indicated by the mark value corresponding to that position; that is, the position of each image element is output in the color indicated by its mark value, or a translucent version of that color is overlaid on the position of the image element.
• for example, the mark value of the drivable area is 0, the mark value of a static obstacle is 200, and the mark value of a dynamic obstacle is 255.
• the mark value 0 corresponds to the RGB value (0, 0, 0), i.e., black
• the mark value 200 corresponds to the RGB value (200, 200, 200), i.e., gray
• the mark value 255 corresponds to the RGB value (255, 255, 255), i.e., white
  • the embodiment of the present application does not limit the color mode. In different color modes, the same label value may correspond to different colors, but in the same color mode, the label value and the color correspond one-to-one.
• the embodiment of the present application is more detailed than the previous embodiment and describes in detail the process of determining, according to the camera parameters, the simulated environment image obtained by the camera shooting the simulated driving environment during the simulated driving process. It should be noted that the above descriptions of the various embodiments tend to emphasize their differences; for their similarities, the embodiments may be referred to one another. For the sake of brevity, details are not repeated herein.
  • an embodiment of the present application further provides an automatic driving simulation device, which is used to execute a unit of any one of the foregoing automatic driving simulation methods.
  • FIG. 5 is a schematic block diagram of an automatic driving simulation device provided by an embodiment of the present application.
• the automatic driving simulation device of the embodiment of the present application includes an acquisition unit 510 and a rendering unit 520. Specifically:
  • the acquiring unit 510 is configured to acquire camera parameters of the camera of the mobile platform during the simulated driving process
  • the rendering unit 520 is configured to determine, according to the camera parameters in the simulated driving process, that the camera of the mobile platform captures the simulated environment image obtained in the simulated driving environment during the simulated driving process;
  • the above-mentioned acquiring unit 510 is also used to acquire a marked simulation environment image
  • the rendering unit 520 is also used to perform image rendering on the marked simulated environment image to obtain a marked image.
• The acquisition unit 510 is specifically configured to acquire the camera parameters of the camera of the mobile platform during the simulated driving process, which are generated according to the spatial position information of the mobile platform during that process.
• The rendering unit 520 is specifically configured to determine, according to the camera parameters during the simulated driving process, the road front view obtained when the camera of the mobile platform photographs the simulated driving environment;
• The simulation device also includes a conversion unit 530, configured to perform image transformation on the road front view to obtain a road top view, and to use the road top view as the simulated environment image.
• The acquisition unit 510 is specifically configured to acquire a template cache containing the location information and category information of the image elements of the simulated environment image; the rendering unit is further configured to render the simulated environment image according to the template cache to obtain the marked image.
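The "template cache" above is described only abstractly. The following is a minimal sketch, in Python, of how per-element location and category information could drive the marking step; the bounding-box representation, category names, and label values are invented for illustration, since the patent does not specify how element locations are stored.

```python
# Hypothetical template cache: one record per image element, holding its
# location (here an axis-aligned bounding box, an assumption) and category.
template_cache = [
    {"bbox": (10, 10, 40, 30), "category": "static_obstacle"},
    {"bbox": (0, 50, 100, 90), "category": "drivable_area"},
]

def mark_elements(width, height, cache):
    """Build a label map by painting each cached element's region with a
    per-category label value (the values themselves are illustrative)."""
    label_of = {"drivable_area": 1, "static_obstacle": 2, "dynamic_obstacle": 3}
    labels = [[0] * width for _ in range(height)]  # 0 = unlabeled background
    for element in cache:
        x0, y0, x1, y1 = element["bbox"]
        value = label_of[element["category"]]
        for y in range(y0, y1):
            for x in range(x0, x1):
                labels[y][x] = value
    return labels

labels = mark_elements(100, 100, template_cache)
```

A renderer could then colorize this label map to produce the marked image described in the embodiment.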
• The image elements in the marked simulated environment image are assigned label values according to their categories; the image elements include image elements of obstacles and image elements of the drivable area, and the label values of the obstacle image elements differ from those of the drivable-area image elements.
• The rendering unit 520 is specifically configured to output each image element of the simulated environment image in the color indicated by its corresponding label value, to obtain the marked image.
• The obstacle image elements include image elements of static obstacles and image elements of dynamic obstacles, and these two kinds of image elements have different label values.
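The per-category label values and color output described in the preceding items can be sketched as follows; the specific label values and colors are invented for illustration, since the embodiment only requires that the categories be distinguishable.

```python
# Hypothetical label values, distinct per category as required above,
# mapped to the RGB color each element is output in.
LABEL_COLORS = {
    0: (0, 0, 0),       # background
    1: (0, 255, 0),     # drivable area   -> green
    2: (255, 0, 0),     # static obstacle -> red
    3: (255, 165, 0),   # dynamic obstacle -> orange
}

def render_marked_image(label_map):
    """Output each image element (pixel) in the color its label value indicates."""
    return [[LABEL_COLORS[v] for v in row] for row in label_map]

# A 2x2 label map: top row drivable area, bottom-left a static obstacle.
marked = render_marked_image([[1, 1], [2, 0]])
```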
• The camera parameters include at least one of the position information, orientation information, rendering mode, and field-of-view information of the camera.
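The camera parameters enumerated above can be gathered into a single record. The following dataclass is a sketch whose field names and types are assumptions, not an API from the patent:

```python
from dataclasses import dataclass

@dataclass
class CameraParams:
    position: tuple      # (x, y, z) of the camera in the simulated environment
    orientation: tuple   # shooting direction, e.g. (yaw, pitch, roll) in degrees
    render_mode: dict    # image adjustments: resolution, stretch/rotation, levels
    fov_deg: float       # field of view: angular range the lens can capture

params = CameraParams(
    position=(0.0, 0.0, 1.5),
    orientation=(90.0, -10.0, 0.0),
    render_mode={"resolution": (1280, 720)},
    fov_deg=60.0,
)
```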
• The above-mentioned program can be stored in a computer-readable storage medium; when executed, the program may include the processes of the foregoing method embodiments.
  • the aforementioned storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • FIG. 6 is a structural block diagram of an automatic driving simulation device provided by another embodiment of the present application.
  • the automatic driving simulation device in this embodiment as shown in the figure may include: one or more processors 610 and a memory 620.
  • the aforementioned processor 610 and memory 620 are connected through a bus 630.
• The memory 620 is configured to store a computer program that includes program instructions, and the processor 610 is configured to execute the program instructions stored in the memory 620.
• The processor 610 is configured to perform the functions of the acquiring unit 510, that is, to acquire the camera parameters of the camera of the mobile platform during the simulated driving process; it is also configured to perform the functions of the rendering unit 520, that is, to determine, according to the camera parameters during the simulated driving process, the simulated environment image obtained when the camera of the mobile platform captures the simulated driving environment; it is further configured to acquire the marked simulated environment image, and to perform image rendering on the marked simulated environment image to obtain the marked image.
• The processor 610 is specifically configured to acquire the camera parameters of the camera of the mobile platform during the simulated driving process, which are generated according to the spatial position information of the mobile platform during that process.
• The processor 610 is specifically configured to determine, according to the camera parameters during the simulated driving process, the road front view obtained when the camera of the mobile platform photographs the simulated driving environment; the processor 610 is also configured to perform the function of the conversion unit 530, that is, to perform image transformation on the road front view to obtain a road top view, and to use the road top view as the simulated environment image.
• The processor 610 is specifically configured to acquire a template cache containing the location information and category information of the image elements of the simulated environment image, and to render the simulated environment image according to the template cache to obtain the marked image.
• The image elements in the marked simulated environment image are assigned label values according to their categories; the image elements include image elements of obstacles and image elements of the drivable area, and the label values of the obstacle image elements differ from those of the drivable-area image elements.
• The processor 610 is specifically configured to output each image element of the simulated environment image in the color indicated by its corresponding label value, to obtain the marked image.
• The obstacle image elements include image elements of static obstacles and image elements of dynamic obstacles, and these two kinds of image elements have different label values.
• The camera parameters include at least one of the position information, orientation information, rendering mode, and field-of-view information of the camera.
• The processor may be a central processing unit (CPU), or another general-purpose processor such as a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component; a general-purpose processor may be a microprocessor or any conventional processor.
  • the memory 620 may include a read-only memory and a random access memory, and provides instructions and data to the processor 610. A part of the memory 620 may also include a non-volatile random access memory. For example, the memory 620 may also store device type information.
• The processor 610 described in this embodiment can execute the implementations described in the foregoing embodiments of the automatic driving simulation method provided in the embodiments of this application, and can also execute the implementation of the automatic driving simulation device described in this application; details are not repeated here.
• A computer-readable storage medium stores a computer program; the computer program includes program instructions, and the program instructions are executed by a processor to implement the foregoing method.
  • the computer-readable storage medium may be an internal storage unit of the automatic driving simulation device of any of the foregoing embodiments, such as the hard disk or memory of the automatic driving simulation device.
• The computer-readable storage medium can also be an external storage device of the automatic driving simulation device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the automatic driving simulation device.
  • the computer-readable storage medium may also include both an internal storage unit of the automatic driving simulation device and an external storage device.
  • the computer-readable storage medium is used to store computer programs and other programs and data required by the automatic driving simulation device.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.

Abstract

A self-piloting simulation system, method and device, and a storage medium. The system comprises: a motion module for simulating a simulated travel process, in a simulated travel environment, of a mobile platform; a camera module for determining a camera parameter of a camera of the mobile platform; a rendering module for determining, according to the camera parameter, an emulated environment image obtained by means of the camera photographing the simulated travel environment; and a marking module for marking the emulated environment image, wherein the rendering module is also used for carrying out image rendering on the marked emulated environment image to obtain a marked image. The self-piloting simulation system can emulate a real self-piloting simulation scenario, can simulate a travel process, in a simulated travel environment, of a mobile platform, can simulate an emulated environment image obtained by means of a camera on the mobile platform photographing the simulated travel environment, and can finally carry out freespace detection on the emulated environment image, thereby obtaining a marked image used for assisting the self-piloting simulation system with carrying out route planning.

Description

Automatic driving simulation system, method, device, and storage medium

Technical Field

This application relates to the field of automatic driving technology, and in particular to an automatic driving simulation system, method, device, and storage medium.

Background Art

With the development of autonomous driving technology, self-piloting automobiles have become a research hotspot. Self-driving cars, also known as driverless cars, computer-driven cars, or wheeled mobile robots, are smart cars that achieve driverless operation through a computer system. Self-driving cars rely on artificial intelligence, visual computing, radar, monitoring devices, and global positioning systems working together, so that a computer can operate a motor vehicle automatically and safely without any active human operation.

Although the market for self-driving cars has great potential, few companies can actually produce them, because many autonomous driving technologies are still at the trial stage: although these technologies are becoming more advanced and diverse, their stability cannot be guaranteed. Drivable area detection is a very important part of automatic driving technology, because whether its result is correct determines whether automatic driving can plan a good driving route. How to test and verify autonomous driving technology at the simulation level has therefore become a hot research issue.
Summary of the Invention

The embodiments of the present application provide an automatic driving simulation system that can simulate a realistic driving scene for testing automatic driving technology.

In a first aspect, an embodiment of the present application provides an automatic driving simulation system, which includes:

a motion module, configured to simulate the simulated driving process of a mobile platform in a simulated driving environment;

a camera module, configured to determine the camera parameters of the camera of the mobile platform during the simulated driving process;

a rendering module, configured to determine, according to the camera parameters during the simulated driving process, the simulated environment image obtained when the camera of the mobile platform captures the simulated driving environment;

a marking module, configured to mark the simulated environment image, where at least the drivable area in the simulated environment image is marked;

the rendering module is further configured to perform image rendering on the marked simulated environment image to obtain a marked image.

In a second aspect, an embodiment of the present application provides an automatic driving simulation method, which includes:

acquiring the camera parameters of the camera of a mobile platform during a simulated driving process;

determining, according to the camera parameters during the simulated driving process, the simulated environment image obtained when the camera of the mobile platform captures the simulated driving environment;

acquiring the marked simulated environment image;

performing image rendering on the marked simulated environment image to obtain a marked image.

In a third aspect, an embodiment of the present application provides an automatic driving simulation device that includes units for executing the automatic driving simulation method of the second aspect, namely:

an acquiring unit, configured to acquire the camera parameters of the camera of a mobile platform during a simulated driving process;

a rendering unit, configured to determine, according to the camera parameters during the simulated driving process, the simulated environment image obtained when the camera of the mobile platform captures the simulated driving environment;

the acquiring unit is further configured to acquire the marked simulated environment image;

the rendering unit is further configured to perform image rendering on the marked simulated environment image to obtain a marked image.

In a fourth aspect, an embodiment of the present application provides an automatic driving simulation device that includes a processor and a memory connected to each other, where the memory is used to store a computer program including program instructions, and the processor is configured to call the program instructions to execute the method of the second aspect.

In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program; the computer program includes program instructions, and the program instructions are executed by a processor to perform the method of the second aspect.

The automatic driving simulation system of the present application can reproduce a realistic automatic driving scenario. Specifically, the motion module first simulates the driving process of the mobile platform in the simulated driving environment; the camera module and the rendering module then simulate the environment image that a camera would obtain by photographing the simulated driving environment; finally, the marking module and the rendering module perform drivable area detection on the simulated environment image, producing a marked image that assists automatic driving in path planning. This application can thus simulate a real driving scene and test drivable area detection technology, providing reliable algorithm verification for it. Moreover, since the system simulates a real driving scene, it can also be used to test automatic driving technologies beyond drivable area detection.
Brief Description of the Drawings

In order to explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below.

FIG. 1 is a schematic block diagram of an automatic driving simulation system provided by an embodiment of the present application;

FIG. 2 is a schematic diagram of a marked image provided by an embodiment of the present application;

FIG. 3 is a schematic flowchart of an automatic driving simulation method provided by an embodiment of the present application;

FIG. 4 is a schematic flowchart of an automatic driving simulation method provided by another embodiment of the present application;

FIG. 5 is a schematic block diagram of an automatic driving simulation device provided by an embodiment of the present application;

FIG. 6 is a structural block diagram of an automatic driving simulation device provided by an embodiment of the present application.
Detailed Description

To make the purpose, technical solutions, and advantages of the present application clearer, the technical solutions of the embodiments are described below with reference to the accompanying drawings.

The embodiments of the present application provide an automatic driving simulation system that can reproduce a realistic automatic driving scenario: it simulates the driving process of a mobile platform in a simulated driving environment, simulates the environment image obtained by a camera photographing that environment, and finally performs drivable area detection on the simulated environment image to obtain a marked image for assisting automatic driving in path planning. The drivable area, also called freespace, is the road on which the mobile platform can travel during automatic driving, and is used to provide path planning for automatic driving so as to avoid obstacles. The drivable area may be the entire road surface, or the part of the road surface that contains the key information of the road (for example, the direction of the road or its midline). More specifically, the drivable area includes structured, semi-structured, and unstructured road surfaces. A structured road surface has a single pavement structure and road edge lines, such as urban main roads, expressways, national roads, and provincial roads; a semi-structured road surface has varied structures, such as parking lots and squares; an unstructured road surface is natural ground without a structural layer, such as undeveloped uninhabited areas.
Specifically, as shown in FIG. 1, the automatic driving simulation system includes a motion module 110, a camera module 120, a rendering module 130, and a marking module 140. First, the motion module 110 simulates the simulated driving process of the mobile platform in the simulated driving environment, and the camera module 120 determines the camera parameters of the camera of the mobile platform during that process; the camera parameters include at least one of the position information, orientation information, rendering mode, and field-of-view information of the camera. Then the rendering module 130 determines, according to the camera parameters, the simulated environment image obtained when the camera photographs the simulated driving environment. Finally, the marking module 140 marks each image element in the simulated environment image (in particular, the drivable area must be marked), and the rendering module 130 renders the marked image to highlight each image element, producing a marked image that assists the mobile platform in planning a path around obstacles during automatic driving. Here, the image elements include the images of objects such as the drivable area, pedestrians, buildings, greenery, and roads; the camera's position information is its location in the simulated driving environment; the orientation information is its shooting direction; the rendering mode specifies how the image is adjusted, including resolution changes, stretching and rotation, and/or level and brightness changes; and the field of view (fov, angle of view) is the angular range within which the camera can receive an image. Unlike the angle of coverage, the field of view describes the image angle that the camera's lens can capture.

It should be noted that the camera parameters are real data consistent with the real world, so in the real world the camera's imaging can be determined from them. The motion of the mobile platform simulated by the motion module 110 also obeys real-world physics, because the motion module 110 contains a physics engine that provides the motion information of the interaction between the mobile platform and the simulated driving environment. The physics engine essentially specifies the computation rules the motion module 110 follows when simulating the platform's motion, so the simulated motion conforms to real-world physical laws. The motion information includes, for example, the spatial position information of the mobile platform, which comprises its position information and steering information.
In one implementation, before the driving process is simulated, a simulated driving environment containing real-world three-dimensional objects is constructed, with terrain and vegetation, a weather system, buildings, roads, and so on. Specifically, a three-dimensional model is built to the real-world scale of the object and then refined so that, beyond its shape, its colors and textures approach those of the real object, yielding a realistic three-dimensional model that can be rotated freely and displayed from any angle. For example, a building's three-dimensional model is built to the building's scale, its color texture is mapped onto the model, and lighting and shadows are added, producing a model very similar to the real building that can be observed from any viewpoint by rotating it.

In one implementation, the automatic driving simulation system further includes an output module for outputting the marked image. The output may be a graphic display, a network transmission, or the like, which is not limited in the embodiments of the present application.

Two renderings are performed, one to obtain the simulated environment image and one to obtain the marked image. The first rendering generates the simulated environment image that the camera obtains by photographing the simulated driving environment: the position, orientation, and capture range (field of view) of the camera in the constructed three-dimensional scene (the simulated driving environment) are determined from the position, orientation, and field-of-view information in the camera parameters; the scene is photographed from the camera's viewpoint to obtain a target image; and the target image is then adjusted as indicated by the rendering mode in the camera parameters (resolution changes, stretching and rotation, and/or level and brightness changes) to obtain the simulated environment image. The second rendering highlights the marked image elements in the simulated environment image (especially the drivable area) to obtain a marked image that is easy to read and understand; for example, as shown in FIG. 2, boxes are drawn around the image elements of the simulated image.
In one embodiment, the motion module 110 simulates the driving process of the mobile platform in the simulated driving environment and generates the spatial position information of the mobile platform during that process, and the camera module 120 generates the camera parameters of the platform's camera according to that spatial position information. Specifically, as the mobile platform drives through the simulated environment its spatial position keeps changing, so while simulating the motion the motion module 110 generates the platform's spatial position information (including its position information and steering information) and transmits it to the camera module 120. After obtaining the spatial position information, the camera module 120 generates the camera parameters of the camera on the platform according to it. This can be done by looking up the camera parameters corresponding to the platform's spatial position in a correspondence table in a database; alternatively, with the camera's rendering mode and field-of-view information preset, the position and steering information among the camera parameters can be computed from the platform's spatial position according to a fixed rule. For example, the camera's position can be computed from the platform's position: since the relative position of the platform and the camera is generally fixed, once the platform's position is known, the camera's position follows from it and the relative offset. In addition, the camera generally captures images in the platform's direction of motion, so the camera turns with the platform; once the platform's steering information is determined, it can be used as the camera's steering information. Different spatial position information may correspond to different rendering modes and field-of-view information, and these correspondences are stored in the database.

It can be seen that this embodiment can simulate the changing position of the mobile platform as it drives through the simulated environment, determine the camera parameters from the platform's spatial position information, and thus determine the picture the camera would capture of the simulated driving scene under the camera parameters corresponding to that spatial position.
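The rule described above — camera position derived from the platform position plus a fixed mounting offset, and camera steering inherited from the platform — can be sketched as follows. The 2D pose, the offset value, and the function name are assumptions for illustration; the patent does not fix a coordinate convention.

```python
import math

CAMERA_OFFSET = (1.2, 0.0)  # hypothetical mount: 1.2 m ahead of the platform center

def camera_pose(platform_xy, platform_heading_rad):
    """Camera position = platform position + the mounting offset rotated by the
    platform's heading; the camera shares the platform's heading (steering)."""
    ox, oy = CAMERA_OFFSET
    cos_h = math.cos(platform_heading_rad)
    sin_h = math.sin(platform_heading_rad)
    cam_x = platform_xy[0] + ox * cos_h - oy * sin_h
    cam_y = platform_xy[1] + ox * sin_h + oy * cos_h
    return (cam_x, cam_y), platform_heading_rad

(cam_xy, cam_heading) = camera_pose((10.0, 5.0), 0.0)  # platform facing +x
```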
在一个实施例中,上述渲染模块130用于根据模拟行驶过程中的相机参数,确定在模拟行驶过程中移动平台的相机拍摄模拟行驶环境得到的道路前视图,然后对该道路前视图进行图像变换,得到道路俯视图,并将道路俯视图作为仿真环境图像。由于在现实的自动驾驶过程中,相机拍摄到的一般是道路前视图,而道路前视图不利于可行驶区域的提取,于是可以对道路前视图进行图像变换得到道路俯视图,道路俯视图不仅利于可行驶区域检测也更加直观。其中,道路前视图为相机的镜头正对移动平台行驶的方向时,相机拍摄模拟行驶环境得到的图像(相当于驾驶员正视驾驶前方时,眼中可以看到的道路情况),道路俯视图为相机的镜头垂直于移动平台的行驶方向时,相机俯拍模拟行驶环境得到的图像(相当于直升机航拍道路时的鸟瞰图像)。In one embodiment, the above-mentioned rendering module 130 is used to determine the front view of the road obtained by the camera of the mobile platform during the simulated driving to photograph the simulated driving environment according to the camera parameters in the simulated driving process, and then perform image transformation on the front view of the road. Obtain the road top view, and use the road top view as the simulation environment image. Because in the actual automatic driving process, the camera generally captures the front view of the road, and the front view of the road is not conducive to the extraction of the drivable area, so the road front view can be image transformed to obtain the road top view, which is not only good for driving. Area detection is also more intuitive. Among them, the front view of the road is when the lens of the camera is facing the direction of the mobile platform, and the camera captures the image obtained by simulating the driving environment (equivalent to the road situation that the driver can see when the driver is looking ahead), and the top view of the road is the camera's When the lens is perpendicular to the traveling direction of the mobile platform, the camera will shoot the image obtained from the simulated traveling environment (equivalent to the bird's-eye image of the road when the helicopter is aerially photographed).
The image transformation applied to the road front view is an affine transformation (also called a perspective transformation): the front view is converted into the top view through a transformation matrix, which encodes the mapping rule between the front view and the top view. Note that cameras with different camera parameters may have different transformation matrices; the accuracy of the transformation matrix affects the accuracy of the affine transformation and, indirectly, the accuracy of drivable-area detection.
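As a minimal sketch of the perspective mapping just described, a 3x3 transformation matrix H can map a pixel (u, v) in the road front view to a pixel in the road top view via homogeneous coordinates. The matrix values below are invented for illustration only; in practice H would come from calibration or from the simulation itself.

```python
def warp_point(H, u, v):
    """Apply a 3x3 homography H (nested lists) to the pixel (u, v)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)  # homogeneous -> Cartesian coordinates

# The identity matrix leaves points unchanged; a real bird's-eye warp
# would carry nonzero perspective terms in its last row.
H_identity = [[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]]
print(warp_point(H_identity, 320.0, 480.0))  # -> (320.0, 480.0)
```

Applying the same function with a calibrated (or simulated) matrix for each pixel of the front view is what produces the road top view.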
In the real world, the transformation matrix must be determined through calibration experiments, whereas this autonomous-driving simulation system can photograph the simulated driving environment from any angle and position simply by changing the camera parameters of the camera. It can therefore easily obtain a road front view and a road top view at the same time, compute the transformation matrix from the two views, and compare the matrix produced by the simulation system with the matrix measured in a calibration experiment. This comparison can serve as algorithmic verification of the correctness of the calibrated matrix, or be used to adjust the matrix so that it becomes more accurate; the simulation system can thus provide a test environment for autonomous-driving technologies such as drivable-area detection. In addition, the simulation system can replace calibration experiments altogether to obtain the transformation matrices corresponding to different camera parameters. Likewise, it is easy to see that the system can provide a test environment for other autonomous-driving technologies as well.
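The verification step described above can be sketched as a simple element-wise comparison between the simulated and the calibrated matrices, flagging the calibrated matrix as suspect when any element deviates beyond a tolerance. The matrix values and the tolerance below are invented for illustration.

```python
def max_deviation(H_sim, H_cal):
    """Largest element-wise absolute difference between two 3x3 matrices."""
    return max(abs(H_sim[i][j] - H_cal[i][j])
               for i in range(3) for j in range(3))

# Hypothetical simulated vs. calibrated transformation matrices.
H_sim = [[1.02, 0.00, -5.0],
         [0.01, 0.98, 12.0],
         [0.00, 0.001, 1.0]]
H_cal = [[1.00, 0.00, -5.5],
         [0.01, 1.00, 11.0],
         [0.00, 0.001, 1.0]]

print(max_deviation(H_sim, H_cal))        # -> 1.0
print(max_deviation(H_sim, H_cal) < 2.0)  # -> True (matrices agree within tolerance)
```

A threshold like the `2.0` here would be chosen per use case; the point is only that a simulated matrix gives a ground truth to check the calibrated one against.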
In one embodiment, the marking module 140 is configured to mark the categories of the image elements in the simulated environment image, producing a template buffer that contains the position information and category information of those image elements, and the rendering module 130 is configured to render the simulated environment image according to the template buffer to obtain the marked image. This embodiment describes the process by which the marking module 140 and the rendering module 130 respectively mark and render the simulated environment image to finally obtain the marked image. The marking module 140 first identifies each image element in the simulated environment image, determines its category, and marks the category with a mark character; the mark characters of the image elements are then written into the template buffer at the corresponding positions, i.e. each position in the buffer that corresponds to the location of an image element in the simulated environment image is marked with that element's mark character. The rendering module 130 then renders the simulated environment image while reading the position information and category information recorded in the template buffer to determine the position and category of every image element, so as to highlight the image elements, for example by framing them with boxes as shown in FIG. 2. The marking module 140 contains marking rules: when marking the simulated environment image, it first identifies the category of an image element and then looks up the mark character that the marking rules assign to that category. A mark character may be any combination of characters, digits, and so on, uniquely identifying the category of an image element, with different categories corresponding to different mark characters.
In fact, the template buffer amounts to a stripped-down copy of the simulated environment image: it has the same dimensions as the image, and every image element in the image has a mark character at the corresponding position in the buffer, so the buffer contains only the position information and mark characters of the image elements. If the buffer were overlaid on the simulated environment image, the positions holding mark characters would coincide exactly with the image elements.
When identifying the image elements in the simulated environment image, the marking module 140 identifies at least the drivable area. Specifically, the simulated environment image is binarized according to grayscale value, and edge detection is then applied to obtain the edge contours of the lane lines, from which the detected lane lines are extracted; this detection method can detect multiple lane lines simultaneously.
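The binarize-then-edge-detect step can be sketched on a tiny grayscale "image" (nested lists) instead of a real frame. The threshold and the pixel values below are invented for illustration; the edge detector is a deliberately crude one-dimensional version of the contour extraction the text describes.

```python
THRESHOLD = 128  # grayscale cut-off between dark road surface and bright lane paint

def binarize(image, threshold=THRESHOLD):
    """1 where a pixel is at least as bright as the threshold, else 0."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def row_edges(binary_row):
    """Column indices where the binary value changes (a crude 1-D edge detector)."""
    return [c for c in range(1, len(binary_row))
            if binary_row[c] != binary_row[c - 1]]

# One row of dark asphalt (40) crossed by two bright lane-line stripes (200).
road = [[40, 40, 200, 200, 40, 40, 200, 200, 40]]
binary = binarize(road)
print(binary[0])             # -> [0, 0, 1, 1, 0, 0, 1, 1, 0]
print(row_edges(binary[0]))  # -> [2, 4, 6, 8]  (left/right edge of each stripe)
```

Each pair of edge indices brackets one lane-line stripe, which is how a single pass can pick out several lane lines at once.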
In one embodiment, the marking module 140 is configured to analyze and recognize the simulated environment image and to assign a mark value to each image element according to the category identified for it, where the image elements include image elements of obstacles and image elements of the drivable area, and the two have different mark values. Here the mark characters are mark values (numbers): the mark character of the drivable area, which includes roads and road surfaces, is 0, while the mark character of obstacles, which include buildings, road guardrails, moving cars, pedestrians, and the like, is 220.
For example, suppose the mark character of the drivable area is 0 and the marking module 140 identifies pixels 1 through n of the simulated environment image as the drivable area. The recognition result is written into the buffer, assigning the value 0 to pixels 1 through n of the template buffer. The rendering module 130 then reads the values of pixels 1 through n from the template buffer, determines that pixels 1 through n of the simulated environment image belong to the drivable area, and, when rendering the simulated environment image, marks them as drivable, for example by framing pixels 1 through n with a box and attaching a text label for the drivable area.
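The template buffer described above can be sketched as a grid of mark values with the same dimensions as the simulated environment image, using the mark values 0 (drivable) and 220 (obstacle) from the text. The image dimensions and the pixel regions below are invented for illustration.

```python
DRIVABLE, OBSTACLE = 0, 220  # mark values from the text

def make_template_buffer(height, width):
    """Start with every pixel marked as an obstacle."""
    return [[OBSTACLE] * width for _ in range(height)]

def mark_region(buffer, rows, cols, value):
    """Write a mark value into a rectangular region of the buffer."""
    for r in rows:
        for c in cols:
            buffer[r][c] = value

buf = make_template_buffer(4, 6)
mark_region(buf, range(2, 4), range(0, 6), DRIVABLE)  # bottom half is road

# The renderer later reads the buffer back to locate the drivable area.
drivable_pixels = sum(v == DRIVABLE for row in buf for v in row)
print(drivable_pixels)  # -> 12
```

Because the buffer and the image share dimensions, the renderer only needs a value lookup per pixel, never a second recognition pass.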
In one embodiment, the image elements of obstacles comprise image elements of static obstacles and image elements of dynamic obstacles, which have different mark values. The obstacles are thus further subdivided: static obstacles, including buildings and road guardrails, have the mark character 200, while dynamic obstacles, including moving cars and pedestrians, have the mark character 255.
In one embodiment, the rendering module 130 is configured to output the image elements of the simulated environment image in the colors indicated by their mark values, yielding the marked image. Rendering the simulated environment image here means outputting each image element in the color represented by its mark value. Specifically, according to the position information and mark characters contained in the buffer, each position of the simulated environment image indicated by the position information is output in the color indicated by the corresponding mark character, i.e. the position of each image element is output in the color indicated by that element's mark value, or the position is overlaid with a semi-transparent version of that color. For example, let the mark value of the drivable area be 0, the mark character of static obstacles be 200, and the mark character of dynamic obstacles be 255. In the red-green-blue (RGB) color mode, the mark value 0 corresponds to the RGB value (0, 0, 0), i.e. black; 200 corresponds to (200, 200, 200), i.e. gray; and 255 corresponds to (255, 255, 255), i.e. white. The drivable area is therefore output as the black represented by the value 0, or overlaid with a layer of semi-transparent black; static obstacles are output as the gray represented by 200, or overlaid with semi-transparent gray; and dynamic obstacles are output as the white represented by 255, or overlaid with semi-transparent white. The embodiments of the present application do not limit the color mode: in different color modes the same mark value may correspond to different colors, but within a single color mode mark values and colors correspond one to one.
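The mark-value-to-color rule above can be sketched directly: each grayscale mark value v renders as the RGB triple (v, v, v), so the values 0, 200, and 255 from the text become black, gray, and white. The buffer contents below are invented for illustration.

```python
# Mark values from the text: drivable area, static obstacle, dynamic obstacle.
MARK_VALUES = {"drivable": 0, "static_obstacle": 200, "dynamic_obstacle": 255}

def mark_to_rgb(value):
    """In RGB mode, a grayscale mark value v renders as (v, v, v)."""
    return (value, value, value)

def render(template_buffer):
    """Turn a buffer of mark values into an RGB image (nested lists of triples)."""
    return [[mark_to_rgb(v) for v in row] for row in template_buffer]

buf = [[0, 200],
       [255, 0]]
print(render(buf))
# -> [[(0, 0, 0), (200, 200, 200)], [(255, 255, 255), (0, 0, 0)]]
```

Swapping in a different color mode would only mean replacing `mark_to_rgb`, which is exactly the one-to-one mark-value-to-color mapping the text requires.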
As can be seen, the autonomous-driving simulation system of the present application can simulate a realistic autonomous-driving scenario. Specifically, the motion module 110 first simulates the driving of the mobile platform through the simulated driving environment; the camera module 120 and the rendering module 130 then simulate the simulated environment image obtained by the camera photographing the simulated driving environment; and finally the marking module 140 and the rendering module 130 perform drivable-area detection on the simulated environment image, producing a marked image that assists path planning in autonomous driving. The present application can thus simulate a real driving scene and test drivable-area detection technology, providing reliable algorithm verification for it; moreover, because the system simulates a real driving scene, it can also be used to test autonomous-driving technologies beyond drivable-area detection.
It should be understood that the system architectures and business scenarios described in the embodiments of the present application are intended to illustrate the technical solutions of these embodiments more clearly and do not limit them; a person of ordinary skill in the art will appreciate that, as system architectures evolve and new business scenarios emerge, the technical solutions provided in the embodiments of the present application remain equally applicable to similar technical problems.
Based on the above description, an embodiment of the present invention proposes an autonomous-driving simulation method in FIG. 3; the method may be implemented by the rendering module 130 of the aforementioned autonomous-driving simulation system.
In S301, the camera parameters of the camera of the mobile platform during a simulated driving process are obtained. The simulated driving process is the process of a simulated mobile platform driving through a simulated driving environment, and the simulated driving environment is a three-dimensional simulation environment constructed to mirror the real-world driving environment of the mobile platform. The camera parameters include at least one of the camera's position information, orientation information, rendering mode, and field-of-view information. The position information is the camera's location within the simulated driving environment; the orientation information is the camera's shooting direction; the rendering mode specifies how the image is adjusted, including resolution changes, stretch-and-rotate transformations, and/or level and brightness changes; and the field of view (FOV, angle of view), also called the visual field, is the angular range over which the camera can receive the scene. Unlike the angle of coverage, the field of view describes the angular extent of the image that the camera's lens can capture.
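The camera-parameter record described in S301 can be sketched as a small data structure. The field names and the example values below are invented for illustration; the text only requires that position, orientation, rendering mode, and field of view be representable.

```python
from dataclasses import dataclass, field

@dataclass
class CameraParams:
    position: tuple          # (x, y, z) location in the simulated driving environment
    orientation: tuple       # shooting direction, e.g. (yaw, pitch, roll) in degrees
    fov_deg: float           # field of view: angular range the lens can capture
    render_mode: dict = field(default_factory=dict)  # e.g. resolution / brightness tweaks

params = CameraParams(
    position=(12.0, 3.5, 1.2),
    orientation=(90.0, -5.0, 0.0),
    fov_deg=84.0,
    render_mode={"resolution": (1280, 720)},
)
print(params.fov_deg)  # -> 84.0
```

Keeping the rendering mode as a free-form mapping reflects the text's open-ended list of adjustments (resolution, stretch/rotation, levels and brightness).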
It should be noted that the camera parameters of the camera are real data consistent with the real world, so in the real world the camera's imaging can be determined from them; likewise, the motion of the mobile platform within the simulated driving environment obeys real-world physics. The motion information arising from the interaction between the mobile platform and the simulated driving environment is supplied by a physics engine, which in essence prescribes the computation rules the motion module 110 follows when simulating the platform's motion, so that the motion simulated by the physics engine conforms to real-world physical laws. The motion information includes, for example, the spatial position information of the mobile platform, which comprises the platform's position information and steering information.
In one implementation, before the driving process is simulated, a simulated driving environment containing three-dimensional real-world objects is constructed, with three-dimensional objects such as terrain and vegetation, a weather system, buildings, and roads. Specifically, a three-dimensional model is built to the size proportions of the corresponding real-world object and then refined so that, beyond its shape, its colors and textures also approach those of the real object, yielding a lifelike three-dimensional model that can be rotated freely and displayed from any angle. For example, a building's three-dimensional model is built to the building's proportions, the building's color graphics are overlaid onto the model, and lighting and shadows are added, producing an architectural model closely resembling the real building that can be observed from any viewing angle by rotating it.
In one embodiment, obtaining the camera parameters of the camera of the mobile platform during the simulated driving process means obtaining camera parameters generated from the spatial position information of the mobile platform during that process. The spatial position information of the mobile platform during the simulated driving process is the spatial position information produced as the platform drives through the simulated driving environment, including the platform's position information and steering information. Generating the camera parameters from this spatial position information means either looking up, in a correspondence table in a database, the camera parameters associated with the platform's spatial position information, or, with the camera's rendering mode and field-of-view information preset, computing the position information and steering information among the camera parameters according to the computational relationship between the platform's spatial position information and the camera parameters. For example, the camera's position information can be computed from the platform's position: since the relative position of the platform and the camera is generally fixed, once the platform's position information is determined, the camera's position information can be computed from it together with the platform-camera relative position. Moreover, the camera generally captures images in the platform's direction of travel, so the camera's heading coincides with the platform's; once the platform's steering information is determined, it can be used directly as the camera's steering information. Different spatial position information may correspond to different rendering modes and field-of-view information, and the correspondence between spatial position information and the rendering mode and field-of-view information among the camera parameters is stored in the database.
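The pose derivation just described can be sketched as follows: the camera sits at a fixed offset from the platform, so its world position is the platform position plus that offset rotated by the platform's heading, and its heading equals the platform's. The offset and the example poses below are invented values; a real platform would also carry height and full 3-D rotation.

```python
import math

# Hypothetical fixed mounting: camera 1.5 m ahead of the platform centre
# (x forward, y left, planar case for brevity).
CAMERA_OFFSET = (1.5, 0.0)

def camera_pose(platform_xy, platform_heading_deg, offset=CAMERA_OFFSET):
    """Rotate the fixed offset by the platform heading and add the platform position."""
    theta = math.radians(platform_heading_deg)
    dx, dy = offset
    cam_x = platform_xy[0] + dx * math.cos(theta) - dy * math.sin(theta)
    cam_y = platform_xy[1] + dx * math.sin(theta) + dy * math.cos(theta)
    return (cam_x, cam_y), platform_heading_deg  # camera heading = platform heading

# Platform at (10, 20) facing "north" (90 degrees): the camera ends up 1.5 m north.
pose, heading = camera_pose((10.0, 20.0), 90.0)
print(round(pose[0], 6), round(pose[1], 6), heading)  # -> 10.0 21.5 90.0
```

This is the "computational relationship" branch of the embodiment; the database-lookup branch would simply replace the arithmetic with a table query keyed on the spatial position information.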
In S302, according to the camera parameters during the simulated driving process, the simulated environment image obtained by the camera of the mobile platform photographing the simulated driving environment is determined. Specifically, the camera's position, orientation, and capturable range (field of view) within the constructed three-dimensional simulation scene (i.e. the simulated driving environment) are determined from the position information, orientation information, and field-of-view information among the camera parameters; the three-dimensional simulation scene is photographed from the camera's viewpoint to obtain a target image; and the target image is then adjusted as directed by the rendering mode among the camera parameters, e.g. by changing its resolution, applying stretch-and-rotate transformations, and/or changing its levels and brightness, yielding the simulated environment image.
In one embodiment, determining the simulated environment image obtained by the camera of the mobile platform photographing the simulated driving environment according to the camera parameters during the simulated driving process means determining, from those camera parameters, a road front view obtained by the camera photographing the simulated driving environment, then performing an image transformation on the road front view to obtain a road top view, and using the road top view as the simulated environment image. In real autonomous driving the camera generally captures a front view of the road, which is poorly suited to drivable-area extraction; the front view can therefore be transformed into a top view, which is both better suited to drivable-area detection and more intuitive. Here, the road front view is the image the camera captures of the simulated driving environment when its lens faces the platform's direction of travel (analogous to the road a driver sees when looking straight ahead), and the road top view is the image the camera captures looking straight down, with its lens perpendicular to the direction of travel (analogous to a bird's-eye aerial image of the road taken from a helicopter).
The image transformation applied to the road front view is an affine transformation (also called a perspective transformation): the front view is converted into the top view through a transformation matrix, which encodes the mapping rule between the front view and the top view. Note that cameras with different camera parameters may have different transformation matrices; the accuracy of the transformation matrix affects the accuracy of the affine transformation and, indirectly, the accuracy of drivable-area detection.
In the real world, the transformation matrix must be determined through calibration experiments, whereas this autonomous-driving simulation system can photograph the simulated driving environment from any angle and position simply by changing the camera parameters of the camera. It can therefore easily obtain a road front view and a road top view at the same time, compute the transformation matrix from the two views, and compare the matrix produced by the simulation system with the matrix measured in a calibration experiment. This comparison can serve as algorithmic verification of the correctness of the calibrated matrix, or be used to adjust the matrix so that it becomes more accurate; the simulation system can thus provide a test environment for autonomous-driving technologies such as drivable-area detection. In addition, the simulation system can replace calibration experiments altogether to obtain the transformation matrices corresponding to different camera parameters. Likewise, it is easy to see that the system can provide a test environment for other autonomous-driving technologies as well.
In S303, the marked simulated environment image is obtained. The marked simulated environment image is one in which every image element has been marked, the drivable area in particular. Image elements include the images of the drivable area, pedestrians, buildings, greenery, roads, and other objects. The drivable area, also called free space, is the road on which the mobile platform may travel during autonomous driving; it is used to provide path planning for autonomous driving so as to avoid obstacles. It may be the entire road surface, or the part of the surface that carries the road's key information (e.g. the road's direction and midpoint information). More specifically, the drivable area covers structured, semi-structured, and unstructured road surfaces: a structured surface has a single uniform structure and road edge lines, such as urban arterial roads, expressways, national highways, and provincial highways; a semi-structured surface has varied structure, such as parking lots and plazas; and an unstructured surface is natural terrain without a constructed structural layer, such as undeveloped wilderness.
In S304, image rendering is performed on the marked simulated environment image to obtain a marked image. Specifically, the marked image elements (the drivable area in particular) are highlighted in the simulated environment image to produce a marked image that is easy to read and understand; this marked image is used to assist the mobile platform in planning paths that avoid obstacles during autonomous driving. For example, as shown in FIG. 2, boxes are used to frame the individual image elements in the simulated image.
In one implementation, obtaining the marked simulated environment image means obtaining a template buffer containing the position information and category information of the image elements of the simulated environment image; correspondingly, performing image rendering on the marked simulated environment image to obtain the marked image means rendering the simulated environment image according to the template buffer. The template buffer has the same dimensions as the simulated environment image: at each position corresponding to an image element's location in the image, the buffer is marked with that element's mark character, which denotes the category the element belongs to, so the buffer contains the position information and category information of every image element in the simulated environment image. Specifically, after the template buffer is obtained, the simulated environment image is rendered according to it: while the position information and category information recorded in the buffer are read to determine the position and category of each image element, the simulated environment image is rendered so as to highlight those elements, for example by framing them with boxes as shown in FIG. 2.
It should be noted that a mark character denotes the category of an image element. It may be any combination of characters, digits, and so on, uniquely identifying the category; image elements of the same category share the same mark character, and elements of different categories have different mark characters.
In effect, the template cache is a simplified version of the simulation environment image. The template cache has the same dimensions as the simulation environment image, and each image element in the simulation environment image has a mark character at the corresponding position in the template cache, so the template cache contains only the position information and mark characters of the image elements of the simulation environment image. When the template cache is overlaid on the simulation environment image, the positions in the template cache that carry mark characters coincide exactly with the image elements in the simulation environment image.
In one embodiment, the mark characters are mark values (numbers), and the image elements in the marked simulation environment image are assigned mark values by category. The image elements include obstacle image elements and drivable-area image elements, and the two have different mark values: the mark value of the drivable area, which includes roads, pavements, and the like, is 0, and the mark value of obstacles, which include buildings, road guardrails, moving cars, pedestrians, and the like, is 220.
For example, suppose the mark value of the drivable area is 0 and pixels 1 to n of the simulation environment image belong to the drivable area. Then pixels 1 to n of the template cache are assigned the value 0; by reading the values of pixels 1 to n from the template cache, pixels 1 to n of the simulation environment image are determined to be the drivable area, and when the simulation environment image is rendered, those pixels are marked as the drivable area, for example by framing pixels 1 to n with a box and attaching a text label reading "drivable area", as shown in FIG. 2.
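As a non-limiting sketch of this lookup-and-render step, the fragment below assumes the template cache is a NumPy array of per-pixel mark values with the same height and width as the simulation environment image, and frames the drivable-area pixels with a box; the function name, the mark value, and the box color are illustrative only, not part of the claimed implementation.

```python
import numpy as np

DRIVABLE = 0  # assumed mark value of the drivable area

def highlight_drivable(sim_image: np.ndarray, template_cache: np.ndarray) -> np.ndarray:
    """Read the per-pixel mark values in the same-sized template cache and
    frame the drivable-area pixels of the simulation image with a red box."""
    assert sim_image.shape[:2] == template_cache.shape  # cache matches image size
    marked = sim_image.copy()
    ys, xs = np.nonzero(template_cache == DRIVABLE)
    if ys.size:  # bounding box of all drivable pixels
        top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()
        marked[top, left:right + 1] = (255, 0, 0)
        marked[bottom, left:right + 1] = (255, 0, 0)
        marked[top:bottom + 1, left] = (255, 0, 0)
        marked[top:bottom + 1, right] = (255, 0, 0)
    return marked
```

Drawing one box around the whole region is the simplest form of the highlighting in FIG. 2; per-element boxes would require first segmenting the cache into connected components.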
In one embodiment, the obstacle image elements include static-obstacle image elements and dynamic-obstacle image elements, and the two have different mark values. That is, the obstacles are further subdivided into static obstacles and dynamic obstacles: the mark value of static obstacles, which include buildings, road guardrails, and the like, is 200, and the mark value of dynamic obstacles, which include moving cars, pedestrians, and the like, is 255.
In one embodiment, the image elements of the simulation environment image are output in the colors indicated by their corresponding mark values to obtain the marked image. Specifically, according to the position information and mark characters contained in the template cache, each position of the simulation environment image indicated by the position information is output in the color indicated by the corresponding mark character: the location of each image element is either output directly in the color indicated by that element's mark value, or overlaid with a semi-transparent layer of that color. For example, suppose the mark value of the drivable area is 0, that of static obstacles is 200, and that of dynamic obstacles is 255. In the red-green-blue (RGB) color mode, the mark value 0 corresponds to the RGB value (0, 0, 0), i.e. black; 200 corresponds to (200, 200, 200), i.e. gray; and 255 corresponds to (255, 255, 255), i.e. white. The drivable area is therefore output as the black represented by the color value 0 (or overlaid with semi-transparent black), static obstacles as the gray represented by 200 (or overlaid with semi-transparent gray), and dynamic obstacles as the white represented by 255 (or overlaid with semi-transparent white). The embodiments of the present application do not limit the color mode: in different color modes the same mark value may correspond to different colors, but within one color mode mark values and colors correspond one-to-one.
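A minimal sketch of this mark-value-to-color rendering, assuming the grayscale mapping given above (0 → black, 200 → gray, 255 → white) and a NumPy template cache; the dictionary makes the one-to-one correspondence between mark values and colors explicit, and its contents are illustrative:

```python
import numpy as np

# Assumed one-to-one mapping from mark value to RGB color in this embodiment
MARK_TO_RGB = {0: (0, 0, 0), 200: (200, 200, 200), 255: (255, 255, 255)}

def render_marked_image(template_cache: np.ndarray) -> np.ndarray:
    """Output every pixel in the color indicated by its mark value."""
    h, w = template_cache.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for mark, rgb in MARK_TO_RGB.items():
        out[template_cache == mark] = rgb  # paint all pixels of this category
    return out
```

The semi-transparent-overlay variant would instead blend these colors with the original image, e.g. `0.5 * sim_image + 0.5 * out`.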
It can thus be seen that the embodiments of the present application can simulate, according to the camera parameters, the simulation environment image obtained by the camera of the mobile platform photographing the simulated driving environment, and then render the marked simulation environment image to obtain a marked image used to assist path planning during automatic driving. The embodiments of the present application can therefore be used to test drivable-area detection technology and provide reliable algorithm verification for it.
On the basis of the previous embodiment, an embodiment of the present invention further provides a more detailed automatic driving simulation method, as shown in FIG. 4.
In S401, camera parameters of the camera of the mobile platform during the simulated driving process, generated according to the spatial position information of the mobile platform during the simulated driving process, are acquired. Here, the simulated driving process is the process in which the simulated mobile platform drives in the simulated driving environment, and the simulated driving environment is a three-dimensional simulation environment modeled on the real-world driving environment of the mobile platform. The spatial position information includes the position information and steering information of the mobile platform while it drives in the simulated driving environment. The camera parameters include at least one of the camera's position information, orientation information, rendering mode, and field-of-view information: the position information is the camera's location within the simulated driving environment; the orientation information is the camera's shooting direction; the rendering mode is the manner in which the image is adjusted, including resolution changes, stretching/rotation changes, and/or tone and brightness changes; and the field of view (FOV, angle of view) is the angular range over which the camera can receive an image, also called the visual field. It differs from the angle of coverage: the field of view describes the angular extent of the image that the camera's lens can capture.
The camera parameters generated according to the spatial position information of the mobile platform during the simulated driving process are obtained in one of two ways. They may be looked up in a correspondence table in a database, indexed by the platform's spatial position information. Alternatively, with the camera's rendering mode and field-of-view information preset, the position information and steering information among the camera parameters are computed from the platform's spatial position information according to a fixed relationship between the two. For example, the camera's position information can be computed from the platform's position: since the relative position of the camera with respect to the mobile platform is generally fixed, once the platform's position information is determined, the camera's position can be computed from the platform's position and the fixed platform-to-camera offset. Moreover, the camera generally captures images in the platform's direction of motion, so the camera's heading is consistent with the platform's; once the platform's steering information is determined, it can be used directly as the camera's steering information. Different spatial position information may correspond to different rendering modes and field-of-view information, and the correspondence between the spatial position information and the rendering mode and field-of-view information among the camera parameters is stored in the database.
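The rigid-offset computation described above can be sketched as follows. The planar (x, y, heading) pose representation and the offset values are assumptions made for illustration, not part of the claims:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # metres, world frame
    y: float
    heading: float  # radians; platform steering direction

# Assumed fixed camera mounting offset in the platform frame
CAM_FORWARD = 1.5  # metres ahead of the platform origin
CAM_LEFT = 0.0     # lateral offset

def camera_pose(platform: Pose) -> Pose:
    """Camera position = platform position plus the fixed offset rotated
    into the world frame; camera heading = platform heading."""
    cos_h, sin_h = math.cos(platform.heading), math.sin(platform.heading)
    cx = platform.x + CAM_FORWARD * cos_h - CAM_LEFT * sin_h
    cy = platform.y + CAM_FORWARD * sin_h + CAM_LEFT * cos_h
    return Pose(cx, cy, platform.heading)
```

With the platform facing east at the origin, the camera sits 1.5 m east of it; after the platform turns 90° left, the same offset places the camera 1.5 m north.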
It should be noted that the camera parameters of the camera are real data consistent with the real world, so in the real world the camera's imaging can be determined from these parameters; likewise, the motion of the mobile platform in the simulated driving environment obeys real-world physical rules. The motion information describing the interaction between the mobile platform and the simulated driving environment is provided by a physics engine. The physics engine in essence specifies the computation rules that the motion module 110 follows when simulating the motion of the mobile platform, so that the motion simulated by the physics engine conforms to the physical rules of the real world. The motion information includes, for example, the spatial position information of the mobile platform, which comprises the platform's position information and steering information.
In one implementation, before the driving process is simulated, a simulated driving environment containing real-world three-dimensional objects is constructed; it includes three-dimensional objects such as terrain, vegetation, a weather system, buildings, and roads. Specifically, a three-dimensional model is built to the scale of the corresponding real-world object and then refined so that, beyond its shape, its colors and patterns also approach those of the real object, finally yielding a realistic three-dimensional model that can be rotated freely and displayed from any angle. For example, a three-dimensional model of a building is constructed at the building's scale, the building's color texture is mapped onto the model, and lighting and shadows are added, producing an architectural model very similar to the real-world building; by rotating the model, it can be observed from any viewpoint.
In S402, the road front view obtained by the camera of the mobile platform photographing the simulated driving environment during the simulated driving process is determined according to the camera parameters. Specifically, the camera's position, orientation, and capturable range (field of view) within the constructed three-dimensional simulation scene (i.e., the simulated driving environment) are determined from the position information, orientation information, and field-of-view information in the camera parameters; the scene is photographed from the camera's viewpoint to obtain a target image; and the target image is then adjusted (resolution changes, stretching/rotation changes, and/or tone and brightness changes) as indicated by the rendering mode in the camera parameters, yielding the road front view.
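How the position, orientation, and field-of-view parameters jointly determine what the simulated camera captures can be illustrated with a pinhole projection. The sketch below assumes a world with z up and a camera looking along its heading; all conventions and names here are illustrative assumptions, not the patented renderer:

```python
import math

def project(point, cam_pos, yaw, fov_deg, width, height):
    """Project world point (x, y, z) into pixel coordinates for a pinhole
    camera at cam_pos with heading `yaw` (radians) and horizontal field of
    view fov_deg, rendering a width x height image. Returns None for
    points behind the camera."""
    dx, dy, dz = (p - c for p, c in zip(point, cam_pos))
    # Rotate into the camera frame: camera looks along +x, +y is left.
    cos_y, sin_y = math.cos(-yaw), math.sin(-yaw)
    fwd = dx * cos_y - dy * sin_y
    left = dx * sin_y + dy * cos_y
    if fwd <= 0:
        return None
    # Focal length in pixels follows from the horizontal field of view.
    f = (width / 2) / math.tan(math.radians(fov_deg) / 2)
    u = width / 2 - f * left / fwd   # points to the left land at smaller u
    v = height / 2 - f * dz / fwd    # points above land at smaller v
    return u, v
```

A point exactly on the heading axis lands at the image centre, and with a 90° field of view a point 45° to the side lands on the image edge, matching the definition of the field of view as the angular range the lens can capture.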
In S403, image transformation is performed on the road front view to obtain a road top view, and the road top view is used as the simulation environment image. Specifically, the image transformation applied to the road front view is an affine transformation (also called a perspective transformation): the front view is converted into the top view through a transformation matrix that specifies the mapping between the two views. It should be noted that cameras with different camera parameters may have different transformation matrices; the accuracy of the transformation matrix affects the accuracy of the transformation and, indirectly, the accuracy of drivable-area detection.
It should be noted that during real automatic driving the camera generally captures a road front view, which is not well suited to extracting the drivable area; the front view can therefore be transformed into a road top view, which is both better suited to drivable-area detection and more intuitive. Here, the road front view is the image obtained when the camera's lens faces the platform's direction of travel (comparable to the road a driver sees when looking straight ahead), and the road top view is the image obtained when the lens is perpendicular to the platform's direction of travel, looking down on the simulated driving environment (comparable to the bird's-eye image of a road captured from a helicopter).
In the real world the transformation matrix must be determined through calibration experiments, whereas this automatic driving simulation system can photograph the simulated driving environment from any angle and position by changing the camera parameters of the camera, so the road front view and the road top view can easily be obtained at the same time. The transformation matrix can then be computed from this pair of views, and by comparing the matrix derived within the simulation system with the matrix measured in a calibration experiment, the experimentally measured matrix can be verified for correctness, or adjusted appropriately to make it more accurate. The simulation system can thus provide a test environment for automatic driving technologies such as drivable-area detection; it can also replace calibration experiments as a way of obtaining the transformation matrices corresponding to different camera parameters, and it is readily apparent that it can likewise provide a test environment for other automatic driving technologies.
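To illustrate how the transformation matrix can be computed once corresponding points in the front view and the top view are available simultaneously, the following sketch solves the 3×3 perspective matrix from four point correspondences using the direct linear transform. It is an illustrative sketch rather than the patented procedure, and the coordinates in the usage example are hypothetical:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 perspective matrix H (dst ~ H @ src, homogeneous
    coordinates) from four point correspondences via the direct linear
    transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)  # null vector of A, reshaped to 3x3
    return H / H[2, 2]        # fix the free scale

def warp_point(H, x, y):
    """Apply H to one point and dehomogenize."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

In practice a library routine such as OpenCV's `getPerspectiveTransform` performs the same solve; comparing the matrix obtained in simulation with one measured in a calibration experiment is exactly the verification described above.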
In S404, a template cache containing the position information and category information of the image elements of the simulation environment image is acquired. The template cache has the same dimensions as the simulation environment image: at each position corresponding to the location of an image element in the simulation environment image, the template cache carries a mark character for that element, and the mark character represents the category to which the element belongs, so the template cache contains the position information and category information of every image element in the simulation environment image. The image elements include images of various objects such as the drivable area, pedestrians, buildings, greenery, and roads.
It should be noted that a mark character indicates the category of an image element. The mark character may be any combination of letters, digits, and the like, and uniquely identifies the category of the image element: image elements of the same category correspond to the same mark character, and image elements of different categories correspond to different mark characters.
In effect, the template cache is a simplified version of the simulation environment image. The template cache has the same dimensions as the simulation environment image, and each image element in the simulation environment image has a mark character at the corresponding position in the template cache, so the template cache contains only the position information and mark characters of the image elements of the simulation environment image. When the template cache is overlaid on the simulation environment image, the positions in the template cache that carry mark characters coincide exactly with the image elements in the simulation environment image.
The above-mentioned drivable area, also called freespace, is the road on which the mobile platform can travel during automatic driving, and is used to provide path planning for automatic driving so as to avoid obstacles. The drivable area may be the entire surface of a road, or a portion of the surface containing the road's key information (e.g., its direction or its midline). More specifically, the drivable area includes structured, semi-structured, and unstructured road surfaces: a structured surface has a single surface structure and road edge lines, such as urban arterial roads, expressways, national highways, and provincial highways; a semi-structured surface has a varied structure, such as a parking lot or a square; and an unstructured surface is a natural surface without a structural layer, such as undeveloped wilderness.
In S405, the simulation environment image is rendered according to the template cache to obtain the marked image. Specifically, after the template cache is acquired, the simulation environment image is rendered according to it: the position information and category information recorded in the template cache are read to determine the position and category of each image element in the simulation environment image, and at the same time the simulation environment image is rendered so as to highlight those image elements, yielding a marked image used to assist the mobile platform in planning paths that avoid obstacles during automatic driving, for example by marking the image elements with boxes, as in the marked image shown in FIG. 2.
In one embodiment, the mark characters are mark values (numbers), and the image elements in the marked simulation environment image are assigned mark values by category. The image elements include obstacle image elements and drivable-area image elements, and the two have different mark values: the mark value of the drivable area, which includes roads, pavements, and the like, is 0, and the mark value of obstacles, which include buildings, road guardrails, moving cars, pedestrians, and the like, is 220.
For example, suppose the mark value of the drivable area is 0 and pixels 1 to n of the simulation environment image belong to the drivable area. Then pixels 1 to n of the template cache are assigned the value 0; by reading the values of pixels 1 to n from the template cache, pixels 1 to n of the simulation environment image are determined to be the drivable area, and when the simulation environment image is rendered, those pixels are marked as the drivable area, for example by framing pixels 1 to n with a box and attaching a text label reading "drivable area", as shown in FIG. 2.
In one embodiment, the obstacle image elements include static-obstacle image elements and dynamic-obstacle image elements, and the two have different mark values. That is, the obstacles are further subdivided into static obstacles and dynamic obstacles: the mark value of static obstacles, which include buildings, road guardrails, and the like, is 200, and the mark value of dynamic obstacles, which include moving cars, pedestrians, and the like, is 255.
In one embodiment, the image elements of the simulation environment image are output in the colors indicated by their corresponding mark values to obtain the marked image. Specifically, according to the position information and mark characters contained in the template cache, each position of the simulation environment image indicated by the position information is output in the color indicated by the corresponding mark character: the location of each image element is either output directly in the color indicated by that element's mark value, or overlaid with a semi-transparent layer of that color. For example, suppose the mark value of the drivable area is 0, that of static obstacles is 200, and that of dynamic obstacles is 255. In the red-green-blue (RGB) color mode, the mark value 0 corresponds to the RGB value (0, 0, 0), i.e. black; 200 corresponds to (200, 200, 200), i.e. gray; and 255 corresponds to (255, 255, 255), i.e. white. The drivable area is therefore output as the black represented by the color value 0 (or overlaid with semi-transparent black), static obstacles as the gray represented by 200 (or overlaid with semi-transparent gray), and dynamic obstacles as the white represented by 255 (or overlaid with semi-transparent white). The embodiments of the present application do not limit the color mode: in different color modes the same mark value may correspond to different colors, but within one color mode mark values and colors correspond one-to-one.
Compared with the previous embodiment, this embodiment is more detailed, describing in full the process of determining, from the camera parameters, the simulation environment image obtained by photographing the simulated driving environment while the mobile platform drives. It should be noted that the descriptions of the embodiments above tend to emphasize the differences between them; for their common or similar parts the embodiments may be consulted with reference to one another, and for brevity these parts are not repeated here.
Based on the description of the foregoing method embodiments, an embodiment of the present application further provides an automatic driving simulation device comprising units for executing any of the foregoing automatic driving simulation methods. Specifically, FIG. 5 is a schematic block diagram of an automatic driving simulation device provided by an embodiment of the present application. The automatic driving simulation device of this embodiment includes an acquisition unit 510 and a rendering unit 520. Specifically:
the acquisition unit 510 is configured to acquire camera parameters of the camera of the mobile platform during the simulated driving process;
the rendering unit 520 is configured to determine, according to the camera parameters during the simulated driving process, the simulation environment image obtained by the camera of the mobile platform photographing the simulated driving environment during the simulated driving process;
the acquisition unit 510 is further configured to acquire the marked simulation environment image; and
the rendering unit 520 is further configured to perform image rendering on the marked simulation environment image to obtain a marked image.
In one implementation, the acquisition unit 510 is specifically configured to acquire camera parameters of the camera of the mobile platform during the simulated driving process, generated according to the spatial position information of the mobile platform during the simulated driving process.
In one implementation, the rendering unit 520 is specifically configured to determine, according to the camera parameters during the simulated driving process, the road front view obtained by the camera of the mobile platform photographing the simulated driving environment during the simulated driving process; the automatic driving simulation device further includes a transformation unit 530 configured to perform image transformation on the road front view to obtain a road top view, and to use the road top view as the simulation environment image.
In one implementation, the acquisition unit 510 is specifically configured to acquire a template cache containing the position information and category information of the image elements of the simulation environment image, and the rendering unit is further configured to render the simulation environment image according to the template cache to obtain the marked image.
In one implementation, the image elements in the marked simulation environment image are assigned mark values by category, where the image elements include obstacle image elements and drivable-area image elements, and the mark values of the obstacle image elements and the drivable-area image elements are different.
在一种实施中,上述渲染单元520,具体用于将上述仿真环境图像的图像元素按照对应的标记值所指示的颜色输出,得到上述标记图像。In an implementation, the aforementioned rendering unit 520 is specifically configured to output the image elements of the aforementioned simulated environment image according to the color indicated by the corresponding tag value to obtain the aforementioned tagged image.
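Outputting image elements in the colour indicated by their label values amounts to a lookup-table pass over the label mask. The palette below is entirely hypothetical — the patent leaves the concrete label values and colours open:

```python
import numpy as np

# Hypothetical label -> colour table (one RGB triple per label value).
PALETTE = np.array([
    [0, 0, 0],        # 0: unlabelled       -> black
    [0, 255, 0],      # 1: drivable area    -> green
    [0, 0, 255],      # 2: static obstacle  -> red (BGR-style here)
    [255, 0, 0],      # 3: dynamic obstacle -> blue
], dtype=np.uint8)

def render_marked_image(label_mask):
    """Turn a per-pixel label mask into a colour-coded marked image.

    NumPy fancy indexing maps an (H, W) mask to an (H, W, 3) image in
    one vectorised step.
    """
    return PALETTE[label_mask]
```

Because obstacles and the drivable area carry different label values, they come out in different colours, which is exactly the property the marked image needs for downstream training or evaluation.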
在一种实施中,上述障碍物的图像元素包括静止障碍物的图像元素和动态障碍物的图像元素,上述静止障碍物的图像元素和上述动态障碍物的图像元素的标记值不相同。In one implementation, the image element of the obstacle includes the image element of the static obstacle and the image element of the dynamic obstacle, and the image elements of the static obstacle and the image element of the dynamic obstacle have different label values.
在一种实施中,上述相机参数包括相机的位置信息、朝向信息、渲染模式以及视场角信息中的至少一种。In an implementation, the aforementioned camera parameters include at least one of position information, orientation information, rendering mode, and field of view information of the camera.
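One concrete use of the field-of-view information listed among the camera parameters is deriving pinhole intrinsics for the simulated camera. The helper below is a standard textbook relation, not something specified by the patent; square pixels are an assumption:

```python
import math

def intrinsics_from_fov(width, height, hfov_deg):
    """Focal length and principal point from a horizontal field of view.

    Uses fx = (W / 2) / tan(hfov / 2); assumes square pixels, so fy = fx,
    and a principal point at the image centre.
    """
    fx = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return fx, fx, width / 2.0, height / 2.0   # fx, fy, cx, cy
```

For a 1280x720 image with a 90-degree horizontal field of view this gives a focal length of about 640 pixels, which the renderer would use to project the simulated driving environment.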
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,上述的程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,上述的存储介质可为磁碟、光盘、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等。A person of ordinary skill in the art can understand that all or part of the processes in the above-mentioned embodiment methods can be implemented by instructing relevant hardware through a computer program. The above-mentioned program can be stored in a computer-readable storage medium. When executed, it may include the processes of the above-mentioned method embodiments. Among them, the aforementioned storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
需要说明的是,上述描述的自动驾驶模拟设备的具体工作过程,可以参考前述各个实施例中的相关描述,在此不再赘述。It should be noted that, for the specific working process of the automatic driving simulation device described above, reference may be made to the related descriptions in the foregoing embodiments, which will not be repeated here.
参见图6,是本申请另一实施例提供的一种自动驾驶模拟设备的结构性框图。如图所示的本实施例中的自动驾驶模拟设备可以包括:一个或多个处理器610和存储器620。上述处理器610和存储器620通过总线630连接。存储器620用于存储计算机程序,计算机程序包括程序指令,处理器610用于执行存储器620存储的程序指令。Refer to FIG. 6, which is a structural block diagram of an automatic driving simulation device provided by another embodiment of the present application. The automatic driving simulation device in this embodiment as shown in the figure may include: one or more processors 610 and a memory 620. The aforementioned processor 610 and memory 620 are connected through a bus 630. The memory 620 is configured to store a computer program, and the computer program includes program instructions, and the processor 610 is configured to execute the program instructions stored in the memory 620.
处理器610,用于执行获取单元510的功能,用于获取移动平台的相机在模拟行驶过程中的相机参数;还用于执行渲染单元520的功能,用于根据上述模拟行驶过程中的相机参数,确定在上述模拟行驶过程中上述移动平台的相机拍摄上述模拟行驶环境得到的仿真环境图像;还用于获取标记后的仿真环境图像;还用于对上述标记后的仿真环境图像进行图像渲染,得到标记图像。The processor 610 is configured to perform the function of the acquisition unit 510, to acquire camera parameters of the camera of the mobile platform during the simulated driving; it is also configured to perform the function of the rendering unit 520, to determine, according to the camera parameters during the simulated driving, the simulated environment image obtained by the camera of the mobile platform capturing the simulated driving environment during the simulated driving; it is further configured to acquire the marked simulated environment image, and to perform image rendering on the marked simulated environment image to obtain the marked image.
在一种实施中,上述处理器610,具体用于获取根据上述模拟行驶过程中上述移动平台的空间位置信息生成的上述移动平台的相机在上述模拟行驶过程中的相机参数。In an implementation, the processor 610 is specifically configured to acquire camera parameters of the camera of the mobile platform during the simulated driving, the camera parameters being generated according to the spatial position information of the mobile platform during the simulated driving.
在一种实施中,上述处理器610,具体用于根据上述模拟行驶过程中的相机参数,确定在上述模拟行驶过程中上述移动平台的相机拍摄上述模拟行驶环境得到的道路前视图;上述处理器610还用于执行变换单元530的功能,用于对上述道路前视图进行图像变换,得到道路俯视图,并将上述道路俯视图作为上述仿真环境图像。In an implementation, the processor 610 is specifically configured to determine, according to the camera parameters during the simulated driving, the road front view obtained by the camera of the mobile platform capturing the simulated driving environment during the simulated driving; the processor 610 is further configured to perform the function of the transformation unit 530, to perform image transformation on the road front view to obtain a road top view, and to use the road top view as the simulated environment image.
在一种实施中,上述处理器610,具体用于获取包含上述仿真环境图像的图像元素的位置信息和类别信息的模板缓存;还用于根据上述模板缓存对上述仿真环境图像进行渲染,得到上述标记图像。In an implementation, the processor 610 is specifically configured to acquire a template cache containing the position information and category information of the image elements of the simulated environment image, and to render the simulated environment image according to the template cache to obtain the marked image.
在一种实施中,上述标记后的仿真环境图像中的图像元素按照类别被赋予了标记值,其中,上述图像元素包含障碍物的图像元素和可行驶区域的图像元素,上述障碍物的图像元素和上述可行驶区域的图像元素的标记值不相同。In an implementation, the image elements in the marked simulated environment image are assigned label values according to their categories, wherein the image elements include image elements of obstacles and image elements of the drivable area, and the label values of the obstacle image elements and the drivable-area image elements are different.
在一种实施中,上述处理器610,具体用于将上述仿真环境图像的图像元素按照对应的标记值所指示的颜色输出,得到上述标记图像。In an implementation, the aforementioned processor 610 is specifically configured to output the image elements of the aforementioned simulated environment image according to the color indicated by the corresponding mark value to obtain the aforementioned marked image.
在一种实施中,上述障碍物的图像元素包括静止障碍物的图像元素和动态障碍物的图像元素,上述静止障碍物的图像元素和上述动态障碍物的图像元素的标记值不相同。In one implementation, the image element of the obstacle includes the image element of the static obstacle and the image element of the dynamic obstacle, and the image elements of the static obstacle and the image element of the dynamic obstacle have different label values.
在一种实施中,上述相机参数包括相机的位置信息、朝向信息、渲染模式以及视场角信息中的至少一种。In an implementation, the aforementioned camera parameters include at least one of position information, orientation information, rendering mode, and field of view information of the camera.
在一种实施方式中,该处理器可以是中央处理单元(Central Processing Unit,CPU),该处理器还可以是其他通用处理器,即微处理器或者任何常规的处理器,例如:数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,等等。In an implementation, the processor may be a central processing unit (CPU), or another general-purpose or conventional processor such as a microprocessor, for example a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on.
该存储器620可以包括只读存储器和随机存取存储器,并向处理器610提供指令和数据。存储器620的一部分还可以包括非易失性随机存取存储器。例如,存储器620还可以存储设备类型的信息。The memory 620 may include a read-only memory and a random access memory, and provides instructions and data to the processor 610. A part of the memory 620 may also include a non-volatile random access memory. For example, the memory 620 may also store device type information.
具体实现中,本申请实施例中所描述的处理器610可执行本申请实施例提供的自动驾驶模拟方法的第一实施例和第二实施例中所描述的实现方式,也可执行本申请实施例所描述的自动驾驶模拟设备的实现方式,在此不再赘述。In a specific implementation, the processor 610 described in the embodiments of this application may execute the implementations described in the first and second embodiments of the automatic driving simulation method provided in the embodiments of this application, and may also execute the implementation of the automatic driving simulation device described in the embodiments of this application, which will not be repeated here.
需要说明的是,上述描述的自动驾驶模拟设备的具体工作过程,可以参考前述各个实施例中的相关描述,在此不再赘述。It should be noted that, for the specific working process of the automatic driving simulation device described above, reference may be made to the related descriptions in the foregoing embodiments, which will not be repeated here.
在本申请的另一实施例中提供一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序包括程序指令,程序指令被处理器执行,用以执行前述各实施例所述的方法。In another embodiment of the present application, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program, the computer program includes program instructions, and the program instructions are executed by a processor to perform the method described in the foregoing embodiments.
计算机可读存储介质可以是前述任一实施例的自动驾驶模拟设备的内部存储单元,例如自动驾驶模拟设备的硬盘或内存。计算机可读存储介质也可以是自动驾驶模拟设备的外部存储设备,例如自动驾驶模拟设备上配备的插接式硬盘、智能存储卡(Smart Media Card,SMC)、安全数字(Secure Digital,SD)卡、闪存卡(Flash Card)等。进一步地,计算机可读存储介质还可以既包括自动驾驶模拟设备的内部存储单元也包括外部存储设备。计算机可读存储介质用于存储计算机程序以及自动驾驶模拟设备所需的其他程序和数据。计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。The computer-readable storage medium may be an internal storage unit of the automatic driving simulation device of any of the foregoing embodiments, such as the hard disk or memory of the automatic driving simulation device. The computer-readable storage medium may also be an external storage device of the automatic driving simulation device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the automatic driving simulation device. Further, the computer-readable storage medium may include both an internal storage unit of the automatic driving simulation device and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the automatic driving simulation device, and may also be used to temporarily store data that has been output or will be output.
以上所揭露的仅为本发明的部分实施例而已,当然不能以此来限定本发明之权利范围,本领域普通技术人员可以理解实现上述实施例的全部或部分流程,并依本发明权利要求所作的等同变化,仍属于发明所涵盖的范围。The above disclosure is only a part of the embodiments of the present invention and certainly does not limit the scope of the rights of the present invention. A person of ordinary skill in the art can understand all or part of the processes for implementing the above embodiments, and equivalent changes made according to the claims of the present invention still fall within the scope covered by the invention.

Claims (19)

  1. 一种自动驾驶模拟系统,其特征在于,包括:An automatic driving simulation system is characterized in that it comprises:
    运动模块,用于模拟移动平台在模拟行驶环境中的模拟行驶过程;The motion module is used to simulate the simulated driving process of the mobile platform in the simulated driving environment;
    相机模块,用于确定所述移动平台的相机在所述模拟行驶过程中的相机参数;The camera module is used to determine the camera parameters of the camera of the mobile platform during the simulation driving process;
    渲染模块,用于根据所述模拟行驶过程中的相机参数,确定在所述模拟行驶过程中所述移动平台的相机拍摄所述模拟行驶环境得到的仿真环境图像;A rendering module, configured to determine, according to the camera parameters during the simulated driving, the simulated environment image obtained by the camera of the mobile platform capturing the simulated driving environment during the simulated driving;
    标记模块,用于对所述仿真环境图像进行标记,其中,在进行标记时至少对所述仿真环境图像中的可行驶区域进行标记;A marking module, configured to mark the simulated environment image, where at least the drivable area in the simulated environment image is marked when marking;
    所述渲染模块,还用于对标记后的仿真环境图像进行图像渲染,得到标记图像。The rendering module is also used to perform image rendering on the marked simulated environment image to obtain a marked image.
  2. 根据权利要求1所述的系统,其特征在于,The system according to claim 1, wherein:
    所述运动模块,用于模拟移动平台在模拟行驶环境中的模拟行驶过程,并生成所述移动平台在所述模拟行驶过程中的空间位置信息;The motion module is used to simulate a simulated driving process of the mobile platform in a simulated driving environment, and generate spatial position information of the mobile platform during the simulated driving process;
    所述相机模块,用于根据所述模拟行驶过程中的空间位置信息生成所述移动平台的相机在所述模拟行驶过程中的相机参数。The camera module is configured to generate camera parameters of the camera of the mobile platform during the simulation driving process according to the spatial position information during the simulation driving process.
  3. 根据权利要求1所述的系统,其特征在于,所述渲染模块用于:The system according to claim 1, wherein the rendering module is used for:
    根据所述模拟行驶过程中的相机参数,确定在所述模拟行驶过程中所述移动平台的相机拍摄所述模拟行驶环境得到的道路前视图;Determining, according to the camera parameters during the simulated driving, the road front view obtained by the camera of the mobile platform capturing the simulated driving environment during the simulated driving;
    对所述道路前视图进行图像变换,得到道路俯视图;Performing image transformation on the front view of the road to obtain a top view of the road;
    将所述道路俯视图作为所述仿真环境图像。Use the road top view as the simulation environment image.
  4. 根据权利要求1所述的系统,其特征在于,The system according to claim 1, wherein:
    所述标记模块,用于对所述仿真环境图像中的图像元素的类别进行标记,得到包含所述仿真环境图像的图像元素的位置信息和类别信息的模板缓存;The marking module is used to mark the category of the image element in the simulation environment image to obtain a template cache containing the position information and category information of the image element of the simulation environment image;
    所述渲染模块,用于根据所述模板缓存对所述仿真环境图像进行渲染,得到所述标记图像。The rendering module is configured to render the simulated environment image according to the template cache to obtain the marked image.
  5. 根据权利要求1所述的系统,其特征在于,所述标记模块,用于对所述仿真环境图像进行分析识别,根据分析识别到的所述仿真环境图像中的图像元素的类别给所述图像元素赋予标记值,其中,所述图像元素包含障碍物的图像元素和可行驶区域的图像元素,所述障碍物的图像元素和所述可行驶区域的图像元素的标记值不相同。The system according to claim 1, wherein the marking module is configured to analyze and recognize the simulated environment image, and to assign label values to the image elements according to the categories of the image elements identified in the simulated environment image, wherein the image elements include image elements of obstacles and image elements of the drivable area, and the label values of the obstacle image elements and the drivable-area image elements are different.
  6. 根据权利要求5所述的系统,其特征在于,所述渲染模块,用于将所述仿真环境图像的图像元素按照对应的标记值所指示的颜色输出,得到所述标记图像。The system according to claim 5, wherein the rendering module is configured to output the image elements of the simulated environment image according to the color indicated by the corresponding label value to obtain the label image.
  7. 根据权利要求5所述的系统,其特征在于,所述障碍物的图像元素包括静止障碍物的图像元素和动态障碍物的图像元素,所述静止障碍物的图像元素和所述动态障碍物的图像元素的标记值不相同。The system according to claim 5, wherein the image elements of obstacles include image elements of static obstacles and image elements of dynamic obstacles, and the label values of the static-obstacle image elements and the dynamic-obstacle image elements are different.
  8. 根据权利要求1至7任意一项所述的系统,其特征在于,所述相机参数包括相机的位置信息、朝向信息、渲染模式以及视场角信息中的至少一种。The system according to any one of claims 1 to 7, wherein the camera parameters include at least one of position information, orientation information, rendering mode, and field of view information of the camera.
  9. 一种自动驾驶模拟方法,其特征在于,包括:An automatic driving simulation method, characterized by comprising:
    获取移动平台的相机在模拟行驶过程中的相机参数;Obtain the camera parameters of the mobile platform's camera in the simulation driving process;
    根据所述模拟行驶过程中的相机参数,确定在所述模拟行驶过程中所述移动平台的相机拍摄所述模拟行驶环境得到的仿真环境图像;Determining, according to the camera parameters during the simulated driving, the simulated environment image obtained by the camera of the mobile platform capturing the simulated driving environment during the simulated driving;
    获取标记后的仿真环境图像;Obtain the marked simulation environment image;
    对所述标记后的仿真环境图像进行图像渲染,得到标记图像。Image rendering is performed on the marked simulated environment image to obtain a marked image.
  10. 根据权利要求9所述的方法,其特征在于,所述获取移动平台的相机在模拟行驶过程中的相机参数,包括:The method according to claim 9, wherein said acquiring camera parameters of a camera of a mobile platform during a simulated driving process comprises:
    获取根据所述模拟行驶过程中所述移动平台的空间位置信息生成的所述移动平台的相机在所述模拟行驶过程中的相机参数。Acquiring camera parameters of the camera of the mobile platform during the simulated driving, the camera parameters being generated according to the spatial position information of the mobile platform during the simulated driving.
  11. 根据权利要求9所述的方法,其特征在于,所述根据所述模拟行驶过程中的相机参数,确定在所述模拟行驶过程中所述移动平台的相机拍摄所述模拟行驶环境得到的仿真环境图像,包括:The method according to claim 9, wherein determining, according to the camera parameters during the simulated driving, the simulated environment image obtained by the camera of the mobile platform capturing the simulated driving environment during the simulated driving comprises:
    根据所述模拟行驶过程中的相机参数,确定在所述模拟行驶过程中所述移动平台的相机拍摄所述模拟行驶环境得到的道路前视图;Determining, according to the camera parameters during the simulated driving, the road front view obtained by the camera of the mobile platform capturing the simulated driving environment during the simulated driving;
    对所述道路前视图进行图像变换,得到道路俯视图;Performing image transformation on the front view of the road to obtain a top view of the road;
    将所述道路俯视图作为所述仿真环境图像。Use the road top view as the simulation environment image.
  12. 根据权利要求9所述的方法,其特征在于,所述获取标记后的仿真环境图像,包括:The method according to claim 9, wherein said acquiring the marked simulated environment image comprises:
    获取包含所述仿真环境图像的图像元素的位置信息和类别信息的模板缓存;Acquiring a template cache containing location information and category information of image elements of the simulation environment image;
    所述对所述标记后的仿真环境图像进行图像渲染,得到标记图像,包括:The performing image rendering on the marked simulated environment image to obtain the marked image includes:
    根据所述模板缓存对所述仿真环境图像进行渲染,得到所述标记图像。Render the simulation environment image according to the template cache to obtain the marked image.
  13. 根据权利要求9所述的方法,其特征在于,所述标记后的仿真环境图像中的图像元素按照类别被赋予了标记值,其中,所述图像元素包含障碍物的图像元素和可行驶区域的图像元素,所述障碍物的图像元素和所述可行驶区域的图像元素的标记值不相同。The method according to claim 9, wherein the image elements in the marked simulated environment image are assigned label values according to their categories, wherein the image elements include image elements of obstacles and image elements of the drivable area, and the label values of the obstacle image elements and the drivable-area image elements are different.
  14. 根据权利要求13所述的方法,其特征在于,所述对所述标记后的仿真环境图像进行图像渲染,得到标记图像,包括:The method according to claim 13, wherein the performing image rendering on the marked simulated environment image to obtain the marked image comprises:
    将所述仿真环境图像的图像元素按照对应的标记值所指示的颜色输出,得到所述标记图像。The image elements of the simulation environment image are output according to the color indicated by the corresponding label value to obtain the label image.
  15. 根据权利要求13所述的方法,其特征在于,所述障碍物的图像元素包括静止障碍物的图像元素和动态障碍物的图像元素,所述静止障碍物的图像元素和所述动态障碍物的图像元素的标记值不相同。The method according to claim 13, wherein the image elements of obstacles include image elements of static obstacles and image elements of dynamic obstacles, and the label values of the static-obstacle image elements and the dynamic-obstacle image elements are different.
  16. 根据权利要求9至15任意一项所述的方法,其特征在于,所述相机参数包括相机的位置信息、朝向信息、渲染模式以及视场角信息中的至少一种。The method according to any one of claims 9 to 15, wherein the camera parameters include at least one of position information, orientation information, rendering mode, and field of view information of the camera.
  17. 一种自动驾驶模拟设备,其特征在于,包括:An automatic driving simulation device, characterized by comprising:
    获取单元,用于获取移动平台的相机在模拟行驶过程中的相机参数;The acquiring unit is used to acquire the camera parameters of the camera of the mobile platform during the simulated driving process;
    渲染单元,用于根据所述模拟行驶过程中的相机参数,确定在所述模拟行驶过程中所述移动平台的相机拍摄所述模拟行驶环境得到的仿真环境图像;A rendering unit, configured to determine, according to the camera parameters during the simulated driving, the simulated environment image obtained by the camera of the mobile platform capturing the simulated driving environment during the simulated driving;
    所述获取单元,还用于获取标记后的仿真环境图像;The acquiring unit is also used to acquire a marked simulation environment image;
    所述渲染单元,还用于对所述标记后的仿真环境图像进行图像渲染,得到标记图像。The rendering unit is also configured to perform image rendering on the marked simulated environment image to obtain a marked image.
  18. 一种自动驾驶模拟设备,其特征在于,包括处理器和存储器,所述处理器和存储器相互连接,其中,所述存储器用于存储计算机程序,所述计算机程序包括程序指令,所述处理器被配置用于调用所述程序指令,用以执行如权利要求9-16任一项所述的方法。An automatic driving simulation device, comprising a processor and a memory connected to each other, wherein the memory is configured to store a computer program, the computer program includes program instructions, and the processor is configured to invoke the program instructions to execute the method according to any one of claims 9-16.
  19. 一种计算机可读存储介质,其特征在于,所述计算机存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令被处理器执行,用以执行如权利要求9-16任一项所述的方法。A computer-readable storage medium, wherein the computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions are executed by a processor to execute the method according to any one of claims 9-16.
Publications (1)

Publication Number Publication Date
WO2020199057A1 2020-10-08

Also Published As

Publication number Publication date
CN111316324A (en) 2020-06-19
