CN117611771A - Method for generating driving auxiliary display image based on image engine - Google Patents

Method for generating driving auxiliary display image based on image engine

Info

Publication number
CN117611771A
CN117611771A (application CN202311419420.9A)
Authority
CN
China
Prior art keywords
image
vehicle
road
generating
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311419420.9A
Other languages
Chinese (zh)
Inventor
张昱
杨小鸣
姚伟
舒培超
王伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Motor Corp
Dongfeng Yuexiang Technology Co Ltd
Original Assignee
Dongfeng Motor Corp
Dongfeng Yuexiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Motor Corp, Dongfeng Yuexiang Technology Co Ltd filed Critical Dongfeng Motor Corp
Priority to CN202311419420.9A
Publication of CN117611771A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2004Aligning objects, relative positioning of parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2008Assembling, disassembling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2012Colour editing, changing, or manipulating; Use of colour codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method for generating a driving assistance image based on an image engine, applied to a vehicle and comprising the following steps: generating a sky box and a road grid with the image engine from the received vehicle-exterior scene image and point cloud data; preloading the road grid, and simplifying off-road grids and distant objects in the road; identifying dynamic objects in the vehicle-exterior scene image, generating dynamic object models, calculating each object's position from information sent by the vehicle's distance sensor, and loading the models at the corresponding positions on the road grid; synthesizing the dynamic object models, the road grid and the sky box into a virtual image corresponding to the received vehicle-exterior scene image; and providing a plurality of virtual cameras so that the driver can observe the virtual image from multiple perspectives. This effectively solves the problems caused by image distortion and image-stitching errors in remote driving.

Description

Method for generating driving auxiliary display image based on image engine
Technical Field
The invention relates to the technical field of remote driving, in particular to a method for generating a driving auxiliary display image based on an image engine.
Background
With the development of science and technology, automatic driving has been applied in many fields, and remote driving serves as a technical complement to it; technologies such as the reversing image and driving assistance displays have accordingly been adopted, improving driving safety and convenience. However, because of camera-view-angle and image-stitching problems, these techniques are limited in handling picture distortion, so the driver cannot accurately judge the distance to other vehicles.
Disclosure of Invention
In view of the above, the invention provides a method for generating a driving assistance image based on an image engine, mainly intended to solve the prior-art problem that a driver cannot accurately judge vehicle distance and vehicle conditions because of image distortion caused by the camera's view angle and by image-stitching errors. The method is applied to a vehicle and comprises the following steps: generating a sky box and a road grid with the image engine from the received vehicle-exterior scene image and point cloud data; preloading the road grid, and simplifying off-road grids and distant objects in the road; identifying dynamic objects in the vehicle-exterior scene image, generating dynamic object models, calculating each object's position from information sent by the vehicle's distance sensor, and loading the models at the corresponding positions on the road grid; synthesizing the dynamic object models, the road grid and the sky box into a virtual image corresponding to the received vehicle-exterior scene image; and providing a plurality of virtual cameras so that the driver can observe the virtual image from multiple perspectives.
Further, generating the road grid specifically comprises: performing image processing on the front ground image within the vehicle-exterior scene image to generate a map; compacting the vehicle-exterior point cloud data and fitting it into a grid; and stitching and loading the generated map and the fitted grid to produce the road grid.
Further, generating the sky box specifically comprises: selecting a sky map preset in the image engine according to the vehicle-body heading and the current time; judging, from the top points in the vehicle-exterior point cloud data, whether there is occlusion overhead; if there is no occlusion, loading the sky map to generate the sky box; and if there is occlusion, generating an occluder mesh model from the upper part of the vehicle-exterior scene image while loading the sky map to generate the sky box.
Further, generating the sky box further comprises: setting a brightness range according to the data collected by the vehicle's illumination sensor, where the maximum of the range is chosen so that the picture neither glares nor washes out, and the minimum so that the image contrast is not reduced.
Further, simplifying the off-road grids and the distant objects in the road specifically means: fitting the off-road grids into white (untextured) models with a low polygon count, and displaying distant objects with low-precision self-generated white models or prefabricated low-precision mesh bodies.
Further, preloading the road grid further comprises: identifying and invoking in-road grids preset in the image engine according to the vehicle-exterior scene image, the in-road grids including guardrails and flower beds.
Further, identifying dynamic objects in the vehicle-exterior scene image and generating the dynamic object models specifically comprises: performing image processing and feature extraction on the vehicle-exterior scene image with a vision AI algorithm to identify dynamic objects; selecting a mesh model according to the extracted category and a material according to the extracted color; obtaining each object's size and motion from the vehicle-exterior point cloud data, scaling the mesh model to match the object's size, and selecting a corresponding animation effect for the object's motion; and forming the dynamic object model once the mesh model has had its material loaded, its size scaled and its animation effect updated; wherein the mesh models, materials and animation effects are preset in the image engine.
Further, the image processing includes: brightness adjustment, distortion adjustment, moving object identification and road identification.
Further, the vehicle distance sensor is a lidar.
Further, the plurality of virtual cameras are specifically three virtual cameras: the first virtual camera looks straight down at the ground (a 90-degree top-down view), its position adjustable up and down within a set range; when it would sit higher than a roof-level obstacle it automatically descends without leaving that range, the lowest position of which still shows 1-3 m in front of and behind the vehicle, while the highest excludes the distorted portions of the vehicle-exterior scene image. The second virtual camera is positioned obliquely above and behind the vehicle at an angle of 30-60 degrees to the ground, its view covering the tail and both side contours of the vehicle. The third virtual camera is positioned above the engine hood, its field of view covering the hood and fenders while minimizing the blind area in front of the vehicle.
With this technical scheme, a virtual image is constructed by the image engine and kept consistent with the real vehicle-exterior scene image, so the display is intuitive and free of distortion and error, making driving judgments easier; the image engine can also adjust the light intensity of the virtual image, solving the prior-art problem of poor display in dim environments and under strong light (such as another vehicle's high beam); and the multiple virtual cameras give the driver multiple viewing angles, greatly improving driving safety and convenience.
Drawings
FIG. 1 is a flow chart of a method for generating a driving assistance image based on an image engine provided by the present application;
FIG. 2 is a flow chart of a method for generating a sky box by an image engine provided herein;
FIG. 3 is a flow chart of a method for an image engine to generate a road grid provided herein;
FIG. 4 is a flowchart of another method for generating a driving assistance image based on an image engine provided herein;
fig. 5 is a schematic diagram of virtual camera position setting provided in the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment one:
As shown in fig. 1, the method for generating a driving assistance image based on an image engine is applied to a vehicle; the steps of the method are as follows.
S1: generating a space box and a road grid by using an image engine according to the received external scenery image of the vehicle and the point cloud data;
s2: preloading road grids, simplifying the grids outside the road and remote objects in the road;
s3: identifying a dynamic object in an external scene image of a vehicle, generating a dynamic object model, calculating the position of the dynamic object according to information sent by a vehicle distance sensor, and loading the dynamic object model at the corresponding position of a road grid;
s4: synthesizing a dynamic object model, a road grid and a sky box into a virtual image, wherein the virtual image corresponds to a received external scene image of a vehicle;
s5: a plurality of virtual cameras are provided so that the driver observes virtual images from a plurality of perspectives.
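Read together, steps S1-S5 describe a pipeline from sensor data to a composed virtual image. The sketch below is a minimal, hypothetical illustration of that flow; every function body is a simplified stand-in (with assumed thresholds and data shapes), not an actual image-engine API.

```python
# Minimal sketch of the S1-S5 pipeline above. Every function here is a
# simplified, hypothetical stand-in for the engine operation the patent
# names; none of this is an actual image-engine API.

def build_sky_box(points):
    # S1: treat any overhead point above ~3 m as an occluder (assumed threshold)
    occluded = any(z > 3.0 for (_x, _y, z) in points)
    return {"kind": "sky_box", "occluder_mesh": occluded}

def build_road_grid(ground_points):
    # S1/S2: collapse ground points into coarse 1 m grid cells
    return {"kind": "road_grid",
            "cells": {(round(x), round(y)) for (x, y, _z) in ground_points}}

def place_dynamic_objects(detections):
    # S3: each detection carries a class label and a lidar-derived position
    return [{"kind": d["label"], "pos": d["pos"]} for d in detections]

def compose_virtual_image(sky, road, objects):
    # S4: the virtual image is the synthesis of the three layers
    return {"sky": sky, "road": road, "objects": objects}

points = [(0.0, 1.0, 0.1), (1.0, 2.0, 0.0), (0.5, 1.5, 4.2)]
detections = [{"label": "car", "pos": (3.0, 10.0, 0.0)}]
image = compose_virtual_image(
    build_sky_box(points),
    build_road_grid([p for p in points if p[2] < 0.5]),
    place_dynamic_objects(detections),
)  # S5 (multiple virtual cameras) would then render this composed scene
```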
With this technical scheme, the image engine synthesizes a virtual image corresponding to the received vehicle-exterior scene image, and a remote driver can make driving judgments from it. Because the virtual image is vivid and intuitive and consistent with the real vehicle-exterior scene, the image-distortion and image-stitching problems of the prior art are effectively avoided, the driver can readily judge the vehicle's situation, and driving safety and convenience are greatly improved.
Embodiment two:
The method for generating the driving assistance image based on the image engine is applied to a vehicle and comprises the following steps.
S1: generating a space box and a road grid by using an image engine according to the received external scenery image of the vehicle and the point cloud data;
the image engine is a software engine for processing images, such as a fantasy engine, which can provide tools and application programming interfaces for developers to create electronic games, create graphics and visualizations, including content ranging from artificial intelligence and animation to physical simulation and audio aspects, which can use instant messaging technology to convert ideas into attractive visualizations, and which can be extended, functional, and custom-built to support 2D and 3D scenes.
Various assets are built into the image engine, including static meshes, materials and map packs, animations, and special effects. These assets are used for scene setup: the environment around the real vehicle is simulated with a sky map, i.e. a sky box is generated, with the specific steps shown in fig. 2.
S11: selecting a sky map preset by an image engine according to the direction of the vehicle body and the current time;
the image engine is used for environmental setting, i.e. setting environmental parameters such as time, weather, atmospheric scattering, etc., which can be acquired by various sensors on the vehicle. The image engine judges the sunlight direction according to the vehicle body direction and the current time, selects a proper sky map and gives the sky box. The built-in assets in the image engine comprise various prefabricated weather and sky maps of time periods.
S12: judging whether the upper part is shielded or not according to the top point cloud data in the external point cloud data of the vehicle, if not, turning to the step S13, and if so, turning to the step S14;
since various sensors, such as an image sensor, a distance sensor, an illumination sensor, etc., are mounted on the unmanned vehicle, these sensors can collect various data, including point cloud data. The image engine can judge whether a shielding object such as a tree shadow, a backdrop and the like exists above the vehicle according to the point cloud data at the top of the vehicle.
S13: loading a sky map to generate a sky box;
s14: and generating a shielding object grid model according to an upper image in the scene image outside the vehicle, and simultaneously loading a sky map to generate a sky box.
And (3) through the judgment of the step S12, if the space is not shielded, the sky map selected in the step S11 can be directly loaded to generate a space box. If the shielding exists, a shielding object grid model is needed to be generated, and meanwhile, the shielding object grid model and the sky map selected in the step S11 are loaded to generate a sky box.
After the sky map is loaded, the image engine can also adjust brightness according to the data collected by the vehicle's illumination sensor. The adjustment range is preferably such that at its maximum the picture neither glares nor washes out, and at its minimum the image contrast is not reduced. Compared with the prior art, this solves the problem of poor image display in dim environments and under strong light (such as another vehicle's high beam).
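The bounded brightness adjustment can be sketched as a clamped gain. All numeric values below (the reference level and both bounds) are illustrative assumptions; only the clamping idea comes from the description.

```python
def scene_light_gain(sensor_lux, lo=0.3, hi=1.6, ref_lux=100.0):
    """Map an illumination-sensor reading to a virtual-scene light
    multiplier, clamped to a range whose upper bound avoids glare /
    white-out and whose lower bound preserves contrast. The reference
    level and both bounds are illustrative assumptions."""
    gain = ref_lux / max(sensor_lux, 1.0)  # darker outside -> boost the scene
    return min(max(gain, lo), hi)
```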
The image engine may also generate a road grid, the specific steps of which are shown in fig. 3.
S21: performing image processing on the front ground image within the vehicle-exterior scene image to generate a map;
S22: compacting the vehicle-exterior point cloud data and fitting it into a grid;
S23: stitching and loading the generated map and the fitted grid to produce the road grid.
That is, the image engine performs image processing (brightness adjustment, distortion correction and road identification) on the front ground image within the vehicle-exterior scene image to generate a road map; it then simplifies the vehicle-exterior point cloud data and fits it into a grid; finally it stitches the map onto the grid to produce a road grid matching the road in the vehicle-exterior image.
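The point cloud "compaction" in S22 can be sketched as a simple voxel-style reduction that keeps one averaged point per square cell before grid fitting. The cell size is an illustrative assumption; the patent does not specify the reduction algorithm.

```python
def compact_ground_cloud(points, cell=0.5):
    """Reduce ("compact") a ground point cloud before grid fitting by
    keeping one averaged point per square cell. The cell size and the
    averaging strategy are illustrative assumptions."""
    bins = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))   # which cell the point falls in
        bins.setdefault(key, []).append((x, y, z))
    # replace each cell's points with their centroid
    return [tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for pts in bins.values()]
```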
After the road grids are arranged, various objects on the road, including vehicles, pedestrians, guardrails, plants and the like, can be adjusted and updated at any time according to the subsequently received external images of the vehicles.
S2: preloading the road grid, and simplifying off-road grids and distant objects in the road;
S3: identifying dynamic objects in the vehicle-exterior scene image, generating dynamic object models, calculating each object's position from information sent by the vehicle's distance sensor, and loading the models at the corresponding positions on the road grid;
S4: synthesizing the dynamic object models, the road grid and the sky box into a virtual image corresponding to the received vehicle-exterior scene image;
S5: providing a plurality of virtual cameras so that the driver can observe the virtual image from multiple perspectives.
With this technical scheme, the image engine synthesizes a virtual image corresponding to the received vehicle-exterior scene image, and a remote driver can make driving judgments from it. Because the virtual image is vivid and intuitive and consistent with the real vehicle-exterior scene, the image-distortion and image-stitching problems of the prior art are effectively avoided and the driver can readily judge the vehicle's situation; at the same time, adjusting the light intensity of the virtual image solves the prior-art problem of poor display in dim environments and under strong light (such as another vehicle's high beam), greatly improving driving safety and convenience.
Embodiment III:
As shown in fig. 1, the method for generating a driving assistance image based on an image engine is applied to a vehicle; the steps of the method are as follows.
S1: presetting a space box and a road grid by using an image engine according to the received external scenery image of the vehicle and the point cloud data;
s2: preloading road grids, simplifying the grids outside the road and remote objects in the road;
After step S1, the broad frame of the virtual environment outside the vehicle is initially complete, and detail optimization then proceeds from the received vehicle-exterior scene image. When the road grid is loaded, the off-road grids and the distant objects in the road are processed first. Because scenery outside the road and distant objects do not affect normal driving, and to keep excess objects from distracting the driver, the off-road grids are usually simplified and fitted into white models with a low polygon count and preloaded in this low-detail form; distant objects are displayed with low-precision self-generated white models or prefabricated low-precision meshes. As the vehicle moves forward, distant objects gradually come closer, and once an object can be identified it can be rendered at high precision with the image engine's built-in assets, building a more realistic virtual scene. How nearby objects in the road are identified and displayed is described in detail in the following steps.
Because scenery such as guardrails and flower beds usually lines both sides of a road, faithfully restoring the real scene requires identifying and invoking the in-road grids preset in the image engine according to the vehicle-exterior scene image. These in-road grids are assets built into the engine and include guardrails, flower beds and the like. Off-road scenery and obstructions are, as described above, fitted and reduced from the point cloud data to simple white-model grids for display, reducing the image engine's computation.
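The detail strategy above amounts to a small level-of-detail chooser: off-road scenery gets a fitted white model, distant or unidentified objects get coarse white meshes, and near identified objects get full asset rendering. The sketch below illustrates this; the 100 m threshold and the level names are assumptions, not values from the patent.

```python
def choose_detail_level(distance_m, on_road, identified):
    """Pick a mesh detail level following the simplification strategy
    described above. The 100 m cutoff and the level names are
    illustrative assumptions."""
    if not on_road:
        return "low_poly_white_model"        # off-road scenery: fitted white model
    if distance_m > 100.0 or not identified:
        return "low_precision_white_model"   # distant or not-yet-identified object
    return "high_precision_asset"            # near, identified: engine asset render
```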
S3: identifying dynamic objects in the vehicle-exterior scene image, generating dynamic object models, calculating each object's position from information sent by the vehicle's distance sensor, and loading the models at the corresponding positions on the road grid;
S4: synthesizing the dynamic object models, the road grid and the sky box into a virtual image corresponding to the received vehicle-exterior scene image;
S5: providing a plurality of virtual cameras so that the driver can observe the virtual image from multiple perspectives.
With this technical scheme, the image engine synthesizes a virtual image corresponding to the received vehicle-exterior scene image, and a remote driver can make driving judgments from it. Because the virtual image is vivid and intuitive and consistent with the real vehicle-exterior scene, the image-distortion and image-stitching problems of the prior art are effectively avoided, the driver can readily judge the vehicle's situation, and driving safety and convenience are greatly improved.
Embodiment four:
As shown in fig. 4, the method for generating the driving assistance image based on an image engine is applied to a vehicle; the steps of the method are as follows.
S1: presetting a space box and a road grid by using an image engine according to the received external scenery image of the vehicle and the point cloud data;
s2: preloading road grids, simplifying the grids outside the road and remote objects in the road;
s3: identifying a dynamic object in an external scene image of a vehicle, generating a dynamic object model, calculating the position of the dynamic object according to information sent by a vehicle distance sensor, and loading the dynamic object model at the corresponding position of a road grid;
As noted above, the image engine's built-in assets contain common mesh models of vehicles, pedestrians and common animals of various sizes and types, together with their materials, map packs, animations and special effects (e.g. vehicle motion animations for moving forward, turning and braking, dynamic animations for vegetation and trees, and animations for pedestrians and common animals). The image engine uses these assets to generate dynamic object models from the vehicle-exterior scene image, as follows.
S31: performing image processing and feature extraction on the external scenery image of the vehicle by using a vision AI algorithm, and identifying a dynamic object;
Under normal conditions, dynamic objects such as vehicles and pedestrians usually appear in the middle of the road, so the vehicle-exterior scene image undergoes image processing and feature extraction with a vision AI algorithm, and the categories and colors of dynamic objects are identified from the extracted features. The image processing here generally includes brightness adjustment, distortion adjustment, moving-object recognition and road recognition; since these methods are prior art, they are not detailed here.
S32: selecting a mesh model according to the extracted category, and selecting a material according to the extracted color;
The detection and recognition in step S31 first determines the type of object to be recognized, such as a vehicle (car, SUV, MPV, bus, light truck, heavy truck, etc.), a pedestrian (adult, child) or an animal; the mesh model corresponding to that type is read from the image engine's built-in assets, the object's color is recognized at the same time, and a material of that color is selected from the built-in assets and loaded onto the mesh model. For example, if the AI vision algorithm detects that a dynamic object is a vehicle, the body color is then recognized and a paint material of that color is applied to the vehicle body.
S33: obtaining the object's size and motion from the vehicle-exterior point cloud data, scaling the mesh model to match the object's size, and selecting a corresponding animation effect for the object's motion;
and (3) acquiring the length, width and height of each object, namely the size of the object, through the received external point cloud data of the vehicle, and scaling the model grid loaded with the materials in the step S32 according to the size of the object. And simultaneously, acquiring the azimuth angle and the distance of the object, calculating the vector speed of the object, selecting corresponding animation special effects, and updating the grid action of the model.
S34: forming the dynamic object model once the mesh model has had its material loaded, its size scaled and its animation effect updated.
After the operations of steps S32 and S33, the mesh model built into the image engine becomes a dynamic object model matching the shape, color, motion and size of the object in the vehicle-exterior scene image.
Once the dynamic object model is complete, the object's position and direction are calculated from the information sent by the vehicle's distance sensor, usually a lidar. Specifically, take the radar center as the origin and let the object's position be (x, y, z); let R be the spatial distance from the origin to that position, ω the angle between the line from the scan point to the origin and the horizontal plane, and α the angle between the horizontal projection of that line and the Y axis. Then x = R·cos(ω)·sin(α), y = R·cos(ω)·cos(α), and z = R·sin(ω). After the position is calculated, the dynamic object model is loaded onto the corresponding position of the road grid.
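The position formulas above translate directly into code; the sketch below simply applies them, taking angles in degrees for convenience.

```python
import math

def lidar_to_cartesian(r, omega_deg, alpha_deg):
    """Apply the position formulas from the description: with the radar
    center as origin, elevation omega against the horizontal plane and
    azimuth alpha of the horizontal projection measured from the Y axis,
    x = R*cos(w)*sin(a), y = R*cos(w)*cos(a), z = R*sin(w)."""
    w = math.radians(omega_deg)
    a = math.radians(alpha_deg)
    return (r * math.cos(w) * math.sin(a),
            r * math.cos(w) * math.cos(a),
            r * math.sin(w))
```

For example, a return at range 10 m with zero elevation and zero azimuth lies straight ahead along the Y axis, and one at 90 degrees elevation lies directly overhead.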
S4: synthesizing a dynamic object model, a road grid and a sky box into a virtual image, wherein the virtual image corresponds to a received external scene image of a vehicle;
since the space box, the road mesh and the dynamic object model are all generated from the received vehicle external scene image, the virtual image thus generated corresponds exactly to the real image. And because the image engine displays complete images and illumination effects through setting a series of steps of scene arrangement, environment arrangement, import resources, material design, light source setting, shadow setting, post setting and the like, the realistic and perfect picture presentation is realized. Compared with the images acquired by the vision sensor of the vehicle, the display effect of surrounding vehicles and the environment images is more visual, the immersion effect and the rendering effect are better, the movement display of the surrounding vehicles is smooth and smooth, no obvious abrupt change or frame jump occurs, the display precision is high, no obvious lens distortion or image splicing errors occur, and the driver can judge the distance and the condition of the vehicle more conveniently.
S5: a plurality of virtual cameras are provided so that the driver observes virtual images from a plurality of perspectives.
With this technical scheme, the image engine synthesizes a virtual image corresponding to the received vehicle-exterior scene image, and a remote driver can make driving judgments from it. Because the virtual image is vivid and intuitive and consistent with the real vehicle-exterior scene, the image-distortion and image-stitching problems of the prior art are effectively avoided, the driver can readily judge the vehicle's situation, and driving safety and convenience are greatly improved.
Fifth embodiment:
As shown in fig. 1, the method for generating a driving assistance image based on an image engine is applied to a vehicle; the steps of the method are as follows.
S1: presetting a sky box and a road grid by using an image engine according to the received external scene image of the vehicle and the point cloud data;
S2: preloading the road grid, and simplifying the grids outside the road and distant objects in the road;
S3: identifying a dynamic object in the external scene image of the vehicle, generating a dynamic object model, calculating the position of the dynamic object according to information sent by a vehicle distance sensor, and loading the dynamic object model at the corresponding position of the road grid;
S4: synthesizing the dynamic object model, the road grid and the sky box into a virtual image, wherein the virtual image corresponds to the received external scene image of the vehicle;
S5: providing a plurality of virtual cameras so that the driver can observe the virtual image from a plurality of perspectives.
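The steps S1-S5 above can be sketched as a single pipeline. The following is a minimal illustration only; the patent names no concrete engine API, so all data structures, names and thresholds (e.g. the 50 m simplification distance) are invented assumptions:

```python
# Hypothetical sketch of steps S1-S5; every identifier here is illustrative.
def build_virtual_image(point_cloud, distance_info, far=50.0):
    # S1: preset sky box and road grid from the scene image and point cloud
    sky_box = "sky_map_day"                      # chosen by heading / time of day
    road_grid = [{"vertex": p,
                  "distance": (p[0] ** 2 + p[1] ** 2) ** 0.5}
                 for p in point_cloud]           # fitted road grid
    # S2: preload the grid; simplify distant meshes into low-poly "white models"
    for mesh in road_grid:
        mesh["lod"] = "white_model" if mesh["distance"] > far else "full"
    # S3: load dynamic object models at positions from the distance sensor
    objects = {oid: {"model": "car_lowpoly", "pos": pos}
               for oid, pos in distance_info.items()}
    # S4: composite sky box + road grid + dynamic models into one virtual image
    return {"sky_box": sky_box, "road_grid": road_grid, "objects": objects}
```

In a real engine the "white model" would be a reduced-face-count mesh as described in claim 5; here it is only a tag on the grid entry.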
The virtual camera referred to here is essentially a point in virtual space through which the simulated environment is observed. Compared with the fixed display center of the prior art, multiple virtual cameras give the driver several viewing angles and a more accurate, simple and intuitive picture display, facilitating driving judgments. As shown in fig. 5, three virtual cameras are generally used; they are set to follow the vehicle body with a field of view of approximately 90-110 degrees, are executed at a fixed frame rate, and have view-following damping and smooth transitions added, so that motion sickness is less likely to occur. The specific positions are as follows: the first virtual camera forms a 90-degree top-down view with the ground, and its height is adjusted within a set range; when it would be above a roof obstacle, it automatically descends without leaving the set range, where the lowest position of the range still shows 1-3 m in front of and behind the vehicle, and the highest position excludes distorted views of the external scene image of the vehicle. The second virtual camera is positioned obliquely above and behind the vehicle at an angle of 30-60 degrees to the ground, with its view covering the tail and both side contours of the vehicle. The third virtual camera is positioned above the engine hood, with its view covering the hood and the fenders while avoiding the front blind zone as much as possible.
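A hedged sketch of the three-camera arrangement described above: the numeric limits (90-110 degree field of view, 30-60 degree tilt, height clamped below roof obstacles) come from the text, while all coordinates, names and default values are illustrative assumptions:

```python
# Illustrative three-virtual-camera setup; not an actual engine API.
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def top_down_camera_height(desired, roof_obstacle_h, h_min, h_max):
    # Camera 1: 90-degree top-down view; descend below roof obstacles
    # but never leave the set range [h_min, h_max].
    h = clamp(desired, h_min, h_max)
    if h > roof_obstacle_h:
        h = clamp(roof_obstacle_h, h_min, h_max)
    return h

def make_cameras(vehicle_pos):
    x, y, z = vehicle_pos
    return [
        {"name": "top_down", "fov": 100, "pitch": -90,   # straight down
         "pos": (x, y, top_down_camera_height(8.0, 6.0, 2.0, 10.0))},
        {"name": "chase",    "fov": 100, "pitch": -45,   # 30-60 deg to ground
         "pos": (x - 6.0, y, z + 4.0)},                  # obliquely behind
        {"name": "hood",     "fov": 100, "pitch": -10,   # over the engine hood
         "pos": (x + 1.5, y, z + 1.2)},
    ]
```

Each camera would additionally be attached to the vehicle body and smoothed frame-to-frame; that damping logic is engine-specific and omitted here.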
According to this technical scheme, a virtual image corresponding to the received external scene image of the vehicle is synthesized by the image engine, and a driver driving remotely can make driving judgments from the virtual image. Because the virtual image is vivid and intuitive and consistent with the real external scene image of the vehicle, the problems of image distortion and image-stitching errors in the prior art are effectively solved, and the driver can conveniently and intuitively judge the condition of the vehicle. The multiple virtual cameras provided in this technical scheme give the driver a plurality of viewing angles, greatly improving driving safety and convenience.
In summary, the embodiments of the invention provide a method for generating a driving assistance image based on an image engine. The image engine is used to synthesize a virtual image corresponding to the received external scene image of the vehicle, so that the image display is intuitive and the rendering effect is good; the problems of image distortion and image-stitching errors in the prior art are effectively overcome, and the driver is helped to judge distances and the condition of the vehicle.
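As an illustration of the dynamic-object step (S3): a vision AI yields the object's category and color, the point cloud yields its size and action, and preset mesh, material and animation resources are looked up and the mesh scaled. The following sketch uses invented preset tables; all mesh, material and animation names are hypothetical:

```python
# Hypothetical preset tables; a real image engine would store these as assets.
PRESET_MESHES = {"car":        {"mesh": "car_base",   "base_size": 4.5},
                 "pedestrian": {"mesh": "human_base", "base_size": 1.7}}
PRESET_ANIMS  = {"moving": "anim_move", "stopped": "anim_idle"}

def build_dynamic_object(category, color, size, action):
    preset = PRESET_MESHES[category]
    scale = size / preset["base_size"]        # scale mesh by size difference
    return {"mesh":      preset["mesh"],
            "material":  f"mat_{color}",      # material chosen by color
            "scale":     round(scale, 2),
            "animation": PRESET_ANIMS[action]}
```

The resulting record corresponds to one dynamic object model, which would then be loaded at the position reported by the vehicle distance sensor.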
The foregoing description covers only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent replacement, improvement or the like made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

1. A method for generating a driving assistance image based on an image engine, applied to a vehicle, comprising:
generating a sky box and a road grid by using an image engine according to the received external scene image of the vehicle and the point cloud data;
preloading road grids, simplifying the grids outside the road and remote objects in the road;
identifying a dynamic object in an external scene image of a vehicle, generating a dynamic object model, calculating the position of the dynamic object according to information sent by a vehicle distance sensor, and loading the dynamic object model at the corresponding position of a road grid;
synthesizing a dynamic object model, a road grid and a sky box into a virtual image, wherein the virtual image corresponds to a received external scene image of a vehicle;
a plurality of virtual cameras are provided so that the driver observes virtual images from a plurality of perspectives.
2. The method for generating a driving assistance image based on an image engine according to claim 1, wherein the generating a road grid specifically includes:
performing image processing on a front ground image in an external scene image of the vehicle to generate a map;
performing point cloud thinning on the external point cloud data of the vehicle, and fitting a grid;
and carrying out splicing loading on the generated map and the fitted grid to generate the road grid.
3. The method for generating a driving assistance image based on an image engine according to claim 1, wherein the generating a sky box specifically comprises:
selecting a sky map preset by an image engine according to the direction of the vehicle body and the current time;
judging whether there is occlusion above the vehicle according to the top point cloud data in the external point cloud data of the vehicle;
if there is no occlusion, loading the sky map to generate the sky box;
and if there is occlusion, generating an occluding-object grid model according to the upper image in the external scene image of the vehicle, and simultaneously loading the sky map to generate the sky box.
4. A method of generating a driving assistance image based on an image engine according to claim 3, wherein the generating a sky box further comprises: setting a brightness range according to data acquired by the vehicle illumination sensor, wherein the maximum value of the brightness range is set so that the picture does not exhibit glare or washout, and the minimum value so that the image contrast is not reduced.
5. The method for generating a driving assistance image based on an image engine according to claim 1, wherein simplifying the grids outside the road and distant objects in the road specifically comprises: fitting the off-road grid into a white model with a lower face count, and displaying distant objects with a low-precision auto-generated white model or a prefabricated low-precision grid body.
6. The method for generating a driving assistance image based on an image engine according to claim 1, wherein said preloading road grid further comprises: and identifying and calling an in-road grid preset by an image engine according to the external scenery image of the vehicle, wherein the in-road grid comprises guardrails and flower beds.
7. The method for generating a driving assistance image based on an image engine according to claim 1, wherein the identifying a dynamic object in a scene image outside a vehicle, generating a dynamic object model specifically includes:
performing image processing and feature extraction on the external scenery image of the vehicle by using a vision AI algorithm, and identifying a dynamic object;
selecting a model grid according to the extracted category, and selecting a material according to the extracted color;
the method comprises the steps of obtaining the size and the action of an object through the external point cloud data of a vehicle, scaling a model grid according to the size difference of the object, and selecting a corresponding animation special effect according to the action of the object;
after the model grid is subjected to material loading, size scaling and corresponding animation special effect updating, a dynamic object model is formed;
wherein, the model grids, materials and animation special effects are preset by an image engine.
8. The method for generating a driving assistance image based on an image engine according to claim 7, wherein the image processing includes: brightness adjustment, distortion adjustment, moving object identification and road identification.
9. The method for generating a driving assistance image based on an image engine according to claim 1, wherein the vehicle distance sensor is a lidar.
10. The method for generating a driving assistance image based on an image engine according to claim 1, wherein the setting a plurality of virtual cameras, in particular three virtual cameras, comprises:
the first virtual camera forms a 90-degree overlooking view angle with the ground, the position is adjusted up and down in a set range, when the first virtual camera is higher than a roof obstacle, the first virtual camera automatically descends and does not exceed the set range, the lowest position of the range is capable of seeing 1-3m in front and behind a vehicle, and the highest position is capable of excluding a distorted view of a scene image outside the vehicle;
the second virtual camera is positioned obliquely above the rear of the vehicle, forms an included angle of 30-60 degrees with the ground, and covers the tail and the contours of two sides of the vehicle in view;
the third virtual camera is positioned above the engine cover, the view field covers the engine cover and the fender, and meanwhile, blind areas of the engine head are avoided as much as possible.
CN202311419420.9A 2023-10-30 2023-10-30 Method for generating driving auxiliary display image based on image engine Pending CN117611771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311419420.9A CN117611771A (en) 2023-10-30 2023-10-30 Method for generating driving auxiliary display image based on image engine


Publications (1)

Publication Number Publication Date
CN117611771A true CN117611771A (en) 2024-02-27

Family

ID=89955130




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination