CN110758243B - Surrounding environment display method and system in vehicle running process - Google Patents

Surrounding environment display method and system in vehicle running process

Info

Publication number
CN110758243B
CN110758243B CN201911054420.7A
Authority
CN
China
Prior art keywords
vehicle
data
model
module
surrounding environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911054420.7A
Other languages
Chinese (zh)
Other versions
CN110758243A (en)
Inventor
王磊
贺磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dilu Technology Co Ltd
Original Assignee
Dilu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dilu Technology Co Ltd filed Critical Dilu Technology Co Ltd
Priority to CN201911054420.7A priority Critical patent/CN110758243B/en
Publication of CN110758243A publication Critical patent/CN110758243A/en
Application granted granted Critical
Publication of CN110758243B publication Critical patent/CN110758243B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q9/00Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Mechanical Engineering (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method and a system for displaying the surrounding environment while a vehicle is running. In the method, an image module collects object parameters around the vehicle and processes them to generate an environment data set; an identification module receives the processed environment data and identifies its specific type; an existing model matching the identified type is retrieved from a model library, or a new model is generated automatically; and a display module performs a three-dimensional reconstruction display of the surrounding environment using the retrieved and/or automatically generated models, warning of dangerous objects around the vehicle once the scene is reproduced. The beneficial effects of the invention are: by reconstructing the environment around the vehicle body, occupants are helped to clearly understand the relationship between the vehicle body and its surroundings, the user is prompted, and risks are avoided; the user's driving experience and driving safety are improved.

Description

Surrounding environment display method and system in vehicle running process
Technical Field
The invention relates to the technical field of human-computer interaction image vision for automobiles, in particular to a method for displaying the surrounding environment while a vehicle is running and a system based on the method.
Background
Existing products, such as Tesla's, use vehicle-body radar or cameras to collect environmental data, restore the vehicle's surroundings through an in-vehicle computing unit, and display the result on devices such as the head unit, head-up display, or instrument panel. However, current vehicle technology on the market can only simulate the appearance of objects around the vehicle body that affect driving; it cannot reproduce the real surrounding environment sensed through cameras and other sensors, so the content made available to the user is limited.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract, and in the title of the application to avoid obscuring their purpose; such simplifications or omissions should not be used to limit the scope of the invention.
The present invention has been made in view of the above-described problems occurring in the prior art.
Therefore, the technical problem solved by the invention is: providing a method for displaying the surrounding environment while a vehicle is running which, by reconstructing the environment around the vehicle body, helps occupants clearly determine the relationship between the vehicle body and the surroundings, prompts the user, and avoids risks.
In order to solve the above technical problem, the invention provides the following technical scheme: a method for displaying the surrounding environment while a vehicle is running, comprising the steps of: an image module collects object parameters around the vehicle and processes them to generate an environment data set; an identification module receives the processed environment data and identifies its specific type; an existing model matching the type identified by the identification module is retrieved from a model library, or a new model is generated automatically; and a display module performs a three-dimensional reconstruction display of the surrounding environment using the retrieved and/or automatically generated models, warning of dangerous objects around the vehicle once the scene is reproduced.
As a preferable mode of the method for displaying the surrounding environment during the running of the vehicle according to the present invention: in the image module, object parameters around the vehicle are rapidly scanned by a camera, a vehicle-mounted radar and sensors and sent to an image processing module, which processes them to generate the environment data set; the collected data include the actual three-dimensional parameters of driving-related objects such as roads, pedestrians, vehicles, surrounding buildings, street lamps and trees.
As a preferable mode of the method for displaying the surrounding environment during the running of the vehicle according to the present invention: the model library is populated by loading map-vendor models, by manually creating models stored in the model library, and by automatically generating new models.
As a preferable mode of the method for displaying the surrounding environment during the running of the vehicle according to the present invention, the automatic generation of a new model comprises the following steps: objects whose spatial attributes contain no variables are loaded from map-vendor models, made manually, or stored in the model library as point cloud models; objects whose spatial attributes contain variables and which can influence the running of the vehicle are modelled manually and stored in the model library; the image module collects object parameters to confirm the outer contour of the object; and the stored model in the model library automatically stretches or shrinks its variable part according to a comparison with the confirmed outer-contour parameters, generating a new model conforming to the actual object's outer contour.
As a preferable mode of the method for displaying the surrounding environment during the running of the vehicle according to the present invention, the identification module grades the environment data set into: a self-positioning level, which locates the position information of the vehicle with the aid of reference objects; an environment level, used for reconstructing and displaying a three-dimensional scene of the vehicle's surroundings; a prompt level, which identifies dangerous data in the environment data set, displays prompt labels in the reconstructed three-dimensional scene, and prompts the driver; and a warning level, which identifies collision data in the environment data set and displays warning marks and related maps in the reconstructed three-dimensional scene.
As a preferable mode of the method for displaying the surrounding environment during the running of the vehicle according to the present invention, the identification module classifies the environment data set into: display data of the vehicle and the vehicle state, comprising collected three-dimensional model data of the vehicle body and vehicle-body running-state data; road surface or geographic information, comprising basic road-surface information and data on surrounding terrain relief; traffic signs or markings, comprising data on roadside signs and road blocks; and traffic participant data, comprising pedestrians, vehicles, buildings and obstacles.
As a preferable mode of the method for displaying the surrounding environment during the running of the vehicle according to the present invention: the traffic signs include warning signs, prohibition signs, indication signs, road guide signs, tourist-area signs and other signs; the traffic markings include prohibition markings, indication markings and warning markings.
As a preferable mode of the method for displaying the surrounding environment during the running of the vehicle according to the present invention: according to the different driving scenarios produced by speed changes while the vehicle is running, the display module presents environment-reconstruction pictures based on different fields of view around the vehicle; during high-speed driving the driver can see farther ahead and more of the environment behind the vehicle, while during low-speed driving the camera angle becomes steeper and the environment within roughly 5-10 meters of the vehicle body is presented.
As a preferable mode of the method for displaying the surrounding environment during the running of the vehicle according to the present invention, the scene presentation of the field of view corresponding to each driving scenario comprises the following steps: the image module collects the current speed data of the vehicle; the adjustment module distinguishes: a high-speed state, v ≥ 120 km/h, corresponding to a presented viewing range of 200 m in front of the vehicle, 50 m behind, and 10 m to the left and right; a fast state, 100 km/h ≤ v < 120 km/h, corresponding to a presented viewing range of 100 m in front, 20 m behind, and 6 m to the left and right; a low-speed state, 0 ≤ v ≤ 40 km/h, corresponding to a presented viewing range of 10 m in front, 6 m behind, and 5 m to the left and right; and a reversing state, with the gear lever in the R position, corresponding to a presented viewing range of 5 m in front, 10 m behind, and 10 m to the left and right.
Therefore, another technical problem solved by the present invention is: providing a display system to which the above method can be applied.
In order to solve the above technical problem, the invention provides the following technical scheme: a system for displaying the surrounding environment while a vehicle is running, comprising an image module, an identification module, a model library, a display module and an adjustment module. The image module collects and processes data on the vehicle and its surroundings; the identification module is connected with the image module and receives the data it collects in order to identify the data type; the model library is connected with the identification module and retrieves and/or generates a new model according to the data type identified by the identification module; the display module displays the models selected from the model library on a head-up display of the vehicle; and the adjustment module is connected with the image module and adjusts the displayed scene to different fields of view according to the collected vehicle speed data.
The beneficial effects of the invention are: by reconstructing the environment around the vehicle body, occupants are helped to clearly understand the relationship between the vehicle body and its surroundings, the user is prompted, and risks are avoided; the user's driving experience and driving safety are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flow chart of a method for displaying the surrounding environment during the driving of a vehicle according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of classification according to a first embodiment of the present invention;
FIG. 3 is a schematic illustration of a traffic sign/marking according to a first embodiment of the present invention;
FIG. 4 is a schematic diagram of a level annotation according to a first embodiment of the invention;
FIG. 5 is a schematic diagram showing the classification of the positional/relative motion relationship according to the first embodiment of the present invention;
FIG. 6 is a schematic view of the safety distance according to the first embodiment of the present invention;
FIG. 7 is a schematic diagram of constructing a three-dimensional scene according to a first embodiment of the invention;
FIG. 8 is a schematic view of the visual field as a function of vehicle speed gear according to a second embodiment of the present invention;
FIG. 9 is a schematic overall structure of a system for displaying an ambient environment during driving of a vehicle according to a third embodiment of the present invention;
FIG. 10 is a schematic diagram of a practical application scenario according to a third embodiment of the present invention;
FIG. 11 is a schematic diagram of a third embodiment of the present invention;
fig. 12 is another schematic diagram of a practical application scenario according to a third embodiment of the present invention.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While the embodiments of the present invention have been illustrated and described in detail in the drawings, the cross-sectional view of the device structure is not to scale in the general sense for ease of illustration, and the drawings are merely exemplary and should not be construed as limiting the scope of the invention. In addition, the three-dimensional dimensions of length, width and depth should be included in actual fabrication.
Also in the description of the present invention, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Example 1
Referring to the illustrations of figs. 1 to 4, the present embodiment proposes a method for displaying the surrounding environment while a vehicle is running, applied to human-computer interaction image vision for automobiles. It includes, but is not limited to, presenting a three-dimensional model scene of the surroundings of the vehicle body 600 on the automobile instrument panel, the HUD (head-up display) or the head unit; by reconstructing the environment around the vehicle body, occupants are helped to clearly determine the relationship between the vehicle body and the surroundings, the user is prompted, and risks are avoided, improving the user's driving experience and driving safety. Through 3D modelling, the surroundings of the vehicle body are reproduced without dead angles: the environment within 20-50 meters of the vehicle body is reproduced from far to near, together with the elements posing a running risk to the vehicle body. Specifically, the method comprises the following steps.
the image module 100 processes object parameters collected around the vehicle to generate an environment data set; in the step, the image module 100 includes that object parameters around a vehicle are rapidly scanned through the camera 101, the vehicle-mounted radar 102 and the sensor 103 and sent to the image processing module 104 to be processed and generated into an environment data set; and the collected data comprise actual object three-dimensional parameters of roads, pedestrians, vehicles and surrounding buildings, street lamps and trees related to driving.
The identification module 200 receives the processed environment data and identifies its specific type. This step comprises grading of the environment data set by the identification module 200: a self-positioning level, which locates the position information of the vehicle with the aid of reference objects; an environment level, used for reconstructing and displaying a three-dimensional scene of the vehicle's surroundings; a prompt level, which identifies dangerous data in the environment data set, displays prompt labels in the reconstructed three-dimensional scene and prompts the driver; and a warning level, which identifies collision data in the environment data set and displays warning labels and related maps in the reconstructed three-dimensional scene. The step also comprises classification of the environment data set by the identification module 200 into: display data of the vehicle and the vehicle state, comprising collected three-dimensional model data of the vehicle body and vehicle-body running-state data; road surface or geographic information, comprising basic road-surface information and data on surrounding terrain relief; traffic signs or markings, comprising data on roadside signs and road blocks; and traffic participant data, comprising pedestrians, vehicles, buildings and obstacles. The traffic signs include warning signs, prohibition signs, indication signs, road guide signs, tourist-area signs and other signs; the traffic markings include prohibition markings, indication markings and warning markings. Referring to figs. 5 to 6, this step further includes a position/relative-motion classification that considers the influence of the current steering-wheel angle and its increment, and distinguishes between reversing of the vehicle body 600 and speeds of about 20 km/h, where P is a priority level, D is a safety distance and R is the reversing gear. One table gives the range of interest around the vehicle for the cases of R gear, speed ≤ 20 km/h and speed > 20 km/h, the left/right entries (steering plus steering-angle margin) covering the sector that must be watched once the steering angle takes effect when turning. The other table gives, for different vehicle speeds, the conservative braking safety distance and the conservative safety distance after a buffer is added, the buffer time being 1 s; a sketch of such a computation follows.
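For illustration, a minimal Python sketch of such a conservative safety distance is given below: the distance travelled during the 1 s buffer is added to the braking distance. The braking deceleration value used is an assumption for illustration and is not specified in the text.

```python
def safety_distance(speed_kmh: float, buffer_s: float = 1.0,
                    decel_mps2: float = 7.5) -> float:
    """Conservative safety distance D for a given vehicle speed.

    Braking distance v^2 / (2a) plus the distance travelled during the
    buffer time (1 s in the text). decel_mps2 is an assumed braking
    deceleration, not a value taken from the patent.
    """
    v = speed_kmh / 3.6                    # convert km/h to m/s
    braking = v * v / (2.0 * decel_mps2)   # distance covered while braking
    return braking + v * buffer_s          # add the buffer travel distance

# e.g. safety_distance(120) gives roughly 74 m braking + 33 m buffer
```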
The model library 300 retrieves existing models matching the specific type identified by the identification module 200, or automatically generates new models. The model library 300 is populated by loading map-vendor models, by manually creating models stored in the model library 300, and by automatically generating new models. Specifically, this comprises the following steps: objects whose spatial attributes contain no variables are loaded from map-vendor models, made manually, or stored in the model library 300 as point cloud models; objects whose spatial attributes contain variables and which affect the running of the vehicle are modelled manually and stored in the model library 300; the image module 100 collects object parameters to confirm the outer contour of the object; and the stored model in the model library 300 automatically stretches or shrinks its variable part according to a comparison with the confirmed outer-contour parameters, generating a new model conforming to the actual object's outer contour, as sketched below.
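Per-axis bounding-box scaling is one plausible reading of how the stored model is "pulled up or shrunk" to fit the confirmed outer contour; the following sketch illustrates it under that assumption (the function name and array layout are hypothetical).

```python
import numpy as np

def fit_template_to_contour(vertices: np.ndarray,
                            target_extent: np.ndarray) -> np.ndarray:
    """Stretch or shrink a stored template model so that its bounding box
    matches the outer-contour extents measured by the image module.

    vertices: (N, 3) array of template vertex coordinates.
    target_extent: (3,) measured length, width and height of the object.
    """
    lo = vertices.min(axis=0)
    hi = vertices.max(axis=0)
    scale = target_extent / (hi - lo)    # per-axis stretch/shrink factors
    return (vertices - lo) * scale + lo  # rescaled copy of the template
```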
The display module 400 performs a three-dimensional reconstruction display of the surrounding environment using the retrieved and/or automatically generated models, and warns of dangerous objects around the vehicle once the scene is reproduced. Sensing devices such as cameras and radar monitor whether any situation threatens the running of the vehicle; when danger is encountered, the vehicle's intelligent brain makes the corresponding decision by itself and reminds the user within the environment reconstruction. This also includes monitoring and warning of the safe distance of the vehicle body 600, ensuring safe driving of the vehicle.
It should be noted that this embodiment requires a large amount of driving-assistance information, the most important being the precise three-dimensional parameters of the road network, for example intersection layout, road-sign locations, obstructions, and vehicles and pedestrians. Much semantic information is also included: the map can report the meaning of the different traffic-light colors, indicate road speed limits, and give the location of the left-turn lane. The method covers three-dimensional model import together with the positioning, sensing and planning warnings of the map.
Positioning, sensing software and planning all rely on a high-precision map. A high-precision map helps the vehicle find suitable driving space, helps the planner determine different route choices, and helps prediction software anticipate the future positions of other vehicles on the road; it lets the vehicle look ahead and accelerate or change lanes in advance on road sections with speed limits or obstacles. The model library 300 in this embodiment can be loaded with map-vendor models, a map containing road definitions, intersections, traffic signals, lane rules and other elements used for car navigation. A large amount of data is collected by a variety of sensors, such as GPS, inertial measurement units, lidar and cameras; the collected data are sorted, classified and cleaned to obtain an initial map template without any semantic information or annotations.
Plain image data is the easiest to collect: cameras are cheap, widely available and easy to use, but it is difficult to achieve high positioning accuracy with cameras alone. In this embodiment, therefore, the camera data is combined with the map and GPS: probability is used to compare the camera data with the data acquired from sensors such as the map and GPS, so as to locate the position of the vehicle or of an obstacle; a sketch of such a combination follows.
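As a minimal sketch of such probabilistic combination, assume each source (camera-to-map matching, GPS) yields an independent Gaussian position estimate along one axis; inverse-variance weighting then gives the fused estimate. The variances below are illustrative, not values from the patent.

```python
def fuse_estimates(mu_cam: float, var_cam: float,
                   mu_gps: float, var_gps: float) -> tuple[float, float]:
    """Inverse-variance weighted fusion of two independent position
    estimates along one axis; returns the fused mean and variance."""
    w_cam, w_gps = 1.0 / var_cam, 1.0 / var_gps
    mu = (w_cam * mu_cam + w_gps * mu_gps) / (w_cam + w_gps)
    return mu, 1.0 / (w_cam + w_gps)

# A less certain GPS fix (variance 4.0) pulls the fused position only
# slightly away from a confident camera-map match (variance 0.25):
# fuse_estimates(10.0, 0.25, 12.0, 4.0) -> (about 10.12, about 0.24)
```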
Further, the reconstruction of the three-dimensional scene according to the present embodiment is implemented in the following manner:
the terrain visualization technology is used for constructing a basis of a vivid three-dimensional geographic scene, and the purpose of reconstructing the three-dimensional scene is to simulate a monitoring area. An on-surface target while providing a virtual object for virtual camera imaging. Under the condition that the monitoring area is bare, the data on the high-precision existing map can be directly adopted, or the three-dimensional laser scanner can be adopted to scan the area to form point cloud data, and then a general geographic information system is utilized to form an earth surface model of a regular grid; the three-dimensional scene drawing method can adopt a quadtree data structure to carry out layering and blocking on the topographic data and the texture data, build pyramids with different resolutions of digital topographic and texture images and build topographic block nodes with different layers, and is beneficial to improving the efficiency of scene modeling.
The imaging process is as follows: in the virtual scene space, the object is rotated, translated and scaled by the model transformation matrix M, which determines its size, position and shape; perspective transformation is then performed by the perspective projection matrix P to form a two-dimensional image; finally, the result is mapped to the screen for display by the viewport transformation matrix V. The matrix multiplication corresponding to this transformation process is:
[x y 1]^T = V · P · M · [X Y Z 1]^T
wherein the perspective projection transformation is realized by a frustum function with parameter list (Xl, Xr, Yb, Yt, Zn, Zf), in which (Xl, Yb, Zn) and (Xr, Yt, Zn) are respectively the lower-left and upper-right corner coordinates on the near clipping plane of the view frustum, and Zn and Zf determine the near and far clipping planes of the projection frustum; the projection frustum defined by this function can be made to correspond to the photogrammetric interior orientation elements.
Let the length and width of the photographic film be L_x and L_y respectively. Using the equal-ratio relationship, the interior orientation elements (focal length f and principal point coordinates x_0, y_0) can be substituted into the imaging function's parameter list to simulate the imaging result of the actual camera.
The conversion relation is:

Zn/f = (Xr − Xl)/L_x = (Yt − Yb)/L_y

with the offset of the frustum on the near clipping plane determined by the principal point coordinates (x_0, y_0). According to this relation, the corresponding projection matrix P is calculated, so that the perspective image mapped onto the computer screen is consistent with the imaging result of the actual camera. A corresponding viewport matrix is then calculated from the length and width L_x and L_y of the simulated image; a sketch of both matrices follows.
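The following Python sketch builds the projection and viewport matrices under the OpenGL glFrustum convention, which matches the (Xl, Xr, Yb, Yt, Zn, Zf) parameter list above; the sign convention used for the principal-point offset is an assumption.

```python
import numpy as np

def frustum(xl, xr, yb, yt, zn, zf):
    """Perspective projection matrix P (glFrustum convention)."""
    return np.array([
        [2*zn/(xr-xl), 0.0,          (xr+xl)/(xr-xl),  0.0],
        [0.0,          2*zn/(yt-yb), (yt+yb)/(yt-yb),  0.0],
        [0.0,          0.0,         -(zf+zn)/(zf-zn), -2*zf*zn/(zf-zn)],
        [0.0,          0.0,         -1.0,              0.0]])

def frustum_from_interior(f, lx, ly, x0, y0, zn, zf):
    """Frustum bounds from the interior orientation elements via the
    equal-ratio relation Zn/f = (Xr - Xl)/L_x = (Yt - Yb)/L_y."""
    s = zn / f
    return frustum(-(lx/2 + x0)*s, (lx/2 - x0)*s,
                   -(ly/2 + y0)*s, (ly/2 - y0)*s, zn, zf)

def viewport(w, h):
    """Viewport matrix V mapping normalized device coordinates
    to a w-by-h screen."""
    return np.array([
        [w/2, 0.0, 0.0, w/2],
        [0.0, h/2, 0.0, h/2],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0]])
```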
In photogrammetry, the exterior orientation elements record the position and attitude of the camera center at the moment of photography. The exterior parameter matrix calculated from the exterior orientation elements is the model transformation matrix set in the simulated imaging process. From the above analysis, perspective projection imaging and photogrammetric imaging not only match in principle, but the parameters of projection imaging also correspond to those of photogrammetric imaging.
Spatial localization of the target:
the simulated image formed by the camera can correspond to a single-frame video image shot by an actual video camera. Therefore, the corresponding pixel coordinates (i.e., screen coordinates) of the monitoring target can be calculated in the simulation image according to the video image coordinates where the monitoring target is located, as follows:
wherein: winX, winY are screen coordinates; w (w) r 、h r Is the actual image width and height; w (w) v 、h v Width and height of the simulation image; u, v are the pixel coordinates of the target on the actual image. According to the inverse imaging process, the real world coordinates (X, Y, Z) in the three-dimensional space can be calculated by the screen coordinates and the corresponding depth values winZ, as follows:
[X Y Z 1]^T = M^(-1) · P^(-1) · V^(-1) · [winX winY winZ 1]^T
Finally, the imaging projection ray of the target in three-dimensional space is obtained in the virtual geographic scene; the ray is intersected with the surface model in the virtual scene, and the intersection position is the spatial coordinate of the target on the ground surface.
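A minimal sketch of this inverse imaging step, directly inverting [x y 1]^T = V·P·M·[X Y Z 1]^T with the homogeneous layout given above:

```python
import numpy as np

def unproject(win_x, win_y, win_z, M, P, V):
    """Back-project screen coordinates plus depth to world coordinates."""
    screen = np.array([win_x, win_y, win_z, 1.0])
    world = np.linalg.inv(M) @ np.linalg.inv(P) @ np.linalg.inv(V) @ screen
    return world[:3] / world[3]   # de-homogenize to (X, Y, Z)
```

The returned point lies on the imaging ray of the target; intersecting that ray with the surface model then yields the ground-surface coordinates described above.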
Further, to realize virtual imaging of the three-dimensional scene, the method further comprises the following steps:
In order to compare the calculation results with the point cloud data, the three-dimensional scene is constructed in the engineering coordinate system used by the scanner; the constructed three-dimensional scene, based on a ground-surface model with a resolution of 0.1 m, is illustrated in fig. 7. Since the position and attitude of each photograph taken by the camera (corresponding to a single camera frame) are stored in the three-dimensional laser scanner system in matrix form, calculating the virtual camera imaging matrix requires computing the relationships between the relevant matrices of the three-dimensional laser scanner system. For example:
Projection matrix calculation: the intrinsic parameters of the digital camera are a focal length f of 20 mm, unit pixel pitches d_x and d_y in the X and Y directions both of 0.0055 mm, and photographic film dimensions L_x and L_y of 23.584 mm and 15.664 mm respectively; camera calibration yields the image principal point coordinates x_0 = 0.222 mm and y_0 = 0.1875 mm. Substituting these interior orientation elements into the above formula yields the corresponding virtual camera projection matrix P.
Setting the current matrix to this projection matrix then simulates the projection result of the real camera. Next, the virtual camera position and attitude are set so as to coincide with the real camera.
Model view matrix calculation: the position and attitude of the camera can be obtained through transformation operations between the three coordinate systems of the three-dimensional laser scanner system, namely the scanner coordinate system, the camera coordinate system and the engineering coordinate system. The coordinate transformation process includes the following 2 steps:
Conversion of the engineering coordinate system to the scanner coordinate system: for each scanning station there is a coordinate system in which its position and orientation are recorded, and the engineering coordinate system can be converted to the scanner coordinate system according to C_s = SOP^(-1) × C_p, where C_p is the engineering coordinate system, C_s is the scanner coordinate system, and SOP is the transformation matrix.
Conversion of the scanner coordinate system to the camera coordinate system: let the camera coordinate system be C_c. A mounting matrix records the relationship between the camera and the scanner once the camera is mounted on the scanner head. During shooting the camera rotates about the scanner's Z-axis, so a further matrix, denoted COP, records the angle and attitude of the camera relative to the scanner at each shooting moment. The three-dimensional laser scanner coordinates are thus converted into camera coordinates as C_c = M_mount × COP^(-1) × C_s, where M_mount is the mounting matrix.
According to the above procedure, the engineering coordinate system can be converted into the camera coordinate system, and the conversion process is as follows:
C c =M mount ×COP -1 ×SOP -1 ×C P
The three-dimensional point cloud data take the engineering coordinate system as reference, and the three-dimensional virtual scene is likewise constructed in the engineering coordinate system, so the engineering coordinate system is taken as the world coordinate system in the camera imaging model. From the above coordinate transformation relationships of the three-dimensional laser scanner, the exterior parameter matrix M = M_mount × COP^(-1) × SOP^(-1) at the moment the camera captures each image is the model view matrix corresponding to the simulated image.
Through the above matrix operations, the model view matrices M_1, M_2 and M_3 of the 3 photographs taken during laser scanning can be obtained from the camera position and attitude parameters. The obtained projection matrix and model view matrices are input into the three-dimensional scene virtual imaging and target positioning system, the position, attitude and projection imaging parameters of the virtual camera are set, and 3 simulated images are generated correspondingly. In the actual photo series, the highlight points are the positions of the reflecting sheets; this completes the construction of the three-dimensional scene.
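A minimal sketch of the coordinate-system chain above, treating SOP, COP and M_mount as 4×4 homogeneous matrices as described:

```python
import numpy as np

def model_view_matrix(m_mount: np.ndarray, cop: np.ndarray,
                      sop: np.ndarray) -> np.ndarray:
    """Exterior-parameter (model view) matrix for one photograph:
    M = M_mount x COP^-1 x SOP^-1, mapping engineering coordinates
    to camera coordinates."""
    return m_mount @ np.linalg.inv(cop) @ np.linalg.inv(sop)
```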
Example 2
Referring to the illustration of fig. 8, this embodiment proposes that the display module 400 present environment-reconstruction pictures based on different fields of view around the vehicle, according to the different driving scenarios produced by speed changes while the vehicle is running: during high-speed driving the driver can see farther ahead and more of the environment behind the vehicle, while during low-speed driving the camera angle becomes steeper and the environment within roughly 5-10 meters of the vehicle body is presented. Specifically, the scene presentation of the field of view corresponding to each driving scenario comprises the following steps.
the image module 100 collects speed data of the current vehicle running; the adjustment module 500 distinguishes: the speed is more than or equal to 120km/h in a high-speed state, and the range of the visual angles is displayed corresponding to 200m in front of the vehicle, 50m behind the vehicle and 10m on the left and right; v is 100-120 km/h in a rapid state, and corresponds to a presentation view angle range of 100m in front of a vehicle, 20m in back and 6m on the left and right sides; v is more than or equal to 0 and less than or equal to 40km/h in a low-speed state, and corresponds to a presentation view angle range of 10m in front of a vehicle, 6m in back and 5m on the left and right; in a reversing state, the gear is positioned at the position of R and corresponds to the display visual angle range of 5m in front of the vehicle, 10m behind the vehicle and 10m on the left and right. Different viewing angles are changed according to different driving scenes of the user, and better driving experience is provided for the user.
Example 3
Referring to figs. 9 to 12, this embodiment provides the overall schematic of a system for displaying the surrounding environment while a vehicle is running, putting the above display method into practical use. Specifically, the display system comprises an image module 100, an identification module 200, a model library 300, a display module 400 and an adjustment module 500. The image module 100 collects and processes data on the vehicle and its surroundings; the identification module 200 is connected with the image module 100 and receives the data it collects in order to identify the data type; the model library 300 is connected with the identification module 200 and retrieves and/or generates a new model according to the identified data type; the display module 400 displays the models selected from the model library 300 on a head-up display of the vehicle; and the adjustment module 500 is connected with the image module 100 and adjusts the displayed scene to different fields of view according to the collected vehicle speed data.
The dead-angle-free reproduction of the vehicle's surroundings reproduces, from far to near, the environment within 20-50 m of the vehicle body together with the elements posing a running risk to the vehicle body. The 3D modelling can specifically be done by loading map-vendor models, by manually creating models stored in the model library (made with 3ds Max, Maya and other three-dimensional modelling software), by point cloud models (oblique photography and three-dimensional laser scanning), or by automatically generating models (confirming the outer contour of the object and automatically pulling up the model height). Building models use a map-vendor model, a point cloud model, or an automatically generated model.
Objects such as vehicles and pedestrians, whose spatial attributes contain variables and which influence the running of the vehicle, are modelled manually and stored in the model library. Objects such as trees and garbage cans, whose spatial attributes contain no variables, are modelled manually and stored in the model library or presented through a point cloud model.
The practical application process is embodied as follows:
the system comprises a vehicle radar, a camera and an image processing module; the visual presentation by 3D modeling is for the user to display the car surroundings on hud (heads-up display). By classifying and grading the environment models, the software directly invokes the model library data, and directly invokes the models in the model library after the object is identified, so that the matching of the environment around the vehicle body is better and faster. For example: buildings, pedestrians, household vehicles, passenger cars, bicycles, guideboards, etc.
Through the differences in vehicle speed while driving, environment reconstructions from different viewing angles are presented to the user on the head unit, judged according to gear and vehicle speed: during high-speed driving the user can see farther ahead and more of the environment behind the vehicle, while during low-speed driving the camera angle becomes steeper and attention concentrates on the environment within roughly 5-10 m of the vehicle body. The viewing angle changes with the user's driving scenario, providing a better driving experience.
Visual appearance layer: objects around the vehicle body are rapidly scanned and rapidly modelled through sensors such as cameras and radar, presenting the user with the most realistic and faithful scene reproduction. The environment reproduction is presented in a line style with a sense of technology, without visually interfering with the user.
Hazard warning: when an object in the surrounding environment affects or threatens driving, the user is reminded with a prominent colored visual pattern on the environment reconstruction. Sensing devices such as cameras and radar monitor whether any situation threatens the running of the vehicle; when danger is encountered, the vehicle's intelligent brain makes the corresponding decision by itself and reminds the user within the environment reconstruction.
For the state of the vehicle itself (all in-vehicle information is acquired through the vehicle's intelligent brain), state effects such as lights, door opening and closing, and the trunk are simulated in 3D, so that the user can directly observe the current overall state of the vehicle body on the screen.
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.

Claims (7)

1. A method for displaying the surrounding environment during the running of a vehicle, characterized by comprising the following steps:
an image module (100) collects object parameters around the vehicle and processes the object parameters to generate an environment data set;
an identification module (200) receives the processed environmental data and identifies a specific type of the environmental data;
retrieving an existing model matching the specific type identified by the identification module (200) in a model library (300), or automatically generating a new model;
the display module (400) performs a three-dimensional reconstruction display of the surrounding environment using the retrieved and/or automatically generated model, and warns of dangerous objects around the vehicle after the scene is reproduced;
according to the different driving scenarios of speed change while the vehicle is running, the display module (400) presents environment-reconstruction pictures based on different fields of view around the vehicle; during high-speed driving the driver can see farther ahead and more of the environment behind the vehicle, while during low-speed driving the camera angle becomes steeper and the environment within 5-10 meters of the vehicle body is presented;
the three-dimensional scene drawing adopts a quadtree data structure to layer and tile the terrain data and texture data, constructing pyramids of the digital terrain and texture images at different resolutions and establishing nodes of the different terrain tiles;
in the virtual scene space, the object is rotated, translated and scaled by the model transformation matrix M, which determines the size, position and shape of the object; perspective transformation is then carried out by the perspective projection matrix P to form a two-dimensional image, which is finally mapped to the screen for display by the viewport transformation matrix V;
the space positioning of the target calculates, from the video image coordinates of the monitored target, the corresponding pixel coordinates in the simulated image, with the expression:
winX = u · (w_v / w_r), winY = (h_r − v) · (h_v / h_r)
wherein winX and winY are screen coordinates, w_r and h_r are the actual image width and height, w_v and h_v are the width and height of the simulated image, and u and v are the pixel coordinates of the target on the actual image;
according to the inverse of the imaging process, the real-world coordinates (X, Y, Z) in three-dimensional space are calculated from the screen coordinates and the corresponding depth value winZ;
finally, the imaging projection ray of the target in three-dimensional space is obtained in the virtual geographic scene, the ray is intersected with the surface model in the virtual scene, and the intersection position is the spatial coordinate of the target on the ground surface;
the scene presentation of the field of view corresponding to each driving scenario comprises the steps that the image module (100) collects the current speed data of the vehicle;
the adjustment module (500) distinguishes:
a high-speed state, v ≥ 120 km/h, corresponding to a presented viewing range of 200 m in front of the vehicle, 50 m behind, and 10 m to the left and right;
a fast state, 100 km/h ≤ v < 120 km/h, corresponding to a presented viewing range of 100 m in front of the vehicle, 20 m behind, and 6 m to the left and right;
a low-speed state, 0 ≤ v ≤ 40 km/h, corresponding to a presented viewing range of 10 m in front of the vehicle, 6 m behind, and 5 m to the left and right;
a reversing state, with the gear lever in the R position, corresponding to a presented viewing range of 5 m in front of the vehicle, 10 m behind, and 10 m to the left and right;
the step of grading the environment data set by the identification module (200) comprises:
a self-positioning level, which locates the position information of the vehicle with the aid of reference objects;
an environment level, used for reconstructing and displaying a three-dimensional scene of the vehicle's surroundings;
a prompt level, which identifies dangerous data in the environment data set, displays prompt labels in the reconstructed three-dimensional scene, and prompts the driver;
and a warning level, which identifies collision data in the environment data set and displays warning marks and maps in the reconstructed three-dimensional scene.
2. The method for displaying the surrounding environment during the running of a vehicle according to claim 1, characterized in that the image module (100) operates as follows:
object parameters around the vehicle are rapidly scanned by a camera (101), a vehicle-mounted radar (102) and a sensor (103) and sent to an image processing module (104), which processes them to generate the environment data set; and the collected data include the actual three-dimensional parameters of driving-related objects such as roads, pedestrians, vehicles, surrounding buildings, street lamps and trees.
3. The method for displaying the surrounding environment during the running of a vehicle according to claim 1 or 2, characterized in that: the model library (300) is populated by map-vendor model loading (301), manual model creation (302) stored in the model library (300), and automatic generation of new models (303).
4. The method for displaying the surrounding environment during the running of a vehicle according to claim 3, characterized in that the automatic generation of new models comprises the steps of: objects whose spatial attributes contain no variables are loaded from map-vendor models, made manually, or stored in the model library (300) as point cloud models;
objects whose spatial attributes contain variables and which affect the running of the vehicle are modelled manually and stored in the model library (300);
the image module (100) collects object parameters to confirm the outer contour of the object;
and the stored model in the model library (300) automatically stretches or shrinks its variable part according to a comparison with the confirmed outer-contour parameters, generating a new model conforming to the actual object's outer contour.
5. The method for displaying the surrounding environment during the running of a vehicle according to claim 4, characterized by comprising the step of classifying the environment data set by said identification module (200) into:
display data of the vehicle and the vehicle state, comprising collected three-dimensional model data of the vehicle body and vehicle-body running-state data;
road surface information, comprising basic road-surface information and data on surrounding terrain relief;
traffic signs, comprising data on roadside signs and road blocks;
and traffic participant data, comprising pedestrians, vehicles, buildings and obstacles.
6. The method for displaying the surrounding environment during the running of a vehicle according to claim 5, characterized in that: the traffic signs comprise warning signs, prohibition signs, indication signs, road guide signs and tourist-area signs; the traffic markings comprise prohibition markings, indication markings and warning markings.
7. A system employing the method for displaying the surrounding environment during the running of a vehicle according to any one of claims 1 to 6, characterized by comprising: an image module (100), an identification module (200), a model library (300), a display module (400) and an adjustment module (500);
the image module (100) is used for acquiring and processing data of the vehicle and surrounding environment;
the identification module (200) is connected with the image module (100) and is used for receiving data acquired by the image module (100) to identify a data type;
the model library (300) is connected with the identification module (200) and is used for calling and/or generating a new model according to the data type identified by the identification module (200);
the display module (400) displays the models screened by the model library (300) on a head-up display of the vehicle;
the adjustment module (500) is connected with the image module (100) and is used for adjusting the displayed scene to different fields of view according to the collected vehicle speed data.
CN201911054420.7A 2019-10-31 2019-10-31 Surrounding environment display method and system in vehicle running process Active CN110758243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911054420.7A CN110758243B (en) 2019-10-31 2019-10-31 Surrounding environment display method and system in vehicle running process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911054420.7A CN110758243B (en) 2019-10-31 2019-10-31 Surrounding environment display method and system in vehicle running process

Publications (2)

Publication Number Publication Date
CN110758243A CN110758243A (en) 2020-02-07
CN110758243B true CN110758243B (en) 2024-04-02

Family

ID=69335342

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911054420.7A Active CN110758243B (en) 2019-10-31 2019-10-31 Surrounding environment display method and system in vehicle running process

Country Status (1)

Country Link
CN (1) CN110758243B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915915A (en) * 2020-07-16 2020-11-10 华人运通(上海)自动驾驶科技有限公司 Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
CN111880533B (en) * 2020-07-16 2023-03-24 华人运通(上海)自动驾驶科技有限公司 Driving scene reconstruction method, device, system, vehicle, equipment and storage medium
CN111860379A (en) * 2020-07-27 2020-10-30 联通智网科技有限公司 Method and device for establishing surrounding vehicle model, storage medium and computer equipment
CN112019808A (en) * 2020-08-07 2020-12-01 华东师范大学 Vehicle-mounted real-time video information intelligent recognition device based on MPSoC
CN112184605A (en) * 2020-09-24 2021-01-05 华人运通(上海)自动驾驶科技有限公司 Method, equipment and system for enhancing vehicle driving visual field
CN112325845B (en) * 2020-10-26 2022-09-06 的卢技术有限公司 Method and device for positioning vehicle lifting height through air pressure information, vehicle and storage medium
CN112492522B (en) * 2020-11-10 2022-09-23 的卢技术有限公司 Control method for autonomous parking of vehicle
CN112907757A (en) * 2021-04-08 2021-06-04 深圳市慧鲤科技有限公司 Navigation prompting method and device, electronic equipment and storage medium
CN113392796A (en) * 2021-06-29 2021-09-14 广州小鹏汽车科技有限公司 Display method, display device, vehicle, and computer-readable storage medium
CN113276774B (en) * 2021-07-21 2021-10-26 新石器慧通(北京)科技有限公司 Method, device and equipment for processing video picture in unmanned vehicle remote driving process
CN113553508A (en) * 2021-07-28 2021-10-26 中国第一汽车股份有限公司 Road condition model generation method, device, storage medium and system
CN113715733A (en) * 2021-08-31 2021-11-30 江苏高瞻数据科技有限公司 System and method for presenting media content in unmanned driving
CN113619607B (en) * 2021-09-17 2023-04-18 合众新能源汽车股份有限公司 Control method and control system for automobile running
CN114859754B (en) * 2022-04-07 2023-10-03 江苏泽景汽车电子股份有限公司 Simulation test method and simulation test system of head-up display system
CN117523526A (en) * 2023-11-02 2024-02-06 深圳鑫扬明科技有限公司 Vehicle detection system and method based on machine vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206664437U (en) * 2017-03-27 2017-11-24 北京汽车股份有限公司 Vehicle and display system for vehicle
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
CN109636924A (en) * 2018-12-28 2019-04-16 吉林大学 Vehicle multi-mode formula augmented reality system based on real traffic information three-dimensional modeling
WO2019097763A1 (en) * 2017-11-17 2019-05-23 Aisin AW Co., Ltd. Superposed-image display device and computer program
CN109878514A (en) * 2019-03-13 2019-06-14 的卢技术有限公司 A kind of subitem method and its application system of vehicle-periphery

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010040803A1 (en) * 2010-09-15 2012-03-15 Continental Teves Ag & Co. Ohg Visual driver information and warning system for a driver of a motor vehicle
KR101843773B1 (en) * 2015-06-30 2018-05-14 LG Electronics Inc. Advanced Driver Assistance System, Display apparatus for vehicle and Vehicle
EP3367366B1 (en) * 2015-10-22 2021-05-05 Nissan Motor Co., Ltd. Display control method and display control device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN206664437U (en) * 2017-03-27 2017-11-24 北京汽车股份有限公司 Vehicle and display system for vehicle
WO2019097763A1 (en) * 2017-11-17 2019-05-23 Aisin AW Co., Ltd. Superposed-image display device and computer program
JP2019095213A (en) * 2017-11-17 2019-06-20 Aisin AW Co., Ltd. Superimposed image display device and computer program
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
CN109636924A (en) * 2018-12-28 2019-04-16 吉林大学 Vehicle multi-mode formula augmented reality system based on real traffic information three-dimensional modeling
CN109878514A (en) * 2019-03-13 2019-06-14 的卢技术有限公司 A kind of subitem method and its application system of vehicle-periphery

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Brief Discussion of Driverless Vehicle Technology (浅谈汽车无人驾驶技术); 黄美宜; Science & Technology Information (科技资讯); 2017-09-23 (No. 27); full text *
Visual Map Construction of Road Environments for Intelligent Vehicle Localization (面向智能车定位的道路环境视觉地图构建); 李祎承; China Journal of Highway and Transport (中国公路学报); 2018-11-30; Vol. 31 (No. 11); full text *

Also Published As

Publication number Publication date
CN110758243A (en) 2020-02-07

Similar Documents

Publication Publication Date Title
CN110758243B (en) Surrounding environment display method and system in vehicle running process
CN103236160B (en) Road network traffic condition monitoring system based on video image processing technology
Creß et al. A9-dataset: Multi-sensor infrastructure-based dataset for mobility research
JP5397373B2 (en) VEHICLE IMAGE PROCESSING DEVICE AND VEHICLE IMAGE PROCESSING METHOD
AU2006203980B2 (en) Navigation and inspection system
JP5208203B2 (en) Blind spot display device
US8773534B2 (en) Image processing apparatus, medium recording image processing program, and image processing method
CN110060297B (en) Information processing apparatus, information processing system, information processing method, and storage medium
CN110648389A (en) 3D reconstruction method and system for city street view based on cooperation of unmanned aerial vehicle and edge vehicle
US20140285523A1 (en) Method for Integrating Virtual Object into Vehicle Displays
CN113009506B (en) Virtual-real combined real-time laser radar data generation method, system and equipment
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
CN102555905B (en) Produce the method and apparatus of the image of at least one object in vehicle-periphery
US11593996B2 (en) Synthesizing three-dimensional visualizations from perspectives of onboard sensors of autonomous vehicles
Zhao et al. Autonomous driving simulation for unmanned vehicles
CN109727314A (en) A kind of fusion of augmented reality scene and its methods of exhibiting
CN114295139A (en) Cooperative sensing positioning method and system
CN116420096A (en) Method and system for marking LIDAR point cloud data
CN111447431A (en) Naked eye 3D display method and system applied to vehicle-mounted all-around camera shooting
EP4293622A1 (en) Method for training neural network model and method for generating image
Wahbeh et al. Image-based reality-capturing and 3D modelling for the creation of VR cycling simulations
Dai et al. Roadside Edge Sensed and Fused Three-dimensional Localization using Camera and LiDAR
Gao et al. 3D reconstruction for road scene with obstacle detection feedback
JP4530214B2 (en) Simulated field of view generator
Zhang et al. Automated visibility field evaluation of traffic sign based on 3D lidar point clouds

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant