CN113516733B - Method and system for filling blind areas at bottom of vehicle - Google Patents

Method and system for filling blind areas at bottom of vehicle

Info

Publication number: CN113516733B
Application number: CN202110555121.2A
Authority: CN (China)
Other versions: CN113516733A (application publication)
Original language: Chinese (zh)
Inventors: 郝长亮 (Hao Changliang), 金凌鸽 (Jin Lingge), 曲和政 (Qu Hezheng)
Assignee (original and current): Beijing Yihang Yuanzhi Technology Co., Ltd.
Application filed 2021-05-21 by Beijing Yihang Yuanzhi Technology Co., Ltd.; priority to CN202110555121.2A
Publication of application CN113516733A: 2021-10-19; grant of CN113516733B: 2024-05-17
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/40: Filling a planar surface by adding surface attributes, e.g. colour or texture


Abstract

The invention relates to a method and a system for filling the blind area at the bottom of a vehicle. In the method, the current frame of the surround-view image is rendered to a first off-screen buffer and the frame preceding it is rendered to a second off-screen buffer. A first driving trajectory is determined from the driving signals, and a preset weight is added to it to form a second driving trajectory. The portion of the surround-view image corresponding to the second trajectory is then cropped from the second off-screen buffer and attached at the blind-area position of the surround-view image in the first off-screen buffer. Finally, according to the image-rendering information of the default buffer, the image attached at the blind-area position in the first off-screen buffer is attached at the blind-area position in the default buffer and displayed. This improves blind-area filling efficiency, avoids misalignment, and thereby improves filling accuracy.

Description

Method and system for filling blind areas at bottom of vehicle
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and system for filling the blind area at the bottom of a vehicle.
Background
The panoramic (surround-view) image system is an automotive safety system that assists driving. Four cameras are mounted around the vehicle; the main controller captures their images, transforms the four original images into a common coordinate system, and stitches them into a single panoramic image. The result shows the scene around the vehicle clearly and at a glance, providing a useful aid to safe driving.
However, such a system only displays the scene around the vehicle; a black blind area that the cameras cannot capture remains beneath it. This blind area both degrades the driver's visual experience and hides any object under the vehicle that could affect driving, increasing the safety risk. A method is therefore needed to display the vehicle-bottom blind-area scene in real time.
Filling methods for the black blind area of the panoramic image do exist in the prior art, for example patents CN108068696A and CN112215747A. After filling, however, the blind area is very likely to be misaligned with the surrounding surround-view image, and a large number of images is needed during filling, which reduces filling efficiency and adds considerable performance overhead.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a method and a system for filling the blind area at the bottom of a vehicle.
To achieve the above object, the invention provides the following solution:
A method of filling a blind area at the bottom of a vehicle, comprising:
acquiring a first off-screen buffer, a second off-screen buffer, and a surround-view image of the vehicle to be filled;
rendering the current frame of the surround-view image to the first off-screen buffer, and rendering the frame preceding the current frame to the second off-screen buffer;
acquiring driving signals, the driving signals including gear, gear pulse, and steering-wheel angle;
determining a first driving trajectory from the driving signals, and adding a preset weight to the first driving trajectory to form a second driving trajectory;
cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer, and attaching the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer;
acquiring image-rendering information of the default buffer, the image-rendering information including the orthographic projection matrix and the viewport used by the default buffer;
and attaching the surround-view image attached at the blind-area position in the first off-screen buffer to the blind-area position in the default buffer according to the image-rendering information of the default buffer, and displaying it.
Preferably, rendering the current frame of the surround-view image to the first off-screen buffer and the frame preceding it to the second off-screen buffer specifically comprises:
acquiring the vehicle size, and determining the size of a projection model from the vehicle size;
determining the vertex coordinates of the projection model from the vehicle size and the size of the projection model;
determining texture coordinates from the vertex coordinates of the projection model, the texture coordinates being the coordinates of the vertices projected into the screen coordinates of the surround-view cameras;
acquiring a preset orthographic projection matrix;
and, using a shader and based on the surround-view images transmitted in real time by the cameras, drawing the current frame of the surround-view image into the first off-screen buffer and the frame preceding it into the second off-screen buffer according to the preset orthographic projection matrix, the vertex coordinates, and the texture coordinates.
Preferably, determining the texture coordinates from the vertex coordinates of the projection model specifically comprises:
acquiring the intrinsic and extrinsic parameters of the cameras;
and determining the texture coordinates from the intrinsic parameters, the extrinsic parameters, and the vertex coordinates.
Preferably, the preset weight x is:
x = r × sinθ;
where r is the tire radius of the vehicle to be filled and θ is the maximum wheel-rotation angle of the vehicle to be filled.
Preferably, cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer and attaching the cropped image at the blind-area position of the surround-view image in the first off-screen buffer specifically comprises:
acquiring the scene representation range and the body coordinates in the second driving trajectory, the scene representation range comprising the body coordinates of the surround-view image range;
determining the proportional relationship between the second driving trajectory and the surround-view scene from the scene representation range and the body coordinates in the second driving trajectory;
cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer according to the proportional relationship;
and passing the body coordinates in the second driving trajectory into a shader as the texture coordinates of the second off-screen buffer and as the vertex coordinates at the blind-area position, respectively, and attaching the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer through an OpenGL draw command.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects:
In the method for filling the blind area at the bottom of a vehicle, the current frame of the surround-view image is rendered to the first off-screen buffer and the preceding frame to the second off-screen buffer; the first driving trajectory is determined from the driving signals and a preset weight is added to it to form the second driving trajectory; the surround-view image corresponding to the second trajectory is cropped from the second off-screen buffer and attached at the blind-area position of the surround-view image in the first off-screen buffer; and the image attached at the blind-area position in the first off-screen buffer is then attached at the blind-area position in the default buffer according to the default buffer's image-rendering information and displayed. Blind-area filling efficiency is thereby improved, misalignment is avoided, and filling accuracy is further improved.
Corresponding to the above method for filling the blind area at the bottom of a vehicle, the invention also provides a system for filling the blind area at the bottom of a vehicle, comprising:
a first acquisition module for acquiring the first off-screen buffer, the second off-screen buffer, and the surround-view image of the vehicle to be filled;
a rendering module for rendering the current frame of the surround-view image to the first off-screen buffer and the frame preceding the current frame to the second off-screen buffer;
a second acquisition module for acquiring the driving signals, the driving signals including gear, gear pulse, and steering-wheel angle;
a driving-trajectory forming module for determining a first driving trajectory from the driving signals and adding a preset weight to it to form a second driving trajectory;
a first blind-area attaching module for cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer and attaching the cropped image at the blind-area position of the surround-view image in the first off-screen buffer;
a third acquisition module for acquiring the image-rendering information of the default buffer, the information including the orthographic projection matrix and the viewport used by the default buffer;
and a second blind-area attaching module for attaching the surround-view image attached at the blind-area position in the first off-screen buffer to the blind-area position in the default buffer according to the image-rendering information of the default buffer, and displaying it.
Preferably, the rendering module comprises:
a first acquisition unit for acquiring the vehicle size and determining the size of the projection model from the vehicle size;
a vertex-coordinate determining unit for determining the vertex coordinates of the projection model from the vehicle size and the size of the projection model;
a texture-coordinate determining unit for determining texture coordinates from the vertex coordinates of the projection model, the texture coordinates being the coordinates of the vertices projected into the screen coordinates of the surround-view cameras;
a second acquisition unit for acquiring a preset orthographic projection matrix;
and a rendering unit for drawing, with a shader and based on the surround-view images transmitted in real time by the cameras, the current frame of the surround-view image into the first off-screen buffer and the frame preceding it into the second off-screen buffer according to the preset orthographic projection matrix, the vertex coordinates, and the texture coordinates.
Preferably, the texture-coordinate determining unit comprises:
a parameter acquisition subunit for acquiring the intrinsic and extrinsic parameters of the cameras;
and a texture-coordinate determining subunit for determining the texture coordinates from the intrinsic parameters, the extrinsic parameters, and the vertex coordinates.
Preferably, the first blind-area attaching module comprises:
a third acquisition unit for acquiring the scene representation range and the body coordinates in the second driving trajectory, the scene representation range comprising the body coordinates of the surround-view image range;
a proportional-relationship determining unit for determining the proportional relationship between the second driving trajectory and the surround-view scene from the scene representation range and the body coordinates in the second driving trajectory;
a cropping unit for cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer according to the proportional relationship;
and an attaching unit for passing the body coordinates in the second driving trajectory into the shader as the texture coordinates of the second off-screen buffer and as the vertex coordinates at the blind-area position, respectively, and attaching the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer through an OpenGL draw command.
The technical effects achieved by the system are the same as those achieved by the method described above, so they are not repeated here.
Drawings
To illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description are obviously only some embodiments of the invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of the method for filling the blind area at the bottom of a vehicle provided by the invention;
FIG. 2 is a flowchart of the method for filling the blind area at the bottom of a vehicle provided by an embodiment of the invention;
FIG. 3 is a flow chart of the rendering process provided by an embodiment of the invention;
FIG. 4 is a schematic view of cropping the driving-trajectory image provided by an embodiment of the invention;
FIG. 5 is a schematic structural diagram of the system for filling the blind area at the bottom of a vehicle provided by the invention.
Detailed Description
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without inventive effort fall within the scope of the invention.
The object of the invention is to provide a method and a system for filling the blind area at the bottom of a vehicle, improving blind-area filling efficiency, avoiding misalignment, and thereby improving filling accuracy.
To make the above objects, features, and advantages of the invention more apparent, the invention is described in further detail below with reference to the drawings and the detailed description.
Before introducing the specific technical solution, several important concepts are explained:
Vertex coordinates: an OpenGL term. Vertex coordinates give the displayed positions; connecting all the vertices yields the displayed shape.
Texture coordinates: texture coordinates control the texture filled onto the shape and range from 0 to 1. If only part of a texture is filled, the proportion of that part is computed.
Off-screen rendering: an OpenGL term. A buffer is allocated directly in video memory; it can be understood as another, invisible screen that can nevertheless be used like a screen. The subsequent operations of rendering, cropping, and constructing the complete surround-view image are all performed in off-screen buffers.
Vehicle body coordinate system: the center of the vehicle is the origin; the right side of the vehicle is the positive x direction, the front is the positive y direction, and up is the positive z direction. The unit is the metre.
Screen coordinate system: the lower-left corner of the screen is the origin; rightward is the positive x direction and upward is the positive y direction. The unit is the pixel. When an image is cropped, the corresponding picture is cropped according to the proportion, in the screen coordinate system, of the central blind-area position's pixel extent to the screen.
Projection matrix: whether the 2D bird's-eye view or the 3D view, the surround-view image displayed on the current screen is the result of a series of coordinate transformations, and those transformations are realized by setting different matrices. Two matrices are important: orthographic projection and perspective projection.
Orthographic projection: it can be understood as a large cuboid; only objects inside the cuboid are rendered onto the screen, while the parts outside are discarded.
Perspective projection: in the real world, an object appears larger the closer it is to the viewpoint and smaller the farther away it is; perspective projection reproduces this near-large, far-small effect through the perspective projection matrix transformation. Perspective projection also involves the concept of a field of view: the range seen by the eyes differs between squinting and opening them wide, even though object sizes never change.
Typically, orthographic projection is used to render 2D images, such as the surround-view 2D bird's-eye view, and perspective projection is used to render 3D scenes.
As shown in FIG. 1, the method of filling a blind area at the bottom of a vehicle comprises:
Step 100: acquire the first off-screen buffer, the second off-screen buffer, and the surround-view image of the vehicle to be filled. At initialization, two buffer objects a and b are defined. The buffer size should not be too large: more pixels would have to be processed every frame, and more content would have to be stored when switching buffers in order to save context information, causing a large performance overhead. If the buffer is too small, however, the image becomes blurred, so the size must be verified repeatedly to guarantee image clarity. In addition, the first off-screen buffer x used in the invention may refer to either a or b, the two alternating between frames. For example, if the first off-screen buffer was bound in the previous frame and the blind-area-free surround-view image was built on it, the second off-screen buffer is bound in the current frame; if the previous frame was built on the second off-screen buffer, the first off-screen buffer is bound in the current frame.
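For illustration only, a minimal sketch of how the two alternating off-screen buffers might be created and swapped with OpenGL framebuffer objects is given below. The buffer size BUF_W × BUF_H and all identifier names are assumptions, not values from the patent:

// Sketch: two off-screen color buffers (FBOs) that swap roles every frame.
// BUF_W/BUF_H are assumed placeholders; per the text, the size must be tuned
// so the image stays sharp without excessive switching overhead.
#include <GLES2/gl2.h>

const int BUF_W = 1024, BUF_H = 1536;          // assumed, to be verified

struct OffscreenBuffer { GLuint fbo; GLuint tex; };

OffscreenBuffer createOffscreenBuffer() {
    OffscreenBuffer b;
    glGenTextures(1, &b.tex);                  // color-attachment texture
    glBindTexture(GL_TEXTURE_2D, b.tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, BUF_W, BUF_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glGenFramebuffers(1, &b.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, b.fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, b.tex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return b;
}

void runFrames() {                             // simplified endless frame loop
    OffscreenBuffer a = createOffscreenBuffer();
    OffscreenBuffer b = createOffscreenBuffer();
    bool aIsCurrent = true;
    for (;;) {
        OffscreenBuffer& x = aIsCurrent ? a : b;   // current-frame target
        OffscreenBuffer& y = aIsCurrent ? b : a;   // holds the previous frame
        glBindFramebuffer(GL_FRAMEBUFFER, x.fbo);  // render current frame to x
        glBindTexture(GL_TEXTURE_2D, y.tex);       // previous frame as texture
        // ... render the surround view into x, crop from y, attach ...
        aIsCurrent = !aIsCurrent;              // x and y swap roles next frame
    }
}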
Step 101: render the current frame of the surround-view image to the first off-screen buffer, and the frame preceding the current frame to the second off-screen buffer.
Step 102: acquire the driving signals. The driving signals include gear, gear pulse, and steering-wheel angle.
Step 103: determine a first driving trajectory from the driving signals, and add a preset weight to it to form a second driving trajectory.
Step 104: crop the surround-view image corresponding to the second driving trajectory from the second off-screen buffer, and attach the cropped image at the blind-area position of the surround-view image in the first off-screen buffer.
Step 105: acquire the image-rendering information of the default buffer. The image-rendering information includes the orthographic projection matrix and the viewport used by the default buffer.
Step 106: attach the image attached at the blind-area position in the first off-screen buffer to the blind-area position in the default buffer according to the image-rendering information of the default buffer, and display it.
As shown in FIG. 2, the overall implementation framework of the method can be summarized as follows: 1) back up the information that affects rendering in the current default buffer, such as the matrix and viewport in use, and set an orthographic projection matrix for off-screen rendering; 2) bind off-screen buffer x, render the current frame of the surround-view image into it, and construct the complete, blind-area-free surround-view image; 3) compute the vehicle's driving trajectory from the vehicle signals (gear, gear pulse, steering-wheel angle, and so on); 4) according to the computed trajectory, crop the surround-view image from off-screen buffer y and attach it at the blind-area position of off-screen buffer x, so that x holds the complete surround-view image without the blind area; 5) restore the information that affects rendering, such as the matrix and viewport used by the default buffer, attach the image corresponding to the blind-area position in off-screen buffer x at the blind-area position of the default buffer, and display it.
Steps 100-106 of the method are described in detail below in connection with this overall framework.
To improve rendering accuracy, as shown in FIG. 3, step 101 of the invention specifically comprises:
Step 101-1: acquire the vehicle size and determine the size of the projection model from it. Specifically, depending on the vehicle model the product is applied to, the exact vehicle dimensions are obtained directly from the vehicle manufacturer; only the length and width are needed. The size of the projection model is then determined from the vehicle size. There is no hard requirement; it is generally set by the display range of the surround-view image, preferably 2 to 3 times the vehicle size.
Step 101-2: determine the vertex coordinates of the projection model from the vehicle size and the model size. Specifically, with a projection model of dimensions [L, W, H] and a vehicle of dimensions [L, W], the meshed vertex-coordinate set of the projection model is established as {(Xn, Yn, Zn) | n = 1, 2, 3, ..., N}, where Zn is computed by a curved-surface formula whose parameters m and n index the model's surface control points; by adjusting the values of m and n, the shape and extent of the curved surface can be controlled. The vertex coordinates of the central blind-area position become known at the same time the model vertex coordinates are generated.
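Since the exact Zn formula is given only in the original document's figure, the sketch below uses a generic bowl-shaped surface (flat under the vehicle, rising toward the model edge) purely as an illustrative stand-in; the quadratic rise and all constants are assumptions, not the patented formula:

// Sketch: generate the meshed vertex grid of a projection model.
// The flat-inside / rising-outside "bowl" shape is an assumed stand-in for
// the patent's Zn formula; M and N are the control-grid resolutions.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vertex { float x, y, z; };

std::vector<Vertex> buildModelMesh(float L, float W, float H, int M, int N) {
    std::vector<Vertex> verts;
    for (int i = 0; i <= M; ++i) {
        for (int j = 0; j <= N; ++j) {
            float x = -L / 2 + L * i / M;      // body frame, metres
            float y = -W / 2 + W * j / N;
            // distance beyond the flat central region, normalized to 0..1
            float dx = std::max(0.0f, std::fabs(x) - L / 4) / (L / 4);
            float dy = std::max(0.0f, std::fabs(y) - W / 4) / (W / 4);
            float t  = std::min(1.0f, std::sqrt(dx * dx + dy * dy));
            float z  = H * t * t;              // assumed quadratic rise
            verts.push_back({x, y, z});
        }
    }
    return verts;
}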
Step 101-3: determine the texture coordinates from the vertex coordinates of the projection model. The texture coordinates are the coordinates of the vertices projected into the screen coordinates of the surround-view cameras. Specifically, the texture coordinates are determined as follows.
First, acquire the intrinsic and extrinsic parameters of the cameras. The intrinsic parameters are inherent to each camera: they are calibrated by the manufacturer at the factory, written into the camera's EEPROM, and read out directly at run time.
Then determine the texture coordinates from the intrinsic parameters, the extrinsic parameters, and the vertex coordinates. With intrinsic matrix M1 and extrinsic matrix M2, the camera imaging rule is:
s·[xn, yn, 1]^T = M1·M2·[Xn, Yn, Zn, 1]^T
by which each vertex [Xn, Yn, Zn] of the 3D projection model in the vehicle body coordinate system is projected to its coordinates in the screen coordinate system of the corresponding surround-view camera, namely the texture coordinates [xn, yn].
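For illustration, a hedged sketch of this projection using glm follows. M1 (3×3 intrinsics) and M2 (3×4 extrinsics) are assumed to come from calibration, and the final y flip assumes a top-left image origin versus OpenGL's bottom-left texture origin:

// Sketch: project a body-frame vertex to a camera's pixel coordinates and
// normalize to 0-1 texture coordinates. M1/M2 are assumed calibration data.
#include <glm/glm.hpp>

glm::vec2 vertexToTexCoord(const glm::vec3& P,      // [Xn, Yn, Zn], metres
                           const glm::mat3& M1,     // intrinsic matrix
                           const glm::mat4x3& M2,   // extrinsic [R|t]
                           float imgW, float imgH)  // camera image size, px
{
    glm::vec3 p = M1 * (M2 * glm::vec4(P, 1.0f));   // s*[xn, yn, 1]^T
    float x = p.x / p.z, y = p.y / p.z;             // divide out the scale s
    return glm::vec2(x / imgW, 1.0f - y / imgH);    // pixels -> 0-1 tex coords
}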
Step 101-4: acquire a preset orthographic projection matrix. The reason a dedicated orthographic matrix is preset is as follows. If the development platform has ample performance, the orthographic matrix used to render the 2D bird's-eye view could be reused for convenience, but this has a drawback: the 2D bird's-eye view generally represents a relatively large range, while the crop range actually used is only slightly larger than the central blind area, so most of the rendered scene would go unused and be wasted. Perspective projection, on the other hand, involves the field-of-view concept, and the true display range depends on the field-of-view size, so determining the final on-screen display range of the scene would add extra computation and might lose accuracy. The invention therefore sets a separate orthographic projection matrix.
glm is a mathematics library for OpenGL in which all the mathematical tools used in OpenGL development can be found. The orthographic projection matrix can thus be set directly with glm. For example, in the invention it is preferably set as:
projectMatrix = glm::ortho(-2000.0, 2000.0, -3000.0, 3000.0, 10.0, 100000.0);
The parameters are, in order, the left, right, bottom, and top coordinates of the display range, followed by the distances to the near and far planes. This setting represents a display range 4 m wide by 6 m high; the range is adjusted per the actual project.
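As a usage illustration (the uniform name "uProjection" is assumed, not taken from the patent), the matrix would typically be uploaded to the surround-view shader like this:

// Sketch: build the orthographic matrix with glm and pass it to the shader.
#include <GLES2/gl2.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::ortho
#include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

void setProjection(GLuint shaderProgram) {
    glm::mat4 projectMatrix = glm::ortho(-2000.0f, 2000.0f,
                                         -3000.0f, 3000.0f,
                                         10.0f, 100000.0f);
    glUseProgram(shaderProgram);
    GLint loc = glGetUniformLocation(shaderProgram, "uProjection");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(projectMatrix));
}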
Step 101-5: using a shader and based on the surround-view images transmitted in real time by the cameras, draw the current frame of the surround-view image into the first off-screen buffer and the frame preceding it into the second off-screen buffer according to the preset orthographic projection matrix, the vertex coordinates, and the texture coordinates.
In steps 101-4 and 101-5, the set projection matrix and the determined vertex and texture coordinates are passed to the surround-view shader, the corresponding textures, that is, the surround-view images transmitted by the cameras in real time, are bound, and the result is finally drawn into the corresponding off-screen buffer through an OpenGL draw command.
Further, in step 103 above, the first trajectory is determined by computing the driving direction from the gear and the steering-wheel angle, and then computing the distance travelled from the gear pulses and the time between the current and previous frames of the surround-view image, thereby determining the vehicle's trajectory from the previous frame to the current frame. Trajectory computation is largely standard and common to such systems, so it is treated as a generic component and is not part of the protected content.
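A hedged sketch of one common dead-reckoning scheme (a bicycle-model approximation, shown only as an illustration and not as the patent's specific method) is:

// Sketch: dead-reckoning from gear, wheel pulses, and steering angle using a
// simple bicycle-model approximation. Signal layout and vehicle constants
// (metresPerPulse, steeringRatio, wheelBase) are assumptions.
#include <cmath>

struct DrivingSignal {
    int   gear;                     // >0 forward, <0 reverse
    int   pulses;                   // wheel-encoder pulses since last frame
    float steeringWheelAngle;       // radians
};

struct Pose { float x, y, heading; };  // body position (m) and heading (rad)

Pose updatePose(Pose p, const DrivingSignal& s,
                float metresPerPulse, float steeringRatio, float wheelBase)
{
    float dist  = s.pulses * metresPerPulse * (s.gear >= 0 ? 1.0f : -1.0f);
    float wheel = s.steeringWheelAngle / steeringRatio;  // road-wheel angle
    p.heading  += dist * std::tan(wheel) / wheelBase;    // heading change
    p.x += dist * std::sin(p.heading);   // body frame: y forward, x right
    p.y += dist * std::cos(p.heading);
    return p;
}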
After the driving trajectory is determined, a weight must be added on the left and right of the current trajectory. The weight is computed as follows:
x = r × sinθ (3)
where x is the computed weight, r is the tire radius, and θ is the maximum wheel-rotation angle.
While the vehicle is driving and steering, the tires are captured by the cameras on the left and right sides of the vehicle, so they appear in the stitched panoramic surround view; cropped directly, they would be pasted into the vehicle-bottom blind-area image. The vehicle-bottom blind area is meant to show the road the vehicle drives over, where tires must not appear, so this must be avoided.
Adding the weight widens the crop so that the tire's swing range is included in the cropped image. Each crop from the previous frame is therefore wider than the actual blind-area width, as shown in FIG. 4; the actual blind-area width is then cropped again from this 'widened' surround-view image and attached to the screen.
Since the image is always taken from the previous frame and no tire is present in the first frame, no tire appears in any later frame either.
Based on the above considerations, step 104 preferably comprises:
Step 104-1: acquire the scene representation range and the body coordinates in the second driving trajectory. The scene representation range comprises the body coordinates of the surround-view image range.
Step 104-2: determine the proportional relationship between the second driving trajectory and the surround-view scene from the scene representation range and the body coordinates in the second driving trajectory.
Step 104-3: crop the surround-view image corresponding to the second driving trajectory from the second off-screen buffer according to the proportional relationship.
Step 104-4: pass the body coordinates in the second driving trajectory into a shader as the texture coordinates of the second off-screen buffer and as the vertex coordinates at the blind-area position, respectively, and attach the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer through an OpenGL draw command.
This step corresponds to step 100: if x (the first off-screen buffer) in step 100 refers to a, then y (the second off-screen buffer) here refers to b; if x in step 100 refers to b, then y here refers to a.
In step 101-4 above, the scene representation range was specified when the orthographic projection matrix was set; denote it Yn, where n = 1, 2, 3, 4 are the body coordinates of the four corners of the surround-view image range. In step 103 the driving trajectory was determined; denote it Xn, where n = 1, 2, 3, 4 are the body coordinates of the four corners of the trajectory. The proportional relationship k between the dynamic driving trajectory and the surround-view scene is then obtained as
k = Xn / Yn, n = 1, 2, 3, 4.
Since the surround-view image is expressed in the width and height of the screen coordinate system, with the size specified when the buffers were first created, replacing Yn by the surround-view image's width and height in the screen coordinate system yields the trajectory Xn in the screen coordinate system. The blind-area position was determined, from body-frame vertex coordinates, when the size of the projection model was fixed. Therefore it suffices to bind the texture of the second off-screen buffer y used in the previous frame and pass Xn into the surround-view shader as the texture coordinates of y together with the vertex coordinates of the blind-area position; the image corresponding to the dynamic driving trajectory can then be attached at the blind-area position of the surround-view image in the first off-screen buffer through an OpenGL draw command.
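A hedged sketch of this attach step follows. The attribute names aPos/aTex and the interleaved quad layout are assumptions; the actual values would be the computed blind-area vertex coordinates and the Xn-derived texture coordinates:

// Sketch: draw the cropped previous-frame region onto the blind-area quad.
// quad holds 4 vertices of (x, y, u, v); names/layout are assumed.
#include <GLES2/gl2.h>

void attachBlindArea(GLuint program, GLuint prevFrameTex,
                     const float quad[16])
{
    glUseProgram(program);
    glBindTexture(GL_TEXTURE_2D, prevFrameTex);    // texture of buffer y

    GLint aPos = glGetAttribLocation(program, "aPos");
    GLint aTex = glGetAttribLocation(program, "aTex");
    glEnableVertexAttribArray(aPos);
    glEnableVertexAttribArray(aTex);
    // interleaved: position (x, y) then texture coordinate (u, v)
    glVertexAttribPointer(aPos, 2, GL_FLOAT, GL_FALSE,
                          4 * sizeof(float), quad);
    glVertexAttribPointer(aTex, 2, GL_FLOAT, GL_FALSE,
                          4 * sizeof(float), quad + 2);

    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);           // the OpenGL draw command
}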
All the operations so far are off-screen rendering; nothing changes on the currently displayed screen. In the following step 106 the cropped blind-area image is therefore synchronized to the current screen. The blind-area image has two possible sources: first, the portion of the second off-screen buffer y that was attached into the first off-screen buffer x can be attached to the screen; second, the image can be cropped directly from the first off-screen buffer x at the blind-area position coordinates and attached to the screen. The two methods are equally effective. The attach method itself is the same as in step 104, except that this time the buffer corresponding to the current screen is bound, the backed-up image-rendering information is restored, and the image at the blind-area position is drawn again.
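For illustration, restoring the default buffer's rendering state before the final draw might look like the sketch below, reusing the attachBlindArea sketch from step 104 above. The backed-up state structure is an assumption; framebuffer 0 is the window's default buffer on most platforms:

// Sketch: restore the default buffer's state, then redraw the blind area.
#include <GLES2/gl2.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

struct RenderBackup {                 // state backed up before off-screen work
    glm::mat4 projection;             // matrix used by the default buffer
    GLint viewport[4];                // x, y, width, height
};

void restoreAndAttach(const RenderBackup& bk, GLuint program,
                      GLuint blindAreaTex, const float quad[16])
{
    glBindFramebuffer(GL_FRAMEBUFFER, 0);          // back to the default buffer
    glViewport(bk.viewport[0], bk.viewport[1],
               bk.viewport[2], bk.viewport[3]);    // restore the viewport
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "uProjection"); // assumed name
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(bk.projection));
    attachBlindArea(program, blindAreaTex, quad);  // same attach as step 104
}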
In summary, compared with the prior art, the invention has the following advantages:
1. The invention operates on the image directly with GPU video-memory resources, requiring no separately allocated physical memory or complex pointer operations, which saves memory and improves accuracy.
2. The invention makes full use of the GPU's image-processing capability; the image shows no distortion or color deviation and is clearer.
3. The program can run on the CPU and GPU in parallel rather than on the CPU alone, so it runs more smoothly and at a higher frame rate.
4. The invention needs no modeling software to generate the model; when the shape changes, only the formula parameters need to change, which is more efficient.
In addition, corresponding to the method for filling the blind area at the bottom of a vehicle, the invention also provides a system for filling the blind area at the bottom of a vehicle, as shown in FIG. 5, comprising: a first acquisition module 1, a rendering module 2, a second acquisition module 3, a driving-trajectory forming module 4, a first blind-area attaching module 5, a third acquisition module 6, and a second blind-area attaching module 7.
The first acquisition module 1 acquires the first off-screen buffer, the second off-screen buffer, and the surround-view image of the vehicle to be filled.
The rendering module 2 renders the current frame of the surround-view image to the first off-screen buffer and the frame preceding the current frame to the second off-screen buffer.
The second acquisition module 3 acquires the driving signals. The driving signals include gear, gear pulse, and steering-wheel angle.
The driving-trajectory forming module 4 determines a first driving trajectory from the driving signals and adds a preset weight to it to form a second driving trajectory.
The first blind-area attaching module 5 crops the surround-view image corresponding to the second driving trajectory from the second off-screen buffer and attaches the cropped image at the blind-area position of the surround-view image in the first off-screen buffer.
The third acquisition module 6 acquires the image-rendering information of the default buffer. The image-rendering information includes the orthographic projection matrix and the viewport used by the default buffer.
The second blind-area attaching module 7 attaches the surround-view image attached at the blind-area position in the first off-screen buffer to the blind-area position in the default buffer according to the image-rendering information of the default buffer, and displays it.
Further, corresponding to the specific rendering process, the rendering module provided by the invention preferably comprises a first acquisition unit, a vertex-coordinate determining unit, a texture-coordinate determining unit, a second acquisition unit, and a rendering unit.
The first acquisition unit acquires the vehicle size and determines the size of the projection model from it.
The vertex-coordinate determining unit determines the vertex coordinates of the projection model from the vehicle size and the size of the projection model.
The texture-coordinate determining unit determines the texture coordinates from the vertex coordinates of the projection model. The texture coordinates are the coordinates of the vertices projected into the screen coordinates of the surround-view cameras.
The second acquisition unit acquires a preset orthographic projection matrix.
The rendering unit, using a shader and based on the surround-view images transmitted in real time by the cameras, draws the current frame of the surround-view image into the first off-screen buffer and the frame preceding it into the second off-screen buffer according to the preset orthographic projection matrix, the vertex coordinates, and the texture coordinates.
The texture-coordinate determining unit comprises a parameter acquisition subunit and a texture-coordinate determining subunit.
The parameter acquisition subunit acquires the intrinsic and extrinsic parameters of the cameras.
The texture-coordinate determining subunit determines the texture coordinates from the intrinsic parameters, the extrinsic parameters, and the vertex coordinates.
Further, corresponding to the above blind-area attaching process in the first off-screen buffer, the first blind-area attaching module provided by the invention comprises a third acquisition unit, a proportional-relationship determining unit, a cropping unit, and an attaching unit.
The third acquisition unit acquires the scene representation range and the body coordinates in the second driving trajectory. The scene representation range comprises the body coordinates of the surround-view image range.
The proportional-relationship determining unit determines the proportional relationship between the second driving trajectory and the surround-view scene from the scene representation range and the body coordinates in the second driving trajectory.
The cropping unit crops the surround-view image corresponding to the second driving trajectory from the second off-screen buffer according to the proportional relationship.
The attaching unit passes the body coordinates in the second driving trajectory into the shader as the texture coordinates of the second off-screen buffer and as the vertex coordinates at the blind-area position, respectively, and attaches the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer through an OpenGL draw command.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments can be referred to one another. Since the disclosed system corresponds to the disclosed method, its description is relatively brief, and the relevant points can be found in the description of the method.
The principles and embodiments of the invention have been described herein with reference to specific examples, whose description is intended only to help in understanding the method of the invention and its core ideas. A person of ordinary skill in the art may make modifications to the specific embodiments and the application scope in accordance with the ideas of the invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (9)

1. A method of filling a blind area at the bottom of a vehicle, comprising:
acquiring a first off-screen buffer, a second off-screen buffer, and a surround-view image of the vehicle to be filled;
rendering the current frame of the surround-view image to the first off-screen buffer, and rendering the frame preceding the current frame to the second off-screen buffer;
acquiring driving signals, the driving signals including gear, gear pulse, and steering-wheel angle;
determining a first driving trajectory from the driving signals, and adding a preset weight to the first driving trajectory to form a second driving trajectory;
cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer, and attaching the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer;
acquiring image-rendering information of the default buffer, the image-rendering information including the orthographic projection matrix and the viewport used by the default buffer;
and attaching the surround-view image attached at the blind-area position in the first off-screen buffer to the blind-area position in the default buffer according to the image-rendering information of the default buffer, and displaying it.
2. The method of filling a blind area at the bottom of a vehicle according to claim 1, wherein rendering the current frame of the surround-view image to the first off-screen buffer and the frame preceding the current frame to the second off-screen buffer specifically comprises:
acquiring the vehicle size, and determining the size of a projection model from the vehicle size;
determining the vertex coordinates of the projection model from the vehicle size and the size of the projection model;
determining texture coordinates from the vertex coordinates of the projection model, the texture coordinates being the coordinates of the vertices projected into the screen coordinates of the surround-view cameras;
acquiring a preset orthographic projection matrix;
and, using a shader and based on the surround-view images transmitted in real time by the cameras, drawing the current frame of the surround-view image into the first off-screen buffer and the frame preceding it into the second off-screen buffer according to the preset orthographic projection matrix, the vertex coordinates, and the texture coordinates.
3. The method of filling a blind area at the bottom of a vehicle according to claim 2, wherein determining the texture coordinates from the vertex coordinates of the projection model specifically comprises:
acquiring the intrinsic and extrinsic parameters of the cameras;
and determining the texture coordinates from the intrinsic parameters, the extrinsic parameters, and the vertex coordinates.
4. The method of filling a blind area at the bottom of a vehicle according to claim 1, wherein the preset weight is x:
x = r × sinθ;
where r is the tire radius of the vehicle to be filled and θ is the maximum wheel-rotation angle of the vehicle to be filled.
5. The method of filling a blind area at the bottom of a vehicle according to claim 1, wherein cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer and attaching the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer specifically comprises:
acquiring the scene representation range and the body coordinates in the second driving trajectory, the scene representation range comprising the body coordinates of the surround-view image range;
determining the proportional relationship between the second driving trajectory and the surround-view scene from the scene representation range and the body coordinates in the second driving trajectory;
cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer according to the proportional relationship;
and passing the body coordinates in the second driving trajectory into a shader as the texture coordinates of the second off-screen buffer and as the vertex coordinates at the blind-area position, respectively, and attaching the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer through an OpenGL draw command.
6. A system for filling a blind area at the bottom of a vehicle, comprising:
a first acquisition module for acquiring a first off-screen buffer, a second off-screen buffer, and a surround-view image of the vehicle to be filled;
a rendering module for rendering the current frame of the surround-view image to the first off-screen buffer and the frame preceding the current frame to the second off-screen buffer;
a second acquisition module for acquiring driving signals, the driving signals including gear, gear pulse, and steering-wheel angle;
a driving-trajectory forming module for determining a first driving trajectory from the driving signals and adding a preset weight to the first driving trajectory to form a second driving trajectory;
a first blind-area attaching module for cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer and attaching the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer;
a third acquisition module for acquiring image-rendering information of the default buffer, the image-rendering information including the orthographic projection matrix and the viewport used by the default buffer;
and a second blind-area attaching module for attaching the surround-view image attached at the blind-area position in the first off-screen buffer to the blind-area position in the default buffer according to the image-rendering information of the default buffer, and displaying it.
7. The system for filling a blind area at the bottom of a vehicle according to claim 6, wherein the rendering module comprises:
a first acquisition unit for acquiring the vehicle size and determining the size of a projection model from the vehicle size;
a vertex-coordinate determining unit for determining the vertex coordinates of the projection model from the vehicle size and the size of the projection model;
a texture-coordinate determining unit for determining texture coordinates from the vertex coordinates of the projection model, the texture coordinates being the coordinates of the vertices projected into the screen coordinates of the surround-view cameras;
a second acquisition unit for acquiring a preset orthographic projection matrix;
and a rendering unit for drawing, based on the surround-view images transmitted in real time by the cameras, the current frame of the surround-view image into the first off-screen buffer and the frame preceding it into the second off-screen buffer according to the preset orthographic projection matrix, the vertex coordinates, and the texture coordinates.
8. The system for filling a blind area at the bottom of a vehicle according to claim 7, wherein the texture-coordinate determining unit comprises:
a parameter acquisition subunit for acquiring the intrinsic and extrinsic parameters of the cameras;
and a texture-coordinate determining subunit for determining the texture coordinates from the intrinsic parameters, the extrinsic parameters, and the vertex coordinates.
9. The system for filling a blind area at the bottom of a vehicle according to claim 6, wherein the first blind-area attaching module comprises:
a third acquisition unit for acquiring the scene representation range and the body coordinates in the second driving trajectory, the scene representation range comprising the body coordinates of the surround-view image range;
a proportional-relationship determining unit for determining the proportional relationship between the second driving trajectory and the surround-view scene from the scene representation range and the body coordinates in the second driving trajectory;
a cropping unit for cropping the surround-view image corresponding to the second driving trajectory from the second off-screen buffer according to the proportional relationship;
and an attaching unit for passing the body coordinates in the second driving trajectory into the shader as the texture coordinates of the second off-screen buffer and as the vertex coordinates at the blind-area position, respectively, and attaching the cropped surround-view image at the blind-area position of the surround-view image in the first off-screen buffer through an OpenGL draw command.
CN202110555121.2A (filed 2021-05-21, priority 2021-05-21): Method and system for filling blind areas at bottom of vehicle. Active. Granted as CN113516733B (en).

Priority Applications (1)

CN202110555121.2A (priority and filing date 2021-05-21): Method and system for filling blind areas at bottom of vehicle
Publications (2)

CN113516733A (en): published 2021-10-19
CN113516733B (en): granted 2024-05-17

Family

ID=78065006

Family Applications (1)

CN202110555121.2A (Active, priority and filing date 2021-05-21): Method and system for filling blind areas at bottom of vehicle

Country Status (1)

CN: CN113516733B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party

CN114549321A * (Xiaomi Automobile Technology Co., Ltd.; priority 2022-02-25, published 2022-05-27): Image processing method and apparatus, vehicle, and readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party

WO2014129026A1 * (Honda Motor Co., Ltd.; priority 2013-02-21, published 2014-08-28): Driving assistance device and image processing program
CN107848465A * (Magna Mirrors of America, Inc.; priority 2015-05-06, published 2018-03-27): Vehicle vision system with blind zone display and alert system
CN107274342A * (Zongmu Technology (Shanghai) Co., Ltd.; priority 2017-05-22, published 2017-10-20): Vehicle-bottom blind area filling method and system, storage medium, and terminal device
CN111160070A * (Guangzhou Automobile Group Co., Ltd.; priority 2018-11-07, published 2020-05-15): Vehicle panoramic image blind area eliminating method and device, storage medium and terminal equipment
CN112215747A * (Hangzhou Hikvision Digital Technology Co., Ltd.; priority 2019-07-12, published 2021-01-12): Method and device for generating vehicle-mounted panoramic picture without vehicle bottom blind area and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party

CN107554430B * (BOE Technology Group Co., Ltd.; priority 2017-09-20, published 2020-01-17): Vehicle blind area visualization method, device, terminal, system and vehicle

Also Published As

CN113516733A (en): published 2021-10-19

Similar Documents

Publication Publication Date Title
CN107792179B (en) A kind of parking guidance method based on vehicle-mounted viewing system
US9495799B2 (en) Image distortion correction system
CN112224132B (en) Vehicle panoramic all-around obstacle early warning method
CN109087251B (en) Vehicle-mounted panoramic image display method and system
WO2009144994A1 (en) Vehicle image processor, and vehicle image processing system
EP1462762A1 (en) Circumstance monitoring device of a vehicle
US20130141547A1 (en) Image processing apparatus and computer-readable recording medium
CN102298771A (en) Fish-eye image rapid correction method of panoramic parking auxiliary system
CN102163331A (en) Image-assisting system using calibration method
CN101487895B (en) Reverse radar system capable of displaying aerial vehicle image
CN111582080A (en) Method and device for realizing 360-degree all-round monitoring of vehicle
CN112215747A (en) Method and device for generating vehicle-mounted panoramic picture without vehicle bottom blind area and storage medium
CN103802725A (en) New method for generating vehicle-mounted driving assisting image
EP3811326B1 (en) Heads up display (hud) content control system and methodologies
CN107240065A (en) A kind of 3D full view image generating systems and method
CN113870161A (en) Vehicle-mounted 3D (three-dimensional) panoramic stitching method and device based on artificial intelligence
CN108174089B (en) Backing image splicing method and device based on binocular camera
CN113516733B (en) Method and system for filling blind areas at bottom of vehicle
CN112242009A (en) Display effect fusion method, system, storage medium and main control unit
CN105774657B (en) Single-camera panoramic reverse image system
CN113449582A (en) Vehicle bottom blind area filling method, device, system, storage medium and computer program product
CN113724133B (en) 360-degree circular splicing method for non-rigid body connected trailer
KR20210008503A (en) Rear view method and apparatus using augmented reality camera
JP2001202527A (en) Method for displaying three-dimensional graphic and three-dimensionally plotting device
CN209290277U (en) DAS (Driver Assistant System)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant