CN115086575B - Video picture splicing method and device based on unmanned vehicle remote driving - Google Patents

Video picture splicing method and device based on unmanned vehicle remote driving

Info

Publication number
CN115086575B
CN115086575B (application CN202210980376.8A)
Authority
CN
China
Prior art keywords
video picture
video
grid
unmanned vehicle
splicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210980376.8A
Other languages
Chinese (zh)
Other versions
CN115086575A (en)
Inventor
华炜 (Hua Wei)
高健健 (Gao Jianjian)
朱建 (Zhu Jian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202210980376.8A
Publication of CN115086575A
Application granted
Publication of CN115086575B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a video picture splicing method and device based on unmanned vehicle remote driving. The method first parks the unmanned vehicle on ground that has grid features and captures one video picture from every camera at the same moment; it then constructs a grid model from the ground grid features, establishes a mapping from the grid features in the video pictures to the grid model, and computes the vertex data of the grid model vertices from this mapping; finally, in the real-time stage, the GPU draws the grid model, samples the live video pictures, and blends them into the final spliced picture. Because only a single ground with grid features is needed as the calibration scene, and the grid model and mapping are generated once, splicing is stable and predictable; because drawing and blending run on the GPU at runtime, splicing is efficient and extensible. The method is particularly suitable for splicing ground video pictures: it fuses the pictures of several cameras and widens the viewing angle.

Description

Video picture splicing method and device based on unmanned vehicle remote driving
Technical Field
The invention relates to the field of unmanned vehicle remote driving, in particular to a video picture splicing method and device based on unmanned vehicle remote driving.
Background
In recent years, with the maturation of various sensors, the growing computing power of computing devices, and the spread of 5G networks, autonomous driving technology for unmanned vehicles has developed at an unprecedented pace. However, a large gap remains between today's automated driving and fully autonomous driving based on artificial intelligence; industry experts estimate that closing this gap will take at least ten years.
Remote driving technology streams the unmanned vehicle's current environment back to a remote driver, who controls the vehicle in real time; while autonomous driving matures, it provides a more flexible, intelligent, and safe service. Remote driving usually transmits the data of the sensors carried on the unmanned vehicle back to a remote cockpit in real time, where the remote driver observes, analyzes, and makes driving decisions; the most important returned data is the cameras' real-time video. A traditional remote driving method displays a group of video pictures, one per camera, forming a video wall. Although the camera pictures have overlapping areas, they are displayed independently, which makes it hard for the remote driver to understand the relationships between cameras and to build a spatial sense of the scene. Independent camera displays also create redundant information and scatter the remote driver's attention.
Disclosure of Invention
To overcome the shortcomings of the prior art and realize video picture splicing for unmanned vehicle remote driving, the invention adopts the following technical scheme:
a video picture splicing method based on unmanned vehicle remote driving comprises the following steps:
s1, fixedly mounting a camera needing video picture splicing on an unmanned vehicle;
s2, arranging a calibration scene to enable the ground to have grid characteristics, and stopping the unmanned vehicle in the calibration scene;
s3, acquiring a video picture of all the cameras at the same time, wherein each video picture contains grid characteristics of the ground, and the grid characteristics divide the video picture into a plurality of video picture areas; all video picture areas of all video pictures form a video picture area set; storing vertex data including texture coordinates at the vertex of the video picture area;
s4, generating a grid model, wherein each vertex of the grid model stores a group of vertex data, and the vertex data comprises world coordinates of the vertex, and index values, texture coordinates and transparency of a plurality of corresponding video pictures;
s5, establishing mapping from the video picture area set to the grid model, and updating the corresponding vertex data of the grid model according to the vertex data in the video picture area;
s6, accessing real-time video streams of all the cameras, and acquiring real-time video pictures of all the cameras at a certain moment; and drawing the grid model by using a graphic drawing interface, sampling a real-time video picture according to the vertex data of the grid model with the vertex data updated in the step S5, and mixing to obtain a final video picture splicing result.
Further, the mounting positions and angles of all the cameras in S1 satisfy the following condition: for any camera, there is at least one other camera whose video picture overlaps its own, and this overlap relation links all cameras into a single connected set that contains every camera.
Further, the mesh feature of the ground in S2 is a polygonal mesh.
Further, in S3 the video picture areas within each video picture are mutually disjoint, and together they cover the portion of the video picture that needs to be spliced.
Further, all vertices of the mesh model in S4 are distributed on a two-dimensional plane, and the mesh in the mesh model and the mesh in the ground mesh feature in the calibration scene are of the same type.
Further, the mapping relationship established in S5 satisfies the following condition: each grid cell of the grid model corresponds to a plurality of video picture sub-areas, which form a video picture sub-area set, each sub-area being a subset of some video picture area. All vertices of the grid model are traversed, the corresponding vertices in the video picture sub-area set are found through the mapping relation, and the data of the grid model vertices are updated with the following logic:
S5.1, setting the video picture index value set of a grid model vertex to the index values of the video pictures corresponding to the sub-areas in the video picture sub-area set;
S5.2, setting the texture coordinate set of a grid model vertex to the texture coordinates, within the video picture area, of the corresponding vertices in the video picture sub-area set, the texture coordinates being obtained by interpolating the texture coordinates of all vertices of the corresponding video picture area;
S5.3, setting the transparency set of a grid model vertex according to the distances between the corresponding vertices in the video picture sub-area set and the center points of their video pictures, ensuring that all transparencies sum to 1.
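As a concrete illustration of S5.3, the following C++ sketch computes one transparency per candidate vertex from its distance to the center of its video picture and normalises the weights so they sum to 1. The inverse-distance falloff is an assumption for illustration only (the method does not prescribe a particular falloff), and all names are hypothetical:

```cpp
#include <cmath>
#include <vector>

// One candidate vertex: its position inside a source video picture,
// plus that picture's center, both in pixel coordinates.
struct Candidate {
    float x, y;    // vertex position in the video picture
    float cx, cy;  // center of that video picture
};

// Compute one transparency (blend weight) per candidate so that vertices
// nearer a picture's center weigh more and all weights sum to 1.
std::vector<float> blendWeights(const std::vector<Candidate>& cands) {
    std::vector<float> w(cands.size());
    float sum = 0.0f;
    for (size_t i = 0; i < cands.size(); ++i) {
        float dx = cands[i].x - cands[i].cx;
        float dy = cands[i].y - cands[i].cy;
        float dist = std::sqrt(dx * dx + dy * dy);
        w[i] = 1.0f / (dist + 1e-3f);  // inverse-distance falloff (assumed)
        sum += w[i];
    }
    for (float& wi : w) wi /= sum;     // normalise: transparencies sum to 1
    return w;
}
```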
Further, the splicing of the real-time video pictures in S6 specifically includes the following sub-steps:
S6.1, uploading all real-time video pictures to GPU video memory;
S6.2, drawing the grid model with a graphics drawing interface and computing the color value of each pixel in a shader; the specific calculation is as follows:
S6.2.1, sampling a color value from the corresponding real-time video picture using the index value and texture coordinates of a grid vertex;
S6.2.2, blending the sampled color values with the transparencies of the grid vertex to obtain the pixel color value of the spliced picture;
S6.3, obtaining the image buffer drawn in S6.2, which records the video picture splicing result of all cameras, and refreshing the display on screen.
Further, when the positions and attitudes of all the cameras remain unchanged, the preprocessing steps S1 to S4 need to be performed only once.
Further, in S6.2, the texture coordinates and transparency of a vertex are interpolated as they pass from the vertex shader to the pixel shader, whereas the video picture index value of a vertex is passed through without interpolation.
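In GLSL, one common way to realize this distinction (an implementation choice, not mandated by the method) is through interpolation qualifiers: the default smooth interpolation applies to the texture coordinate and transparency, while the integer picture index is declared `flat` so that it reaches the pixel shader unchanged. The names below are illustrative:

```cpp
// Vertex-shader output declarations, embedded as a C++ string constant
// suitable for glShaderSource.
static const char* kVertexOutputs = R"(
    out      vec2  vTexCoord;  // texture coordinate: interpolated to the pixel shader
    out      float vAlpha;     // transparency: interpolated to the pixel shader
    flat out int   vPicIndex;  // video picture index: passed through, NOT interpolated
)";
```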
A video picture splicing device based on unmanned vehicle remote driving comprises one or more processors configured to implement the above video picture splicing method based on unmanned vehicle remote driving.
The invention has the following beneficial effects:
the video picture splicing method based on the unmanned vehicle remote driving firstly parks the unmanned vehicle on the ground with grid characteristics and obtains video pictures of all cameras at the same time; constructing a grid model with a similar structure according to the ground grid characteristics, establishing mapping from the grid characteristics in a video picture to the grid model, and calculating a plurality of vertex data of the vertexes of the grid model according to the mapping relation; and finally, in a real-time operation stage, drawing a grid model by using the GPU, sampling a real-time video picture, and mixing to obtain a final spliced picture. The method is simple to operate, and only one ground with grid characteristics is needed as a calibration scene; the grid model and the mapping relation are generated at one time, and if the pose of the camera is not changed, the change is not needed, so that the splicing stability and the predictability are improved; because the GPU is used for drawing and mixing in the real-time operation stage, the splicing efficiency is high, and the expansibility is good. The method is particularly suitable for splicing ground video pictures, can fuse the pictures of a plurality of cameras, enlarges the visual angle range and enhances the spatial sense and perception capability of a remote driving safety driver to the driving environment.
Drawings
Fig. 1 is a flowchart illustrating a video frame splicing method based on unmanned vehicle remote driving in an exemplary embodiment.
FIG. 2 is a diagram of a ground mesh feature and a corresponding mesh model in an exemplary embodiment, where the center portion is the ground mesh feature and the mesh surrounding the ground mesh feature is the mesh model.
FIG. 3 is a flow diagram of video frame splicing in an exemplary embodiment; the upper left image is an original video image, the lower left image is a video image superposed with the grid model, the middle image is a video image obtained by drawing the grid model, and the right image is a splicing mixed effect image of the multi-camera video images.
Fig. 4 is a diagram illustrating the result of video frame splicing in an exemplary embodiment.
Fig. 5 is a schematic diagram of a video frame splicing device based on unmanned vehicle remote driving in an exemplary embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the invention clearer, the invention is described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here illustrate the invention and do not limit it. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort fall within the scope of the present invention.
In an embodiment, as shown in fig. 1, a video picture stitching method based on unmanned vehicle remote driving is provided, in which an unmanned vehicle is parked on a ground with grid features, and video pictures of all cameras at the same time are acquired; constructing a grid model with a similar structure according to the ground grid characteristics, establishing mapping from the grid characteristics in a video picture to the grid model, and calculating a plurality of vertex data of the vertexes of the grid model according to the mapping relation; and finally, in a real-time operation stage, drawing a grid model by using the GPU, sampling real-time video pictures, and mixing to obtain a final spliced picture.
The method specifically comprises the following steps:
step 1, fixedly mounting a camera needing video picture splicing on an unmanned vehicle;
the unmanned vehicle model of this embodiment is easily become S2 public vehicle of plugging into, has installed 4 industrial grade on-vehicle cameras above, and the angle of view of camera is 170 degrees, is fisheye camera, covers four regions of look ahead, back vision, left side and right side respectively, and decurrent inclination is 15 degrees. The camera is installed by using a standard installation support, so that the camera is prevented from obviously shaking and changing the position and the posture in the operation process of the unmanned vehicle. The installation positions and angles of all cameras meet the following conditions: for any camera, at least one video picture shot by other cameras has intersection with the camera, and the cameras with the intersection of all the video pictures only form a set which comprises all the cameras;
step 2, arranging a calibration scene to enable the ground to have grid characteristics, and stopping the unmanned vehicle in the calibration scene;
in this embodiment, the calibration scene is a square in a certain park, the ground of the square is made of tiles with uniform sizes, the tiles are 60cm × 60cm, and the grid features of the ground are formed, wherein the grid features are square grids; the unmanned vehicle is driven by a security officer and is parked in a specified area of a square, the unmanned vehicle needs to be cleaned within 20m of the area, and sundries cannot appear on the ground.
Step 3, acquiring a video picture of all the cameras at the same time, wherein each video picture contains grid characteristics of the ground, and the grid characteristics divide the video picture into a plurality of video picture areas; all video picture areas of all video pictures form a video picture area set; storing vertex data including texture coordinates at the vertex of the video picture area;
in the embodiment, a remote driving safety worker obtains one video image of all the cameras at a certain moment, and each video image is guaranteed to contain the ground tile grid characteristics. Sundries above the floor tiles need to be cleaned in time, cleanness and tidiness are guaranteed, and boundaries of the floor tiles are clear and recognizable in the picture of the camera. As the tiles are not overlapped with each other, the condition that a plurality of video picture areas of each video picture have no intersection with each other and the video picture areas cover the picture range needing to be spliced in the video picture is met.
Step 4, generating a grid model similar to the ground grid characteristics, wherein each vertex of the grid model stores a group of vertex data, and the vertex data comprises world coordinates of the vertex and index values, texture coordinates and transparency of a plurality of corresponding video pictures;
in this embodiment, as shown in fig. 2, a tool is generated by a video frame mesh model, and the tool can load and display a plurality of video frames. The user may initially set the number of rows and columns of the mesh, and create an initial standard square mesh, where all vertices of the mesh model are distributed on a two-dimensional plane, and preferably, the feature distribution of the mesh in the mesh model has similarity to the mesh features in the calibration scene, that is, the mesh in the mesh model and the mesh in the ground mesh features in the calibration scene are similar meshes of the same type, for example, both are quadrilateral.
Step 5, establishing mapping from the video picture area set to the grid model, and updating the corresponding vertex data of the grid model according to the vertex data in the video picture area;
in this embodiment, the grid points of the square grid are dragged to the grid points of the grid features of the corresponding video picture through the mouse and the keyboard, so as to ensure that the grid model completely covers the video picture part to be spliced. The user manually performs a matching process from the grid model to the grid characteristics of the video picture region set, namely a process of establishing a mapping relation, wherein the mapping relation is recorded in a video picture grid model generation tool, and the mapping relation between the grid model and the video picture grid model generation tool needs to meet the following conditions: for each grid in the grid model, a plurality of video picture sub-areas are corresponding to the grid, and the video picture sub-areas form a video picture sub-area set, wherein the video picture sub-areas are subsets of the video picture areas; traversing all vertexes of the mesh model, finding a plurality of vertexes in the corresponding video picture subregion set according to the mapping relation, and updating data of the vertexes of the mesh model, wherein the specific updating logic is as follows:
(1) The video picture index value set of a grid model vertex is set to the index values of the video pictures corresponding to the sub-areas in its video picture sub-area set; in this embodiment each grid cell corresponds to exactly one video picture sub-area, so the index value set of a grid model vertex contains a single index value;
(2) The texture coordinate set of a grid model vertex is set to the texture coordinates, within the video picture area, of the corresponding vertices in the video picture sub-area set; these texture coordinates are obtained by interpolating the texture coordinates of all vertices of the corresponding video picture area. In this embodiment bilinear interpolation is used (see the sketch after this list); since each grid cell corresponds to exactly one sub-area, the texture coordinate set contains a single texture coordinate;
(3) The transparency set of a grid model vertex is set according to the distances between the corresponding vertices in the video picture sub-area set and the center points of their video pictures, ensuring that all transparencies sum to 1; in this embodiment each grid cell corresponds to exactly one sub-area, so the transparency set contains a single transparency whose value is 1;
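The sketch below illustrates the bilinear interpolation used in (2), assuming the enclosing video picture area is a quadrilateral whose four corners carry known texture coordinates; the position (s, t) is the vertex's normalised location inside that quadrilateral, and all names are illustrative:

```cpp
// Bilinearly interpolate the texture coordinates of a quad's four corners
// (t00 bottom-left, t10 bottom-right, t01 top-left, t11 top-right) at the
// normalised position (s, t) inside the quad, with s and t in [0, 1].
struct Tex { float u, v; };

Tex bilerp(Tex t00, Tex t10, Tex t01, Tex t11, float s, float t) {
    auto mix = [](float a, float b, float k) { return a + (b - a) * k; };
    Tex bottom{ mix(t00.u, t10.u, s), mix(t00.v, t10.v, s) };
    Tex top   { mix(t01.u, t11.u, s), mix(t01.v, t11.v, s) };
    return { mix(bottom.u, top.u, t), mix(bottom.v, top.v, t) };
}
```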
step 6, accessing real-time video streams of all the cameras, and acquiring real-time video pictures of all the cameras at a certain moment; and drawing the grid model by using a graphic drawing interface, sampling a real-time video picture according to vertex data of the grid model, and mixing to obtain a final video picture splicing result, which is specifically shown in fig. 3.
In this embodiment, in the real-time operation stage, the real-time video streams of all the cameras are transmitted back to the remote cockpit through the 5G network and the video data transmission network library, and the remote driving simulator receives the original video picture data and performs time synchronization to obtain all the real-time video pictures at the same time. The specific real-time video picture splicing method comprises the following steps:
S6.1, loading all real-time video pictures into memory and calling the glGenTextures and glTexImage2D functions to upload the video data from memory into GPU video memory;
S6.2, drawing the grid model generated in step 4 with the OpenGL graphics drawing interface and computing the color value of each pixel in the pixel shader, with all uploaded real-time video pictures bound as uniform texture variables; the specific calculation is as follows:
S6.2.1, calling the texture function to sample a color value from the corresponding real-time video picture using the index value and texture coordinate of a grid vertex;
S6.2.2, blending the sampled color values with the transparencies of the grid vertex to obtain the pixel color value of the spliced picture;
S6.3, obtaining the image buffer drawn in S6.2, which records the video picture splicing result of all cameras, and refreshing the display on screen.
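Condensing S6.1 and S6.2, the following C++/OpenGL sketch uploads one camera frame with glGenTextures/glTexImage2D and shows a fragment shader that samples the picture selected by the non-interpolated index and weights it by the interpolated transparency. Error handling, shader compilation, and the vertex stage are omitted, and all identifiers are illustrative: this is one possible implementation under stated assumptions, not the patented code itself.

```cpp
#include <GL/gl.h>  // assumes an OpenGL 3.3+ context and loader are set up

// S6.1: upload one real-time camera frame (8-bit RGB) into a GPU texture.
GLuint uploadFrame(const unsigned char* pixels, int width, int height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

// S6.2: fragment shader, embedded as a C++ string for glShaderSource.
static const char* kFragmentShader = R"(
    #version 330 core
    uniform sampler2D pic0, pic1, pic2, pic3;  // one texture per camera
    flat in int   vPicIndex;  // picture index: passed through, not interpolated
    in      vec2  vTexCoord;  // texture coordinate: interpolated
    in      float vAlpha;     // transparency: interpolated
    out     vec4  fragColor;

    vec3 pick(int i, vec2 uv) {  // S6.2.1: sample the chosen picture
        switch (i) {
            case 0:  return texture(pic0, uv).rgb;
            case 1:  return texture(pic1, uv).rgb;
            case 2:  return texture(pic2, uv).rgb;
            default: return texture(pic3, uv).rgb;
        }
    }

    void main() {                // S6.2.2: weight by transparency
        fragColor = vec4(pick(vPicIndex, vTexCoord) * vAlpha, 1.0);
    }
)";
```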
the final video picture splicing result of the remote driving of the unmanned vehicle is shown in fig. 4, wherein the splicing results of the video pictures of 4 cameras are simultaneously displayed, namely a front-view camera, a rear-view camera, a left-view camera and a surround-view camera; the middle part of fig. 4 is a top-view preview of the unmanned vehicle, so that a remote driving safety worker can more intuitively sense the spatial position of the unmanned vehicle conveniently. It can be seen that the splicing effect of the video pictures of the 4 cameras is accurate, no obvious seam exists, the size proportion of the ground objects is consistent, and the beneficial effect of the invention is verified.
Corresponding to the foregoing embodiments, the present invention further provides an embodiment of a video picture splicing device based on unmanned vehicle remote driving; as shown in fig. 5, the device includes one or more processors for implementing the above remote driving video picture splicing method.
The embodiment of the video picture splicing device based on the remote driving of the unmanned vehicle can be applied to any equipment with data processing capability, such as computers and the like. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software.
Taking software implementation as an example, the device, as a logical device, is formed by the processor of the equipment on which it runs reading the corresponding computer program instructions from non-volatile storage into memory and executing them. In hardware terms, besides the processor, memory, network interface, and non-volatile storage, the equipment on which the device in this embodiment runs may include other hardware according to its actual function, which is not described here again.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the invention also provides a computer readable storage medium, wherein a program is stored on the computer readable storage medium, and when the program is executed by a processor, the video picture splicing method based on the unmanned vehicle remote driving in the embodiment is realized.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing capability device described in any of the foregoing embodiments. The computer readable storage medium may also be an external storage device such as a plug-in hard disk, a Smart Media Card (SMC), an SD card, a Flash memory card (Flash card), etc. provided on the device. Further, the computer readable storage medium may include both internal storage units and external storage devices of any data processing capable device. The computer readable storage medium is used to store the computer program, and other programs and data needed by any of the data processing capable devices, and may also be used to temporarily store data that has been or will be output.
The above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A video picture splicing method based on unmanned vehicle remote driving is characterized by comprising the following steps:
s1, fixedly mounting a camera needing video picture splicing on an unmanned vehicle;
s2, arranging a calibration scene to enable the ground to have grid characteristics, and stopping the unmanned vehicle in the calibration scene;
s3, acquiring a video picture of all the cameras at the same time, wherein each video picture contains grid characteristics of the ground, and the grid characteristics divide the video picture into a plurality of video picture areas; all video picture areas of all video pictures form a video picture area set; storing vertex data including texture coordinates at the vertex of the video picture area;
s4, generating a grid model, wherein each vertex of the grid model stores a group of vertex data, and the vertex data comprises world coordinates of the vertex and index values, texture coordinates and transparency of a plurality of corresponding video pictures;
s5, establishing mapping from the video picture area set to the grid model, and updating the vertex data of the corresponding grid model according to the vertex data in the video picture area;
s6, accessing real-time video streams of all the cameras, and acquiring real-time video pictures of all the cameras at a certain moment; drawing the grid model by using a graphic drawing interface, sampling a real-time video picture according to the vertex data of the grid model with the vertex data updated in the step S5, and mixing to obtain a final video picture splicing result;
the mapping relationship established in S5 satisfies the following condition: each grid cell of the grid model corresponds to a plurality of video picture sub-regions, which form a video picture sub-region set, each sub-region being a subset of some video picture region; all vertices of the grid model are traversed, the corresponding vertices in the video picture sub-region set are found through the mapping relation, and the data of the grid model vertices are updated with the following logic:
S5.1, setting a video picture index value set of the grid model vertex to the index value of the video picture corresponding to each video picture sub-region in the video picture sub-region set;
S5.2, setting a texture coordinate set of the grid model vertices to the texture coordinates of the corresponding vertices in the video picture sub-region set within the video picture region, wherein the texture coordinates are obtained by interpolating the texture coordinates of all the vertices of the corresponding video picture region;
S5.3, setting a transparency set of the grid model vertices according to the distance between the corresponding vertices in the video picture sub-region set and the center point of the video picture, and ensuring that the sum of all transparencies is 1.
2. The unmanned vehicle remote driving-based video picture splicing method according to claim 1, wherein the mounting positions and angles of all the cameras in S1 satisfy the following condition: for any camera, there is at least one other camera whose video picture overlaps its own, and this overlap relation links all cameras into a single connected set that contains every camera.
3. The unmanned vehicle remote driving-based video picture splicing method according to claim 1, wherein the mesh feature of the ground in S2 is a polygonal mesh.
4. The method for stitching video frames based on unmanned vehicle remote driving as claimed in claim 1, wherein the video frame areas in each video frame in S3 do not intersect with each other, and the video frame areas cover the frame area of the video frame to be stitched.
5. The unmanned vehicle remote driving-based video picture stitching method according to claim 1, wherein all vertices of the mesh model in S4 are distributed on a two-dimensional plane, and the mesh in the mesh model and the mesh in the ground mesh feature in the calibration scene are of the same type of mesh.
6. The unmanned vehicle remote driving-based video picture splicing method according to claim 1, wherein the splicing of the real-time video pictures in S6 specifically comprises the following sub-steps:
S6.1, uploading all real-time video pictures to GPU video memory;
S6.2, drawing the grid model with a graphics drawing interface and calculating the color value of each pixel in a shader, the specific calculation being as follows:
S6.2.1, sampling a color value from the corresponding real-time video picture using the index value and texture coordinates of a grid vertex;
S6.2.2, blending the sampled color values with the transparency of the grid vertex to obtain the pixel color value of the spliced picture;
S6.3, obtaining the image buffer drawn in S6.2, which records the video picture splicing results of all the cameras and is refreshed and displayed on the screen.
7. The method for splicing video frames based on unmanned vehicle remote driving according to claim 1, wherein when the positions and postures of all the cameras are not changed, the preprocessing steps described in S1 to S4 are only required to be performed once.
8. The method for stitching video frames according to claim 1, wherein texture coordinates and transparency of the vertices in S6.2 are interpolated during the process of output from the vertex shader to the pixel shader, and index values of the vertices are not interpolated during the process of output from the vertex shader to the pixel shader.
9. A video picture splicing device based on unmanned vehicle remote driving is characterized by comprising one or more processors and being used for realizing the video picture splicing method based on unmanned vehicle remote driving in any one of claims 1 to 8.
CN202210980376.8A 2022-08-16 2022-08-16 Video picture splicing method and device based on unmanned vehicle remote driving Active CN115086575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210980376.8A CN115086575B (en) 2022-08-16 2022-08-16 Video picture splicing method and device based on unmanned vehicle remote driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210980376.8A CN115086575B (en) 2022-08-16 2022-08-16 Video picture splicing method and device based on unmanned vehicle remote driving

Publications (2)

Publication Number Publication Date
CN115086575A (en) 2022-09-20
CN115086575B (en) 2022-11-29

Family

ID=83245118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210980376.8A Active CN115086575B (en) 2022-08-16 2022-08-16 Video picture splicing method and device based on unmanned vehicle remote driving

Country Status (1)

Country Link
CN (1) CN115086575B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086575B (en) * 2022-08-16 2022-11-29 Zhejiang Lab Video picture splicing method and device based on unmanned vehicle remote driving

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106993152A (en) * 2016-01-21 2017-07-28 杭州海康威视数字技术股份有限公司 Three-dimension monitoring system and its quick deployment method
CN109741456A (en) * 2018-12-17 2019-05-10 深圳市航盛电子股份有限公司 3D based on GPU concurrent operation looks around vehicle assistant drive method and system
CN113276774A (en) * 2021-07-21 2021-08-20 新石器慧通(北京)科技有限公司 Method, device and equipment for processing video picture in unmanned vehicle remote driving process
CN113313813A (en) * 2021-05-12 2021-08-27 武汉极目智能技术有限公司 Vehicle-mounted 3D panoramic all-around viewing system capable of actively early warning
CN115086575A (en) * 2022-08-16 2022-09-20 之江实验室 Video picture splicing method and device based on unmanned vehicle remote driving

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140098185A1 (en) * 2012-10-09 2014-04-10 Shahram Davari Interactive user selected video/audio views by real time stitching and selective delivery of multiple video/audio sources
CN109658365B (en) * 2017-10-11 2022-12-06 阿里巴巴(深圳)技术有限公司 Image processing method, device, system and storage medium
US11146783B2 (en) * 2019-08-16 2021-10-12 GM Cruise Holdings, LLC Partial calibration target detection for improved vehicle sensor calibration
CN111476716B (en) * 2020-04-03 2023-09-26 深圳力维智联技术有限公司 Real-time video stitching method and device
CN113870161A (en) * 2021-09-13 2021-12-31 杭州鸿泉物联网技术股份有限公司 Vehicle-mounted 3D (three-dimensional) panoramic stitching method and device based on artificial intelligence

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106993152A (en) * 2016-01-21 2017-07-28 杭州海康威视数字技术股份有限公司 Three-dimension monitoring system and its quick deployment method
CN109741456A (en) * 2018-12-17 2019-05-10 深圳市航盛电子股份有限公司 3D based on GPU concurrent operation looks around vehicle assistant drive method and system
CN113313813A (en) * 2021-05-12 2021-08-27 武汉极目智能技术有限公司 Vehicle-mounted 3D panoramic all-around viewing system capable of actively early warning
CN113276774A (en) * 2021-07-21 2021-08-20 新石器慧通(北京)科技有限公司 Method, device and equipment for processing video picture in unmanned vehicle remote driving process
CN115086575A (en) * 2022-08-16 2022-09-20 之江实验室 Video picture splicing method and device based on unmanned vehicle remote driving

Also Published As

Publication number Publication date
CN115086575A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
US7554539B2 (en) System for viewing a collection of oblique imagery in a three or four dimensional virtual scene
CN109771951B (en) Game map generation method, device, storage medium and electronic equipment
CN107993276B (en) Panoramic image generation method and device
US20170038942A1 (en) Playback initialization tool for panoramic videos
US9704282B1 (en) Texture blending between view-dependent texture and base texture in a geographic information system
CN111325824A (en) Image data display method and device, electronic equipment and storage medium
JPH11259672A (en) Three-dimensional virtual space display device
WO2017017790A1 (en) Image generation device, image generation system, and image generation method
CN115086575B (en) Video picture splicing method and device based on unmanned vehicle remote driving
WO2023207963A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN115439616B (en) Heterogeneous object characterization method based on multi-object image alpha superposition
US5929861A (en) Walk-through rendering system
CN110807413B (en) Target display method and related device
GB2256568A (en) Image generation system for 3-d simulations
CN114926612A (en) Aerial panoramic image processing and immersive display system
WO2023004559A1 (en) Editable free-viewpoint video using a layered neural representation
WO1996013018A1 (en) Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically
CN112215739A (en) Orthographic projection image file processing method and device for AutoCAD and storage medium
US5936626A (en) Computer graphics silhouette load management
CN116597076A (en) Three-dimensional visual storehouse display method and system
RU2467395C1 (en) Method of forming images of three-dimensional objects for real-time systems
CN111476716B (en) Real-time video stitching method and device
CN114332356A (en) Virtual and real picture combining method and device
US20170228926A1 (en) Determining Two-Dimensional Images Using Three-Dimensional Models
JP3149389B2 (en) Method and apparatus for overlaying a bitmap image on an environment map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant