CN115086575A - Video picture splicing method and device based on unmanned vehicle remote driving

Video picture splicing method and device based on unmanned vehicle remote driving

Info

Publication number
CN115086575A
CN115086575A (application CN202210980376.8A)
Authority
CN
China
Prior art keywords
video picture
video
grid
unmanned vehicle
mesh
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210980376.8A
Other languages
Chinese (zh)
Other versions
CN115086575B (en)
Inventor
华炜
高健健
朱建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202210980376.8A
Publication of CN115086575A
Application granted
Publication of CN115086575B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The invention discloses a video picture splicing method and device based on unmanned vehicle remote driving. The method first parks an unmanned vehicle on ground that has grid features and acquires a video picture from every camera at the same moment; it then constructs a grid model according to the ground grid features, establishes a mapping from the grid features in the video pictures to the grid model, and computes the vertex data of the grid model vertices from this mapping relation; finally, in the real-time operation stage, the GPU draws the grid model, samples the real-time video pictures, and blends them to obtain the final spliced picture. Because the method needs only a single piece of ground with grid features as its calibration scene and generates the grid model and the mapping relation once, splicing stability and predictability are improved; because the drawing and blending in the real-time stage are performed by the GPU, splicing is efficient and extensible. The method is particularly suitable for splicing video pictures of the ground: it fuses the pictures of multiple cameras and enlarges the range of viewing angles.

Description

Video picture splicing method and device based on unmanned vehicle remote driving
Technical Field
The invention relates to the field of unmanned vehicle remote driving, in particular to a video picture splicing method and device based on unmanned vehicle remote driving.
Background
In recent years, with the maturing of various sensors, the growth of computing power and the spread of 5G networks, unmanned vehicle automatic driving technology has been developing at an unprecedented speed. However, a huge gap still exists between existing automatic driving and fully automatic driving based on artificial intelligence technology; according to estimates of experts in the industry, at least 10 years are needed to close this gap.
Remote driving technology lets a remote driver control the unmanned vehicle in real time by streaming back the vehicle's current environment information, providing a more flexible, intelligent and safe service while automatic driving technology remains immature. Remote driving usually transmits the data of the sensors carried on the unmanned vehicle back to a remote cockpit in real time for the remote driver to observe, analyze and make driving decisions; the most important returned data is the real-time video picture of the cameras. A traditional remote driving method displays a group of video pictures, each corresponding to one camera, forming a video wall. Although adjacent camera pictures have overlapping areas, each picture is displayed independently, which makes it hard for the remote driver to understand the relationships between the cameras and to build a spatial sense of the scene. In addition, independent display of camera pictures creates information redundancy and disperses the remote driver's attention.
Disclosure of Invention
In order to solve the defects of the prior art and realize the video picture splicing of the remote driving of the unmanned vehicle, the invention adopts the following technical scheme:
a video picture splicing method based on unmanned vehicle remote driving comprises the following steps:
s1, fixedly mounting a camera needing video picture splicing on the unmanned vehicle;
s2, arranging a calibration scene to enable the ground to have grid characteristics, and stopping the unmanned vehicle in the calibration scene;
s3, acquiring a video picture of all cameras at the same time, wherein each video picture contains grid characteristics of the ground, and the grid characteristics divide the video picture into a plurality of video picture areas; all video picture areas of all video pictures form a video picture area set; storing vertex data including texture coordinates at the vertex of the video picture area;
s4, generating a mesh model, wherein each vertex of the mesh model stores a group of vertex data, and the vertex data comprises world coordinates of the vertex and index values, texture coordinates and transparency of a plurality of corresponding video pictures;
s5, establishing the mapping from the video picture area set to the grid model, and updating the vertex data of the corresponding grid model according to the vertex data in the video picture area;
s6, accessing the real-time video streams of all the cameras, and acquiring the real-time video pictures of all the cameras at a certain moment; and drawing the grid model by using a graphic drawing interface, sampling a real-time video picture according to the vertex data of the grid model after updating the vertex data according to S5, and mixing to obtain a final video picture splicing result.
Further, the mounting positions and angles of all the cameras in S1 satisfy the following condition: for any camera, the video picture of at least one other camera intersects its own video picture, and the cameras linked by such intersecting video pictures form a single set that includes all the cameras.
Further, the mesh feature of the ground in S2 is a polygonal mesh.
Further, in S3, the video picture areas in each video picture do not intersect with each other, and the video picture areas cover the picture areas of the video pictures to be spliced.
Further, all vertices of the mesh model in S4 are distributed on a two-dimensional plane, and the mesh in the mesh model is the same type of mesh as the mesh in the ground mesh feature in the calibration scene.
Further, the mapping relationship established in S5 satisfies the following condition: for each grid in the grid model, a plurality of video picture sub-areas are corresponding to the grid, and the video picture sub-areas form a video picture sub-area set, wherein the video picture sub-areas are subsets of the video picture areas; traversing all vertexes of the mesh model, finding a plurality of vertexes in the corresponding video picture subregion set according to the mapping relation, and updating data of the vertexes of the mesh model, wherein the specific updating logic is as follows:
s5.1, setting a video picture index value set of the grid model vertex into the index value of a video picture corresponding to each video picture subregion in the video picture subregion set;
s5.2, setting a texture coordinate set of the grid model vertexes as texture coordinates of a plurality of corresponding vertexes in the video picture sub-area set in the video picture area, wherein the texture coordinates are obtained by interpolating texture coordinates of all vertexes of the corresponding video picture area;
and S5.3, setting a transparency set of the grid model vertexes according to the distance between the corresponding vertexes in the video picture subregion set and the video picture central point, and ensuring that the sum of all transparencies is 1.
Further, the splicing of the real-time video frames in S6 specifically includes the following sub-steps:
s6.1, uploading all real-time video pictures to a GPU video memory;
s6.2, drawing the grid model by using a graphic drawing interface, and calculating the color value of each pixel in a shader, wherein the specific calculation method comprises the following steps:
s6.2.1, sampling color values corresponding to the real-time video picture using index values and texture coordinates of the mesh vertices;
s6.2.2, mixing the transparency of the grid vertex and the color value obtained by sampling to obtain the pixel color value of the spliced picture;
and S6.3, obtaining an image cache drawn in the S6.2, recording video picture splicing results of all the cameras by the image cache, and refreshing and displaying the video picture splicing results on a screen.
Further, when the positions and attitudes of all the cameras are not changed, the preprocessing steps described in S1 to S4 need only be performed once.
Further, in S6.2, the texture coordinates and the transparency of the vertex need to be interpolated in the process of being output from the vertex shader to the pixel shader, and the index value of the vertex does not need to be interpolated in the process of being output from the vertex shader to the pixel shader.
A video picture splicing device based on unmanned vehicle remote driving comprises one or more processors and is used for achieving the video picture splicing method based on unmanned vehicle remote driving.
The invention has the following beneficial effects:
the video picture splicing method based on the unmanned vehicle remote driving firstly parks the unmanned vehicle on the ground with grid characteristics and obtains video pictures of all cameras at the same time; constructing a grid model with a similar structure according to the ground grid characteristics, establishing mapping from the grid characteristics in a video picture to the grid model, and calculating a plurality of vertex data of the vertexes of the grid model according to the mapping relation; and finally, in a real-time operation stage, drawing a grid model by using the GPU, sampling a real-time video picture, and mixing to obtain a final spliced picture. The method is simple to operate, and only one ground with grid characteristics is needed as a calibration scene; the grid model and the mapping relation are generated at one time, and if the pose of the camera is not changed, the change is not needed, so that the splicing stability and the predictability are improved; because the GPU is used for drawing and mixing in the real-time operation stage, the splicing efficiency is high, and the expansibility is good. The method is particularly suitable for splicing ground video pictures, can fuse the pictures of a plurality of cameras, enlarges the visual angle range and enhances the spatial sense and perception capability of a remote driving safety driver to the driving environment.
Drawings
Fig. 1 is a flowchart illustrating a video frame splicing method based on unmanned vehicle remote driving in an exemplary embodiment.
FIG. 2 is a diagram of a ground mesh feature and a corresponding mesh model in an exemplary embodiment, where the center portion is the ground mesh feature and the mesh surrounding the ground mesh feature is the mesh model.
FIG. 3 is a flow diagram of video frame splicing in an exemplary embodiment; the upper left image is an original video image, the lower left image is a video image superposed with the grid model, the middle image is a video image obtained by drawing the grid model, and the right image is a splicing mixed effect image of the multi-camera video images.
Fig. 4 is a diagram illustrating the result of video frame splicing in an exemplary embodiment.
Fig. 5 is a schematic diagram of a video frame splicing apparatus based on unmanned vehicle remote driving in an exemplary embodiment.
Detailed Description
For purposes of making the objects, technical solutions and advantages of the invention clearer, the invention is described in detail below with reference to the embodiments and the accompanying drawings. It should be understood that the specific embodiments described herein only illustrate the invention and are not intended to limit it. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort fall within the scope of protection of the invention.
In an embodiment, as shown in fig. 1, a video picture stitching method based on unmanned vehicle remote driving is provided, in which an unmanned vehicle is parked on a ground with grid features, and video pictures of all cameras at the same time are acquired; constructing a grid model with a similar structure according to the ground grid characteristics, establishing mapping from the grid characteristics in a video picture to the grid model, and calculating a plurality of vertex data of the vertexes of the grid model according to the mapping relation; and finally, in a real-time operation stage, drawing a grid model by using the GPU, sampling a real-time video picture, and mixing to obtain a final spliced picture.
The method specifically comprises the following steps:
step 1, fixedly mounting a camera needing video picture splicing on an unmanned vehicle;
the unmanned vehicle model of this embodiment is easily become S2 public connection car, has installed 4 industrial grade on-vehicle cameras above, and the angle of view of camera is 170 degrees, is fisheye camera, covers four regions of look ahead, back vision, left side and right side respectively, and decurrent inclination is 15 degrees. The camera is installed by using a standard installation support, so that the camera is prevented from obviously shaking and changing the position and the posture in the operation process of the unmanned vehicle. The installation positions and angles of all cameras meet the following conditions: for any camera, at least one video picture shot by other cameras has intersection with the camera, and the cameras with the intersection of all the video pictures only form a set which comprises all the cameras;
step 2, arranging a calibration scene to enable the ground to have grid characteristics, and stopping the unmanned vehicle in the calibration scene;
in this embodiment, the calibration scene is a square in a certain park, the ground of the square is made of tiles with uniform size, the size of the tiles is 60cm by 60cm, and the grid characteristics of the ground are formed, wherein the grid characteristics are square grids; the unmanned vehicle is driven by a security officer and parked in a specified area of a square, the unmanned vehicle needs to be cleaned within 20m of the area, and sundries cannot appear on the ground.
Step 3, acquiring a video picture of all the cameras at the same time, wherein each video picture contains grid characteristics of the ground, and the grid characteristics divide the video picture into a plurality of video picture areas; all video picture areas of all video pictures form a video picture area set; storing vertex data including texture coordinates at the vertex of the video picture area;
in this embodiment, a remote driving safety operator acquires one video image of all cameras at a certain time, and each video image is guaranteed to contain the ground tile grid characteristics. Sundries above the floor tiles need to be cleaned in time, cleanness and tidiness are guaranteed, and boundaries of the floor tiles are clear and recognizable in the picture of the camera. As the tiles are not overlapped with each other, the requirement that a plurality of video picture areas of each video picture have no intersection with each other is met, and the video picture areas cover the picture range needing to be spliced in the video pictures.
Step 4, generating a grid model similar to the ground grid characteristics, wherein each vertex of the grid model stores a group of vertex data, and the vertex data comprises world coordinates of the vertex and index values, texture coordinates and transparency of a plurality of corresponding video pictures;
in this embodiment, as shown in fig. 2, a tool is generated by a video screen mesh model, and the tool can load and display a plurality of video screens. The user may initially set the number of rows and columns of the mesh, and create an initial standard square mesh, where all vertices of the mesh model are distributed on a two-dimensional plane, and preferably, the feature distribution of the mesh in the mesh model has similarity to the mesh features in the calibration scene, that is, the mesh in the mesh model and the mesh in the ground mesh features in the calibration scene are similar meshes of the same type, for example, both are quadrilateral.
Step 5, establishing the mapping from the video picture area set to the grid model, and updating the corresponding vertex data of the grid model according to the vertex data in the video picture area;
in this embodiment, the grid points of the square grid are dragged to the grid points of the grid features of the corresponding video picture through the mouse and the keyboard, so as to ensure that the grid model completely covers the video picture part to be spliced. The user manually performs a matching process from the grid model to the grid characteristics of the video picture region set, namely a process for establishing a mapping relation, wherein the mapping relation is recorded in a video picture grid model generation tool, and the mapping relation between the grid model and the video picture region set needs to meet the following conditions: for each grid in the grid model, a plurality of video picture sub-areas are corresponding to the grid, and the video picture sub-areas form a video picture sub-area set, wherein the video picture sub-areas are subsets of the video picture areas; traversing all vertexes of the grid model, finding a plurality of vertexes in the corresponding video picture subregion set according to the mapping relation, and updating data of the vertexes of the grid model, wherein the specific updating logic is as follows:
(1) setting a video picture index value set of the grid model vertexes as the index value of a video picture corresponding to each video picture subregion in a video picture subregion set; in this embodiment, the mesh of one mesh model only corresponds to one video picture sub-region, and therefore the video picture index value set of the mesh model vertex only includes one index value;
(2) setting a texture coordinate set of the grid model vertexes as texture coordinates of a plurality of corresponding vertexes in the video picture sub-area set in the video picture area, wherein the texture coordinates can be obtained by interpolating texture coordinates of all vertexes of the corresponding video picture area; in the present embodiment, the interpolation method of texture coordinates uses bilinear interpolation; in this embodiment, the mesh of one mesh model only corresponds to one video picture sub-region, and therefore the texture coordinate set of the mesh model vertex only includes one texture coordinate;
(3) setting a transparency set of the mesh model vertexes according to the distance between a plurality of corresponding vertexes in the video picture subregion set and the video picture central point, and ensuring that the sum of all transparencies is 1; in this embodiment, the mesh of one mesh model only corresponds to one video picture sub-region, so the transparency set of the mesh model vertex only contains one transparency, and the value is 1;
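The interpolation in (2) and the transparency rule in (3) admit a simple numeric sketch, shown below. The falloff function is an assumption, since the text fixes only that the weights depend on the distance to the picture centre and sum to 1:

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Bilinear interpolation of texture coordinates inside one quadrilateral video
// picture area, from its four corner texture coordinates c[0..3] (given in
// order top-left, top-right, bottom-right, bottom-left) and the vertex's
// normalized position (s, t) within the area.
Vec2 bilinear(const Vec2 c[4], float s, float t) {
    Vec2 top    = { c[0].x + (c[1].x - c[0].x) * s, c[0].y + (c[1].y - c[0].y) * s };
    Vec2 bottom = { c[3].x + (c[2].x - c[3].x) * s, c[3].y + (c[2].y - c[3].y) * s };
    return { top.x + (bottom.x - top.x) * t, top.y + (bottom.y - top.y) * t };
}

// Transparency falls off with distance from the picture centre (0.5, 0.5);
// the exact falloff shape is illustrative.
float transparencyWeight(Vec2 uv) {
    float dx = uv.x - 0.5f, dy = uv.y - 0.5f;
    return 1.0f / (1.0f + std::sqrt(dx * dx + dy * dy));
}

// When a vertex maps into several pictures, normalize its weights to sum to 1;
// with a single corresponding picture, as in this embodiment, the weight is 1.
void normalizeWeights(std::vector<float>& w) {
    float sum = 0.0f;
    for (float v : w) sum += v;
    if (sum > 0.0f) for (float& v : w) v /= sum;
}
```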
step 6, accessing real-time video streams of all the cameras, and acquiring real-time video pictures of all the cameras at a certain moment; and drawing the grid model by using a graphic drawing interface, sampling a real-time video picture according to vertex data of the grid model, and mixing to obtain a final video picture splicing result, which is specifically shown in fig. 3.
In this embodiment, during the real-time operation stage, the real-time video streams of all the cameras are transmitted back to the remote cockpit through the 5G network and a video data transmission library; the remote driving simulator receives the original video picture data and performs time synchronization to obtain the real-time video pictures of all cameras at the same moment. The specific real-time video picture splicing method is as follows (a combined sketch of S6.1 and S6.2 is given after these sub-steps):
S6.1, loading all real-time video pictures into memory, and calling the glGenTextures and glTexImage2D functions to upload the video data from memory into GPU video memory;
s6.2, drawing the grid model generated in the step 4 by using an OpenGL graphic drawing interface, and calculating a color value of each pixel in a pixel shader, wherein all uploaded real-time video picture data are uniform texture variables; the specific calculation method is as follows:
s6.2.1, using the index value and texture coordinate of the mesh vertex, calling texture function to sample the color value of the corresponding real-time video picture;
s6.2.2, mixing the transparency of the grid vertex and the color value obtained by sampling to obtain the pixel color value of the spliced picture;
s6.3, obtaining an image cache buffer drawn in the S6.2, wherein the image cache records video picture splicing results of all cameras and refreshes and displays the video picture splicing results on a screen;
the final video picture splicing result of the remote driving of the unmanned vehicle is shown in fig. 4, wherein the splicing results of the video pictures of 4 cameras are simultaneously displayed, namely a front-view camera, a rear-view camera, a left-view camera and a surround-view camera; the middle part of fig. 4 is a top view preview of the unmanned vehicle, which is convenient for a remote driving security officer to more intuitively sense the spatial position of the unmanned vehicle. It can be seen that the splicing effect of the video pictures of the 4 cameras is accurate, no obvious seam exists, the size proportion of the ground objects is consistent, and the beneficial effect of the invention is verified.
Corresponding to the foregoing embodiments, the present invention further provides an embodiment of a video picture stitching apparatus based on unmanned vehicle remote driving, as shown in fig. 5, the apparatus includes one or more processors for implementing the above-mentioned remote driving video picture stitching method.
The embodiment of the video picture splicing device based on unmanned vehicle remote driving can be applied to any device with data processing capability, such as a computer. The device embodiment may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, the device, as a logical device, is formed by the processor of the device on which it runs reading the corresponding computer program instructions from non-volatile memory into memory for execution. In terms of hardware, besides the processor, memory, network interface and non-volatile memory, the device with data processing capability on which the apparatus is located may also include other hardware according to its actual function, which is not described here again.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the invention also provides a computer readable storage medium, wherein a program is stored on the computer readable storage medium, and when the program is executed by a processor, the video picture splicing method based on the unmanned vehicle remote driving in the embodiment is realized.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any device with data processing capability described in any of the foregoing embodiments. The computer readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), an SD card or a Flash memory card (Flash card) provided on the device. Further, the computer readable storage medium may include both the internal storage unit and the external storage device of the device. The computer readable storage medium is used for storing the computer program and other programs and data needed by the device with data processing capability, and may also be used to temporarily store data that has been or will be output.
The above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the scope of the embodiments of the present invention in nature.

Claims (10)

1. A video picture splicing method based on unmanned vehicle remote driving is characterized by comprising the following steps:
s1, fixedly mounting a camera needing video picture splicing on the unmanned vehicle;
s2, arranging a calibration scene to enable the ground to have grid characteristics, and stopping the unmanned vehicle in the calibration scene;
s3, acquiring a video picture of all cameras at the same time, wherein each video picture contains grid characteristics of the ground, and the grid characteristics divide the video picture into a plurality of video picture areas; all video picture areas of all video pictures form a video picture area set; storing vertex data including texture coordinates at the vertex of the video picture area;
s4, generating a mesh model, wherein each vertex of the mesh model stores a group of vertex data, and the vertex data comprises world coordinates of the vertex and index values, texture coordinates and transparency of a plurality of corresponding video pictures;
s5, establishing the mapping from the video picture area set to the grid model, and updating the vertex data of the corresponding grid model according to the vertex data in the video picture area;
s6, accessing the real-time video streams of all the cameras, and acquiring the real-time video pictures of all the cameras at a certain moment; and drawing the grid model by using a graphic drawing interface, sampling a real-time video picture according to the vertex data of the grid model after updating the vertex data in S5, and mixing to obtain a final video picture splicing result.
2. The unmanned vehicle remote driving-based video picture splicing method according to claim 1, wherein the mounting positions and angles of all the cameras in S1 satisfy the following condition: for any camera, the video picture of at least one other camera intersects its own video picture, and the cameras linked by such intersecting video pictures form a single set that includes all the cameras.
3. The unmanned vehicle remote driving-based video picture splicing method as claimed in claim 1, wherein the mesh feature of the ground in S2 is a polygonal mesh.
4. The method of claim 1, wherein the video frame areas in each video frame in S3 do not intersect with each other, and the video frame areas cover the frame area of the video frame to be spliced.
5. The unmanned vehicle remote driving-based video picture stitching method as claimed in claim 1, wherein all vertices of the mesh model in S4 are distributed on a two-dimensional plane, and the mesh in the mesh model is the same type of mesh as the mesh in the ground mesh feature in the calibration scene.
6. The unmanned vehicle remote driving-based video picture splicing method according to claim 1, wherein the mapping relationship established in S5 satisfies the following condition: for each grid in the grid model, a plurality of video picture sub-areas are corresponding to the grid, and the video picture sub-areas form a video picture sub-area set, wherein the video picture sub-areas are subsets of the video picture areas; traversing all vertexes of the mesh model, finding a plurality of vertexes in the corresponding video picture subregion set according to the mapping relation, and updating data of the vertexes of the mesh model, wherein the specific updating logic is as follows:
s5.1, setting a video picture index value set of the grid model vertex into the index value of a video picture corresponding to each video picture subregion in the video picture subregion set;
s5.2, setting a texture coordinate set of the grid model vertexes as texture coordinates of a plurality of corresponding vertexes in the video picture sub-area set in the video picture area, wherein the texture coordinates are obtained by interpolating texture coordinates of all vertexes of the corresponding video picture area;
and S5.3, setting a transparency set of the grid model vertexes according to the distance between the corresponding vertexes in the video picture subregion set and the video picture central point, and ensuring that the sum of all transparencies is 1.
7. The method for splicing video frames based on unmanned vehicle remote driving according to claim 1, wherein the splicing of the real-time video frames in S6 specifically comprises the following sub-steps:
s6.1, uploading all real-time video pictures to a GPU video memory;
s6.2, drawing the grid model by using a graphic drawing interface, and calculating the color value of each pixel in a shader, wherein the specific calculation method comprises the following steps:
s6.2.1, sampling color values corresponding to the real-time video picture using index values and texture coordinates of the mesh vertices;
s6.2.2, mixing the transparency of the grid vertex and the color value obtained by sampling to obtain the pixel color value of the spliced picture;
and S6.3, obtaining an image cache drawn in the S6.2, recording video picture splicing results of all the cameras by the image cache, and refreshing and displaying the video picture splicing results on a screen.
8. The unmanned vehicle remote driving-based video picture splicing method according to claim 1, wherein when the positions and attitudes of all the cameras are unchanged, the preprocessing steps described in S1 to S4 need to be performed only once.
9. The unmanned vehicle remote driving-based video picture splicing method according to claim 1, wherein in S6.2 the texture coordinates and transparency of the vertices are interpolated in the process of being output from the vertex shader to the pixel shader, while the index values of the vertices are not interpolated in that process.
10. The video picture splicing device based on the remote driving of the unmanned vehicle is characterized by comprising one or more processors and being used for realizing the video picture splicing method based on the remote driving of the unmanned vehicle as claimed in any one of claims 1 to 9.
CN202210980376.8A 2022-08-16 2022-08-16 Video picture splicing method and device based on unmanned vehicle remote driving Active CN115086575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210980376.8A CN115086575B (en) 2022-08-16 2022-08-16 Video picture splicing method and device based on unmanned vehicle remote driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210980376.8A CN115086575B (en) 2022-08-16 2022-08-16 Video picture splicing method and device based on unmanned vehicle remote driving

Publications (2)

Publication Number Publication Date
CN115086575A 2022-09-20
CN115086575B 2022-11-29

Family

ID=83245118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210980376.8A Active CN115086575B (en) 2022-08-16 2022-08-16 Video picture splicing method and device based on unmanned vehicle remote driving

Country Status (1)

Country Link
CN (1) CN115086575B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086575B (en) * 2022-08-16 2022-11-29 之江实验室 Video picture splicing method and device based on unmanned vehicle remote driving

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140098185A1 (en) * 2012-10-09 2014-04-10 Shahram Davari Interactive user selected video/audio views by real time stitching and selective delivery of multiple video/audio sources
CN106993152A (en) * 2016-01-21 2017-07-28 杭州海康威视数字技术股份有限公司 Three-dimension monitoring system and its quick deployment method
US20190108646A1 (en) * 2017-10-11 2019-04-11 Alibaba Group Holding Limited Image processing method, apparatus, and storage medium
CN109741456A (en) * 2018-12-17 2019-05-10 深圳市航盛电子股份有限公司 3D based on GPU concurrent operation looks around vehicle assistant drive method and system
US20210051317A1 (en) * 2019-08-16 2021-02-18 Gm Cruise Holdings Llc Partial calibration target detection for improved vehicle sensor calibration
CN111476716A (en) * 2020-04-03 2020-07-31 深圳力维智联技术有限公司 Real-time video splicing method and device
CN113313813A (en) * 2021-05-12 2021-08-27 武汉极目智能技术有限公司 Vehicle-mounted 3D panoramic all-around viewing system capable of actively early warning
CN113276774A (en) * 2021-07-21 2021-08-20 新石器慧通(北京)科技有限公司 Method, device and equipment for processing video picture in unmanned vehicle remote driving process
CN113870161A (en) * 2021-09-13 2021-12-31 杭州鸿泉物联网技术股份有限公司 Vehicle-mounted 3D (three-dimensional) panoramic stitching method and device based on artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
何川 (He Chuan) et al., "具有直线结构保护的网格化图像拼接" [Grid-based image stitching with straight-line structure preservation], 《中国图象图形学报》 (Journal of Image and Graphics) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086575B (en) * 2022-08-16 2022-11-29 之江实验室 Video picture splicing method and device based on unmanned vehicle remote driving

Also Published As

Publication number Publication date
CN115086575B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
CN109771951B (en) Game map generation method, device, storage medium and electronic equipment
US7554539B2 (en) System for viewing a collection of oblique imagery in a three or four dimensional virtual scene
US5694533A (en) 3-Dimensional model composed against textured midground image and perspective enhancing hemispherically mapped backdrop image for visual realism
CN107993276B (en) Panoramic image generation method and device
US20170038942A1 (en) Playback initialization tool for panoramic videos
CN106296783A (en) A kind of combination space overall situation 3D view and the space representation method of panoramic pictures
CN111325824A (en) Image data display method and device, electronic equipment and storage medium
US9165397B2 (en) Texture blending between view-dependent texture and base texture in a geographic information system
JP6310149B2 (en) Image generation apparatus, image generation system, and image generation method
AU2019226134B2 (en) Environment map hole-filling
JPH11259672A (en) Three-dimensional virtual space display device
US11276150B2 (en) Environment map generation and hole filling
JPH09319891A (en) Image processor and its processing method
CN115086575B (en) Video picture splicing method and device based on unmanned vehicle remote driving
EP3435670A1 (en) Apparatus and method for generating a tiled three-dimensional image representation of a scene
WO2023207963A1 (en) Image processing method and apparatus, electronic device, and storage medium
US5929861A (en) Walk-through rendering system
CN115439616B (en) Heterogeneous object characterization method based on multi-object image alpha superposition
CN110807413B (en) Target display method and related device
GB2256568A (en) Image generation system for 3-d simulations
CN116485984A (en) Global illumination simulation method, device, equipment and medium for panoramic image vehicle model
CN114926612A (en) Aerial panoramic image processing and immersive display system
Burkert et al. A photorealistic predictive display
RU2606875C2 (en) Method and system for displaying scaled scenes in real time
US5936626A (en) Computer graphics silhouette load management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant