WO2021179605A1 - GPU-based camera video projection method, apparatus, device, and storage medium - Google Patents

GPU-based camera video projection method, apparatus, device, and storage medium

Info

Publication number
WO2021179605A1
WO2021179605A1 (PCT/CN2020/121661)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
video frame
camera
mapping relationship
video
Prior art date
Application number
PCT/CN2020/121661
Other languages
English (en)
French (fr)
Inventor
高星
程远初
徐建明
陈奇毅
石立阳
Original Assignee
佳都新太科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 佳都新太科技股份有限公司
Publication of WO2021179605A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 Video signal processing therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Definitions

  • the embodiments of the present application relate to the field of image processing, and in particular, to a GPU-based camera video projection method, device, equipment, and storage medium.
  • the current video projection schemes are all based on the pinhole camera model, that is, the camera is assumed to conform to an ideal, distortion-free pinhole imaging model.
  • the camera's field of view is set, and its position, posture, and other exterior orientation parameters are adjusted so that the camera's relative position and posture in the three-dimensional digital space match its position and posture in the physical world, allowing the projected video picture to fit the three-dimensional model precisely.
  • in practice, however, camera images are often distorted, especially with fisheye and wide-angle cameras.
  • for example, a straight road in the video picture tends to appear as a curve.
  • the picture therefore needs to be corrected before projection; video decoding, distortion correction, and projection each have to process every pixel of every frame.
  • after decoding, the picture is returned to main memory, where distortion correction and projection are then performed.
  • as a result, the data is copied multiple times between video memory and main memory, which hurts efficiency.
  • the embodiments of the present application provide a GPU (Graphics Processing Unit)-based camera video projection method, apparatus, device, and storage medium, so as to reduce data copying between video memory and main memory and improve projection efficiency.
  • embodiments of the present application provide a GPU-based camera video projection method, including:
  • the corrected video frame is sent to the rendering pipeline by way of video memory copy, and the rendering pipeline performs video projection on the corrected video frame.
  • the determining the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters of the camera, and storing the pixel-by-pixel mapping relationship in the video memory includes:
  • An association relationship between the pixel-by-pixel mapping relationship and the device ID is established, and the pixel-by-pixel mapping relationship is stored in the video memory.
  • before performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, the method further includes:
  • the device ID of the camera is acquired, and based on the association relationship between the pixel-by-pixel mapping relationship and the device ID, the pixel-by-pixel mapping relationship used for distortion correction of the original video frame is determined from the video memory.
  • the performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain a corrected video frame includes:
  • each pixel in the original video frame is converted into a pixel in the corrected video frame in the GPU;
  • the pixel data of each pixel in the corrected video frame is determined from the original video frame according to the correspondence between the pixels.
  • the method further includes:
  • the projection preprocessing includes one or a combination of brightness adjustment, transparency adjustment, and edge cropping.
  • the method further includes:
  • the distortion parameter of the camera is determined based on the checkerboard calibration method, and the distortion parameter is stored in the camera database of the corresponding camera, and the camera database is set in the memory.
  • the method further includes:
  • the distortion parameters in the camera database are monitored, and the pixel-by-pixel mapping relationship is updated in response to changes in the distortion parameters.
  • an embodiment of the present application provides a GPU-based camera video projection device, including a mapping relationship determination module, a video decoding module, a distortion correction module, and a video projection module, wherein:
  • the mapping relationship determination module is used to determine the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters of the camera, and save the pixel-by-pixel mapping relationship in the video memory;
  • the video decoding module is used to perform GPU hard decoding on the video stream returned by the camera to obtain the original video frame;
  • the distortion correction module is configured to perform distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain a corrected video frame;
  • the video projection module is configured to send the corrected video frame to the rendering pipeline by way of video memory copy, and the rendering pipeline performs video projection on the corrected video frame.
  • mapping relationship determining module is specifically configured to:
  • An association relationship between the pixel-by-pixel mapping relationship and the device ID is established, and the pixel-by-pixel mapping relationship is stored in the video memory.
  • the device further includes a mapping relationship acquisition module, which is used to, before the distortion correction module performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain a corrected video frame, acquire the device ID of the camera and, based on the association relationship between the pixel-by-pixel mapping relationship and the device ID, determine from the video memory the pixel-by-pixel mapping relationship used for distortion correction of the original video frame.
  • the distortion correction module is specifically used for:
  • each pixel in the original video frame is converted into a pixel in the corrected video frame in the GPU;
  • the pixel data of each pixel in the corrected video frame is determined from the original video frame according to the correspondence between the pixels.
  • the device further includes a preprocessing module configured to perform distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship in the distortion correction module to obtain a corrected video frame Afterwards, projection preprocessing is performed on the corrected video frame according to the needs of video projection, and the projection preprocessing includes one or a combination of brightness adjustment, transparency adjustment, and edge cropping.
  • the device further includes a parameter storage module, which is used to, before the mapping relationship determination module determines the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters of the camera and stores the pixel-by-pixel mapping relationship in the video memory, determine the distortion parameters of the camera based on the checkerboard calibration method and store them in the camera database of the corresponding camera, the camera database being set in memory.
  • the device further includes a monitoring module, which is used to, after the mapping relationship determination module determines the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters of the camera, monitor the distortion parameters in the camera database and update the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
  • an embodiment of the present application provides a computer device, including: a memory and one or more processors;
  • the memory is used to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the GPU-based camera video projection method as described in the first aspect.
  • embodiments of the present application provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the GPU-based camera video projection method as described in the first aspect.
  • the GPU performs hard decoding on the video stream returned by the camera to obtain the original video frame and performs distortion correction on the original video frame in the GPU based on the pixel-by-pixel mapping relationship to obtain the corrected video frame, which can then be transmitted to the rendering pipeline for the video projection process.
  • the video image and the 3D scene can thus be merged, which effectively improves the projection effect, and the video data is processed in the GPU throughout, reducing multiple copies of video data between video memory and main memory.
  • distortion correction and video projection are combined and unified in the GPU for acceleration, which greatly improves the efficiency of video projection and effectively alleviates video stuttering.
  • Fig. 1 is a flowchart of a GPU-based camera video projection method provided by an embodiment of the present application
  • FIG. 2 is a flowchart of another GPU-based camera video projection method provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a GPU-based camera video projection device provided by an embodiment of the present application.
  • Fig. 4 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • Figure 1 shows a flowchart of a GPU-based camera video projection method provided by an embodiment of the application.
  • the GPU-based camera video projection method provided by an embodiment of the application can be executed by a GPU-based camera video projection device.
  • the camera video projection device can be implemented by hardware and/or software and integrated into the computer equipment.
  • the following takes a GPU-based camera video projection device executing the GPU-based camera video projection method as an example.
  • the GPU-based camera video projection method includes:
  • S101 Determine a pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameter of the camera, and save the pixel-by-pixel mapping relationship in the video memory.
  • the original video frame should be understood as the video frame in the video stream returned by the camera
  • the corrected video frame should be understood as the video frame obtained by the distortion correction of the original video frame.
  • the distortion parameter is used to perform distortion correction on the original video frame taken by the camera with distortion in the imaging effect, so as to obtain the corrected video frame.
  • the camera is calibrated, the internal and external parameters and distortion parameters of the camera are acquired, and the internal and external parameters and distortion parameters of the camera are stored in the memory.
  • when video projection is required, the internal and external parameters and distortion parameters of the corresponding camera are retrieved from memory, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame is determined based on the distortion parameters, and the pixel-by-pixel mapping relationship is stored in the video memory.
  • through this pixel-by-pixel mapping relationship, the position of each pixel in the original video frame can be mapped to the corresponding position in the corrected video frame.
  • S102 Perform GPU hard decoding on the video stream returned by the camera to obtain the original video frame.
  • the hardware decoder of the GPU is used to hard-decode the video stream to obtain the original video frame and save it in the video memory.
  • the GPU in this embodiment may be the graphics processing chip in an NVIDIA graphics card; the hardware decoder is an independent video decoding module built into the NVIDIA graphics card, which supports H.264 and H.265 decoding at resolutions up to 8K.
  • S103 Perform distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain a corrected video frame.
  • in the GPU, the coordinate position of each pixel in the original video frame is substituted into the pixel-by-pixel mapping relationship to obtain the coordinate position of that pixel in the corrected video frame.
  • after obtaining, for each pixel in the original video frame, the corresponding position in the corrected video frame, the GPU then reads the color value of each pixel in the original video frame according to the correspondence between the pixels of the two video frames and assigns that color value to the corresponding pixel of the corrected video frame, yielding the distortion-corrected video frame.
  • the corrected video frame can be stored in video memory on the GPU, so that subsequent video projection is based on an undistorted image.
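The per-pixel correction above is essentially a remap: each corrected pixel samples the original frame at the position given by the precomputed mapping. A minimal CPU-side sketch in Python (the patent performs this step in parallel on the GPU; the `map_x`/`map_y` arrays and nearest-neighbour sampling are illustrative assumptions, not from the source):

```python
# CPU-side sketch of the GPU remap step: each output (corrected) pixel reads
# its color from the source location given by the precomputed per-pixel map.
# map_x/map_y hold, for every corrected pixel, the source coordinates in the
# original frame (names are illustrative, not from the patent).

def remap_nearest(frame, map_x, map_y, height, width):
    corrected = [[None] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            # Round the mapped source position to the nearest pixel.
            x = int(round(map_x[v][u]))
            y = int(round(map_y[v][u]))
            if 0 <= x < width and 0 <= y < height:
                corrected[v][u] = frame[y][x]
            else:
                corrected[v][u] = (0, 0, 0)  # outside the source image
    return corrected
```

A production version would use bilinear interpolation and run one thread per output pixel, but the data flow is the same.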
  • S104 Send the corrected video frame to the rendering pipeline by way of video memory copy, and the rendering pipeline performs video projection on the corrected video frame.
  • the corrected video frame is transmitted to the rendering pipeline in the GPU by way of video memory copy for video projection. Since the corrected video frame has eliminated the camera's distortion, it can be treated as an undistorted image.
  • the video projection is performed by adjusting the position and posture of the camera in the three-dimensional scene to realize the fusion of the video picture and the three-dimensional scene.
  • the rendering pipeline maps and merges the corrected video frames into the 3D scene: it adjusts the position and posture of the camera in the 3D scene, determines the mapping relationship between the pixels in the video frame and the 3D points in the 3D scene, maps the color texture of the video frame into the three-dimensional scene according to that mapping relationship, and performs smooth transition processing on the overlapping areas of the color texture mapping, thereby fusing the video frame into the three-dimensional scene and completing the video projection of the corrected video frame. It is understandable that the video projection of the video frame can be performed based on existing video projection methods, which will not be repeated here.
  • in related schemes, the camera used for video projection is generally assumed to be an undistorted pinhole camera.
  • to cover a large area, multiple projected pictures must be stitched with edge blending, which degrades the projection effect and consumes considerable computing resources.
  • if a camera with a larger shooting angle, such as a wide-angle or fisheye lens, is used instead,
  • the distortion correction of video frames is performed on the CPU, data has to be copied repeatedly between the CPU and GPU, and the CPU-side correction is serial, which causes a significant drop in the efficiency of video projection.
  • for these reasons, pinhole cameras are predominantly used for video projection in practice.
  • in this embodiment, the original video frame is obtained by hard decoding the video stream returned by the camera on the GPU, and the original video frame is distortion-corrected in the GPU based on the pixel-by-pixel mapping relationship to obtain the corrected video frame.
  • the corrected video frame can then be transmitted to the rendering pipeline for the video projection process; the fusion of the video picture and the 3D scene is realized by adjusting the position and posture of the camera in the 3D scene, which effectively improves the projection effect, and distortion correction and video projection are combined and unified in the GPU for acceleration.
  • the data is processed in the GPU throughout, which reduces multiple copies of video data between video memory and main memory, greatly improves the efficiency of video projection, and effectively alleviates video freezes.
  • FIG. 2 is a flowchart of another GPU-based camera video projection method provided by an embodiment of the application.
  • this GPU-based camera video projection method is a refinement of the aforementioned GPU-based camera video projection method.
  • the GPU-based camera video projection method includes:
  • S201 Determine the distortion parameter of the camera based on the checkerboard calibration method, and save the distortion parameter in the camera database of the corresponding camera, and the camera database is set in the memory.
  • each camera is correspondingly set with a device ID, and the device ID may be the MAC address, IP address, device number, etc. of the device.
  • a camera database is created for each camera in the memory, and each camera database is associated with the device ID of the corresponding camera, and the corresponding camera database can be accessed according to the device ID and the data therein can be obtained.
  • each camera is calibrated by the checkerboard calibration method to obtain the internal and external parameters and distortion parameters determined by the calibration results, and these parameters are stored in the corresponding camera database. It is understandable that camera calibration is generally carried out when the camera is installed or debugged; when a camera is re-calibrated, the newly acquired internal and external parameters and distortion parameters overwrite and update the original ones.
  • the internal and external parameters include internal parameters and external parameters.
  • the internal parameters are determined by the camera itself and do not change due to changes in the external environment.
  • the internal parameters of the camera include 6 parameters: 1/dx, 1/dy, r, u0, v0, and f.
  • fx in OpenCV is F*Sx, where F is the focal length (i.e. f) and Sx is pixels per millimeter (i.e. 1/dx); dx and dy indicate how many physical units a pixel occupies in the x and y directions, and are the key to converting between the physical image coordinates of the real world and the pixel coordinate system.
  • u0 and v0 represent the number of horizontal and vertical pixels by which the center pixel coordinates of the image differ from the pixel coordinates of the image origin.
  • the external parameters of the camera include the rotation parameter R and the translation parameter T.
  • the rotation parameters of the three axes (x, y, z) are (α, β, γ) respectively; the 3*3 rotation matrices of the individual axes are combined (i.e. multiplied together) to obtain R, which encodes the rotation information of all three axes and is still 3*3;
  • the translation parameters of the three axes of T are (Tx, Ty, Tz).
  • R and T are combined into a 3*4 matrix, which is the key to converting to calibration-board coordinates.
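Putting the parameters above together in standard pinhole-camera notation (this equation is not spelled out in the original text, and the skew term r is ignored for simplicity): the internal parameters assemble into the intrinsic matrix K, R and T form the 3*4 extrinsic matrix, and their product projects a homogeneous world point onto the image.

```latex
K = \begin{pmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{pmatrix},
\qquad
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  = K \,\bigl[\, R \mid T \,\bigr]
    \begin{pmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{pmatrix},
\qquad
T = \begin{pmatrix} T_x \\ T_y \\ T_z \end{pmatrix}
```

Here s is a scale factor, (u, v) are pixel coordinates, and (X_w, Y_w, Z_w) are world (calibration-board) coordinates.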
  • the distortion parameters include radial distortion coefficients k1, k2, k3 and tangential distortion coefficients p1, p2.
  • the radial distortion occurs in the process of converting the camera coordinate system to the image physical coordinate system, and the tangential distortion occurs because the photosensitive element plane is not parallel to the lens.
  • S202 The device IDs of the cameras corresponding to the pictures that the video projection function needs to call are determined; based on the association relationship between the device ID and the camera database, the camera database storing the internal and external parameters and distortion parameters of these cameras is located,
  • and the internal and external parameters and distortion parameters are retrieved from that camera database.
  • S203 Calculate a pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameter.
  • the original video frame should be understood as the video frame in the video stream returned by the camera
  • the corrected video frame should be understood as the video frame obtained by the distortion correction of the original video frame.
  • the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame is calculated based on the internal and external parameters and the distortion parameters.
  • for radial distortion, the corrected position is computed from the distortion parameters as x' = x(1 + k1·r² + k2·r⁴ + k3·r⁶) and y' = y(1 + k1·r² + k2·r⁴ + k3·r⁶), where:
  • (x, y) is the position coordinate of the distorted point (the pixel on the original video frame);
  • (x', y') is the new position after correction (the pixel on the corrected video frame);
  • r is the distance from the point to the imaging center;
  • k1, k2, and k3 are the radial distortion coefficients.
  • for tangential distortion, x' = x + [2·p1·x·y + p2·(r² + 2x²)] and y' = y + [p1·(r² + 2y²) + 2·p2·x·y], where:
  • (x, y) is the position coordinate of the distorted point (the pixel on the original video frame);
  • (x', y') is the new position after correction (the pixel on the corrected video frame);
  • r is the distance from the point to the imaging center;
  • p1 and p2 are the tangential distortion coefficients.
  • based on these formulas, a pixel-by-pixel mapping relationship between the original video frame and the corrected video frame can be established, and the pixel coordinates in the original video frame can be substituted into the pixel-by-pixel mapping relationship to obtain the pixel coordinates on the corresponding corrected video frame.
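The formulas above can be turned into a concrete map-building routine. A sketch in plain Python follows; the patent computes this once per camera and stores the result in video memory. Note the usual trick for building an undistortion map: each corrected pixel is pushed through the forward distortion model to find where to sample the original frame. The function names and the normalization via fx, fy, cx, cy are illustrative assumptions:

```python
# Sketch of computing the pixel-by-pixel mapping from the Brown-Conrady
# distortion model described above. Coordinates are first normalized
# (relative to the principal point and focal length); k1, k2, k3 are the
# radial and p1, p2 the tangential coefficients.

def distort_point(x, y, k1, k2, k3, p1, p2):
    r2 = x * x + y * y  # r squared, r being the distance to the center
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

def build_pixel_map(width, height, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    """For each corrected pixel, the source position in the original frame."""
    map_x = [[0.0] * width for _ in range(height)]
    map_y = [[0.0] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            # pixel -> normalized camera coordinates
            x, y = (u - cx) / fx, (v - cy) / fy
            x_d, y_d = distort_point(x, y, k1, k2, k3, p1, p2)
            # normalized -> pixel coordinates in the distorted original image
            map_x[v][u] = x_d * fx + cx
            map_y[v][u] = y_d * fy + cy
    return map_x, map_y
```

With all coefficients zero the map is the identity, which is a useful sanity check.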
  • S204 Establish an association relationship between the pixel-by-pixel mapping relationship and the device ID, and save the pixel-by-pixel mapping relationship in the video memory.
  • the pixel-by-pixel mapping relationship is associated with the device ID of the corresponding camera, thereby establishing the association relationship between the pixel-by-pixel mapping relationship and the camera.
  • the pixel-by-pixel mapping relationship and the association relationship with the camera are stored in the video memory.
  • S205 Perform GPU hard decoding on the video stream returned by the camera to obtain the original video frame.
  • S206 Obtain the device ID of the camera, and based on the association relationship between the pixel-by-pixel mapping relationship and the device ID, determine a pixel-by-pixel mapping relationship used for distortion correction of the original video frame from the video memory.
  • the device information carried by the video stream returned by the camera includes the device ID.
  • the device ID carried by the video stream is obtained, and according to the association relationship between the pixel-by-pixel mapping relationship and the device ID,
  • the storage location in the video memory of the pixel-by-pixel mapping relationship used for distortion correction of the original video frame is determined, and the pixel-by-pixel mapping relationship is read from that storage location.
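Steps S204 and S206 amount to a keyed store: save the map under the camera's device ID, then look it up when that camera's stream arrives. A toy sketch with a Python dict standing in for the video-memory store (the class and method names are invented for illustration):

```python
class MappingStore:
    """Toy stand-in for the video-memory store of per-camera pixel maps."""

    def __init__(self):
        self._by_device = {}  # device ID (e.g. MAC/IP/serial) -> pixel map

    def save(self, device_id, pixel_map):
        # S204: associate the pixel-by-pixel map with the device ID.
        self._by_device[device_id] = pixel_map

    def lookup(self, device_id):
        # S206: find the map to use for frames carrying this device ID.
        return self._by_device.get(device_id)
```

In the patent the maps live in GPU video memory and the lookup yields their storage location; the dict only models the association.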
  • S207 Each pixel in the original video frame is converted into a pixel in the corrected video frame in the GPU.
  • the coordinate position of each pixel in the original video frame is brought into the pixel-by-pixel mapping relationship, and the coordinate position of the pixel in the corrected video frame is calculated. Traverse all the pixels of the original video frame, and convert each pixel in the original video frame to a pixel in the corrected video frame.
  • S208 Determine the pixel data of each pixel in the corrected video frame from the video frame according to the corresponding relationship of the pixel points.
  • the pixel data includes color value, depth, and so on.
  • the acquisition and assignment of pixel data can also be performed in synchronization with determining the mapping of each pixel point between the original video frame and the corrected video frame.
  • S209 Perform projection preprocessing on the corrected video frame according to the needs of video projection, where the projection preprocessing includes one or a combination of brightness adjustment, transparency adjustment, and edge cropping.
  • different projection preprocessing can be configured according to the position of each camera and its imaging effect. For example, for a camera with a darker image, the brightness or transparency of the corresponding corrected video frame can be increased; edge cropping can be performed centered on the pixel center point of the corrected video frame according to projection requirements.
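As an illustration of the brightness and transparency adjustments (edge cropping omitted), here is a serial Python sketch over RGBA pixels; in the patent this would run as part of the GPU pipeline, and the parameter names are assumptions:

```python
# Sketch of projection preprocessing: per-pixel brightness scaling and
# transparency (alpha) scaling on an RGBA frame, with clipping to 255.

def preprocess(frame, brightness=1.0, alpha=1.0):
    out = []
    for row in frame:
        new_row = []
        for r, g, b, a in row:
            new_row.append((
                min(255, int(r * brightness)),
                min(255, int(g * brightness)),
                min(255, int(b * brightness)),
                min(255, int(a * alpha)),   # transparency adjustment
            ))
        out.append(new_row)
    return out
```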
  • S210 Send the corrected video frame to the rendering pipeline by way of video memory copy, and the rendering pipeline performs video projection on the corrected video frame.
  • S211 Monitor the distortion parameters in the camera database, and update the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
  • the device IDs of these cameras are determined according to the cameras corresponding to the screens that the video projection function needs to call, and the camera database that needs to be monitored is determined based on the association relationship between the device ID and the camera database.
  • the data monitor is called to monitor the internal and external parameters and distortion parameters in the camera database. When these parameters change, the changed internal and external parameters and distortion parameters are obtained from the camera database in memory, the pixel-by-pixel mapping relationship is recalculated based on them, and the original pixel-by-pixel mapping relationship is overwritten and updated.
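The monitoring in S211 can be sketched as change detection plus a rebuild callback. Below, a version counter on a toy in-memory database stands in for the data monitor mentioned in the text (all names are illustrative assumptions):

```python
class CameraDatabase:
    """Toy in-memory camera database holding calibration parameters."""

    def __init__(self, params):
        self.params = params
        self.version = 0

    def update(self, params):
        # Re-calibration overwrites the parameters and bumps the version.
        self.params = params
        self.version += 1

class MapMonitor:
    """Rebuilds the pixel-by-pixel map whenever the parameters change."""

    def __init__(self, db, rebuild):
        self.db = db
        self.rebuild = rebuild          # callback: params -> new map
        self.seen = db.version
        self.map = rebuild(db.params)

    def poll(self):
        if self.db.version != self.seen:  # parameters changed
            self.seen = self.db.version
            self.map = self.rebuild(self.db.params)
        return self.map
```

A real implementation would be event-driven rather than polled, but the overwrite-and-update flow is the same.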
  • in this embodiment, the original video frame is obtained by hard decoding the video stream returned by the camera on the GPU, and the original video frame is distortion-corrected in the GPU based on the pixel-by-pixel mapping relationship to obtain the corrected video frame.
  • the corrected video frame can then be transmitted to the rendering pipeline for the video projection process; the fusion of the video picture and the 3D scene is realized by adjusting the position and posture of the camera in the 3D scene, which effectively improves the projection effect, and the video data is processed in the GPU throughout, reducing multiple copies of video data between video memory and main memory and greatly improving the efficiency of video projection.
  • in addition, distortion correction is performed on the original video frame through data-parallel processing in the GPU, which improves the efficiency of the distortion correction.
  • the camera database is monitored, and the pixel-by-pixel mapping relationship is updated in real time when the camera database changes to ensure the effects of distortion correction and video projection.
  • FIG. 3 is a schematic structural diagram of a GPU-based camera video projection device provided by an embodiment of the application.
  • the GPU-based camera video projection device provided by this embodiment includes a mapping relationship determination module 31, a video decoding module 32, a distortion correction module 33 and a video projection module 34.
  • the mapping relationship determination module 31 is used to determine the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters of the camera, and save the pixel-by-pixel mapping relationship in the video memory;
  • the video decoding module 32 is used to perform GPU hard decoding on the video stream returned by the camera to obtain the original video frame;
  • the distortion correction module 33 is used to perform distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain a corrected video frame;
  • the video projection module 34 is configured to send the corrected video frame to the rendering pipeline by way of video memory copy, and the rendering pipeline performs video projection on the corrected video frame.
  • in this embodiment, the original video frame is obtained by hard decoding the video stream returned by the camera on the GPU, and the original video frame is distortion-corrected in the GPU based on the pixel-by-pixel mapping relationship to obtain the corrected video frame.
  • the corrected video frame can then be transmitted to the rendering pipeline for the video projection process; the fusion of the video picture and the 3D scene is realized by adjusting the position and posture of the camera in the 3D scene, which effectively improves the projection effect, and the video data is processed in the GPU throughout, reducing multiple copies of video data between video memory and main memory, greatly improving the efficiency of video projection, and reducing the occurrence of video freezes.
  • The mapping relationship determination module 31 is specifically configured to: obtain the corresponding distortion parameters from the camera database based on the device ID of the camera; compute the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters; and establish an association between the pixel-by-pixel mapping relationship and the device ID and save the pixel-by-pixel mapping relationship in video memory.
  • The device further includes a mapping relationship acquisition module, which is configured to: before the distortion correction module 33 performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, obtain the device ID of the camera and, based on the association between pixel-by-pixel mapping relationships and device IDs, determine from video memory the pixel-by-pixel mapping relationship to use for distortion correction of the original video frame.
  • The distortion correction module 33 is specifically configured to:
  • convert, in the GPU, each pixel in the original video frame into a pixel in the corrected video frame according to the pixel-by-pixel mapping relationship;
  • determine the pixel data of each pixel in the corrected video frame from the original video frame according to the correspondence between the pixels.
  • The device further includes a preprocessing module, which is configured to: after the distortion correction module 33 performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, perform projection preprocessing on the corrected video frame according to the needs of the video projection, the projection preprocessing including one or a combination of brightness adjustment, transparency adjustment and edge cropping.
  • The device further includes a parameter storage module, which is configured to: before the mapping relationship determination module 31 determines, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saves the pixel-by-pixel mapping relationship in video memory, determine the distortion parameters of the camera based on checkerboard calibration and save the distortion parameters in the camera database of the corresponding camera, the camera database being located in main memory.
  • The device further includes a monitoring module, which is configured to: after the mapping relationship determination module 31 determines, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saves the pixel-by-pixel mapping relationship in video memory, monitor the distortion parameters in the camera database and update the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
  • FIG. 4 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • The computer device includes an input device 43, an output device 44, a memory 42 and one or more processors 41; the memory 42 is used to store one or more programs; when the one or more programs are executed by the one or more processors 41, the one or more processors 41 implement the GPU-based camera video projection method provided in the foregoing embodiments.
  • the input device 43, the output device 44, the memory 42, and the processor 41 may be connected by a bus or in other ways. In FIG. 4, the connection by a bus is taken as an example.
  • As a computer-readable storage medium, the memory 42 can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the GPU-based camera video projection method described in any embodiment of this application (for example, the mapping relationship determination module 31, the video decoding module 32, the distortion correction module 33 and the video projection module 34 in the GPU-based camera video projection device).
  • the memory 42 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and an application program required by at least one function; the data storage area may store data created according to the use of the device, and the like.
  • the memory 42 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • The memory 42 may further include memories remotely located relative to the processor 41, and these remote memories may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the input device 43 can be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the device.
  • the output device 44 may include a display device such as a display screen.
  • the processor 41 executes various functional applications and data processing of the device by running software programs, instructions, and modules stored in the memory 42, that is, realizes the aforementioned GPU-based camera video projection method.
  • The GPU-based camera video projection device and computer device provided above can be used to execute the GPU-based camera video projection method provided in the above embodiments, and have corresponding functions and beneficial effects.
  • The embodiments of the present application also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the GPU-based camera video projection method provided in the above embodiments.
  • The camera video projection method includes: determining, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, and saving the pixel-by-pixel mapping relationship in video memory, the corrected video frame being obtained from the original video frame by distortion correction; performing GPU hard decoding on the video stream returned by the camera to obtain the original video frame; performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame; and sending the corrected video frame to the rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
  • A storage medium is any of various types of memory devices or storage devices.
  • The term "storage medium" is intended to include: installation media, such as CD-ROM, floppy disk or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (such as hard disks or optical storage); registers or other similar types of memory elements.
  • the storage medium may further include other types of memory or a combination thereof.
  • the storage medium may be located in the first computer system in which the program is executed, or may be located in a different second computer system connected to the first computer system through a network (such as the Internet).
  • the second computer system can provide the program instructions to the first computer for execution.
  • the term “storage medium” may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network).
  • The storage medium may store program instructions (for example, embodied as a computer program) executable by one or more processors.
  • Of course, the storage medium containing computer-executable instructions provided in the embodiments of the present application is not limited to the GPU-based camera video projection method described above; its computer-executable instructions can also perform related operations in the GPU-based camera video projection method provided in any embodiment of the present application.
  • The GPU-based camera video projection device, computer device and storage medium provided in the above embodiments can execute the GPU-based camera video projection method provided in any embodiment of this application; for technical details not described in detail above, reference may be made to the GPU-based camera video projection method provided in any embodiment of the present application.

Abstract

The embodiments of this application disclose a GPU-based camera video projection method, device, equipment and storage medium. In the technical solution provided by the embodiments of this application, the video stream returned by a camera is hard-decoded by a GPU to obtain original video frames, and distortion correction is performed on the original video frames in the GPU based on a pixel-by-pixel mapping relationship to obtain corrected video frames. The corrected video frames can then be transmitted to the rendering pipeline for the video projection process, and fusion of the video picture with the 3D scene is achieved by adjusting the position and attitude of the camera in the 3D scene, which effectively improves the projection effect. The video data is processed in the GPU throughout, reducing repeated copies of video data between video memory and main memory and greatly improving the efficiency of video projection.

Description

GPU-based camera video projection method, device, equipment and storage medium. Technical field
The embodiments of this application relate to the field of image processing, and in particular to a GPU-based camera video projection method, device, equipment and storage medium.
Background
Current video projection schemes are all based on the pinhole camera model, i.e. the camera is assumed to conform to an ideal, distortion-free pinhole imaging model. During projection configuration, the camera's field of view is set and its exterior-orientation parameters, such as position and attitude, are adjusted so that the camera's relative pose in the 3D digital space is the same as its pose in the physical world; the projected video picture then fits the 3D model perfectly.
In actual engineering, however, cameras often exhibit distortion, especially fisheye and wide-angle cameras: straight roads in their video pictures often become curves, so the picture must be corrected before projection. Video decoding, distortion correction and projection require processing every pixel of every frame. In traditional methods, whether the video is decoded in software or in hardware, the picture is returned to main memory before distortion correction and projection are performed, so data is copied multiple times between video memory and main memory, which hurts efficiency.
Summary
The embodiments of this application provide a GPU (Graphics Processing Unit)-based camera video projection method, device, equipment and storage medium, to reduce data copies between video memory and main memory and improve projection efficiency.
In a first aspect, an embodiment of this application provides a GPU-based camera video projection method, including:
determining, based on the distortion parameters of the camera, a pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, and saving the pixel-by-pixel mapping relationship in video memory, the corrected video frame being obtained from the original video frame by distortion correction;
performing GPU hard decoding on the video stream returned by the camera to obtain the original video frame;
performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame;
sending the corrected video frame to the rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
Further, determining, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, and saving the pixel-by-pixel mapping relationship in video memory, includes:
obtaining the corresponding distortion parameters from a camera database based on the device ID of the camera;
computing the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters;
establishing an association between the pixel-by-pixel mapping relationship and the device ID, and saving the pixel-by-pixel mapping relationship in video memory.
Further, before performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, the method further includes:
obtaining the device ID of the camera, and determining from video memory, based on the association between pixel-by-pixel mapping relationships and device IDs, the pixel-by-pixel mapping relationship to use for distortion correction of the original video frame.
Further, performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame includes:
converting, in the GPU, each pixel in the original video frame into a pixel in the corrected video frame according to the pixel-by-pixel mapping relationship;
determining the pixel data of each pixel in the corrected video frame from the original video frame according to the correspondence between the pixels.
Further, after performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, the method further includes:
performing projection preprocessing on the corrected video frame according to the needs of the video projection, the projection preprocessing including one or a combination of brightness adjustment, transparency adjustment and edge cropping.
Further, before determining, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saving the pixel-by-pixel mapping relationship in video memory, the method further includes:
determining the distortion parameters of the camera based on checkerboard calibration, and saving the distortion parameters in the camera database of the corresponding camera, the camera database being located in main memory.
Further, after determining, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saving the pixel-by-pixel mapping relationship in video memory, the method further includes:
monitoring the distortion parameters in the camera database, and updating the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
In a second aspect, an embodiment of this application provides a GPU-based camera video projection device, including a mapping relationship determination module, a video decoding module, a distortion correction module and a video projection module, wherein:
the mapping relationship determination module is configured to determine, based on the distortion parameters of the camera, a pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, and to save the pixel-by-pixel mapping relationship in video memory;
the video decoding module is configured to perform GPU hard decoding on the video stream returned by the camera to obtain the original video frame;
the distortion correction module is configured to perform distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame;
the video projection module is configured to send the corrected video frame to the rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
Further, the mapping relationship determination module is specifically configured to:
obtain the corresponding distortion parameters from a camera database based on the device ID of the camera;
compute the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters;
establish an association between the pixel-by-pixel mapping relationship and the device ID, and save the pixel-by-pixel mapping relationship in video memory.
Further, the device further includes a mapping relationship acquisition module, configured to: before the distortion correction module performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, obtain the device ID of the camera and, based on the association between pixel-by-pixel mapping relationships and device IDs, determine from video memory the pixel-by-pixel mapping relationship to use for distortion correction of the original video frame.
Further, the distortion correction module is specifically configured to:
convert, in the GPU, each pixel in the original video frame into a pixel in the corrected video frame according to the pixel-by-pixel mapping relationship;
determine the pixel data of each pixel in the corrected video frame from the original video frame according to the correspondence between the pixels.
Further, the device further includes a preprocessing module, configured to: after the distortion correction module performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, perform projection preprocessing on the corrected video frame according to the needs of the video projection, the projection preprocessing including one or a combination of brightness adjustment, transparency adjustment and edge cropping.
Further, the device further includes a parameter storage module, configured to: before the mapping relationship determination module determines, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saves the pixel-by-pixel mapping relationship in video memory, determine the distortion parameters of the camera based on checkerboard calibration and save the distortion parameters in the camera database of the corresponding camera, the camera database being located in main memory.
Further, the device further includes a monitoring module, configured to: after the mapping relationship determination module determines, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saves the pixel-by-pixel mapping relationship in video memory, monitor the distortion parameters in the camera database and update the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
In a third aspect, an embodiment of this application provides a computer device, including: a memory and one or more processors;
the memory being used to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the GPU-based camera video projection method described in the first aspect.
In a fourth aspect, an embodiment of this application provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the GPU-based camera video projection method described in the first aspect.
In the embodiments of this application, the video stream returned by the camera is hard-decoded by the GPU to obtain original video frames, and distortion correction is performed on the original video frames in the GPU based on the pixel-by-pixel mapping relationship to obtain corrected video frames. The corrected video frames can then be transmitted to the rendering pipeline for the video projection process, and fusion of the video picture with the 3D scene is achieved by adjusting the position and attitude of the camera in the 3D scene, effectively improving the projection effect. The video data is processed in the GPU throughout, reducing repeated copies of video data between video memory and main memory; distortion correction and video projection are combined and uniformly accelerated in the GPU, which greatly improves the efficiency of video projection and effectively solves the problem of video stuttering.
Brief description of the drawings
FIG. 1 is a flowchart of a GPU-based camera video projection method provided by an embodiment of this application;
FIG. 2 is a flowchart of another GPU-based camera video projection method provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of a GPU-based camera video projection device provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of a computer device provided by an embodiment of this application.
Detailed description
To make the objectives, technical solutions and advantages of this application clearer, specific embodiments of this application are described in further detail below with reference to the drawings. It should be understood that the specific embodiments described here are only used to explain this application and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to this application rather than all of the content. Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes the operations (or steps) as sequential processing, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations can be rearranged. The processing may be terminated when its operations are completed, but it may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
FIG. 1 is a flowchart of the GPU-based camera video projection method provided by an embodiment of this application. The GPU-based camera video projection method provided by the embodiment of this application may be executed by a GPU-based camera video projection device, which may be implemented in hardware and/or software and integrated in a computer device.
The following description takes the GPU-based camera video projection device executing the GPU-based camera video projection method as an example. Referring to FIG. 1, the GPU-based camera video projection method includes:
S101: determining, based on the distortion parameters of the camera, a pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, and saving the pixel-by-pixel mapping relationship in video memory.
Here, the original video frame should be understood as a video frame in the video stream returned by the camera, and the corrected video frame as a video frame obtained from the original video frame by distortion correction. The distortion parameters are used to perform distortion correction on original video frames captured by a camera whose imaging exhibits distortion, so as to obtain corrected video frames.
Specifically, after a fixed camera is installed, the camera is calibrated, its intrinsic and extrinsic parameters and distortion parameters are obtained, and these parameters are saved in main memory.
Further, after the program with the video projection function is started, the intrinsic and extrinsic parameters and the distortion parameters of the corresponding camera are fetched from main memory, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame is determined based on the distortion parameters, and this pixel-by-pixel mapping relationship is saved in video memory. Through this pixel-by-pixel mapping relationship, the position of each pixel in the original video frame can be mapped to the corresponding position in the corrected video frame.
S102: performing GPU hard decoding on the video stream returned by the camera to obtain the original video frame.
Exemplarily, after the video stream returned by the camera is received, the GPU's hardware decoder is used to hard-decode the video stream to obtain the original video frames, which are saved in video memory.
The GPU in this embodiment may be the graphics processing chip in an NVIDIA graphics card; the hardware decoder is an independent video decoding module built into the NVIDIA graphics card, supporting H.264 and H.265 decoding with a maximum resolution of 8K.
S103: performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame.
Exemplarily, after the original video frame is obtained, based on the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, the coordinate position of each pixel in the original video frame is substituted into the pixel-by-pixel mapping relationship in the GPU to obtain the coordinate position of that pixel in the corrected video frame.
Further, after the position in the corrected video frame corresponding to each pixel in the original video frame is obtained, the GPU obtains, according to the correspondence between the pixels of the two video frames, the color value of each pixel of the original video frame and assigns the color value to the corresponding pixel of the corrected video frame, thereby obtaining the distortion-corrected video frame.
It can be understood that once distortion correction has been performed on the original video frame and the corrected video frame has been obtained, the distortion in the corrected video frame has been eliminated and the visual effect is equivalent to the picture of a pinhole camera, so the corrected video frame can undergo video projection in the GPU based on a distortion-free image.
S104: sending the corrected video frame to the rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
Exemplarily, after the corrected video frame is obtained, it is transmitted to the rendering pipeline in the GPU by way of a video-memory copy for the video projection process. Since the corrected video frame has eliminated the effect of the camera's distortion, video projection can follow the projection process for distortion-free images, and fusion of the video picture with the 3D scene is achieved by adjusting the position and attitude of the camera in the 3D scene.
Specifically, the rendering pipeline maps and fuses the corrected video frame in the 3D scene: the position and attitude of the camera in the 3D scene are adjusted, the mapping between pixels in the video frame and 3D points in the 3D scene is determined, color-texture mapping of the video frame in the 3D scene is performed according to this mapping, and smooth transitions are applied to the overlapping regions of the color-texture mapping, thereby fusing the video frame into the 3D scene and completing the video projection of the corrected video frame in the 3D scene. It can be understood that the video projection of the video frame can be performed based on existing video projection methods and is not described in detail here.
Normally, video projection is performed with distortion-free pinhole cameras. Limited by the imaging of pinhole cameras, however, multiple edge-fusion stitchings are needed in the projected picture, which degrades the projection effect and consumes considerable computing resources.
If cameras with larger shooting angles, such as wide-angle or fisheye lenses, are used for video projection, distortion correction of the video frames captured by the camera is unavoidable. At present, distortion correction of video frames is performed in the CPU, which requires data to be copied back and forth between the CPU and the GPU, and distortion correction in the CPU is based on serial computation, so the efficiency of video projection drops sharply. To avoid this, schemes on the market basically use pinhole cameras for video projection.
However, some specific settings (such as airports and stations) impose requirements on camera installation: venues where wide-angle or fisheye cameras must be used, or cases where the equipment is constrained (for example, wide-angle or fisheye cameras have already been installed, or time on site is tight and only wide-angle or fisheye cameras are available). After decoding, the video stream returned by the camera needs to be copied back and forth between the CPU and the GPU, and the projection effect is unsatisfactory. The solution provided by this embodiment is accelerated by the GPU throughout: video frame decoding, distortion correction and video projection are all completed in the GPU, effectively solving the problems of poor projection effect and low efficiency in the prior art.
In summary, the video stream returned by the camera is hard-decoded by the GPU to obtain original video frames, and distortion correction is performed on them in the GPU based on the pixel-by-pixel mapping relationship to obtain corrected video frames, which can then be transmitted to the rendering pipeline for the video projection process. Fusion of the video picture with the 3D scene is achieved by adjusting the position and attitude of the camera in the 3D scene, effectively improving the projection effect. Distortion correction and video projection are combined and uniformly accelerated in the GPU, and the video data is processed in the GPU throughout, reducing repeated copies of video data between video memory and main memory, greatly improving the efficiency of video projection and effectively solving video stuttering.
FIG. 2 is a flowchart of another GPU-based camera video projection method provided by an embodiment of this application; this method is a concretization of the GPU-based camera video projection method described above. Referring to FIG. 2, the GPU-based camera video projection method includes:
S201: determining the distortion parameters of the camera based on checkerboard calibration, and saving the distortion parameters in the camera database of the corresponding camera, the camera database being located in main memory.
Specifically, each camera is assigned a device ID, which may be the device's MAC address, IP address, device number, etc. Further, a camera database is created in main memory for each camera, and each camera database is associated with the device ID of the corresponding camera; the corresponding camera database can be accessed and its data obtained according to the device ID.
Further, each camera is calibrated by the checkerboard calibration method to obtain the intrinsic and extrinsic parameters and the distortion parameters determined by the calibration result, and these parameters are saved in the corresponding camera database. It can be understood that camera calibration is generally performed when camera installation is completed or when the camera is being commissioned; when the camera is recalibrated, the newly obtained intrinsic and extrinsic parameters and distortion parameters overwrite and update the original ones.
The intrinsic and extrinsic parameters include intrinsic parameters and extrinsic parameters. The intrinsic parameters are determined by the camera itself and do not change with the external environment. The camera's intrinsic parameters comprise the six parameters 1/dx, 1/dy, r, u0, v0 and f. In OpenCV there are four intrinsic parameters, fx, fy, u0 and v0; in fact, fx in OpenCV is F*Sx, where F is the focal length f and Sx is pixels per millimeter, i.e. 1/dx. dx and dy indicate how many units one pixel occupies in the x and y directions respectively, and are the key to the conversion between the physical image coordinate system in the real world and the pixel coordinate system; u0 and v0 represent the number of horizontal and vertical pixels between the center pixel coordinates of the image and the pixel coordinates of the image origin.
The camera's extrinsic parameters include a rotation parameter R and a translation parameter T. The rotation parameters of the three axes (x, y, z) are (ω, δ, θ); combining the 3×3 rotation matrices of the individual axes (i.e. multiplying the matrices) gives R, which aggregates the rotation information of the three axes and is still 3×3. The translation parameters of the three axes of T are (Tx, Ty, Tz). R and T are combined into a 3×4 matrix, which is the key to converting to the coordinates of the calibration target.
The distortion parameters include radial distortion coefficients k1, k2 and k3, and tangential distortion coefficients p1 and p2. Radial distortion occurs in the process of converting the camera coordinate system to the physical image coordinate system, while tangential distortion occurs because the plane of the photosensitive element is not parallel to the lens.
S202: obtaining the corresponding distortion parameters from the camera database based on the device ID of the camera.
Specifically, after the program with the video projection function is started, the cameras corresponding to the pictures that the video projection function needs to call are determined, their device IDs are determined, and, based on the association between device IDs and camera databases, the camera databases storing the intrinsic and extrinsic parameters and the distortion parameters of these cameras are identified and accessed, and the parameters are fetched from them.
It can be understood that the intrinsic and extrinsic parameters and the distortion parameters only need to be obtained when the program with the video projection function is started and initialized; during program operation, as long as these parameters have not changed, there is no need to fetch them from the camera database again.
S203: computing the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters.
Here, the original video frame should be understood as a video frame in the video stream returned by the camera, and the corrected video frame as a video frame obtained from the original video frame by distortion correction.
Specifically, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame is computed based on the intrinsic and extrinsic parameters and the distortion parameters. For radial distortion:
x' = x(1 + k1·r² + k2·r⁴ + k3·r⁶)
y' = y(1 + k1·r² + k2·r⁴ + k3·r⁶)
where (x, y) are the position coordinates of the distorted point (a pixel in the original video frame), (x', y') is the corrected new position (a pixel in the corrected video frame), r is the distance from the point to the imaging center (camera), and k1, k2, k3 are the radial distortion coefficients.
For tangential distortion:
x' = x + [2·p1·y + p2·(r² + 2x²)]
y' = y + [2·p1·x + p2·(r² + 2y²)]
where (x, y) are the position coordinates of the distorted point (a pixel in the original video frame), (x', y') is the corrected new position (a pixel in the corrected video frame), r is the distance from the point to the imaging center (camera), and p1, p2 are the tangential distortion coefficients.
Based on the above formulas, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame can be established; substituting the pixel coordinates in the original video frame into the pixel-by-pixel mapping relationship yields the corresponding pixel coordinates in the corrected video frame.
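As a concrete illustration of building the table, here is a pure-Python sketch that evaluates the two formulas above for every pixel. Combining the radial and tangential terms additively, placing the coordinate origin at the imaging center, and all numeric values are assumptions made for illustration; the real system computes this once per camera and keeps the result in video memory.

```python
def build_pixel_map(width, height, k, p, center):
    """For every original-frame pixel, compute its corrected-frame position
    using the radial and tangential distortion formulas given above."""
    k1, k2, k3 = k
    p1, p2 = p
    cx, cy = center
    mapping = {}
    for y in range(height):
        for x in range(width):
            dx, dy = x - cx, y - cy      # coordinates relative to the center
            r2 = dx * dx + dy * dy       # r squared
            radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
            xc = dx * radial + 2 * p1 * dy + p2 * (r2 + 2 * dx * dx)
            yc = dy * radial + 2 * p1 * dx + p2 * (r2 + 2 * dy * dy)
            mapping[(x, y)] = (xc + cx, yc + cy)
    return mapping

# With all coefficients zero the mapping degenerates to the identity.
m = build_pixel_map(4, 4, k=(0.0, 0.0, 0.0), p=(0.0, 0.0), center=(2.0, 2.0))
assert m[(1, 3)] == (1.0, 3.0)
```

Because the table depends only on the calibration, not on the frame content, it is a natural candidate for precomputation, exactly as the text describes.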
S204: establishing an association between the pixel-by-pixel mapping relationship and the device ID, and saving the pixel-by-pixel mapping relationship in video memory.
Specifically, after the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame is determined, the pixel-by-pixel mapping relationship is associated with the device ID of the corresponding camera, thereby establishing the association between the pixel-by-pixel mapping relationship and the camera.
Further, after the association between the pixel-by-pixel mapping relationship and the camera is completed, the pixel-by-pixel mapping relationship and its association with the camera are saved in video memory.
S205: performing GPU hard decoding on the video stream returned by the camera to obtain the original video frame.
S206: obtaining the device ID of the camera, and determining from video memory, based on the association between pixel-by-pixel mapping relationships and device IDs, the pixel-by-pixel mapping relationship to use for distortion correction of the original video frame.
Specifically, the device information carried by the video stream returned by the camera contains the device ID. After the video stream returned by the camera is received and decoded, the device ID carried by the video stream is obtained; according to the association between pixel-by-pixel mapping relationships and device IDs, the storage location in video memory of the pixel-by-pixel mapping relationship to use for distortion correction of the original video frame is determined, and the pixel-by-pixel mapping relationship is obtained from that location.
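The association in S204 and the lookup in S206 can be pictured as a table keyed by device ID. The sketch below uses a plain Python dict as a stand-in, which is an assumption purely for illustration; in the patent's scheme the mappings themselves reside in video memory.

```python
class MappingStore:
    """Associates each camera's pixel-by-pixel mapping with its device ID."""

    def __init__(self):
        self._by_device = {}

    def save(self, device_id, mapping):
        # S204: establish the association and store the mapping.
        self._by_device[device_id] = mapping

    def lookup(self, device_id):
        # S206: find the mapping to use for frames from this camera.
        return self._by_device[device_id]

store = MappingStore()
store.save("192.168.1.10", {(0, 0): (0.0, 0.0)})
assert store.lookup("192.168.1.10") == {(0, 0): (0.0, 0.0)}
```

The device ID used as the key can be any of the identifiers mentioned earlier (MAC address, IP address or device number).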
S207: converting, in the GPU, each pixel in the original video frame into a pixel in the corrected video frame according to the pixel-by-pixel mapping relationship.
Specifically, after the pixel-by-pixel mapping relationship is obtained, the coordinate position of each pixel in the original video frame is substituted into the pixel-by-pixel mapping relationship, and the coordinate position of that pixel in the corrected video frame is computed. All pixels of the original video frame are traversed, and each pixel in the original video frame is converted into a pixel in the corrected video frame.
S208: determining the pixel data of each pixel in the corrected video frame from the original video frame according to the correspondence between the pixels.
Specifically, after the mapping of each pixel in the original video frame to the corrected video frame is completed, each pixel in the corrected video frame is traversed, the pixel data of each pixel in the original video frame is obtained and assigned to the corresponding pixel of the corrected video frame, thereby obtaining the distortion-corrected video frame, in which the distortion has been eliminated. The pixel data includes the color value, depth, etc.
It can be understood that the obtaining and assignment of pixel data can also be performed synchronously with determining the mapping of each pixel between the original video frame and the corrected video frame.
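A minimal sketch of S207/S208 in pure Python follows. Frames are represented as dicts from pixel coordinates to color tuples, and corrected positions are snapped to the nearest pixel; nearest-neighbour assignment is an assumption (the text does not specify an interpolation scheme), and on the GPU this loop would run as one thread per pixel rather than serially.

```python
def correct_frame(original, mapping, width, height):
    """Fill each corrected-frame pixel with the pixel data of its
    corresponding original-frame pixel, per the mapping."""
    corrected = {}
    for (x, y), (xc, yc) in mapping.items():
        xi, yi = round(xc), round(yc)        # snap to the pixel grid
        if 0 <= xi < width and 0 <= yi < height:
            corrected[(xi, yi)] = original[(x, y)]
    return corrected

# An identity mapping must reproduce the original frame exactly.
frame = {(x, y): (x * 10, y * 10, 0) for x in range(2) for y in range(2)}
identity = {(x, y): (float(x), float(y)) for x in range(2) for y in range(2)}
assert correct_frame(frame, identity, 2, 2) == frame
```

The bounds check mirrors the fact that some corrected positions fall outside the frame after strong fisheye correction and simply carry no pixel data.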
S209: performing projection preprocessing on the corrected video frame according to the needs of the video projection, the projection preprocessing including one or a combination of brightness adjustment, transparency adjustment and edge cropping.
Specifically, after the corrected video frame is obtained, projection preprocessing is performed on it according to the needs of the video projection, so that the corrected video frame transmitted to the rendering pipeline meets the projection requirements.
The projection preprocessing includes one or a combination of brightness adjustment, transparency adjustment and edge cropping. Different projection preprocessing methods can be set according to the different positions and imaging effects of the cameras; for example, for a camera with a dark picture, the brightness or transparency of its corrected video frames can be increased. Edge cropping can be performed, according to the projection requirements, centered on the pixel center point of the corrected video frame.
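As an illustration of the three preprocessing operations, here is a hedged pure-Python sketch over a frame stored as rows of RGB tuples. The RGBA output format, the parameter names and the centred-crop convention are assumptions made for illustration, not details fixed by the patent.

```python
def preprocess(frame, brightness=1.0, alpha=1.0, crop=0):
    """Apply brightness scaling, a transparency value, and symmetric
    edge cropping centred on the frame, returning RGBA rows."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(crop, h - crop):              # edge cropping (centred)
        row = []
        for x in range(crop, w - crop):
            r, g, b = frame[y][x]
            r, g, b = (min(255, int(c * brightness)) for c in (r, g, b))
            row.append((r, g, b, int(255 * alpha)))   # transparency channel
        out.append(row)
    return out

result = preprocess([[(100, 100, 100)] * 4 for _ in range(4)],
                    brightness=1.5, alpha=0.5, crop=1)
assert len(result) == 2 and len(result[0]) == 2
assert result[0][0] == (150, 150, 150, 127)
```

In the patent's pipeline the same operations would run on the GPU before the video-memory copy to the rendering pipeline.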
S210: sending the corrected video frame to the rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
S211: monitoring the distortion parameters in the camera database, and updating the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
Specifically, during operation of the projection function program, the cameras corresponding to the pictures that the video projection function needs to call are determined, their device IDs are determined, and, based on the association between device IDs and camera databases, the camera databases that need to be monitored are determined.
Further, a data monitor is invoked to monitor the intrinsic and extrinsic parameters and the distortion parameters in the camera database. When these parameters change, the changed parameters are obtained from the camera database in main memory, the pixel-by-pixel mapping relationship is recomputed according to them, and the original pixel-by-pixel mapping relationship is overwritten and updated.
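The monitoring step can be sketched as a simple polling loop that compares the current parameters against the last values seen and triggers a rebuild on change. This is a minimal illustration only; polling, the callback shape and the dict-based database are all assumptions, since the patent only requires that parameter changes trigger an update of the mapping.

```python
def poll_camera_db(db, last_seen, rebuild_mapping):
    """Check each camera's distortion parameters; on change, rebuild its
    pixel-by-pixel mapping and remember the new parameter values."""
    for device_id, params in db.items():
        if last_seen.get(device_id) != params:
            rebuild_mapping(device_id, params)   # overwrite the old mapping
            last_seen[device_id] = params

rebuilds = []
db = {"cam-01": (0.10, -0.02, 0.0)}
seen = {}
poll_camera_db(db, seen, lambda dev, p: rebuilds.append(dev))
poll_camera_db(db, seen, lambda dev, p: rebuilds.append(dev))  # unchanged
db["cam-01"] = (0.12, -0.02, 0.0)                              # recalibrated
poll_camera_db(db, seen, lambda dev, p: rebuilds.append(dev))
assert rebuilds == ["cam-01", "cam-01"]
```

An event-driven notifier from the database would serve the same purpose; the essential point is that the mapping table in video memory is overwritten whenever calibration changes.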
In summary, the video stream returned by the camera is hard-decoded by the GPU to obtain original video frames, and distortion correction is performed on them in the GPU based on the pixel-by-pixel mapping relationship to obtain corrected video frames, which can then be transmitted to the rendering pipeline for the video projection process. Fusion of the video picture with the 3D scene is achieved by adjusting the position and attitude of the camera in the 3D scene, effectively improving the projection effect, and the video data is processed in the GPU throughout, reducing repeated copies of video data between video memory and main memory and greatly improving the efficiency of video projection. Meanwhile, distortion correction of the initial video frames is performed in the GPU by data-parallel processing, improving the efficiency with which the GPU performs distortion correction. The camera database is monitored, and the pixel-by-pixel mapping relationship is updated in real time when the camera database changes, ensuring the effect of distortion correction and video projection.
FIG. 3 is a schematic structural diagram of the GPU-based camera video projection device provided by an embodiment of this application. Referring to FIG. 3, the GPU-based camera video projection device provided by this embodiment includes a mapping relationship determination module 31, a video decoding module 32, a distortion correction module 33 and a video projection module 34.
The mapping relationship determination module 31 is configured to determine, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, and to save the pixel-by-pixel mapping relationship in video memory; the video decoding module 32 is configured to perform GPU hard decoding on the video stream returned by the camera to obtain the original video frame; the distortion correction module 33 is configured to perform distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame; the video projection module 34 is configured to send the corrected video frame to the rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
As described above, the video stream returned by the camera is hard-decoded by the GPU to obtain original video frames, and distortion correction is performed on them in the GPU based on the pixel-by-pixel mapping relationship to obtain corrected video frames, which can then be transmitted to the rendering pipeline for the video projection process. Fusion of the video picture with the 3D scene is achieved by adjusting the position and attitude of the camera in the 3D scene, effectively improving the projection effect, and the video data is processed in the GPU throughout, reducing repeated copies of video data between video memory and main memory, greatly improving the efficiency of video projection and reducing video stuttering.
In a possible embodiment, the mapping relationship determination module 31 is specifically configured to:
obtain the corresponding distortion parameters from the camera database based on the device ID of the camera;
compute the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters;
establish an association between the pixel-by-pixel mapping relationship and the device ID, and save the pixel-by-pixel mapping relationship in video memory.
In a possible embodiment, the device further includes a mapping relationship acquisition module, configured to: before the distortion correction module 33 performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, obtain the device ID of the camera and, based on the association between pixel-by-pixel mapping relationships and device IDs, determine from video memory the pixel-by-pixel mapping relationship to use for distortion correction of the original video frame.
In a possible embodiment, the distortion correction module 33 is specifically configured to:
convert, in the GPU, each pixel in the original video frame into a pixel in the corrected video frame according to the pixel-by-pixel mapping relationship;
determine the pixel data of each pixel in the corrected video frame from the original video frame according to the correspondence between the pixels.
In a possible embodiment, the device further includes a preprocessing module, configured to: after the distortion correction module 33 performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, perform projection preprocessing on the corrected video frame according to the needs of the video projection, the projection preprocessing including one or a combination of brightness adjustment, transparency adjustment and edge cropping.
In a possible embodiment, the device further includes a parameter storage module, configured to: before the mapping relationship determination module 31 determines, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saves the pixel-by-pixel mapping relationship in video memory, determine the distortion parameters of the camera based on checkerboard calibration and save the distortion parameters in the camera database of the corresponding camera, the camera database being located in main memory.
In a possible embodiment, the device further includes a monitoring module, configured to: after the mapping relationship determination module 31 determines, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saves the pixel-by-pixel mapping relationship in video memory, monitor the distortion parameters in the camera database and update the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
An embodiment of this application further provides a computer device, into which the GPU-based camera video projection device provided by the embodiments of this application can be integrated. FIG. 4 is a schematic structural diagram of a computer device provided by an embodiment of this application. Referring to FIG. 4, the computer device includes an input device 43, an output device 44, a memory 42 and one or more processors 41; the memory 42 is used to store one or more programs; when the one or more programs are executed by the one or more processors 41, the one or more processors 41 implement the GPU-based camera video projection method provided in the above embodiments. The input device 43, the output device 44, the memory 42 and the processor 41 may be connected by a bus or in other ways; in FIG. 4, connection by a bus is taken as an example.
As a computing-device-readable storage medium, the memory 42 can be used to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the GPU-based camera video projection method described in any embodiment of this application (for example, the mapping relationship determination module 31, the video decoding module 32, the distortion correction module 33 and the video projection module 34 in the GPU-based camera video projection device). The memory 42 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the device, etc. In addition, the memory 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices. In some examples, the memory 42 may further include memories remotely located relative to the processor 41, and these remote memories may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 43 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the device. The output device 44 may include a display device such as a display screen.
The processor 41 executes the various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 42, that is, implements the GPU-based camera video projection method described above.
The GPU-based camera video projection device and computer device provided above can be used to execute the GPU-based camera video projection method provided in the above embodiments, and have corresponding functions and beneficial effects.
An embodiment of this application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the GPU-based camera video projection method provided in the above embodiments. The GPU-based camera video projection method includes: determining, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, and saving the pixel-by-pixel mapping relationship in video memory, the corrected video frame being obtained from the original video frame by distortion correction; performing GPU hard decoding on the video stream returned by the camera to obtain the original video frame; performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame; sending the corrected video frame to the rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
A storage medium is any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROM, floppy disk or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (such as hard disks or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network). The storage medium may store program instructions (for example, embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing computer-executable instructions provided in the embodiments of this application is not limited to the GPU-based camera video projection method described above; its computer-executable instructions can also perform related operations in the GPU-based camera video projection method provided in any embodiment of this application.
The GPU-based camera video projection device, equipment and storage medium provided in the above embodiments can execute the GPU-based camera video projection method provided in any embodiment of this application; for technical details not described in detail in the above embodiments, reference may be made to the GPU-based camera video projection method provided in any embodiment of this application.
The above are only preferred embodiments of this application and the technical principles applied. This application is not limited to the specific embodiments described here; various obvious changes, readjustments and substitutions that those skilled in the art can make do not depart from the protection scope of this application. Therefore, although this application has been described in some detail through the above embodiments, it is not limited to them and may include more other equivalent embodiments without departing from the concept of this application, its scope being determined by the scope of the claims.

Claims (14)

  1. A GPU-based camera video projection method, comprising:
    determining, based on distortion parameters of a camera, a pixel-by-pixel mapping relationship between an original video frame and a corrected video frame, and saving the pixel-by-pixel mapping relationship in video memory, the corrected video frame being obtained from the original video frame by distortion correction;
    performing GPU hard decoding on a video stream returned by the camera to obtain the original video frame;
    performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame;
    sending the corrected video frame to a rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
  2. The GPU-based camera video projection method according to claim 1, wherein determining, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame, and saving the pixel-by-pixel mapping relationship in video memory, comprises:
    obtaining the corresponding distortion parameters from a camera database based on a device ID of the camera;
    computing the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame based on the distortion parameters;
    establishing an association between the pixel-by-pixel mapping relationship and the device ID, and saving the pixel-by-pixel mapping relationship in video memory.
  3. The GPU-based camera video projection method according to claim 2, wherein before performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, the method further comprises:
    obtaining the device ID of the camera, and determining from video memory, based on the association between pixel-by-pixel mapping relationships and device IDs, the pixel-by-pixel mapping relationship to use for distortion correction of the original video frame.
  4. The GPU-based camera video projection method according to claim 1, wherein performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame comprises:
    converting, in the GPU, each pixel in the original video frame into a pixel in the corrected video frame according to the pixel-by-pixel mapping relationship;
    determining the pixel data of each pixel in the corrected video frame from the original video frame according to the correspondence between the pixels.
  5. The GPU-based camera video projection method according to claim 1, wherein after performing distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, the method further comprises:
    performing projection preprocessing on the corrected video frame according to the needs of the video projection, the projection preprocessing comprising one or a combination of brightness adjustment, transparency adjustment and edge cropping.
  6. The GPU-based camera video projection method according to any one of claims 1-5, wherein before determining, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saving the pixel-by-pixel mapping relationship in video memory, the method further comprises:
    determining the distortion parameters of the camera based on checkerboard calibration, and saving the distortion parameters in a camera database of the corresponding camera, the camera database being located in main memory.
  7. The GPU-based camera video projection method according to claim 6, wherein after determining, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saving the pixel-by-pixel mapping relationship in video memory, the method further comprises:
    monitoring the distortion parameters in the camera database, and updating the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
  8. A GPU-based camera video projection device, comprising a mapping relationship determination module, a video decoding module, a distortion correction module and a video projection module, wherein:
    the mapping relationship determination module is configured to determine, based on distortion parameters of a camera, a pixel-by-pixel mapping relationship between an original video frame and a corrected video frame, and to save the pixel-by-pixel mapping relationship in video memory;
    the video decoding module is configured to perform GPU hard decoding on a video stream returned by the camera to obtain the original video frame;
    the distortion correction module is configured to perform distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame;
    the video projection module is configured to send the corrected video frame to a rendering pipeline by way of a video-memory copy, the rendering pipeline performing video projection on the corrected video frame.
  9. The GPU-based camera video projection device, further comprising:
    a mapping relationship acquisition module, configured to: before the distortion correction module performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, obtain the device ID of the camera and, based on the association between pixel-by-pixel mapping relationships and device IDs, determine from video memory the pixel-by-pixel mapping relationship to use for distortion correction of the original video frame.
  10. The GPU-based camera video projection device, further comprising:
    a preprocessing module, configured to: after the distortion correction module performs distortion correction on the original video frame in the GPU according to the pixel-by-pixel mapping relationship to obtain the corrected video frame, perform projection preprocessing on the corrected video frame according to the needs of the video projection, the projection preprocessing comprising one or a combination of brightness adjustment, transparency adjustment and edge cropping.
  11. The GPU-based camera video projection device, further comprising:
    a parameter storage module, configured to: before the mapping relationship determination module determines, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saves the pixel-by-pixel mapping relationship in video memory, determine the distortion parameters of the camera based on checkerboard calibration and save the distortion parameters in a camera database of the corresponding camera, the camera database being located in main memory.
  12. The GPU-based camera video projection device, further comprising:
    a monitoring module, configured to: after the mapping relationship determination module determines, based on the distortion parameters of the camera, the pixel-by-pixel mapping relationship between the original video frame and the corrected video frame and saves the pixel-by-pixel mapping relationship in video memory, monitor the distortion parameters in the camera database and update the pixel-by-pixel mapping relationship in response to changes in the distortion parameters.
  13. A computer device, comprising: a memory and one or more processors;
    the memory being used to store one or more programs;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the GPU-based camera video projection method according to any one of claims 1-7.
  14. A storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the GPU-based camera video projection method according to any one of claims 1-7.
PCT/CN2020/121661 2020-03-12 2020-10-16 GPU-based camera video projection method, device, equipment and storage medium WO2021179605A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010172352.0 2020-03-12
CN202010172352.0A CN111294580B (zh) 2020-03-12 2020-03-12 基于gpu的摄像头视频投影方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2021179605A1 true WO2021179605A1 (zh) 2021-09-16

Family

ID=71028734

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/121661 WO2021179605A1 (zh) 2020-03-12 2020-10-16 基于gpu的摄像头视频投影方法、装置、设备及存储介质

Country Status (2)

Country Link
CN (1) CN111294580B (zh)
WO (1) WO2021179605A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111294580B (zh) * 2020-03-12 2022-05-03 佳都科技集团股份有限公司 基于gpu的摄像头视频投影方法、装置、设备及存储介质
CN112437276B (zh) * 2020-11-20 2023-04-07 埃洛克航空科技(北京)有限公司 一种基于WebGL的三维视频融合方法及系统
CN117152400B (zh) * 2023-10-30 2024-03-19 武汉苍穹融新科技有限公司 交通道路上多路连续视频与三维孪生场景融合方法及系统

Citations (7)

Publication number Priority date Publication date Assignee Title
US20160323561A1 (en) * 2015-04-29 2016-11-03 Lucid VR, Inc. Stereoscopic 3d camera for virtual reality experience
CN107527327A (zh) * 2017-08-23 2017-12-29 珠海安联锐视科技股份有限公司 一种基于gpu的鱼眼校正方法
CN107644402A (zh) * 2017-08-14 2018-01-30 天津大学 基于gpu的快速鱼眼矫正方法
CN108053385A (zh) * 2018-01-24 2018-05-18 桂林电子科技大学 一种鱼眼视频实时矫正系统及方法
CN110533577A (zh) * 2018-05-25 2019-12-03 杭州海康威视数字技术股份有限公司 鱼眼图像校正方法及装置
CN110796722A (zh) * 2019-11-01 2020-02-14 广东三维家信息科技有限公司 三维渲染呈现方法及装置
CN111294580A (zh) * 2020-03-12 2020-06-16 佳都新太科技股份有限公司 基于gpu的摄像头视频投影方法、装置、设备及存储介质

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
KR100662569B1 (ko) * 2005-01-03 2006-12-28 삼성전자주식회사 프로젝션 tv 및 그 제어방법
CN107680047A (zh) * 2017-09-05 2018-02-09 北京小鸟看看科技有限公司 一种虚拟现实场景渲染方法、图像处理器和头戴显示设备
CN107707874A (zh) * 2017-09-18 2018-02-16 天津大学 鱼眼相机视频矫正及传输系统及方法


Also Published As

Publication number Publication date
CN111294580B (zh) 2022-05-03
CN111294580A (zh) 2020-06-16

Similar Documents

Publication Publication Date Title
WO2021179605A1 (zh) 基于gpu的摄像头视频投影方法、装置、设备及存储介质
WO2021227359A1 (zh) 一种无人机投影方法、装置、设备及存储介质
WO2018214365A1 (zh) 图像校正方法、装置、设备、系统及摄像设备和显示设备
WO2022193559A1 (zh) 投影校正方法、装置、存储介质及电子设备
EP2870585B1 (en) A method and system for correcting a distorted image
JP5437311B2 (ja) 画像補正方法、画像補正システム、角度推定方法、および角度推定装置
JP4960992B2 (ja) 魚眼補正と透視歪み削減の画像処理方法及び画像処理装置
JP2022528659A (ja) プロジェクタの台形補正方法、装置、システム及び読み取り可能な記憶媒体
CN107424118A (zh) 基于改进径向畸变校正的球状全景拼接方法
WO2021031781A1 (zh) 投影图像校准方法、装置及投影设备
CN108564551A (zh) 鱼眼图像处理方法及鱼眼图像处理装置
CN114727081A (zh) 投影仪投影校正方法、装置及投影仪
CN111292278A (zh) 图像融合方法及装置、存储介质、终端
CN109785265B (zh) 畸变矫正图像处理方法及图像处理装置
JP2002014611A (ja) プラネタリウムのまたは球面スクリーンへのビデオ投映方法と装置
CN111862240B (zh) 全景相机及其标定方法、全景图像的拼接方法及存储介质
CN114125411A (zh) 投影设备校正方法、装置、存储介质以及投影设备
WO2024002023A1 (zh) 全景立体图像的生成方法、装置和电子设备
CN110942475B (zh) 紫外与可见光图像融合系统及快速图像配准方法
CN112598751A (zh) 标定方法及装置、终端和存储介质
CN109785225B (zh) 一种用于图像矫正的方法和装置
WO2022062604A1 (zh) 投影画面调节方法、装置、投影仪和存储介质
CN103945103B (zh) 基于柱面的多平面二次投影消除全景摄像机图像畸变的方法
CN111161148B (zh) 一种全景图像生成方法、装置、设备和存储介质
TWI688274B (zh) 影像校正方法及影像校正系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20923964; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.02.2023))
122 Ep: pct application non-entry in european phase (Ref document number: 20923964; Country of ref document: EP; Kind code of ref document: A1)