CN116883231B - Image data generation method, device and equipment of fisheye camera and storage medium - Google Patents


Info

Publication number
CN116883231B
CN116883231B
Authority
CN
China
Prior art keywords
pixel
fisheye
camera
pinhole
image
Prior art date
Legal status
Active
Application number
CN202311147820.9A
Other languages
Chinese (zh)
Other versions
CN116883231A (en)
Inventor
谢子锐
胡兰
张如高
虞正华
Current Assignee
Shenzhen Magic Vision Intelligent Technology Co ltd
Original Assignee
Shenzhen Magic Vision Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Magic Vision Intelligent Technology Co ltd filed Critical Shenzhen Magic Vision Intelligent Technology Co ltd
Priority to CN202311147820.9A
Publication of CN116883231A
Application granted
Publication of CN116883231B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of computers, and discloses an image data generation method, device, equipment and storage medium of a fisheye camera, wherein the method comprises the following steps: dividing the fisheye camera into a plurality of pinhole camera models based on fisheye imaging parameters of the fisheye camera; performing image rendering on an initial image acquired by a pinhole camera model to generate a pinhole rendering image and a pinhole optical flow image thereof; acquiring a direction vector of each pixel under a camera coordinate system of the fisheye camera based on a first pixel coordinate of each pixel under the fisheye pixel plane; transforming the direction vector based on pinhole camera parameters of a pinhole camera model to obtain a second pixel coordinate of the pixel under a pinhole pixel plane; and processing the pinhole rendering image and the pinhole optical flow image based on the corresponding relation between the first pixel coordinates and the second pixel coordinates to obtain a fisheye rendering image and a fisheye optical flow image of the fisheye camera. The invention can solve the problem that the rendering image and the optical flow data of the fisheye camera cannot be effectively acquired.

Description

Image data generation method, device and equipment of fisheye camera and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for generating image data of a fisheye camera, a computer device, and a computer readable storage medium.
Background
In the field of autonomous driving, motion information between adjacent image frames is typically obtained with an optical flow method, which computes the pixel changes and the correlation between adjacent frames of an image sequence to find the spatial transformation relationship between the previous image frame and the current image frame. Currently, deep learning methods are mainly used to estimate optical flow, but such methods usually require a large amount of optical flow data to train the deep learning model. However, since optical flow data describes the motion of every pixel in the pixel plane, it is difficult to label each pixel of an image directly by manual annotation.
In the related art, dense optical flow data is mainly generated in the following manner: a rendered image is generated with three-dimensional modeling and rendering software, and the shader data produced during rendering is extracted to generate dense optical flow data. Current rendering software is built on the pinhole camera model, in which image rendering involves a large amount of matrix computation and three-dimensional coordinates can be projected onto the pixel plane directly through a linear transformation. For a fisheye camera, however, a series of nonlinear transformations is required during image rendering, so conventional rendering software cannot carry out the computation at the matrix level; as a result, the highly distorted fisheye rendered image cannot be rendered effectively, and the optical flow data of the fisheye camera cannot be obtained.
Disclosure of Invention
In view of the above, the present invention provides a method, apparatus, device and storage medium for generating image data of a fisheye camera, so as to solve the problem that the rendered image of the fisheye camera and the optical flow data cannot be obtained effectively.
In a first aspect, the present invention provides a method for generating image data of a fisheye camera, the method comprising:
acquiring fisheye imaging parameters of a fisheye camera;
dividing the fisheye camera into a plurality of pinhole camera models based on the fisheye imaging parameters, and acquiring pinhole camera parameters corresponding to the pinhole camera models, wherein the pinhole camera parameters comprise a pinhole camera internal parameter and a pinhole camera external parameter;
performing image rendering on an initial image acquired by the pinhole camera model in real time to generate a pinhole rendering image and a corresponding pinhole optical flow image;
acquiring a first direction vector of each pixel under a first camera coordinate system corresponding to the fisheye camera based on a first pixel coordinate of each pixel under a fisheye pixel plane corresponding to the fisheye camera;
transforming the first direction vector based on the pinhole camera parameters to obtain a second pixel coordinate of the pixel under a pinhole pixel plane corresponding to the pinhole camera model;
and processing the pinhole rendering image and the pinhole optical flow image based on the corresponding relation between the first pixel coordinates and the second pixel coordinates to obtain a fisheye rendering image and a fisheye optical flow image corresponding to the fisheye camera.
In this method, the fisheye camera is first divided into a plurality of pinhole camera models based on the fisheye imaging parameters. Then, using the pixel direction vector projection principle of pinhole camera imaging, the first direction vector of each pixel under the first camera coordinate system corresponding to the fisheye camera is obtained from the first pixel coordinates of the pixel under the fisheye pixel plane, and a coordinate system transformation is applied to the first direction vector to obtain the second pixel coordinates of the pixel under the pinhole camera models. Using the correspondence between the first and second pixel coordinates, the fisheye rendered image of the fisheye camera can therefore be obtained by processing the pinhole rendered images of the plurality of pinhole camera models, and the fisheye optical flow image of the fisheye camera can be obtained by processing the pinhole optical flow images of the plurality of pinhole camera models.
In an optional implementation manner, the obtaining, based on the first pixel coordinates of each pixel in the fisheye pixel plane corresponding to the fisheye camera, a first direction vector of the pixel in the first camera coordinate system corresponding to the fisheye camera includes:
determining a grid vector corresponding to each pixel under a fisheye pixel plane based on a first pixel coordinate of each pixel under the fisheye pixel plane corresponding to the fisheye camera;
and back projecting the grid vector of the pixel to a first camera coordinate system corresponding to the fisheye camera to obtain a first direction vector of the pixel.
In an optional implementation manner, the transforming the coordinate system of the first direction vector based on the pinhole camera parameter to obtain a second pixel coordinate of the pixel under a pinhole pixel plane corresponding to the pinhole camera model includes:
based on the pinhole camera external parameters, transforming the first direction vector of the pixel to a second camera coordinate system corresponding to the pinhole camera model to obtain a first vector coordinate corresponding to the pixel;
normalizing the first vector coordinate corresponding to the pixel to obtain a second vector coordinate corresponding to the pixel;
and based on the pinhole camera internal parameters, projecting the second vector coordinates corresponding to the pixels to the pinhole pixel plane corresponding to the pinhole camera model to obtain the second pixel coordinates of the pixels.
In this manner, the second pixel coordinates of the pixel on the pinhole pixel plane are obtained from the first direction vector of the pixel using the pixel direction vector projection principle of pinhole camera model imaging. Accordingly, pixel values and optical flow data corresponding to pixels located at the first pixel coordinates can be determined from the pinhole rendering image and the pinhole optical flow image based on the second pixel coordinates of the pixels, so as to restore the fisheye rendering image and the fisheye optical flow image of the fisheye camera.
In an optional implementation manner, the processing the pinhole rendering image and the pinhole optical flow image based on the correspondence between the first pixel coordinates and the second pixel coordinates to obtain a fisheye rendering image and a fisheye optical flow image corresponding to the fisheye camera includes:
acquiring a pixel value corresponding to the first pixel coordinate from the pinhole rendering image based on the corresponding relation between the first pixel coordinate and the second pixel coordinate;
performing linear interpolation processing on the pixel value corresponding to the first pixel coordinate to obtain a fisheye rendering image corresponding to the fisheye camera;
acquiring first optical flow data corresponding to the first pixel coordinates from the pinhole optical flow image based on the corresponding relation between the first pixel coordinates and the second pixel coordinates;
and carrying out coordinate system transformation on the first optical flow data corresponding to the first pixel coordinates based on the first pixel coordinates and the pinhole camera parameters to obtain a fisheye optical flow image corresponding to the fisheye camera.
In this aspect, the pixel value and the first optical flow data corresponding to the first pixel coordinate are acquired from the pinhole rendering image and the pinhole optical flow image, respectively, based on the correspondence between the first pixel coordinate and the second pixel coordinate, thereby effectively generating the fisheye rendering image and the fisheye optical flow image. Meanwhile, the pixel value corresponding to the first pixel coordinate is subjected to linear interpolation processing to obtain the fisheye rendering image corresponding to the fisheye camera, and the image quality of the fisheye rendering image can be improved to a certain extent.
In an optional implementation manner, the performing, based on the first pixel coordinates and the pinhole camera parameters, a coordinate system transformation on first optical flow data corresponding to the first pixel coordinates to obtain a fisheye optical flow image corresponding to the fisheye camera includes:
obtaining absolute coordinates of the pixel in a pinhole rendering image of a previous frame based on the sum of the first pixel coordinates of the pixel and the corresponding first optical flow data;
performing linear interpolation processing on the absolute coordinates based on second pixel coordinates of the pixels to obtain second optical flow data of the pixels under the pinhole pixel plane;
based on the pinhole camera internal parameters, back-projecting second optical flow data of the pixels to a second camera coordinate system corresponding to the pinhole camera model to obtain second direction vectors of the pixels;
transforming the second direction vector of the pixel to the first camera coordinate system based on the pinhole camera external parameters to obtain a third direction vector of the pixel;
and projecting the third direction vector of the pixel under the fisheye pixel plane, and processing the projected third direction vector based on the corresponding first pixel coordinate to obtain a fisheye optical flow image corresponding to the fisheye camera.
In this manner, first, the absolute coordinates of the pixels in the pinhole rendering image of the previous frame are determined based on the sum of the first pixel coordinates of the pixels and the corresponding first optical flow data, and then linear interpolation processing is performed on the absolute coordinates to obtain second optical flow data of the pixels under the pinhole pixel plane of the previous frame, so that the second optical flow data can be transformed into a fisheye optical flow image corresponding to the fisheye camera accurately.
In an alternative embodiment, the method further comprises:
constructing a pixel mask based on an imaging plane range corresponding to the first vector coordinate and the pinhole camera model, wherein the pixel mask is used for masking pixels exceeding the imaging plane range;
processing the fisheye rendering image according to the pixel shade to obtain a target fisheye rendering image;
and processing the fish-eye optical flow image according to the pixel shade to obtain a target fish-eye optical flow image.
In this manner, a pixel mask is constructed based on an imaging plane range corresponding to the first vector coordinate and the pinhole camera model to respectively process the fisheye-rendered image and the fisheye-optical flow image to obtain a target fisheye-rendered image and a target fisheye-optical flow image. Therefore, the pixel values and the optical flow data of the fisheye pixel plane which are not correctly projected to the fisheye camera in the fisheye rendering image and the fisheye optical flow image can be removed, so that the accuracy of the fisheye rendering image and the fisheye optical flow image is improved.
In an optional embodiment, the fisheye imaging parameters include the fisheye imaging length, the fisheye imaging width, and the fisheye field angle, the dividing the fisheye camera into a plurality of pinhole camera models based on the fisheye imaging parameters, and acquiring pinhole camera parameters corresponding to the pinhole camera models, includes:
confirming the maximum value of the fisheye imaging length and the fisheye imaging width as a target imaging parameter;
calculating the pinhole imaging size of a pinhole camera model according to the target imaging parameters and the fish eye field angle;
determining a pinhole camera internal parameter of the pinhole camera model according to the pinhole imaging size;
dividing the fisheye camera into a plurality of pinhole camera models according to a preset rotation angle based on the pinhole camera internal parameters;
and determining a pinhole camera external parameter of the pinhole camera model based on a preset rotation angle corresponding to the pinhole camera model.
In this method, the maximum of the fisheye imaging length and the fisheye imaging width is first taken as the target imaging parameter; the pinhole camera internal parameters of the pinhole camera model are then determined by calculation from the target imaging parameter and the fisheye field angle, and the fisheye camera is divided into a plurality of pinhole camera models according to preset rotation angles based on the pinhole camera internal parameters. The pinhole rendered images and pinhole optical flow images of the plurality of pinhole camera models can therefore completely cover the image formed by the fisheye camera.
In a second aspect, the present invention provides an image data generating apparatus of a fisheye camera, the apparatus comprising:
the fish-eye data acquisition module is used for acquiring fish-eye imaging parameters of the fish-eye camera;
the fish eye camera dividing module is used for dividing the fish eye camera into a plurality of pinhole camera models based on the fish eye imaging parameters and acquiring pinhole camera parameters corresponding to the pinhole camera models, wherein the pinhole camera parameters comprise a pinhole camera internal parameter and a pinhole camera external parameter;
the pinhole image rendering module is used for performing image rendering on the initial image acquired by the pinhole camera model in real time to generate a pinhole rendering image and a corresponding pinhole optical flow image;
the direction vector calculation module is used for obtaining a first direction vector of each pixel under a first camera coordinate system corresponding to the fisheye camera based on a first pixel coordinate of each pixel under a fisheye pixel plane corresponding to the fisheye camera;
the vector coordinate conversion module is used for carrying out coordinate system conversion on the first direction vector based on the pinhole camera parameters to obtain a second pixel coordinate of the pixel under a pinhole pixel plane corresponding to the pinhole camera model;
and the image data conversion module is used for processing the pinhole rendering image and the pinhole optical flow image based on the corresponding relation between the first pixel coordinates and the second pixel coordinates to obtain a fisheye rendering image and a fisheye optical flow image corresponding to the fisheye camera.
In a third aspect, the present invention provides a computer device comprising: the device comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions, so that the image data generating method of the fisheye camera according to the first aspect or any corresponding embodiment of the first aspect is executed.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon computer instructions for causing a computer to execute the image data generating method of the fisheye camera of the first aspect or any of its corresponding embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a first method for generating image data of a fisheye camera according to an embodiment of the invention;
FIG. 2 is a schematic illustration of an imaging plane of a pinhole camera model according to an embodiment of the invention;
fig. 3 is a schematic diagram of an image corresponding to a fisheye camera according to an embodiment of the invention;
FIG. 4 is a schematic illustration of an image corresponding to a pinhole camera model according to an embodiment of the invention;
fig. 5 is a flowchart illustrating an image data generating method of a second fisheye camera according to an embodiment of the invention;
fig. 6 is a flowchart illustrating an image data generating method of a third fisheye camera according to an embodiment of the invention;
fig. 7 is a flowchart of an image data generating method of a fourth fisheye camera according to an embodiment of the invention;
fig. 8 is a flowchart of an image data generating method of a fifth fisheye camera according to an embodiment of the invention;
fig. 9 is a flowchart of an image data generating method of a sixth fisheye camera according to an embodiment of the invention;
fig. 10 is a block diagram of the structure of an image data generating apparatus of a fisheye camera according to an embodiment of the invention;
fig. 11 is a schematic structural view of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the field of autonomous driving, deep learning methods such as FlowNet and RAFT are mainly used to estimate optical flow, but a large amount of optical flow data is usually needed to train the deep learning model. Current methods for acquiring optical flow data mainly fall into the following two categories: the first is real-scene acquisition, represented by the KITTI dataset, in which scene flow is generated from data collected by a binocular camera and a lidar, and sparse optical flow data is produced after a series of manual processing steps; the second is synthetic datasets, represented by the Sintel dataset, which use three-dimensional modeling and rendering software to generate rendered images and extract the shader data cached during rendering to generate dense optical flow data.
Optical flow data of the first kind, collected from real scenes, is generally sparse and does not cover every pixel of the images; the missing pixels lack optical flow information when the subsequent deep learning model is trained, so the trained model cannot correctly predict the optical flow of those pixels. In addition, the acquisition of real-scene data inevitably introduces noise during data calibration and the integration of data from multiple sensors, which affects the accuracy of the optical flow data and, in turn, the accuracy of the trained deep learning model. Therefore, in the related art, synthetic datasets of the second kind are mainly used to provide optical flow data and avoid the problems of accuracy errors and data sparsity. However, synthesizing optical flow data requires rendering the scene with rendering software to obtain rendered images, caching the shader data of the rendered image of the previous frame during rendering, and comparing the cached shader data with the shader data of the rendered image of the current frame to obtain the optical flow. For a fisheye camera, a series of nonlinear transformations is required during image rendering, so conventional rendering software based on the pinhole camera model cannot carry out the computation at the matrix level; as a result, the highly distorted fisheye rendered image cannot be rendered effectively, and the optical flow data of the fisheye camera cannot be obtained.
In view of this, an embodiment of an image data generation method for a fisheye camera is provided according to an embodiment of the present invention. It should be noted that the steps shown in the flowcharts of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
In this embodiment, an image data generation method for a fisheye camera is provided, which may be used for a vehicle equipped with a fisheye camera, and in particular for an electronic control unit of the vehicle. Fig. 1 is a flowchart of a first image data generation method of a fisheye camera according to an embodiment of the present invention. As shown in fig. 1, the flow includes the following steps:
step S101, acquiring fisheye imaging parameters of the fisheye camera.
Specifically, the fisheye imaging parameters include a fisheye imaging length, a fisheye imaging width, and a fisheye field angle.
Step S102, dividing the fisheye camera into a plurality of pinhole camera models based on the fisheye imaging parameters, and acquiring pinhole camera parameters corresponding to the pinhole camera models, wherein the pinhole camera parameters comprise a pinhole camera internal parameter and a pinhole camera external parameter.
Specifically, the pinhole camera parameters of the pinhole camera models are determined according to the fisheye imaging parameters, and five pinhole camera models are constructed at the same optical center as the fisheye camera, with their imaging planes rotated 90 degrees from one another. As shown in fig. 2, the imaging planes of the pinhole camera models are generated at orientations rotated 90 degrees upward, 90 degrees downward, 90 degrees to the left, and 90 degrees to the right relative to the fisheye camera, in addition to the forward-facing one.
Step S103, performing image rendering on the initial image acquired by the pinhole camera model in real time, and generating a pinhole rendering image and a corresponding pinhole optical flow image.
It should be noted that, in step S103, the initial image acquired in real time by the pinhole camera model may be rendered by related rendering software to generate a pinhole rendered image (i.e., an RGB image). Optical flow is the instantaneous velocity of the pixel motion of a spatially moving object on the observed imaging plane, i.e., the direction and distance by which each pixel in the pinhole rendered image of the current frame moves in the pinhole pixel plane relative to the pinhole rendered image of the previous frame. In practice, the extraction of optical flow data can therefore be achieved by programming the shaders and rendering pipeline used for image rendering. Illustratively, in a motion blur shader, the shader data of the pinhole rendered image of the previous frame is cached and compared with the shader data of the pinhole rendered image of the current frame, so as to extract the optical flow data from the current frame time t to the previous frame time t-1 and generate the pinhole optical flow image of the pinhole camera model.
It should be noted that, after the fisheye camera is divided into five pinhole camera models, the images of the five areas 1 to 5 into which the fisheye image is divided in fig. 3 correspond to the images of the five pinhole camera models 1 to 5 in fig. 4, respectively. The purpose of the present invention is to efficiently convert the pinhole rendered images of the pinhole camera models into the fisheye rendered image of the fisheye camera by means of grid sampling, and likewise convert the pinhole optical flow images of the pinhole camera models into the fisheye optical flow image of the fisheye camera.
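The relation between a frame, its predecessor, and the optical flow described above can be checked numerically. The sketch below is a minimal illustration, not the patent's shader-based extraction, of how a dense flow field maps each pixel of the current frame back to its position in the previous frame; the function name and array layout are assumptions made for the example.

```python
import numpy as np

def warp_previous_frame(prev_img: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Sample the previous frame at (current position + flow).

    prev_img: H x W x 3 image at time t-1.
    flow:     H x W x 2 optical flow from frame t back to frame t-1,
              flow[v, u] = (dx, dy), so pixel (u, v) of frame t came
              from (u + dx, v + dy) in frame t-1.
    Nearest-neighbour sampling keeps the sketch short; the conversion in
    the patent uses bilinear interpolation instead.
    """
    h, w = flow.shape[:2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))            # current pixel grid
    src_x = np.clip(np.rint(u + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(v + flow[..., 1]), 0, h - 1).astype(int)
    return prev_img[src_y, src_x]
```

If the flow is accurate, the warped result should closely match the current rendered frame except in occluded regions, which is a common sanity check on synthesized optical flow data.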
Step S104, based on the first pixel coordinates of each pixel under the fisheye pixel plane corresponding to the fisheye camera, a first direction vector of the pixel under the first camera coordinate system corresponding to the fisheye camera is obtained.
Specifically, the first pixel coordinates may be processed based on a fisheye camera internal parameter of the fisheye camera to obtain a first direction vector of the pixel under a first camera coordinate system corresponding to the fisheye camera.
Step S105, carrying out coordinate system transformation on the first direction vector based on the pinhole camera parameters to obtain second pixel coordinates of the pixels under the pinhole pixel plane corresponding to the pinhole camera model.
Specifically, the first direction vector may be transformed into the second camera coordinate system corresponding to the pinhole camera model based on the pinhole camera parameters, and then projected from the second camera coordinate system onto the pinhole pixel plane corresponding to the pinhole camera model to obtain the second pixel coordinates of the pixel.
And step S106, processing the pinhole rendering image and the pinhole optical flow image based on the corresponding relation between the first pixel coordinates and the second pixel coordinates to obtain a fisheye rendering image and a fisheye optical flow image corresponding to the fisheye camera.
According to the above image data generation method of the fisheye camera, the fisheye camera is first divided into a plurality of pinhole camera models based on the fisheye imaging parameters. Then, based on the pixel direction vector projection principle of pinhole camera imaging, the first direction vector of each pixel under the first camera coordinate system corresponding to the fisheye camera is obtained from the first pixel coordinates of the pixel under the fisheye pixel plane, and a coordinate system transformation is applied to the first direction vector to obtain the second pixel coordinates of the pixel under the pinhole camera models. Using the correspondence between the first and second pixel coordinates, the fisheye rendered image of the fisheye camera can therefore be obtained by processing the pinhole rendered images of the plurality of pinhole camera models, and the fisheye optical flow image of the fisheye camera can be obtained by processing the pinhole optical flow images of the plurality of pinhole camera models.
Fig. 5 is a flowchart of an image data generating method of a second fisheye camera according to an embodiment of the invention, and as shown in fig. 5, the fisheye imaging parameters obtained in the step S101 include a fisheye imaging length, a fisheye imaging width, and a fisheye angle of view. Then, the step S102 includes:
Step S1021, the maximum value of the fisheye imaging length and the fisheye imaging width is confirmed as the target imaging parameter.
Step S1022, calculating the pinhole imaging size of the pinhole camera model according to the target imaging parameters and the fish-eye field angle.
It should be noted that, in order to generate images that completely cover the image formed by the fisheye camera, the pinhole camera internal parameters of the corresponding pinhole camera models must be determined first, and before the internal parameters can be determined, the pinhole imaging size of the pinhole camera model must be calculated. Specifically, assuming that the image formed by the pinhole camera model is a square of equal length and width, the pinhole imaging size of the pinhole camera model is calculated from the target imaging parameter and the fisheye field angle, where the quantities involved are the pinhole imaging width, the pinhole imaging length, an upward rounding (ceiling) function ceil, the fisheye imaging width, the fisheye imaging length, and the fisheye field angle; the pinhole imaging width and the pinhole imaging length are the pinhole imaging dimensions to be calculated in step S1022.
Step S1023, determining a pinhole camera internal reference of the pinhole camera model according to the pinhole imaging size.
Specifically, the pinhole camera internal parameters of the pinhole camera model consist of the offset of the pinhole camera model in the x-axis direction, the offset in the y-axis direction, the focal length in the x-axis direction, and the focal length in the y-axis direction, all of which are determined from the pinhole imaging size. Since the pinhole imaging width is the same as the pinhole imaging length, the pinhole imaging width may be used in place of the pinhole imaging length when calculating the pinhole camera internal parameters.
Step S1024, dividing the fish-eye camera into a plurality of pinhole camera models according to the preset rotation angle based on the pinhole camera internal parameters.
Specifically, the preset rotation angles include straight ahead (0 degrees), 90 degrees upward, 90 degrees downward, 90 degrees to the left, and 90 degrees to the right.
Step S1025, determining the pinhole camera external parameters of the pinhole camera model based on the preset rotation angles corresponding to the pinhole camera model.
It can be appreciated that since the pinhole camera model is divided according to a preset rotation angle, the rotation angle between the pinhole camera model and the fisheye camera can be known, so that the pinhole camera external parameters of the pinhole camera model can be determined.
According to the above image data generation method of the fisheye camera, the maximum of the fisheye imaging length and the fisheye imaging width is first taken as the target imaging parameter; the pinhole camera internal parameters of the pinhole camera model are then determined by calculation from the target imaging parameter and the fisheye field angle, and the fisheye camera is divided into a plurality of pinhole camera models according to preset rotation angles based on the pinhole camera internal parameters. The pinhole rendered images and pinhole optical flow images of the plurality of pinhole camera models can therefore completely cover the image formed by the fisheye camera.
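As a concrete illustration of steps S1021 to S1025, the sketch below builds a square pinhole image size from the fisheye imaging size, a generic intrinsic matrix, and the five extrinsic rotations (straight ahead, up, down, left, right). The 90-degree per-camera field of view, the specific focal-length formula, and the rotation sign conventions are assumptions for the example, since the patent's exact expressions are not reproduced here; only the overall structure follows the text.

```python
import numpy as np

def build_pinhole_cameras(fe_w: int, fe_h: int, fov_deg: float = 90.0):
    """Split a fisheye camera into five pinhole camera models (sketch).

    Assumption: each pinhole camera covers a 90-degree square field of view,
    so the focal length equals half the image side; the patent derives the
    actual imaging size from the fisheye imaging size and field angle.
    """
    side = max(fe_w, fe_h)                       # target imaging parameter
    f = side / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    K = np.array([[f, 0.0, side / 2.0],
                  [0.0, f, side / 2.0],
                  [0.0, 0.0, 1.0]])              # shared pinhole intrinsics

    def rot_x(a):                                # rotation about the x-axis (up/down)
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):                                # rotation about the y-axis (left/right)
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    half_pi = np.pi / 2.0
    extrinsics = {                               # rotation from fisheye to pinhole frame
        "front": np.eye(3),
        "up":    rot_x(-half_pi),
        "down":  rot_x(half_pi),
        "left":  rot_y(-half_pi),
        "right": rot_y(half_pi),
    }
    return side, K, extrinsics
```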
Fig. 6 is a flowchart of an image data generating method of a third fisheye camera according to an embodiment of the invention, and as shown in fig. 6, the step S104 includes:
in step S1041, a grid vector corresponding to the pixel under the fisheye pixel plane is determined based on the first pixel coordinates of each pixel under the fisheye pixel plane corresponding to the fisheye camera.
Specifically, the grid vector is calculated according to the following formula: c_{i,j} = (i, j); wherein i and j are the first pixel coordinates of the pixel, c_{i,j} ∈ R² is the grid vector of the pixel, R² denotes a two-dimensional real vector, and C is the matrix formed by the grid vectors of all pixels, i.e. a real tensor of dimension H_f × W_f × 2, where H_f and W_f are the fisheye imaging length and the fisheye imaging width.
In step S1042, the grid vector of the pixel is back projected to the first camera coordinate system corresponding to the fisheye camera to obtain the first direction vector of the pixel.
Specifically, the grid vector of the pixel may be back projected into the first camera coordinate system corresponding to the fisheye camera based on the fisheye camera parameters of the fisheye camera.
Specifically, the first direction vector is calculated according to the following formula: p_{i,j} = unproject(c_{i,j}); wherein p_{i,j} ∈ R³ is the first direction vector of the pixel, unproject denotes the back projection of the fisheye camera, R³ denotes a three-dimensional real vector, and P is the matrix formed by the first direction vectors of all pixels in the first camera coordinate system.
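The sketch below illustrates steps S1041 and S1042: building the grid of first pixel coordinates and back-projecting it to direction vectors in the fisheye camera coordinate system. The patent does not fix a particular fisheye model, so an equidistant projection model with the principal point at the image centre is assumed here purely for illustration; the function name is hypothetical.

```python
import numpy as np

def fisheye_unproject_grid(fe_w: int, fe_h: int, fov_deg: float) -> np.ndarray:
    """Back-project every fisheye pixel to a unit direction vector.

    Assumes an equidistant fisheye model r = f * theta; returns an
    (H, W, 3) array of first direction vectors in the fisheye (first)
    camera coordinate system.
    """
    j, i = np.meshgrid(np.arange(fe_w), np.arange(fe_h))     # grid vectors c_ij
    cx, cy = fe_w / 2.0, fe_h / 2.0
    f = max(fe_w, fe_h) / np.radians(fov_deg)                # assumed equidistant focal length
    dx, dy = j - cx, i - cy
    r = np.hypot(dx, dy)
    theta = r / f                                            # angle from the optical axis
    sin_t = np.sin(theta)
    safe_r = np.where(r > 0, r, 1.0)                         # avoid division by zero at the centre
    x = sin_t * dx / safe_r
    y = sin_t * dy / safe_r
    z = np.cos(theta)
    return np.stack([x, y, z], axis=-1)                      # P: first direction vectors
```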
Fig. 7 is a flowchart of an image data generating method of a fourth fisheye camera according to an embodiment of the invention, and as shown in fig. 7, the step S105 includes:
step S1051, based on the pinhole camera parameters, transforms the first direction vector of the pixel to the second camera coordinate system corresponding to the pinhole camera model to obtain the first vector coordinate corresponding to the pixel.
Specifically, the first vector coordinates are calculated according to the following formula: p'_{i,j} = T · p_{i,j}; wherein p'_{i,j} ∈ R³ is the first vector coordinate corresponding to the pixel, T is the transformation matrix of the pinhole camera from the first camera coordinate system to the second camera coordinate system, which corresponds to the pinhole camera external parameters, and P' is the matrix formed by the first vector coordinates corresponding to all pixels.
Step S1052, normalize the first vector coordinates corresponding to the pixels to obtain the second vector coordinates corresponding to the pixels.
Specifically, the step S1052 includes: and calculating the ratio of the coordinates of the first vector coordinates in the directions of the x axis and the y axis to the coordinates of the first vector coordinates in the direction of the z axis, so as to obtain the second vector coordinates corresponding to the pixels.
Step S1053, based on the pinhole camera internal parameters, projecting the second vector coordinates corresponding to the pixels to the pinhole pixel plane corresponding to the pinhole camera model, to obtain the second pixel coordinates of the pixels.
Specifically, the second pixel coordinates are calculated according to the following formula: c'_{i,j} = K · (x/z, y/z, 1)ᵀ; wherein c'_{i,j} ∈ R² is the second pixel coordinate of the pixel, K is the projection matrix corresponding to the pinhole camera internal parameters, x and y are the coordinates of the first vector coordinate of the pixel in the x-axis and y-axis directions, z is the coordinate of the first vector coordinate of the pixel in the z-axis direction, and C' is the matrix formed by the second pixel coordinates of all pixels in the pinhole pixel plane.
In the process of actually generating the fisheye rendered image and the fisheye optical flow image, the second pixel coordinates of the pixels only need to be calculated once, when the fisheye rendered image and fisheye optical flow image of the first frame are generated, and can be reused directly afterwards, which reduces the amount of computation and avoids repeatedly executing the nonlinear back projection of the fisheye camera.
According to the image data generation method of the fisheye camera, the principle of pixel direction vector projection of the pinhole camera model imaging is utilized, and the second pixel coordinates of the pixels on the pinhole pixel plane are obtained based on the first direction vector transformation of the pixels. Accordingly, pixel values and optical flow data corresponding to pixels located at the first pixel coordinates can be determined from the pinhole rendering image and the pinhole optical flow image based on the second pixel coordinates of the pixels to restore the fisheye rendering image and the fisheye optical flow image of the fisheye camera.
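The sketch below chains steps S1051 to S1053: rotating the first direction vectors into a pinhole camera coordinate system, normalizing by the z component, and projecting with the pinhole intrinsics to obtain the second pixel coordinates. Symbol names follow the reconstruction above; the function itself is an illustrative assumption, and because the mapping depends only on the camera parameters it can be computed once and reused for every frame.

```python
import numpy as np

def project_to_pinhole(P: np.ndarray, R: np.ndarray, K: np.ndarray):
    """Map fisheye direction vectors to pinhole pixel coordinates.

    P: (H, W, 3) first direction vectors in the fisheye coordinate system.
    R: (3, 3) rotation from the fisheye to the pinhole coordinate system
       (the pinhole camera extrinsics).
    K: (3, 3) pinhole intrinsic (projection) matrix.
    Returns the (H, W, 2) second pixel coordinates and the (H, W) z
    components, needed later to mask points behind the imaging plane.
    """
    P_cam = P @ R.T                                      # first vector coordinates p'_ij
    z = P_cam[..., 2]
    safe_z = np.where(np.abs(z) > 1e-9, z, 1e-9)         # guard against division by zero
    xn = P_cam[..., 0] / safe_z                          # normalized second vector coordinates
    yn = P_cam[..., 1] / safe_z
    u = K[0, 0] * xn + K[0, 2]                           # projection with the intrinsics
    v = K[1, 1] * yn + K[1, 2]
    return np.stack([u, v], axis=-1), z
```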
Fig. 8 is a flowchart of an image data generating method of a fifth fisheye camera according to an embodiment of the invention, and as shown in fig. 8, the step S106 includes:
step S1061, based on the correspondence between the first pixel coordinate and the second pixel coordinate, acquires a pixel value corresponding to the first pixel coordinate from the pinhole rendering image.
Step S1062, performing linear interpolation processing on the pixel value corresponding to the first pixel coordinate to obtain a fisheye rendering image corresponding to the fisheye camera.
Specifically, bilinear interpolation processing is performed on the pixel value corresponding to the first pixel coordinate, so as to obtain a fisheye rendering image corresponding to the fisheye camera.
It should be noted that the second pixel coordinates of the pinhole camera model that correspond to the first pixel coordinates (i, j) of the fisheye camera generally fall at sub-pixel positions. Therefore, when the fisheye rendered image is generated from the pinhole rendered images of the plurality of pinhole camera models, the resolution and accuracy of the generated fisheye rendered image may be affected. It is therefore necessary to combine the pinhole rendered images of the pinhole camera models by linear interpolation and calculate the RGB pixel values at the corresponding pixel positions of the fisheye rendered image, so as to improve the resolution and accuracy of the generated fisheye rendered image.
In step S1063, first optical flow data corresponding to the first pixel coordinates is acquired from the pinhole optical flow image based on the correspondence between the first pixel coordinates and the second pixel coordinates.
It will be appreciated that, in the preceding steps, the first pixel coordinates (i, j) of each pixel on the fisheye pixel plane of the fisheye camera and the corresponding second pixel coordinates c'_{i,j} under the pinhole camera model have been obtained. The first optical flow data corresponding to the first pixel coordinates (i, j) is therefore the optical flow read from the pinhole optical flow image at the second pixel coordinates c'_{i,j}, denoted f_{i,j} = (f_x, f_y); wherein f_x is the optical flow of the first optical flow data in the x-axis direction and f_y is the optical flow of the first optical flow data in the y-axis direction.
Step S1064, based on the first pixel coordinates and the pinhole camera parameters, performing coordinate system transformation on the first optical flow data corresponding to the first pixel coordinates to obtain a fisheye optical flow image corresponding to the fisheye camera.
According to the image data generation method of the fisheye camera, based on the corresponding relation between the first pixel coordinates and the second pixel coordinates, the pixel value and the first optical flow data corresponding to the first pixel coordinates are respectively obtained from the pinhole rendering image and the pinhole optical flow image, so that the fisheye rendering image and the fisheye optical flow image are effectively generated. Meanwhile, the pixel value corresponding to the first pixel coordinate is subjected to linear interpolation processing to obtain the fisheye rendering image corresponding to the fisheye camera, and the image quality of the fisheye rendering image can be improved to a certain extent.
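Steps S1061 and S1062 amount to grid sampling: for every fisheye pixel, the pinhole rendered image is read at the (generally sub-pixel) second pixel coordinates with bilinear interpolation. A minimal sketch using OpenCV's remap is given below; the variable names are the ones introduced above, and the helper function itself is not part of the patent.

```python
import cv2
import numpy as np

def sample_pinhole_image(pinhole_img: np.ndarray, C2: np.ndarray) -> np.ndarray:
    """Bilinearly sample a pinhole rendered image at the second pixel coordinates.

    pinhole_img: pinhole rendered image of shape (H_p, W_p, 3).
    C2:          (H_f, W_f, 2) second pixel coordinates (u, v) for every
                 fisheye pixel; out-of-range positions return black and are
                 removed later by the pixel mask.
    """
    map_x = C2[..., 0].astype(np.float32)
    map_y = C2[..., 1].astype(np.float32)
    return cv2.remap(pinhole_img, map_x, map_y,
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)
```

The same call is made once per pinhole camera model, and the five sampled results are later combined under the per-region pixel masks described in steps S107 to S109.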
Further, the step S1064 includes:
and a step a1, obtaining absolute coordinates of the pixel in the pinhole rendering image of the previous frame based on the sum value of the first pixel coordinates of the pixel and the corresponding first optical flow data.
Specifically, the absolute coordinates are calculated according to the following formula: a_{i,j} = c_{i,j} + f_{i,j}; wherein a_{i,j} is the absolute coordinate of the pixel in the pinhole rendered image of the previous frame, f_{i,j} is the first optical flow data of the pixel, and A is the matrix formed by the absolute coordinates of all pixels in the pinhole rendered image of the previous frame.
And a step a2, performing linear interpolation processing on the absolute coordinates based on the second pixel coordinates of the pixels to obtain second optical flow data of the pixels under the pinhole pixel plane.
Specifically, bilinear interpolation processing is performed on the absolute coordinates based on second pixel coordinates of the pixels, so as to obtain second optical flow data of the pixels under a pinhole pixel plane.
And a step a3 of back projecting second optical flow data of the pixels to a second camera coordinate system corresponding to the pinhole camera based on the internal parameters of the pinhole camera to obtain second direction vectors of the pixels.
Specifically, the second direction vector is calculated according to the following formula: d_{i,j} = K_inv · (s_{i,j}, 1)ᵀ; wherein d_{i,j} is the second direction vector of the pixel, K_inv is the inverse of the projection matrix corresponding to the pinhole camera internal parameters, and s_{i,j} is the second optical flow data of the pixel.
And a step a4, based on the pinhole camera external parameters, transforming the second direction vector of the pixel to the first camera coordinate system to obtain a third direction vector of the pixel.
Specifically, the third direction vector is calculated according to the following formula: q_{i,j} = T_inv · d_{i,j}; wherein q_{i,j} is the third direction vector of the pixel and T_inv is the inverse of the transformation matrix corresponding to the pinhole camera external parameters.
And a step a5, projecting the third direction vector of the pixel under the fisheye pixel plane, and processing the projected third direction vector based on the corresponding first pixel coordinate to obtain a fisheye optical flow image corresponding to the fisheye camera.
Specifically, the optical flow data in the fisheye optical flow image is calculated according to the following formulas: u_{i,j} = project(q_{i,j}), F_{i,j} = u_{i,j} − c_{i,j}; wherein u_{i,j} is the projected third direction vector corresponding to the pixel, F_{i,j} is the optical flow data of the pixel for the fisheye camera, and project denotes the projection of the fisheye camera.
It should be noted that the projected third direction vector corresponding to the pixel is the absolute pixel position of that pixel at time t-1 (i.e., in the fisheye rendered image of the previous frame); in order to obtain the optical flow data of the pixel for the fisheye camera, the current first pixel coordinate needs to be subtracted from it.
In particular, the third directional vector of pixels may be projected under the fisheye pixel plane based on fisheye camera internal parameters of the fisheye camera.
In actual operation, referring to fig. 2 to fig. 4, when the fisheye rendered image and the fisheye optical flow image are generated from the pinhole rendered images, rendered images and optical flow images covering areas 1 to 5 are obtained; the complete fisheye rendered image is obtained by superimposing the rendered images of the five areas, and the complete fisheye optical flow image is obtained by superimposing the optical flow images of the five areas.
According to the above image data generation method of the fisheye camera, the absolute coordinates of the pixel in the pinhole rendered image of the previous frame are first determined from the sum of the first pixel coordinates of the pixel and the corresponding first optical flow data, and linear interpolation is then performed on the absolute coordinates to obtain the second optical flow data of the pixel under the pinhole pixel plane of the previous frame, so that the second optical flow data can be transformed through the coordinate systems to accurately generate the fisheye optical flow image corresponding to the fisheye camera.
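Putting steps a1 to a5 together, the sketch below converts a pinhole optical flow image into fisheye optical flow for the pixels covered by one pinhole camera model. It reuses the second pixel coordinates and the rotation and intrinsic matrices from the earlier sketches, forms the absolute coordinates over the pinhole pixel grid before sampling them at the second pixel coordinates (one reasonable reading of steps a1 and a2), and uses an assumed equidistant `fisheye_project` as the inverse of the back projection sketched above; none of these helpers are the patent's exact formulation.

```python
import cv2
import numpy as np

def fisheye_project(Q: np.ndarray, fe_w: int, fe_h: int, fov_deg: float) -> np.ndarray:
    """Equidistant fisheye projection of direction vectors (assumed model)."""
    f = max(fe_w, fe_h) / np.radians(fov_deg)
    x, y, z = Q[..., 0], Q[..., 1], Q[..., 2]
    r_xy = np.hypot(x, y)
    theta = np.arctan2(r_xy, z)                  # angle from the optical axis
    safe = np.where(r_xy > 0, r_xy, 1.0)
    r = f * theta
    u = fe_w / 2.0 + r * x / safe
    v = fe_h / 2.0 + r * y / safe
    return np.stack([u, v], axis=-1)

def pinhole_flow_to_fisheye(flow_p, C2, grid, R, K, fe_w, fe_h, fov_deg):
    """Convert a pinhole optical flow image to fisheye optical flow (steps a1-a5).

    flow_p: (H_p, W_p, 2) pinhole optical flow image.
    C2:     (H_f, W_f, 2) second pixel coordinates of every fisheye pixel.
    grid:   (H_f, W_f, 2) first pixel coordinates of every fisheye pixel as (u, v).
    R, K:   pinhole extrinsic rotation and intrinsic matrix.
    """
    hp, wp = flow_p.shape[:2]
    # a1: absolute previous-frame position of every pinhole pixel.
    u, v = np.meshgrid(np.arange(wp), np.arange(hp))
    abs_p = np.stack([u + flow_p[..., 0], v + flow_p[..., 1]], axis=-1).astype(np.float32)
    # a2: bilinear interpolation of those positions at the second pixel coordinates.
    s = cv2.remap(abs_p, C2[..., 0].astype(np.float32), C2[..., 1].astype(np.float32),
                  interpolation=cv2.INTER_LINEAR)
    # a3: back-project to the pinhole camera coordinate system.
    ones = np.ones(s.shape[:2])
    d = np.stack([s[..., 0], s[..., 1], ones], axis=-1) @ np.linalg.inv(K).T
    # a4: rotate back to the fisheye camera coordinate system.
    q = d @ np.linalg.inv(R).T
    # a5: project onto the fisheye pixel plane and subtract the current coordinates.
    prev_pos = fisheye_project(q, fe_w, fe_h, fov_deg)
    return prev_pos - grid
```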
Fig. 9 is a flowchart of an image data generating method of a sixth fisheye camera according to an embodiment of the invention, and as shown in fig. 9, the method further includes:
Step S107, constructing a pixel mask based on the imaging plane range corresponding to the first vector coordinate and the pinhole camera model, wherein the pixel mask is used for shielding pixels beyond the imaging plane range.
Specifically, the pixel mask is constructed as follows: the Boolean value m_{i,j} corresponding to a pixel is set to 1 when the projection of the first vector coordinate of the pixel lies within the imaging plane range of the pinhole camera model and the coordinate of the first vector coordinate in the z-axis direction is not less than zero, and is set to 0 otherwise; wherein m_{i,j} is the Boolean value corresponding to the pixel in the pixel mask.
It will be appreciated that when the first vector coordinates corresponding to a pixel are outside the range of the imaging plane corresponding to the pinhole camera model and the coordinates of the first vector coordinates in the z-axis direction are less than zero (i.e., the first direction vector is projected onto the back of the imaging plane of the pinhole camera model), the pixel mask is constructed to mask that portion of the pixel.
It should be noted that, assuming that in fig. 3 the mask of available pixels corresponding to the 1st area of the fisheye camera is M1, the mask corresponding to the 2nd area is M2, the mask corresponding to the 3rd area is M3, the mask corresponding to the 4th area is M4, and the mask corresponding to the 5th area is M5, the pixel mask of the whole fisheye camera may be expressed as the union M = M1 ∪ M2 ∪ M3 ∪ M4 ∪ M5.
and S108, processing the fisheye rendering image according to the pixel shade to obtain a target fisheye rendering image.
Step S109, processing the fisheye optical flow image according to the pixel mask to obtain the target fisheye optical flow image.
According to the image data generation method of the fisheye camera, the pixel mask is constructed based on the imaging plane range corresponding to the first vector coordinate and the pinhole camera model, so that the fisheye rendering image and the fisheye optical flow image are respectively processed, and the target fisheye rendering image and the target fisheye optical flow image are obtained. Therefore, the pixel values and the optical flow data of the fisheye pixel plane which are not correctly projected to the fisheye camera in the fisheye rendering image and the fisheye optical flow image can be removed, so that the accuracy of the fisheye rendering image and the fisheye optical flow image is improved.
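The pixel mask of steps S107 to S109 can be sketched as follows: a fisheye pixel is kept for a given pinhole camera model only if its second pixel coordinate falls inside that model's imaging plane and the corresponding first vector coordinate lies in front of the camera (positive z), and the five per-region results are then overlaid. This is a minimal illustration with assumed array shapes and helper names, not the patent's exact formulation.

```python
import numpy as np

def build_pixel_mask(C2: np.ndarray, z: np.ndarray, w_p: int, h_p: int) -> np.ndarray:
    """Boolean mask m_ij: True where a fisheye pixel is validly covered
    by one pinhole camera model.

    C2: (H_f, W_f, 2) second pixel coordinates (u, v) under that model.
    z:  (H_f, W_f) z components of the first vector coordinates.
    """
    u, v = C2[..., 0], C2[..., 1]
    inside = (u >= 0) & (u <= w_p - 1) & (v >= 0) & (v <= h_p - 1)
    return inside & (z > 0)

def combine_outputs(images, flows, masks):
    """Overlay the five per-region results into the final fisheye outputs."""
    fisheye_img = np.zeros_like(images[0])
    fisheye_flow = np.zeros_like(flows[0])
    for img, flow, m in zip(images, flows, masks):
        fisheye_img[m] = img[m]
        fisheye_flow[m] = flow[m]
    return fisheye_img, fisheye_flow
```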
It should be noted that the overall flow of the image data generation method of the fisheye camera provided by the invention can be understood as follows: the pinhole camera parameters of the pinhole camera models are determined according to the fisheye imaging parameters, and five pinhole camera models are constructed at the same optical center as the fisheye camera, with their imaging planes rotated 90 degrees from one another; the pinhole rendered images of the five pinhole camera models and the corresponding pinhole optical flow images are generated; the pinhole rendered images and pinhole optical flow images of the five pinhole camera models are related, via the direction vectors, from the corresponding pinhole pixel planes to three-dimensional space (i.e., the space corresponding to the first camera coordinate system and the second camera coordinate system), and the direction vectors in three-dimensional space are projected onto the fisheye pixel plane through the fisheye camera parameters; finally, after a series of processing and synthesis steps, the fisheye rendered image and the fisheye optical flow image of the fisheye camera are generated.
It is worth noting that, compared with methods that acquire optical flow data with data acquisition sensors, the image data generation method of the fisheye camera provided by the invention generates the pinhole rendered images and pinhole optical flow images by image rendering and then synthesizes the fisheye rendered image and the fisheye optical flow image from them. First, the data errors introduced by optical flow data acquisition and processing can be avoided, so the optical flow data is more accurate. Second, it solves the problem that a general rendering pipeline based on the pinhole camera model in the related art cannot render images of a highly distorted fisheye camera or extract optical flow from them. Third, the method only needs to calculate once the second pixel coordinates of the pinhole camera models corresponding to the first pixel coordinates of the fisheye camera; the correspondence between the first and second pixel coordinates can then be used directly at the matrix level, and in the subsequent processing the fisheye rendered image and fisheye optical flow image are generated in real time from the pinhole rendered images and pinhole optical flow images of the pinhole camera models. Fourth, the dense optical flow data synthesized in this way can augment the training dataset to effectively train a deep learning model for optical flow estimation.
The embodiment also provides an image data generating device of a fisheye camera, which is used for implementing the foregoing embodiments and preferred implementations, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides an image data generating apparatus of a fisheye camera, as shown in fig. 10, including:
a fisheye data acquisition module 201, configured to acquire fisheye imaging parameters of a fisheye camera;
the fisheye camera dividing module 202 is configured to divide the fisheye camera into a plurality of pinhole camera models based on fisheye imaging parameters, and obtain pinhole camera parameters corresponding to the pinhole camera models, where the pinhole camera parameters include a pinhole camera internal parameter and a pinhole camera external parameter;
the pinhole image rendering module 203 is configured to perform image rendering on an initial image acquired by the pinhole camera model in real time, and generate a pinhole rendered image and a pinhole optical flow image corresponding to the pinhole rendered image;
the direction vector calculation module 204 is configured to obtain a first direction vector of the pixel under a first camera coordinate system corresponding to the fisheye camera based on a first pixel coordinate of each pixel under a fisheye pixel plane corresponding to the fisheye camera;
The vector coordinate conversion module 205 is configured to perform coordinate system transformation on the first direction vector based on the pinhole camera parameters to obtain a second pixel coordinate of the pixel under the pinhole pixel plane corresponding to the pinhole camera model;
the image data conversion module 206 is configured to process the pinhole rendering image and the pinhole optical flow image based on the correspondence between the first pixel coordinate and the second pixel coordinate, so as to obtain a fisheye rendering image and a fisheye optical flow image corresponding to the fisheye camera.
In some alternative embodiments, the fisheye imaging parameters acquired by the fisheye data acquisition module 201 include a fisheye imaging length, a fisheye imaging width, and a fisheye field angle; then, the fisheye camera division module 202 includes:
an imaging parameter confirmation unit configured to confirm a maximum value of a fisheye imaging length and a fisheye imaging width as a target imaging parameter;
the pinhole imaging calculation unit is used for calculating the pinhole imaging size of the pinhole camera model according to the target imaging parameters and the fish eye field angle;
the camera internal parameter calculation unit is used for determining the pinhole camera internal parameters of the pinhole camera model according to the pinhole imaging size;
the pinhole camera dividing unit is used for dividing the fisheye camera into a plurality of pinhole camera models according to a preset rotation angle based on the pinhole camera internal parameters;
the camera external parameter calculating unit is used for determining the pinhole camera external parameters of the pinhole camera model based on the preset rotation angle corresponding to the pinhole camera model; a sketch of how these units fit together is given after this list.
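By way of non-limiting illustration, the cooperation of the above units may be sketched in Python as follows. The 90-degree per-view field angle, the yaw-only rotation layout and the concrete sizing rule are assumptions introduced only for the example; the embodiment merely requires that the pinhole imaging size, internal parameters and external parameters be derived in the manner described above.

import numpy as np

def split_fisheye_into_pinholes(fe_length, fe_width, fe_fov_deg,
                                pinhole_fov_deg=90.0,
                                preset_yaws_deg=(0.0, 90.0, 180.0, 270.0)):
    # Target imaging parameter: the maximum of the fisheye imaging length and width.
    target = max(fe_length, fe_width)

    # Pinhole imaging size derived from the target imaging parameter and the
    # fisheye field angle (illustrative rule: keep roughly the angular
    # resolution of the fisheye image).
    side = int(np.ceil(target * pinhole_fov_deg / fe_fov_deg))

    # Pinhole camera internal parameters from the imaging size and field angle.
    f = 0.5 * side / np.tan(np.deg2rad(pinhole_fov_deg) / 2.0)
    K = np.array([[f, 0.0, side / 2.0],
                  [0.0, f, side / 2.0],
                  [0.0, 0.0, 1.0]])

    # Pinhole camera external parameters: one rotation of the fisheye camera
    # frame per preset rotation angle (here yaw rotations about the y axis).
    Rs = []
    for yaw in preset_yaws_deg:
        a = np.deg2rad(yaw)
        Rs.append(np.array([[np.cos(a), 0.0, np.sin(a)],
                            [0.0, 1.0, 0.0],
                            [-np.sin(a), 0.0, np.cos(a)]]))
    return K, Rs, side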
In some alternative embodiments, the direction vector calculation module 204 includes:
the grid vector calculation unit is used for determining grid vectors corresponding to pixels under the fisheye pixel plane based on first pixel coordinates of the pixels under the fisheye pixel plane corresponding to the fisheye camera;
and the grid vector projection unit is used for back-projecting the grid vector of the pixel to the first camera coordinate system corresponding to the fisheye camera to obtain the first direction vector of the pixel; a minimal sketch follows this list.
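A minimal sketch of these two units, assuming an equidistant fisheye model (r = f·theta) and hypothetical calibration values fx, fy, cx, cy; the embodiment itself only relies on the calibrated back projection of the fisheye camera, whatever its concrete model.

import numpy as np

def fisheye_direction_vectors(fe_height, fe_width, fx, fy, cx, cy):
    # Grid vectors: the first pixel coordinates (u, v) of every pixel in the
    # fisheye pixel plane.
    u, v = np.meshgrid(np.arange(fe_width), np.arange(fe_height))

    # Normalised offsets from the principal point; under the assumed
    # equidistant model the incidence angle equals the normalised radius.
    x = (u - cx) / fx
    y = (v - cy) / fy
    theta = np.sqrt(x ** 2 + y ** 2)
    phi = np.arctan2(y, x)

    # First direction vectors of all pixels in the first camera coordinate
    # system, gathered in a matrix P of shape (h, w, 3).
    P = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=-1)
    return P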
In some alternative embodiments, the vector coordinate conversion module 205 includes:
the direction vector conversion unit is used for converting the first direction vector of the pixel into a second camera coordinate system corresponding to the pinhole camera model based on the pinhole camera external parameters to obtain a first vector coordinate corresponding to the pixel;
the coordinate normalization unit is used for normalizing the first vector coordinates corresponding to the pixels to obtain second vector coordinates corresponding to the pixels;
the vector coordinate projection unit is used for projecting the second vector coordinates corresponding to the pixel onto the pinhole pixel plane corresponding to the pinhole camera model based on the pinhole camera internal parameters, so as to obtain the second pixel coordinates of the pixel; a sketch of this conversion is given after this list.
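One possible realisation of the three units above, taking the direction-vector matrix P from the previous sketch together with a hypothetical pinhole rotation R and intrinsic matrix K; directions behind the pinhole camera are flagged so that later stages can ignore them.

import numpy as np

def pinhole_pixel_coordinates(P, R, K):
    # First vector coordinates: the first direction vectors rotated into the
    # second (pinhole) camera coordinate system by the external parameters.
    Q = P @ R.T                                   # shape (h, w, 3)

    # Second vector coordinates: normalisation onto the z = 1 plane.
    with np.errstate(divide="ignore", invalid="ignore"):
        Qn = Q / Q[..., 2:3]

    # Second pixel coordinates: projection with the internal parameters.
    uv = (Qn @ K.T)[..., :2]

    # Only directions in front of the pinhole camera are meaningful.
    valid = Q[..., 2] > 0
    return uv, valid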
In some alternative embodiments, the image data conversion module 206 includes:
the pixel data acquisition unit is used for acquiring a pixel value corresponding to the first pixel coordinate from the pinhole rendering image based on the corresponding relation between the first pixel coordinate and the second pixel coordinate;
the rendering image conversion unit is used for performing linear interpolation processing on the pixel value corresponding to the first pixel coordinate to obtain a fisheye rendering image corresponding to the fisheye camera;
an optical flow data acquisition unit, configured to acquire first optical flow data corresponding to the first pixel coordinate from the pinhole optical flow image based on the corresponding relation between the first pixel coordinate and the second pixel coordinate;
and the optical flow image conversion unit is used for carrying out coordinate system transformation on the first optical flow data corresponding to the first pixel coordinates based on the first pixel coordinates and the pinhole camera parameters to obtain a fisheye optical flow image corresponding to the fisheye camera; an interpolation sketch for the rendering-image branch is given after this list.
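An illustrative bilinear-sampling sketch for the rendering-image branch; the same routine can be applied to the pinhole optical flow image before the coordinate system transformation. Array names and shapes (images stored as (H, W, C)) are assumptions for the example.

import numpy as np

def sample_bilinear(img, uv, valid):
    # Linear interpolation of `img` at the second pixel coordinates `uv`,
    # written back at the corresponding first pixel coordinates.
    h, w = uv.shape[:2]
    out = np.zeros((h, w) + img.shape[2:], dtype=np.float32)

    u, v = uv[..., 0], uv[..., 1]
    ih, iw = img.shape[:2]
    inside = (valid & np.isfinite(u) & np.isfinite(v)
              & (u >= 0) & (u <= iw - 1) & (v >= 0) & (v <= ih - 1))
    u = np.where(inside, u, 0.0)
    v = np.where(inside, v, 0.0)

    u0 = np.clip(np.floor(u).astype(int), 0, iw - 2)
    v0 = np.clip(np.floor(v).astype(int), 0, ih - 2)
    du = (u - u0)[..., None]
    dv = (v - v0)[..., None]

    # Blend the four neighbouring pinhole pixels.
    top = (1 - du) * img[v0, u0] + du * img[v0, u0 + 1]
    bot = (1 - du) * img[v0 + 1, u0] + du * img[v0 + 1, u0 + 1]
    out[inside] = ((1 - dv) * top + dv * bot)[inside]
    return out

# e.g. fisheye_render = sample_bilinear(pinhole_render, uv, valid)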
Further, the optical flow image conversion unit includes:
an absolute coordinate calculating subunit, configured to obtain absolute coordinates of the pixel in the pinhole rendering image of the previous frame based on the sum of the first pixel coordinates of the pixel and the first optical flow data corresponding to the first pixel coordinates;
the coordinate interpolation processing subunit is used for carrying out linear interpolation processing on the absolute coordinates based on the second pixel coordinates of the pixels to obtain second optical flow data of the pixels under the pinhole pixel plane;
The optical flow back projection subunit is used for back projecting second optical flow data of the pixels to a second camera coordinate system corresponding to the pinhole camera model based on the pinhole camera internal reference to obtain second direction vectors of the pixels;
the coordinate system transformation subunit is used for transforming the second direction vector of the pixel to the first camera coordinate system based on the pinhole camera external parameters to obtain a third direction vector of the pixel;
the direction vector projection subunit is configured to project the third direction vector of the pixel onto the fisheye pixel plane and to process the projected third direction vector based on the corresponding first pixel coordinates, so as to obtain a fisheye optical flow image corresponding to the fisheye camera; a compressed sketch of this subunit chain is given after this list.
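A compressed sketch of the subunit chain above. The helpers fisheye_project (the forward projection of the fisheye camera, i.e. the inverse of its back projection) and sample_bilinear (the interpolation routine from the previous sketch) are assumed to be available, and G denotes the matrix of first pixel coordinates; the sketch illustrates the data flow rather than prescribing the exact numeric recipe of the embodiment.

import numpy as np

def fisheye_optical_flow(pinhole_flow, uv, valid, K, R, G,
                         fisheye_project, sample_bilinear):
    # Second optical flow data: the pinhole flow interpolated at the second
    # pixel coordinates of every fisheye pixel.
    flow_p = sample_bilinear(pinhole_flow, uv, valid)          # (h, w, 2)

    # Absolute coordinates of each pixel in the previous-frame pinhole image.
    prev_uv = uv + flow_p

    # Back projection to the second (pinhole) camera coordinate system.
    ones = np.ones(prev_uv.shape[:2] + (1,))
    prev_dirs = np.concatenate([prev_uv, ones], axis=-1) @ np.linalg.inv(K).T

    # Transformation back to the first (fisheye) camera coordinate system,
    # i.e. the inverse of the external rotation used in the forward pass.
    prev_dirs_fe = prev_dirs @ R

    # Projection under the fisheye pixel plane, then subtraction of the first
    # pixel coordinates gives the fisheye optical flow.
    prev_fe_uv = fisheye_project(prev_dirs_fe)
    return prev_fe_uv - G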
In some alternative embodiments, the apparatus further comprises:
the pixel mask construction module is used for constructing a pixel mask based on the imaging plane range corresponding to the first vector coordinates and the pinhole camera model, and the pixel mask is used for masking pixels exceeding the imaging plane range;
the rendering image processing module is used for processing the fisheye rendering image according to the pixel mask to obtain a target fisheye rendering image;
and the optical flow image processing module is used for processing the fisheye optical flow image according to the pixel mask to obtain a target fisheye optical flow image; a minimal sketch of the mask follows this list.
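A minimal sketch of the pixel mask, assuming a square pinhole imaging plane of side length side and the uv/valid arrays produced by the earlier projection sketch.

import numpy as np

def pixel_mask(uv, valid, side):
    # Hide fisheye pixels whose second pixel coordinates fall outside the
    # pinhole imaging plane range.
    u, v = uv[..., 0], uv[..., 1]
    return valid & (u >= 0) & (u < side) & (v >= 0) & (v < side)

# target_fisheye_render = np.where(mask[..., None], fisheye_render, 0)
# target_fisheye_flow   = np.where(mask[..., None], fisheye_flow, 0)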
Further functional descriptions of the above respective modules and units are the same as those of the above corresponding embodiments, and are not repeated here.
The image data generating apparatus of the fisheye camera in this embodiment is presented in the form of functional units, where a unit refers to an ASIC (Application Specific Integrated Circuit), a processor and memory executing one or more software or firmware programs, and/or other devices that can provide the above functions.
The embodiment of the invention also provides a computer device on which the image data generating apparatus of the fisheye camera shown in fig. 10 is arranged.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a computer device according to an alternative embodiment of the present invention. As shown in fig. 11, the computer device includes: one or more processors 10, a memory 20, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The components are communicatively coupled to each other using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the computer device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In some alternative embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computer devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 10 is illustrated in fig. 11.
The processor 10 may be a central processor, a network processor, or a combination thereof. The processor 10 may further include a hardware chip, among others. The hardware chip may be an application specific integrated circuit, a programmable logic device, or a combination thereof. The programmable logic device may be a complex programmable logic device, a field programmable gate array, a general-purpose array logic, or any combination thereof.
Wherein the memory 20 stores instructions executable by the at least one processor 10 to cause the at least one processor 10 to perform a method for implementing the embodiments described above.
The memory 20 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the computer device, etc. In addition, the memory 20 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some alternative embodiments, memory 20 may optionally include memory located remotely from processor 10, which may be connected to the computer device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Memory 20 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as flash memory, hard disk, or solid state disk; the memory 20 may also comprise a combination of the above types of memories.
The computer device further comprises input means 30 and output means 40. The processor 10, memory 20, input device 30, and output device 40 may be connected by a bus or other means, for example in fig. 11.
The input device 30 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer device, and may be, for example, a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, or a joystick. The output device 40 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. Such display devices include, but are not limited to, liquid crystal displays, light emitting diode displays, and plasma displays. In some alternative implementations, the display device may be a touch screen.
The embodiments of the present invention also provide a computer readable storage medium. The method according to the embodiments of the present invention described above may be implemented in hardware or firmware, or as computer code that can be recorded on a storage medium, or as computer code originally stored in a remote storage medium or a non-transitory machine readable storage medium and downloaded over a network to be stored in a local storage medium, so that the method described herein can be executed from such code stored on a storage medium by a general purpose computer, a special purpose processor, or programmable or special purpose hardware. The storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, a flash memory, a hard disk, a solid state disk, or the like; further, the storage medium may also comprise a combination of memories of the kinds described above. It will be appreciated that a computer, processor, microprocessor controller or programmable hardware includes a storage element that can store or receive software or computer code that, when accessed and executed by the computer, processor or hardware, implements the methods illustrated by the above embodiments.
Although embodiments of the present invention have been described in connection with the accompanying drawings, various modifications and variations may be made by those skilled in the art without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope of the invention as defined by the appended claims.

Claims (9)

1. A method of generating image data for a fisheye camera, the method comprising:
acquiring fisheye imaging parameters of a fisheye camera;
dividing the fisheye camera into a plurality of pinhole camera models based on the fisheye imaging parameters, and acquiring pinhole camera parameters corresponding to the pinhole camera models, wherein the pinhole camera parameters comprise a pinhole camera internal parameter and a pinhole camera external parameter;
performing image rendering on an initial image acquired by the pinhole camera model in real time to generate a pinhole rendering image and a corresponding pinhole optical flow image;
based on first pixel coordinates of each pixel under a fisheye pixel plane corresponding to the fisheye camera, obtaining a first direction vector of the pixel under a first camera coordinate system corresponding to the fisheye camera, the first direction vector being calculated according to the following formula: P_uv = unproject(G_uv) ∈ R^3; wherein P_uv is the first direction vector of the pixel, unproject denotes the back projection of the fisheye camera, R^3 denotes a three-dimensional real vector, P is the matrix formed by the first direction vectors of all pixels in the first camera coordinate system, G_uv is the grid vector of the pixel, R^2 denotes a two-dimensional real vector, and G is the matrix formed by the grid vectors of all pixels under the fisheye pixel plane;
Transforming the first direction vector based on the pinhole camera parameters to obtain a second pixel coordinate of the pixel under a pinhole pixel plane corresponding to the pinhole camera model;
Processing the pinhole rendering image and the pinhole optical flow image based on the corresponding relation between the first pixel coordinates and the second pixel coordinates to obtain a fisheye rendering image and a fisheye optical flow image corresponding to the fisheye camera;
the processing the pinhole rendering image and the pinhole optical flow image based on the correspondence between the first pixel coordinates and the second pixel coordinates to obtain a fisheye rendering image and a fisheye optical flow image corresponding to the fisheye camera includes:
acquiring a pixel value corresponding to the first pixel coordinate from the pinhole rendering image based on the corresponding relation between the first pixel coordinate and the second pixel coordinate;
performing linear interpolation processing on the pixel value corresponding to the first pixel coordinate to obtain a fisheye rendering image corresponding to the fisheye camera;
acquiring first optical flow data corresponding to the first pixel coordinates from the pinhole optical flow image based on the corresponding relation between the first pixel coordinates and the second pixel coordinates;
and carrying out coordinate system transformation on the first optical flow data corresponding to the first pixel coordinates based on the first pixel coordinates and the pinhole camera parameters to obtain a fisheye optical flow image corresponding to the fisheye camera.
2. The method according to claim 1, wherein the obtaining, based on the first pixel coordinates of each pixel in the fisheye pixel plane corresponding to the fisheye camera, a first direction vector of the pixel in the first camera coordinate system corresponding to the fisheye camera includes:
determining a grid vector corresponding to each pixel under a fisheye pixel plane based on a first pixel coordinate of each pixel under the fisheye pixel plane corresponding to the fisheye camera;
and back projecting the grid vector of the pixel to a first camera coordinate system corresponding to the fisheye camera to obtain a first direction vector of the pixel.
3. The method of claim 1, wherein the transforming the first direction vector based on the pinhole camera parameters to obtain second pixel coordinates of the pixel in a pinhole pixel plane corresponding to the pinhole camera model includes:
based on the pinhole camera external parameters, transforming the first direction vector of the pixel to a second camera coordinate system corresponding to the pinhole camera model to obtain a first vector coordinate corresponding to the pixel;
normalizing the first vector coordinate corresponding to the pixel to obtain a second vector coordinate corresponding to the pixel;
And based on the pinhole camera internal parameters, projecting the second vector coordinates corresponding to the pixels to the pinhole pixel plane corresponding to the pinhole camera model to obtain the second pixel coordinates of the pixels.
4. The method of claim 1, wherein the performing a coordinate system transformation on the first optical flow data corresponding to the first pixel coordinates based on the first pixel coordinates and the pinhole camera parameters to obtain a fisheye optical flow image corresponding to the fisheye camera comprises:
obtaining absolute coordinates of the pixel in a pinhole rendering image of a previous frame based on the sum of the first pixel coordinates of the pixel and the corresponding first optical flow data;
performing linear interpolation processing on the absolute coordinates based on second pixel coordinates of the pixels to obtain second optical flow data of the pixels under the pinhole pixel plane;
based on the pinhole camera internal parameters, back-projecting second optical flow data of the pixels to a second camera coordinate system corresponding to the pinhole camera model to obtain second direction vectors of the pixels;
transforming the second direction vector of the pixel to the first camera coordinate system based on the pinhole camera external parameters to obtain a third direction vector of the pixel;
and projecting the third direction vector of the pixel onto the fisheye pixel plane, and processing the projected third direction vector based on the corresponding first pixel coordinates, to obtain a fisheye optical flow image corresponding to the fisheye camera.
5. A method according to claim 3, characterized in that the method further comprises:
constructing a pixel mask based on an imaging plane range corresponding to the first vector coordinate and the pinhole camera model, wherein the pixel mask is used for masking pixels exceeding the imaging plane range;
processing the fisheye rendering image according to the pixel mask to obtain a target fisheye rendering image;
and processing the fisheye optical flow image according to the pixel mask to obtain a target fisheye optical flow image.
6. The method of claim 1, wherein the fisheye imaging parameters include a fisheye imaging length, a fisheye imaging width, and a fisheye field angle, wherein the dividing the fisheye camera into a plurality of pinhole camera models based on the fisheye imaging parameters and acquiring pinhole camera parameters corresponding to the pinhole camera models comprises:
confirming the maximum value of the fisheye imaging length and the fisheye imaging width as a target imaging parameter;
Calculating the pinhole imaging size of a pinhole camera model according to the target imaging parameters and the fish eye field angle;
determining a pinhole camera internal parameter of the pinhole camera model according to the pinhole imaging size;
dividing the fisheye camera into a plurality of pinhole camera models according to a preset rotation angle based on the pinhole camera internal parameters;
and determining a pinhole camera external parameter of the pinhole camera model based on a preset rotation angle corresponding to the pinhole camera model.
7. An image data generating apparatus of a fisheye camera, the apparatus comprising:
the fish-eye data acquisition module is used for acquiring fish-eye imaging parameters of the fish-eye camera;
the fish eye camera dividing module is used for dividing the fish eye camera into a plurality of pinhole camera models based on the fish eye imaging parameters and acquiring pinhole camera parameters corresponding to the pinhole camera models, wherein the pinhole camera parameters comprise a pinhole camera internal parameter and a pinhole camera external parameter;
the pinhole image rendering module is used for performing image rendering on the initial image acquired by the pinhole camera model in real time to generate a pinhole rendering image and a corresponding pinhole optical flow image;
the direction vector calculation module is used for obtaining a first direction vector of each pixel under a first camera coordinate system corresponding to the fisheye camera based on a first pixel coordinate of each pixel under a fisheye pixel plane corresponding to the fisheye camera, the first direction vector being calculated according to the following formula: P_uv = unproject(G_uv) ∈ R^3; wherein P_uv is the first direction vector of the pixel, unproject denotes the back projection of the fisheye camera, R^3 denotes a three-dimensional real vector, P is the matrix formed by the first direction vectors of all pixels in the first camera coordinate system, G_uv is the grid vector of the pixel, R^2 denotes a two-dimensional real vector, and G is the matrix formed by the grid vectors of all pixels under the fisheye pixel plane;
The vector coordinate conversion module is used for carrying out coordinate system conversion on the first direction vector based on the pinhole camera parameters to obtain a second pixel coordinate of the pixel under a pinhole pixel plane corresponding to the pinhole camera model;
the image data conversion module is used for processing the pinhole rendering image and the pinhole optical flow image based on the corresponding relation between the first pixel coordinates and the second pixel coordinates to obtain a fisheye rendering image and a fisheye optical flow image corresponding to the fisheye camera;
the image data conversion module includes:
a pixel data obtaining unit, configured to obtain a pixel value corresponding to the first pixel coordinate from the pinhole rendering image based on a correspondence between the first pixel coordinate and the second pixel coordinate;
the rendering image conversion unit is used for performing linear interpolation processing on the pixel value corresponding to the first pixel coordinate to obtain a fisheye rendering image corresponding to the fisheye camera;
an optical flow data acquisition unit, configured to acquire first optical flow data corresponding to the first pixel coordinate from the pinhole optical flow image based on a correspondence between the first pixel coordinate and the second pixel coordinate;
and the optical flow image conversion unit is used for carrying out coordinate system transformation on the first optical flow data corresponding to the first pixel coordinates based on the first pixel coordinates and the pinhole camera parameters to obtain a fisheye optical flow image corresponding to the fisheye camera.
8. A computer device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the method of generating image data for a fisheye camera of any of claims 1 to 6.
9. A computer-readable storage medium having stored thereon computer instructions for causing a computer to execute the image data generation method of the fisheye camera of any one of claims 1 to 6.
CN202311147820.9A 2023-09-07 2023-09-07 Image data generation method, device and equipment of fisheye camera and storage medium Active CN116883231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311147820.9A CN116883231B (en) 2023-09-07 2023-09-07 Image data generation method, device and equipment of fisheye camera and storage medium

Publications (2)

Publication Number Publication Date
CN116883231A CN116883231A (en) 2023-10-13
CN116883231B true CN116883231B (en) 2024-02-02

Family

ID=88272105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311147820.9A Active CN116883231B (en) 2023-09-07 2023-09-07 Image data generation method, device and equipment of fisheye camera and storage medium

Country Status (1)

Country Link
CN (1) CN116883231B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930312A (en) * 2018-09-19 2020-03-27 驭势(上海)汽车科技有限公司 Method and device for generating fisheye camera image
CN113129346A (en) * 2021-04-22 2021-07-16 北京房江湖科技有限公司 Depth information acquisition method and device, electronic equipment and storage medium
CN114049479A (en) * 2021-11-10 2022-02-15 苏州魔视智能科技有限公司 Self-supervision fisheye camera image feature point extraction method and device and storage medium
CN116579962A (en) * 2023-05-12 2023-08-11 中山大学 Panoramic sensing method, device, equipment and medium based on fisheye camera

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107333051B * 2016-04-28 2019-06-21 Hangzhou Hikvision Digital Technology Co., Ltd. Indoor panoramic video generation method and device
CN109964245A * 2016-12-06 2019-07-02 SZ DJI Technology Co., Ltd. System and method for correcting wide angle picture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Visual Odometry of a Low-Profile Pallet Robot Based on Ortho-Rectified Ground Plane Image From Fisheye Camera";Soon-Yong Park etal.;《MDPI》;第1-18页 *

Also Published As

Publication number Publication date
CN116883231A (en) 2023-10-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant