CN115953306A - Three-dimensional imaging method and device based on fisheye camera and imaging system - Google Patents
- Publication number: CN115953306A
- Application number: CN202211581191.6A
- Authority
- CN
- China
- Prior art keywords
- images
- image
- camera
- spherical coordinate
- fisheye
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention provides a three-dimensional imaging method, a three-dimensional imaging device and an imaging system based on fisheye cameras. The method comprises the following steps: converting the images acquired by the fisheye cameras into a spherical coordinate system; stitching the converted images to obtain a stitched image; performing color correction on the stitching seams of the stitched image; and fusing the data acquired by a laser radar with the color-corrected stitched image to obtain a target image. By this method, imaging quality is improved and high-precision color three-dimensional point cloud information is obtained.
Description
Technical Field
The invention relates to the technical field of three-dimensional panoramic shooting, in particular to a three-dimensional imaging method and device based on a fisheye camera and an imaging system.
Background
A dual-lens 3D camera can capture 3D images within a limited view angle, but the shooting range of the device restricts the achievable field of view; for example, to obtain a full 360-degree panoramic image, a photographer must hold the camera and turn in a circle. Capturing a panoramic image in this way is very time-consuming.
A common alternative is to capture a 3D panoramic image with a plurality of cameras; existing systems use anywhere from three to several tens of cameras. However, because the shooting ranges of the cameras overlap only slightly, these systems are effectively monocular vision systems: depth information cannot be computed or acquired from parallax. Depth information is required for the 3D content of virtual reality and augmented reality, so obtaining 3D depth information with cameras is an important problem.
In the prior art, three-dimensional reconstruction is provided in two modes: a panoramic system combined with a laser radar, or a monocular camera combined with a laser radar. Such systems are assembled from off-the-shelf camera and radar products, which forfeits control over the acquisition and processing of the raw data, so the imaging quality is not high.
Disclosure of Invention
The invention provides a three-dimensional imaging method, a three-dimensional imaging device and a three-dimensional imaging system based on a fisheye camera, and solves the technical problem of poor three-dimensional imaging quality in the prior art.
The technical scheme of the invention is as follows: provided is a three-dimensional imaging method based on a fish-eye camera, comprising the following steps:
respectively converting the spherical coordinate systems of the images acquired by the fisheye camera;
splicing the images converted by the spherical coordinate system to obtain spliced images;
carrying out color correction on the splicing seams of the spliced images;
and carrying out fusion processing on image data acquired by the laser radar based on the spliced image subjected to color correction to obtain a target image.
In an alternative mode, the converting the spherical coordinate systems of the images acquired by the fisheye cameras respectively includes:
respectively carrying out distortion removal processing on images acquired by the fisheye camera;
respectively converting the undistorted images into equidistant projection images;
and respectively converting the equidistant projection images into a spherical coordinate system.
In an optional mode, there are three fisheye cameras, respectively fixed to three sides of the camera fixing base, and the converting of the equidistant projection images into a spherical coordinate system includes:
respectively converting the equidistant projection images into a spherical coordinate system based on a first formula.
In an optional manner, the stitching the images converted by the spherical coordinate system to obtain a stitched image includes:
and carrying out image splicing based on the spherical coordinate systems of the three fisheye cameras to obtain spliced images.
In an optional manner, the performing image stitching based on the spherical coordinate systems of the three fisheye cameras to obtain a stitched image includes:
gradient analysis of pixel points is carried out in the overlapping area of two adjacent fisheye cameras;
and carrying out image splicing according to the gradient analysis result.
In an optional manner, the performing color correction on the stitching seam of the stitched image to obtain the target image includes:
and carrying out pixel equalization processing on the splicing seams of the spliced images based on a second formula.
In an optional manner, the time for acquiring data by the lidar is synchronized with the time for acquiring an image by the fisheye camera, and the fusion processing is performed on the image data acquired by the lidar based on the color-corrected stitching image to obtain a target image, including:
extracting pixel values of pixel points from the spliced image subjected to color correction;
converting the data acquired by the laser radar into a spherical coordinate system to obtain two-dimensional point cloud data;
and assigning the extracted pixel values to the corresponding points in the two-dimensional point cloud data to obtain a target image, wherein the target image carries color three-dimensional point cloud information.
The present invention also provides a three-dimensional imaging device based on a fisheye camera, the device comprising:
the conversion module is used for respectively converting the spherical coordinate systems of the images acquired by the fisheye camera;
the splicing module is used for splicing the images converted by the spherical coordinate system to obtain spliced images;
the correction module is used for carrying out color correction on the splicing seams of the spliced images;
and the fusion module is used for carrying out fusion processing on the image data acquired by the laser radar based on the spliced image subjected to color correction to obtain a target image.
The present invention also provides an imaging system comprising: a laser radar, a radar base, a camera fixing base, and at least two fisheye cameras, wherein the camera fixing base is fixedly connected with the radar base, the laser radar is fixed on the radar base, and the at least two fisheye cameras are fixed on the camera fixing base.
In an alternative form, the number of the at least two fisheye cameras is two; or
the number of the at least two fisheye cameras is three, the radar base is fixed on a motor base, the motor base is fixedly connected with the camera fixing base, the motor base is provided with a rotating motor, and the radar base drives the laser radar to rotate through the rotating motor; or
the number of the at least two fisheye cameras is n, the n fisheye cameras are distributed on the side wall of the camera fixing base, and n is greater than 3.
The present invention also provides a computing device comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is for storing at least one executable instruction that causes the processor to perform the steps of the three-dimensional imaging method of the fisheye camera.
The present invention also provides a computer storage medium having stored therein at least one executable instruction for causing a processor to perform the steps of the three-dimensional imaging method of the fisheye camera.
Compared with the prior art, the invention has the following beneficial effects: the images acquired by the fisheye cameras are first undistorted and then converted into a spherical coordinate system, the converted images are stitched, color correction is performed on the stitching seams, and the image is finally fused with the laser radar data for coloring, so that the quality of three-dimensional imaging is improved and high-precision color three-dimensional point cloud information is obtained.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings required by the embodiments are briefly introduced below; the drawings in the following description correspond only to some embodiments of the present invention.
Fig. 1 shows a schematic flow chart of a three-dimensional imaging method based on a fisheye camera according to an embodiment of the invention;
fig. 2 is a schematic diagram illustrating a fish-eye camera arrangement structure of a three-dimensional imaging method based on a fish-eye camera according to an embodiment of the present invention;
fig. 3 shows the equidistant projection images after the spherical coordinate system transformation of a three-dimensional imaging method based on a fisheye camera according to an embodiment of the invention;
FIG. 4 is a schematic diagram illustrating a spherical coordinate to cylindrical coordinate conversion of a fisheye camera-based three-dimensional imaging method according to an embodiment of the invention;
FIG. 5 is a schematic diagram illustrating image stitching of a fish-eye camera-based three-dimensional imaging method according to an embodiment of the present invention;
fig. 6 shows a schematic diagram of a stitching seam artifact of a three-dimensional imaging method based on a fisheye camera according to an embodiment of the present invention;
fig. 7 shows a schematic structural diagram of a three-dimensional imaging device based on a fisheye camera according to an embodiment of the invention;
FIG. 8 illustrates a partial block diagram of a preferred embodiment of an imaging system in accordance with the present invention;
FIG. 9 shows a schematic partial block diagram of another preferred embodiment of an imaging system according to the present invention;
fig. 10 is a schematic structural diagram of a computing device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In the present invention, directional terms such as "up", "down", "front", "back", "left", "right", "inner", "outer", "side", "top" and "bottom" are used only with reference to the orientation of the drawings, and the directional terms are used for illustration and understanding of the present invention, and are not intended to limit the present invention.
The terms "first," "second," and the like in the terms of the invention are used for descriptive purposes only and not for purposes of indication or implication relative importance, nor as a limitation on the order of precedence.
In the present invention, unless otherwise explicitly stated or limited, the terms "mounted," "connected," "fixed," and the like are to be construed broadly, e.g., as being permanently connected, detachably connected, or integral; can be mechanically or electrically connected; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Referring to fig. 1, a preferred embodiment of the present invention provides a schematic flow chart of a three-dimensional imaging method based on fisheye cameras. The method may be applied to a panoramic system (or panoramic camera) having a plurality of fisheye cameras, where the number of fisheye cameras may be two, three, or more: for example, the panoramic system may be provided with a laser radar and two fisheye cameras; or with a laser radar and three fisheye cameras; or with a laser radar and n fisheye cameras (n greater than 3); or with a rotatable laser radar and three fisheye cameras. No limitation is imposed here. In this embodiment, the solution of the present invention is illustrated with a panoramic system provided with a non-rotating laser radar and three fisheye cameras. The method includes:
s1, respectively converting a spherical coordinate system for images acquired by a fisheye camera;
specifically, the fisheye camera first acquires an image, and then performs conversion of a spherical coordinate system on the acquired image.
S2, splicing the images converted by the spherical coordinate system to obtain spliced images;
specifically, the images converted by the spherical coordinate system are stitched to obtain a stitched image, and for example, the images may be stitched according to a relationship between the coordinate systems.
S3, performing color correction on the stitching seams of the stitched image;
Specifically, color correction is performed mainly on the stitching seams of the stitched image, so that the color does not change abruptly across the seams; this improves the stitching quality of the image to a certain extent.
And S4, carrying out fusion processing on image data acquired by the laser radar based on the spliced image subjected to color correction to obtain a target image.
Specifically, the target image is obtained by fusing the color-corrected stitched image with the data acquired by the laser radar, for example by performing coloring fusion on the acquired point cloud data. The target image is a high-precision color three-dimensional point cloud image; that is, the target image carries color three-dimensional point cloud information.
In the embodiment of the invention, the image acquired by the fisheye camera is firstly subjected to distortion removal processing, then the conversion of a spherical coordinate system is carried out, then the converted image is spliced, the color of the spliced seam is corrected, and then the image fusion coloring processing is carried out, so that the quality of three-dimensional imaging can be improved, and high-precision color three-dimensional point cloud information can also be obtained.
In an optional manner of this embodiment, the step S1 includes:
respectively carrying out distortion removal processing on images acquired by the fisheye camera;
in this embodiment, preferably, the number of the fish-eye cameras is three, and the three fish-eye cameras are respectively disposed on three sides of the camera fixing base, the camera fixing base is preferably rectangular, in this case, one of the three fish-eye cameras is disposed on the first side, and the other two fish-eye cameras are disposed on two sides of the first side, as shown in fig. 2, preferably, a fish-eye camera with a field of view range of 0-185 ° is used, a calibration model of the fish-eye camera is KB8, and the acquired image is respectively distorted based on the KB8 principle, for example, the KB8 model may be regarded as that a relationship between a distance from an optical center of the image to a projection point and an angle has a proportional relationship, and the reflected light can be traced back to an incident point of the convex lens from an original path through the proportional relationship, and re-incident on the optical center from the incident point, and this process achieves distortion removal.
Respectively converting the undistorted images into equidistant projection images;
Specifically, the undistorted image needs to be converted into a spherical coordinate system. Because the stitched panoramic image is based on a unit sphere, panoramic stitching cannot be performed in a rectangular coordinate system, so a coordinate conversion is needed: after distortion removal with the KB8 fisheye model, the image is projected onto the unit-sphere coordinate system, and the spherical coordinate conversion is then carried out.
Further, as shown in fig. 2, three fisheye cameras with a 185° field of view, namely a front-view camera (Front), a left-view camera (Left), and a right-view camera (Right), are selected as part of the panoramic system; the overlapping area between adjacent cameras spans about 95°. The front-view camera is mainly responsible for image acquisition in the 90° range directly ahead, while the left-view and right-view cameras are symmetrically arranged and each responsible for a 145° shooting range of the panoramic system. The stitching seams are located at the 45° and 145° positions of the front-view camera, at about 145° of the left-view camera, and at about 145° of the right-view camera, all of which fall in regions of small camera distortion. This arrangement reduces distortion and improves the quality of three-dimensional imaging to a certain extent. The undistorted image actually consists of the coordinates of the three-dimensional scene points, projected after the KB8 model operation onto the spherical coordinate system of a unit sphere; but an image in a spherical coordinate system is three-dimensional and cannot be stored directly, so it is necessary to convert the spherical-coordinate image into a cylindrical-coordinate image.
Respectively converting the equidistant projection images into a spherical coordinate system;
Specifically, after extrinsic calibration of the panoramic system, the positional relationship among the cameras is known; if the optical centers of the cameras are close together, only the rotation between the cameras is used as the transformation condition, serving as the initial condition of the stitching algorithm. The pixel positions in the spherical coordinate systems of the left-view and right-view cameras are transformed into the spherical coordinate system of the front-view camera, i.e., into a unified unit-sphere coordinate system onto which all pixels are projected. The images acquired by all the fisheye cameras are then converted into equidistant projection views. As shown in fig. 3, the white areas are the pixel regions; from top to bottom are the equidistant projection images converted from the left-view, front-view, and right-view fisheye cameras. As shown in fig. 2, the three cameras are physically fixed relative to one another, with predetermined parameters. Each camera has its own spherical coordinate system, and the three spherical coordinate systems can be made coincident by translation and rotation, whereby the images of all three cameras are stitched together.
In an optional manner of this embodiment, the number of the fisheye cameras is three, the three fisheye cameras being respectively fixed to three side surfaces of the camera fixing base; that is, as shown in fig. 2, the fisheye cameras are respectively a front-view camera, a left-view camera, and a right-view camera. The converting of the equidistant projection images into a spherical coordinate system specifically includes:
and respectively converting the images converted by the equidistant projection drawings into a spherical coordinate system based on a first formula. The conversion process is shown in fig. 4, and the first formula is:wherein t is a horizontal angle, and ν represents a plumb angle; r is Column Denotes the distance of the point from the central axis of the cylinder, not the radius of the cylinder, r Ball with ball-shaped section Indicating the distance of the point from the center of the sphere, not the radius of the sphere.
In a preferred embodiment of the present invention, the step S2 includes: performing image splicing based on a spherical coordinate system of the three fisheye cameras to obtain spliced images;
specifically, gradient analysis of pixel points is performed in an overlapping area of two adjacent fisheye cameras, and then image stitching is performed according to a gradient analysis result.
For example: because the field-of-view ranges and the camera extrinsics are known, the images converted into the spherical coordinate system become equidistant projection images whose overlapping regions can be clearly delimited, and image stitching is then performed by a matching algorithm; for example, image stitching is performed based on the spherical coordinate systems of the three fisheye cameras to obtain a stitched image. Further, since the images of the three fisheye cameras are stitched together, a certain overlapping area exists between every two adjacent cameras. Gradient analysis of pixel points is performed in the overlapping area to obtain feature points: the pixel-change gradient values of the pixel points in the overlapping area are computed, the gradient values are sorted, the pixel points of the overlapping area are matched according to the gradient values, and stitching is performed according to the matching results; stitching of the remaining pixel points of the overlapping area is then completed, thereby achieving the stitching of the images of the three fisheye cameras, as shown in fig. 5.
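A minimal sketch of the gradient analysis in the overlap, assuming a Sobel gradient magnitude as the "pixel-change gradient value" and a simple rank-order pairing of the strongest-gradient pixels; the patent does not specify the exact operator or matcher, and the file names and overlap columns are placeholders.

```python
import cv2
import numpy as np

def overlap_gradients(img, x0, x1):
    """Gradient magnitude of the overlap strip [x0, x1) of an image."""
    strip = cv2.cvtColor(img[:, x0:x1], cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(strip, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(strip, cv2.CV_32F, 0, 1, ksize=3)
    return np.hypot(gx, gy)

def match_by_gradient(grad_a, grad_b, top_k=500):
    """Pair the strongest-gradient pixels of two strips by sorted rank."""
    idx_a = np.argsort(grad_a, axis=None)[::-1][:top_k]
    idx_b = np.argsort(grad_b, axis=None)[::-1][:top_k]
    pts_a = np.column_stack(np.unravel_index(idx_a, grad_a.shape))
    pts_b = np.column_stack(np.unravel_index(idx_b, grad_b.shape))
    return pts_a, pts_b  # candidate correspondences for the stitch

left_img = cv2.imread("left_equirect.jpg")    # hypothetical inputs
front_img = cv2.imread("front_equirect.jpg")
g_left = overlap_gradients(left_img, 3600, 4096)
g_front = overlap_gradients(front_img, 0, 496)
pts_l, pts_f = match_by_gradient(g_left, g_front)
```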
In a preferred embodiment of the present invention, the step S3 includes: performing pixel equalization processing on a splicing seam of the spliced image based on a second formula;
Specifically, as shown in fig. 6, after the equidistant projection images are stitched, a relatively obvious seam artifact appears at the seams. The overlapping regions (i.e., the stitching seams) are therefore equalized, and pixel equalization is performed with a second formula, a weighted average over the seam:

B_LF(r, c) = α1 · L(r, c) + (1 − α1) · F_L(r, c)
B_FR(r, c) = α2 · F_R(r, c) + (1 − α2) · R(r, c)

wherein B_LF(r, c) and B_FR(r, c) respectively denote the blended pixels at row r and column c of the left-front and front-right overlapping areas; L(r, c), F_L(r, c), F_R(r, c) and R(r, c) respectively denote the pixel values at (r, c) of the image captured by the left-view camera, the left and right overlap regions of the image captured by the front-view camera, and the image captured by the right-view camera; and α1, α2 represent the brightness weight parameters.
Because two adjacent pictures are shot at different incident angles, the amount of incoming light differs and so does the brightness, which produces the seam artifact; the formula above performs a weighted average of the brightness of the two overlapping pixel points to eliminate the seam artifact.
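A minimal sketch of this weighted average, assuming the weight ramps linearly across the overlap (a common "feathering" choice; the patent leaves α1 and α2 unspecified):

```python
import numpy as np

def blend_seam(left_strip, front_strip):
    """Feather two equally sized overlap strips of shape (H, W, 3).

    The weight ramps from 1 (pure left image) to 0 (pure front image)
    across the overlap width, so brightness changes smoothly at the seam.
    """
    h, w, _ = left_strip.shape
    alpha = np.linspace(1.0, 0.0, w).reshape(1, w, 1)
    blended = (alpha * left_strip.astype(np.float32)
               + (1.0 - alpha) * front_strip.astype(np.float32))
    return blended.astype(np.uint8)
```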
In a preferred embodiment of the present invention, the time at which the laser radar acquires data is synchronized with the time at which the fisheye cameras acquire images, which improves the consistency between the point cloud and the images. In practice, the radar, the cameras and other firmware each have their own independent clock system; these need to be unified by an additional clock system and triggered and scheduled simultaneously. Step S4 specifically includes:
extracting pixel values of pixel points from the spliced image subjected to color correction;
Specifically, each pixel point of the stitched image is traversed, and the corresponding pixel value is collected;
converting the data acquired by the laser radar into a spherical coordinate system to obtain two-dimensional point cloud data;
Specifically, the laser radar acquires data (point cloud information) by scanning; each point in the point cloud has (x, y, z) three-dimensional coordinates relative to the radar coordinate system. Through coordinate projection transformation, the coordinates of each point in the point cloud can be converted into (u, v) two-dimensional coordinates in the camera coordinate system (that is, the three-dimensional point cloud is flattened into a two-dimensional photo, subsequently called the point cloud image): the points are first mapped into the unit-sphere coordinate system, then into the cylindrical coordinate system, and the cylinder is then unrolled to obtain a two-dimensional picture. The point cloud image thus has the same size as the stitched image, so the correspondence between each pixel in the stitched image and each point cloud point in the point cloud image can be obtained.
For example: the coordinate system (x, y, z) of the radar is aligned with the camera coordinate system (x1, y1, z1). The coordinate systems of the radar and the fisheye cameras are determined by the mechanical structure, so the three-dimensional point cloud in the camera coordinate system is obtained simply by applying a rotation-translation transformation to the laser radar's coordinate system; the camera coordinate system is then converted into the unit spherical coordinate system. The rectangular coordinate system is converted into the unit spherical coordinate system by the conversion formula:

r = sqrt(x² + y² + z²), θ = arccos(z / r), φ = arctan(y / x)

yielding the spherical coordinate M(r, θ, φ), where the coordinate r is the distance from the point M to the origin, φ is the angle between the coordinate plane zOx and the half-plane through the z-axis and the point M, and θ is the angle between the line segment OM and the positive direction of the z-axis.
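A minimal sketch of this projection chain for a single lidar point, assuming placeholder radar-to-camera extrinsics R and tvec (in the real system they are fixed by the mechanical structure) and an equidistant point cloud image of the same size as the stitched image:

```python
import numpy as np

# Placeholder radar-to-camera rotation and translation.
R = np.eye(3)
tvec = np.array([0.0, -0.05, 0.02])

def lidar_point_to_pixel(p_radar, width, height):
    """Project one lidar point into the (u, v) grid of the point cloud image."""
    x, y, z = R @ p_radar + tvec        # radar -> camera coordinates
    r = np.sqrt(x * x + y * y + z * z)  # distance from point M to the origin
    theta = np.arccos(z / r)            # angle from the positive z-axis
    phi = np.arctan2(y, x)              # azimuth relative to the zOx plane
    u = (phi / np.pi + 1.0) * 0.5 * (width - 1)
    v = theta / np.pi * (height - 1)
    return int(round(u)), int(round(v)), r

u, v, depth = lidar_point_to_pixel(np.array([1.2, 0.3, 0.8]), 4096, 2048)
```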
And assigning the extracted pixel values to the corresponding points in the two-dimensional point cloud data to obtain a target image, wherein the target image carries color three-dimensional point cloud information.
Specifically, the pixel values extracted from the stitched image are assigned to the corresponding point cloud points in the two-dimensional point cloud data (the point cloud image); that is, each pixel point of the stitched image corresponds to a point cloud point of the point cloud image. Preferably, the corresponding point cloud points are matched by coordinate, i.e., a pixel point corresponds to the point cloud point with the same coordinates, and the pixel value of that pixel point is assigned to the corresponding point cloud point, the pixel values of the stitched image replacing those of the point cloud image. This completes the coloring and fusion of the point cloud image and yields the target image, which carries color three-dimensional point cloud information.
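Continuing the sketch above, the coloring step can be illustrated as follows; `lidar_point_to_pixel` and the image size are the hypothetical names and values from the previous snippet.

```python
import numpy as np

def color_point_cloud(points_radar, stitched_rgb):
    """Attach stitched-image colors to lidar points as (x, y, z, r, g, b)."""
    h, w, _ = stitched_rgb.shape
    colored = []
    for p in points_radar:
        u, v, _ = lidar_point_to_pixel(p, w, h)
        if 0 <= u < w and 0 <= v < h:
            r, g, b = stitched_rgb[v, u]            # pixel value at (v, u)
            colored.append((*p, float(r), float(g), float(b)))
    return np.array(colored, dtype=np.float32)      # color 3D point cloud
```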
In the embodiment of the invention, firstly, the image acquired by the fisheye camera is subjected to distortion removal processing, then the spherical coordinate system is converted, then the converted image is spliced, the spliced seam is subjected to color correction, and then the data acquired by the laser radar is subjected to coloring fusion based on the corrected spliced image, so that the quality of three-dimensional imaging can be improved.
Based on the above embodiments, the present invention further provides a three-dimensional imaging device based on a fisheye camera, as shown in fig. 7. The device comprises: a conversion module 71, a stitching module 72 connected with the conversion module 71, a correction module 73 connected with the stitching module 72, and a fusion module 74 connected with the correction module 73, wherein:
the conversion module 71 is configured to convert the images acquired by the fisheye cameras into a spherical coordinate system;
the splicing module 72 is used for splicing the images converted by the spherical coordinate system to obtain spliced images;
the correction module 73 is configured to perform color correction on the stitching seams of the stitched image;
and the fusion module 74 is configured to perform fusion processing on the image data acquired by the laser radar based on the color-corrected spliced image to obtain a target image.
In an alternative manner, the conversion module 71 is specifically configured to:
respectively carrying out distortion removal processing on images acquired by the fisheye camera;
respectively converting the undistorted images into equidistant projection images;
and respectively converting the equidistant projection images into a spherical coordinate system.
In an optional manner, there are three fisheye cameras, the three fisheye cameras are respectively fixed to three side surfaces of the camera fixing base, and the conversion module 71 is specifically configured to:
and respectively converting the images converted by the equidistant projection drawings into a spherical coordinate system based on a first formula.
In an alternative manner, the stitching module 72 is specifically configured to: perform image stitching based on the spherical coordinate systems of the three fisheye cameras to obtain a stitched image.
In an alternative manner, the stitching module 72 is specifically configured to:
gradient analysis of pixel points is carried out in an overlapping area of two adjacent fisheye cameras;
and carrying out image splicing according to the gradient analysis result.
In an alternative manner, the correction module 73 is specifically configured to:
and carrying out pixel equalization processing on the splicing seams of the spliced images based on a second formula.
In an alternative manner, the time at which the laser radar acquires data is synchronized with the time at which the fisheye camera acquires images, and the fusion module 74 is specifically configured to:
extracting pixel values of pixel points from the spliced image subjected to color correction;
converting the data collected by the laser radar into a spherical coordinate system to obtain two-dimensional point cloud data;
and assigning the extracted pixel values to the corresponding points in the two-dimensional point cloud data to obtain a target image, wherein the target image carries color three-dimensional point cloud information.
In the embodiment of the invention, the image acquired by the fisheye camera is firstly subjected to distortion removal processing, then the conversion of a spherical coordinate system is carried out, then the converted image is spliced, the color of the spliced seam is corrected, and then the data acquired by the laser radar is colored and fused based on the corrected spliced image, so that the quality of three-dimensional imaging can be improved.
Based on the above embodiments, the present invention further provides an imaging system, as shown in fig. 8, comprising: a laser radar 4, a radar base 5, a camera fixing base 6, and at least two fisheye cameras (1, 2, 3), wherein the camera fixing base 6 is fixedly connected with the radar base 5, the laser radar 4 is fixed on the radar base 5, and the at least two fisheye cameras are fixed on the camera fixing base 6.
In a preferred mode of this embodiment, the imaging system further includes a three-dimensional imaging device based on a fisheye camera as described in the foregoing embodiment, and a specific structure, a working principle, and a technical effect of the three-dimensional imaging device are consistent with those described in the foregoing embodiment, and are not described herein again.
Alternatively, the imaging system is applied to the three-dimensional imaging method based on the fisheye camera described in the above embodiment.
In a preferred mode of this embodiment, the number of the at least two fisheye cameras is two; or the number is three (see fig. 8): the camera fixing base is rectangular, the three cameras (1, 2, 3) are mounted on three adjacent faces of the rectangle and are respectively a front-view camera 2, a left-view camera 1 and a right-view camera 3, and the left-view camera 1 and the right-view camera 3 each share an overlapping region of about 90° with the front-view camera 2 (each fisheye camera has a field-of-view range of about 185°).
in a preferred mode of the present embodiment, as shown in fig. 9, the imaging system further includes: the radar base 5 is fixed on the motor base 7, the motor base 7 is fixedly connected with the camera fixing base 6, the motor base 7 is provided with the rotating motor 8, and the radar base 5 drives the laser radar 4 to rotate through the rotating motor 8; increase its rotational degree of freedom through rotatory laser radar 4 for the scanning range is more comprehensive, can increase scanning efficiency, shortens the scanning time, raises the efficiency.
In another preferred mode of this embodiment, the number of the at least two fisheye cameras is n, with the n fisheye cameras distributed on the side wall of the camera fixing base, where n > 3. Increasing the number of fisheye cameras divides the covered range evenly into n parts and enlarges the overlapping areas, which improves the quality of the stitched image.
In another preferred mode of this embodiment, there are two fisheye cameras whose axes are distributed at an acute, fan-shaped angle, which guarantees a partial overlapping area and can improve the quality of the stitched image to a certain extent.
Embodiments of the present invention provide a non-volatile computer storage medium, where the computer storage medium stores at least one executable instruction, and the computer-executable instruction may execute the fisheye camera-based three-dimensional imaging method in any of the above method embodiments.
The executable instructions may be specifically configured to cause the processor to:
respectively converting the spherical coordinate systems of the images acquired by the fisheye camera;
splicing the images converted by the spherical coordinate system to obtain spliced images;
carrying out color correction on the splicing seams of the spliced images;
and performing fusion processing on image data acquired by the laser radar based on the spliced image subjected to color correction to obtain a target image.
In an alternative form, the executable instructions cause the processor to:
respectively carrying out distortion removal processing on images acquired by the fisheye camera;
respectively converting the undistorted images into equidistant projection images;
and respectively converting the equidistant projection images into a spherical coordinate system.
In an alternative form, there are three fisheye cameras, and the three fisheye cameras are respectively fixed on three sides of the camera fixing base, and the executable instructions cause the processor to perform the following operations:
and respectively converting the equidistant projection images into a spherical coordinate system based on a first formula.
In an alternative, the executable instructions cause the processor to:
and carrying out image splicing based on the spherical coordinate systems of the three fisheye cameras to obtain spliced images.
In an alternative, the executable instructions cause the processor to:
gradient analysis of pixel points is carried out in an overlapping area of two adjacent fisheye cameras;
and carrying out image splicing according to the gradient analysis result.
In an alternative form, the executable instructions cause the processor to:
and carrying out pixel equalization processing on the splicing seams of the spliced images based on a second formula.
In an alternative, the time at which the lidar acquires data is synchronized with the time at which the fisheye camera acquires images, the executable instructions causing the processor to:
extracting pixel values of pixel points from the spliced image subjected to color correction;
converting the data collected by the laser radar into a spherical coordinate system to obtain two-dimensional point cloud data;
and assigning the extracted pixel values to the corresponding points in the two-dimensional point cloud data to obtain a target image, wherein the target image carries color three-dimensional point cloud information.
In the embodiment of the invention, the image acquired by the fisheye camera is firstly subjected to distortion removal processing, then the conversion of a spherical coordinate system is carried out, then the converted image is spliced, the color of the spliced seam is corrected, and then the data acquired by the laser radar is colored and fused based on the corrected spliced image, so that the quality of three-dimensional imaging can be improved.
Fig. 10 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and a specific embodiment of the present invention does not limit a specific implementation of the device.
As shown in fig. 10, the computing device may include: a processor (processor) 1002, a communication Interface 1004, a memory 1006, and a communication bus 1008.
Wherein: the processor 1002, communication interface 1004, and memory 1006 communicate with each other via a communication bus 1008. A communication interface 1004 for communicating with network elements of other devices, such as clients or other servers. The processor 1002 is configured to execute the program 1010, and may specifically execute relevant steps in the three-dimensional imaging method embodiment of the fisheye camera.
In particular, the program 1010 may include program code that includes computer operating instructions.
The processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The device may include one or more processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 1006 is used for storing the program 1010. The memory 1006 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
The program 1010 may be specifically configured to cause the processor 1002 to perform the following operations:
respectively converting the spherical coordinate systems of the images acquired by the fisheye camera;
splicing the images converted by the spherical coordinate system to obtain spliced images;
carrying out color correction on the splicing seams of the spliced images;
and performing fusion processing on image data acquired by the laser radar based on the spliced image subjected to color correction to obtain a target image.
In an alternative manner, the program 1010 may be specifically configured to cause the processor 1002 to perform the following operations:
respectively carrying out distortion removal processing on images acquired by the fisheye camera;
respectively converting the undistorted images into equidistant projection images;
and respectively converting the equidistant projection images into a spherical coordinate system.
In an alternative manner, the number of the fisheye cameras is three, the three fisheye cameras are respectively fixed to three sides of the camera fixing base, and the program 1010 may be specifically configured to enable the processor 1002 to perform the following operations:
and respectively converting the equidistant projection images into a spherical coordinate system based on a first formula.
In an alternative manner, the program 1010 may be specifically configured to cause the processor 1002 to perform the following operations:
and performing image stitching based on the spherical coordinate systems of the three fisheye cameras to obtain a stitched image.
In an alternative manner, the program 1010 may be specifically configured to cause the processor 1002 to perform the following operations:
gradient analysis of pixel points is carried out in the overlapping area of two adjacent fisheye cameras;
and carrying out image splicing according to the gradient analysis result.
In an alternative manner, the program 1010 may be specifically configured to cause the processor 1002 to perform the following operations:
and carrying out pixel equalization processing on the splicing seams of the spliced images based on a second formula.
In an alternative manner, when the time of the lidar acquiring data is synchronized with the time of the fisheye camera acquiring the image, the program 1010 may be specifically configured to cause the processor 1002 to:
extracting pixel values of pixel points from the spliced image subjected to color correction;
converting the data collected by the laser radar into a spherical coordinate system to obtain two-dimensional point cloud data;
and assigning the extracted pixel values to the corresponding points in the two-dimensional point cloud data to obtain a target image, wherein the target image carries color three-dimensional point cloud information.
In the invention, firstly, the image acquired by the fisheye camera is subjected to distortion removal processing, then the conversion of a spherical coordinate system is carried out, then the converted image is spliced, the color of the spliced seam is corrected, and then the data acquired by the laser radar is colored and fused based on the corrected spliced image, so that the quality of three-dimensional imaging can be improved, and high-precision colored three-dimensional point cloud information can also be obtained.
In summary, although the present invention has been described with reference to the preferred embodiments, the above-described preferred embodiments are not intended to limit the present invention, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the scope of the present invention shall be determined by the appended claims.
Claims (12)
1. A three-dimensional imaging method based on a fish-eye camera is characterized by comprising the following steps:
respectively converting the spherical coordinate system of the images acquired by the fisheye camera;
splicing the images converted by the spherical coordinate system to obtain spliced images;
carrying out color correction on the splicing seams of the spliced images;
and carrying out fusion processing on image data acquired by the laser radar based on the spliced image subjected to color correction to obtain a target image.
2. The three-dimensional imaging method according to claim 1, wherein the respectively converting the spherical coordinate system of the images acquired by the fisheye camera comprises:
respectively carrying out distortion removal processing on images acquired by the fisheye camera;
respectively converting the undistorted images into equidistant projection images;
and respectively converting the equidistant projection images into a spherical coordinate system.
3. The three-dimensional imaging method according to claim 1, wherein there are three fisheye cameras, the three fisheye cameras are respectively fixed on three sides of the camera fixing base, and the respectively converting the equidistant projection images into a spherical coordinate system comprises:
and respectively converting the images converted by the equidistant projection drawings into a spherical coordinate system based on a first formula.
4. The three-dimensional imaging method according to claim 3, wherein the stitching the images transformed by the spherical coordinate system to obtain a stitched image comprises:
and performing image splicing based on the three spherical coordinate systems of the fisheye cameras to obtain spliced images.
5. The three-dimensional imaging method according to claim 4, wherein the image stitching based on the spherical coordinate systems of the three fisheye cameras to obtain a stitched image comprises:
gradient analysis of pixel points is carried out in an overlapping area of two adjacent fisheye cameras;
and carrying out image splicing according to the gradient analysis result.
6. The three-dimensional imaging method according to claim 5, wherein the color correcting the stitching seams of the stitched image to obtain the target image comprises:
and carrying out pixel equalization processing on the splicing seams of the spliced images based on a second formula.
7. The three-dimensional imaging method according to claim 1, wherein the time for collecting data by the lidar is synchronized with the time for collecting images by the fisheye camera, and the fusion processing is performed on the image data collected by the lidar based on the color-corrected stitched image to obtain the target image, including:
extracting pixel values of pixel points from the spliced image subjected to color correction;
converting the data collected by the laser radar into a spherical coordinate system to obtain two-dimensional point cloud data;
and assigning the extracted pixel values to the corresponding points in the two-dimensional point cloud data to obtain a target image, wherein the target image carries color three-dimensional point cloud information.
8. A three-dimensional imaging device based on a fisheye camera, comprising:
the conversion module is used for respectively converting the spherical coordinate system of the images acquired by the fisheye camera;
the splicing module is used for splicing the images converted by the spherical coordinate system to obtain spliced images;
the correction module is used for carrying out color correction on the splicing seams of the spliced images;
and the fusion module is used for carrying out fusion processing on the image data acquired by the laser radar based on the spliced image subjected to color correction to obtain a target image.
9. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the steps of the panoramic system based three-dimensional imaging method of any one of claims 1-7.
10. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform the steps of the panoramic system based three-dimensional imaging method of any one of claims 1-7.
11. An imaging system, comprising: a laser radar, a radar base, a camera fixing base, and at least two fisheye cameras, wherein the camera fixing base is fixedly connected with the radar base, the laser radar is fixed on the radar base, and the at least two fisheye cameras are fixed on the camera fixing base.
12. The imaging system of claim 11, wherein the number of the at least two fisheye cameras is two; or
the number of the at least two fisheye cameras is three, the radar base is fixed on a motor base, the motor base is fixedly connected with the camera fixing base, the motor base is provided with a rotating motor, and the radar base drives the laser radar to rotate through the rotating motor; or
the number of the at least two fisheye cameras is n, the n fisheye cameras are distributed on the side wall of the camera fixing base, and n is greater than 3.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202211581191.6A | 2022-12-09 | 2022-12-09 | Three-dimensional imaging method and device based on fisheye camera and imaging system

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202211581191.6A | 2022-12-09 | 2022-12-09 | Three-dimensional imaging method and device based on fisheye camera and imaging system
Publications (1)

Publication Number | Publication Date
---|---
CN115953306A | 2023-04-11
Family
ID=87288708
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202211581191.6A (Pending) | Three-dimensional imaging method and device based on fisheye camera and imaging system | 2022-12-09 | 2022-12-09

Country Status (1)

Country | Link
---|---
CN | CN115953306A
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |