CN112308985A - Vehicle-mounted image splicing method, system and device - Google Patents


Info

Publication number
CN112308985A
CN112308985A (application CN202011211542.5A)
Authority
CN
China
Prior art keywords
image
images
initial
adjacent
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011211542.5A
Other languages
Chinese (zh)
Other versions
CN112308985B (en)
Inventor
何恒
杨帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haowei Technology Wuhan Co ltd
Original Assignee
Haowei Technology Wuhan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haowei Technology Wuhan Co ltd
Priority to CN202011211542.5A
Publication of CN112308985A
Application granted
Publication of CN112308985B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention provides a vehicle-mounted image splicing method, system and device. At least two acquired initial images are mapped into a three-dimensional mathematical model to form at least two converted images, wherein the overlapping areas of two adjacent converted images coincide and have the same image content. One side of one of the two adjacent converted images is stretched towards the other until the image contents of their overlapping areas coincide. Then, according to the fusion weight map, a first weight value is looked up for each first sampling point in the overlapping area of one of the two adjacent converted images, and the first weight value is subtracted from 1 to obtain the second weight value of the second sampling point, having the same target image content as the first sampling point, in the overlapping area of the other converted image. The weights of the same target image in the two adjacent converted images therefore sum to 1 during fusion, which prevents abnormal fusion brightness.

Description

Vehicle-mounted image splicing method, system and device
Technical Field
The invention relates to the field of image stitching, in particular to a vehicle-mounted image stitching method, a vehicle-mounted image stitching system and a vehicle-mounted image stitching device.
Background
With the popularization of automobiles, more and more cars enter ordinary households. As living standards rise and the number of vehicles keeps growing, drivers expect ever more intelligence from in-car electronics; ADAS and the vehicle-mounted 360-degree panoramic view are important configurations of high-end vehicle models. The vehicle-mounted 3D panoramic system uses wide-angle cameras installed around the vehicle to reconstruct the vehicle and its surrounding scene and generate a vehicle-mounted panoramic image. By observing the panoramic image, the driver can park safely, avoid obstacles and eliminate blind spots, thereby achieving the goal of safe driving.
The concept of a vehicle-mounted surround-view system was first proposed by K. Kato et al in 2006. Since then, active safety technologies such as lane detection, parking-space detection and tracking, parking assistance and moving-object detection have been applied to vehicle-mounted surround-view systems. Byeongchaen Jeon et al proposed a high-resolution panoramic surround-view system in 2015. All of these schemes use multiple cameras to model the actual scene, producing 2D or pseudo-3D visual effects. The number of cameras depends on the vehicle model; a typical passenger car is modelled with four fisheye cameras. The ultimate goal is to unify the images of the multiple cameras in a single visual coordinate system, giving the driver a complete field of view of the vehicle's surroundings.
In existing image stitching, the images of adjacent cameras use a gradual-in/gradual-out fusion strategy in the stitching fusion area, implemented through the image blending function of OpenGL. The color mixing formula in OpenGL is Cresult = Fsrc·Csrc + Fdst·Cdst, where Csrc is the source color from the texture; Cdst is the target color stored in the buffer; Fsrc is the source factor value, i.e. the fusion weight of the source color; and Fdst is the target factor value, i.e. the fusion weight of the target color. OpenGL takes the A-channel values of the source image and the target image as the fusion weights. When image stitching is performed on an OpenGL platform and the ghosting that appears when adjacent images are stitched is to be removed, one of the adjacent images is usually stretched towards the other. The texture coordinates of the points on the stretched image are updated, but the fusion weight map corresponding to the stretched converted image is not, so the weights of the same target image in the two adjacent converted images no longer sum to 1 during fusion and the fused brightness becomes abnormal.
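To make the failure mode concrete, the following NumPy sketch (our own illustration, not code from the patent; all names are invented) emulates the OpenGL mix Cresult = Fsrc·Csrc + Fdst·Cdst with weights carried in the alpha channel, and shows how a stale source weight after stretching breaks the sum-to-1 invariant and darkens the overlap:

```python
import numpy as np

# Per-pixel "gradual in/out" blending, emulating OpenGL's
# C_result = F_src * C_src + F_dst * C_dst.
def blend(src_rgb, src_w, dst_rgb, dst_w):
    """Blend two overlapping images with per-pixel weights."""
    return src_w * src_rgb + dst_w * dst_rgb

# Two cameras see the same gray surface (same color in the overlap).
src = np.full((4, 4, 3), 0.5)
dst = np.full((4, 4, 3), 0.5)

# Correct case: weights sum to 1, so brightness is preserved.
w = np.full((4, 4, 1), 0.3)
ok = blend(src, w, dst, 1.0 - w)

# Faulty case: the source image was stretched but its weight map was not
# updated, so the weights no longer sum to 1 and the overlap darkens.
stale_w = np.full((4, 4, 1), 0.1)          # stale weight after stretching
bad = blend(src, stale_w, dst, 1.0 - w)    # 0.1 + 0.7 = 0.8, not 1

print(ok[0, 0, 0])   # 0.5 (unchanged)
print(bad[0, 0, 0])  # 0.4 (darker: abnormal fusion brightness)
```

This is exactly the brightness anomaly the patent's weight-complement step is designed to prevent.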
Disclosure of Invention
The invention aims to provide a vehicle-mounted image splicing method, a system and a device, which are used for solving the problem of abnormal fusion brightness caused by mismatching of images in a fusion area and corresponding fusion weights when the images are spliced by the conventional vehicle-mounted image splicing method.
In order to solve the above problems, the present invention provides a vehicle-mounted image stitching method, including:
acquiring at least two initial images;
constructing a three-dimensional mathematical model with a world coordinate system, and sequentially mapping at least two initial images into the three-dimensional mathematical model to form at least two converted images, wherein the overlapping areas of two adjacent converted images are overlapped and the image contents are the same;
obtaining a fusion weight map corresponding to each conversion image, and stretching one of two adjacent conversion images towards the other conversion image until the image contents of the overlapping area of the two adjacent conversion images are overlapped;
searching a first weight value corresponding to each first sampling point in an overlapping area of one of two adjacent converted images in the fusion weight map corresponding to that converted image, so as to calculate a second weight value of each second sampling point, having the same target image content as the corresponding first sampling point, in the overlapping area of the other of the two adjacent converted images; wherein a sum of the first weight value and the second weight value is 1;
and fusing the adjacent overlapping areas according to the first weight value and the second weight value to generate a spliced image.
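Taken together, the claimed steps amount to the following minimal pipeline sketch (every function and array name below is an invented placeholder under our own assumptions, not the patent's implementation):

```python
import numpy as np

# High-level sketch of the claimed pipeline with placeholder stages.
def acquire_initial_images():                       # step 1: acquire
    return [np.zeros((4, 4, 3)) for _ in range(4)]  # e.g. 4 fisheye cameras

def map_to_bowl(img):                               # step 2: into the 3D model
    return img                                      # placeholder mapping

def stretch_to_align(a, b):                         # step 3: remove ghosting
    return a, b                                     # placeholder alignment

def fuse(a, b, w1):                                 # steps 4-5: fuse overlap
    return w1 * a + (1.0 - w1) * b                  # second weight = 1 - first

imgs = [map_to_bowl(i) for i in acquire_initial_images()]
a, b = stretch_to_align(imgs[0], imgs[1])
panorama_piece = fuse(a, b, np.full((4, 4, 1), 0.5))
print(panorama_piece.shape)
```

The key point carried by `fuse` is that the second weight is always derived as the complement of the first, so the two contributions to any target point sum to 1.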
Optionally, the method for obtaining the fusion weight map corresponding to each of the converted images includes:
and acquiring an initial fusion weight map according to the splicing seam corresponding to each initial image, and updating the initial fusion weight map to acquire the fusion weight map.
Optionally, the method for obtaining the initial fusion weight map according to the stitching seam corresponding to each of the initial images includes:
acquiring the position of a splicing seam corresponding to each initial image;
obtaining an image area corresponding to the initial fusion weight map according to the position of the splicing seam corresponding to the initial image;
and setting the weight of each mapping point in the image region corresponding to the initial fusion weight map to be 1 so as to obtain the initial fusion weight map.
Optionally, the method for obtaining the image region corresponding to the initial fusion weight map according to the position of the stitching seam corresponding to the initial image includes:
mapping the splicing seams corresponding to the initial images into the three-dimensional mathematical model, and forming an initial image area with the model boundary and the model origin of the three-dimensional mathematical model;
and mapping each initial point in the initial image area to an image acquisition equipment model to obtain a plurality of mapping points, wherein the mapping points form an image area corresponding to the initial fusion weight map.
Optionally, the method for updating the initial fusion weight map to obtain the fusion weight map includes:
and calculating the weight of the overlapping area of the initial fusion weight map and the corresponding area of the overlapping area of the initial image according to a weight calculation formula, and updating the weight of the corresponding area of the overlapping area of the initial fusion weight map and the initial image according to the weight of the overlapping area to obtain the fusion weight map.
Optionally, the weight calculation formula is: W = 0.25·θ + 0.5 - 0.25·θs,
wherein θ is the included angle between the X axis and the line connecting the origin of the three-dimensional mathematical model to a point in the region of the initial fusion weight map corresponding to the overlapping region of the initial image, and θs is the included angle between the splicing seam and the X axis.
Optionally, the method for fusing the adjacent overlapping regions according to the first weight value and the second weight value includes:
calculating the fused pixel values of the adjacent overlapping regions according to the formula RGBresult = (1 - Adst)·RGBsrc + Adst·RGBdst to fuse the adjacent overlapping regions;
wherein RGBresult represents the fused pixel value of the adjacent overlapping regions, RGBsrc represents the pixel value of one of the two adjacent converted images, RGBdst represents the pixel value of the other of the two adjacent converted images, and Adst represents the second weight value.
Optionally, the vehicle-mounted image stitching method further includes:
rendering the converted images according to a preset sequence before generating the spliced images;
or when the spliced image is generated, the rendering weight value of the spliced image is 1.
Optionally, the method for rendering the stitched image with a weight value of 1 includes:
calculating the rendering weight value according to the formula Aresult = 1·Asrc + 0·Adst, so that the rendering weight value is 1;
wherein Aresult represents the rendering weight value, Adst represents the first weight value, and Asrc represents the second weight value; Asrc has a value of 1 and Adst is not 1.
Optionally, the rendering order of the converted images according to the predetermined order is:
rendering first the part of the converted image in which the first weight value is not equal to 1, and then rendering the part of the converted image in which the second weight value is equal to 1.
In order to solve the above problems, the present invention also provides a vehicle-mounted image stitching system, which comprises:
the image acquisition module is used for acquiring at least two initial images;
the three-dimensional mathematical model building module is used for building a three-dimensional mathematical model with a world coordinate system, and mapping at least two initial images into the three-dimensional mathematical model in sequence to form at least two conversion images, wherein the overlapping areas of two adjacent conversion images are overlapped and the image contents are the same;
the data processing module is used for obtaining a fusion weight map corresponding to each conversion image, stretching one of two adjacent conversion images towards the other conversion image until the image contents of the overlapping regions of the two adjacent conversion images are overlapped, and searching a first weight value corresponding to each first sampling point in the overlapping region of one of the two adjacent conversion images in the fusion weight map corresponding to one of the two adjacent conversion images so as to calculate a second weight value of each second sampling point, which is the same as the target image content corresponding to the first sampling point, in the overlapping region of the other one of the two adjacent conversion images; wherein a sum of the first weight value and the second weight value is 1;
and the image splicing module is used for fusing the adjacent overlapping areas according to the first weight value and the second weight value so as to generate a spliced image.
In order to solve the above problems, the invention also provides a vehicle-mounted image stitching device, which comprises a central control host and the vehicle-mounted image stitching system;
the image acquisition module comprises image acquisition equipment, the image acquisition equipment is connected with a central control host, and the acquired initial image is transmitted to the central control host for image processing so as to complete image splicing;
the three-dimensional mathematical model building module, the data processing module and the image splicing module are arranged in the central control host.
The invention discloses an image splicing method in which a three-dimensional mathematical model is constructed and at least two acquired initial images are mapped into it to form at least two converted images, the overlapping areas of two adjacent converted images coinciding and having the same image content. One side of one of the two adjacent converted images is stretched towards the other until the image contents of their overlapping areas coincide. Then, according to the fusion weight map, a first weight value is looked up for each first sampling point in the overlapping area of one of the two adjacent converted images, and the first weight value is subtracted from 1 to obtain the second weight value of the second sampling point, having the same target image content as the first sampling point, in the overlapping area of the other converted image. In this way, the weights of the same target image in the two adjacent converted images sum to 1 during fusion, preventing abnormal fusion brightness.
Drawings
FIG. 1 is a flow chart of a vehicle-mounted image stitching method in an embodiment of the invention;
FIG. 2 is a schematic diagram of a construction equation of a three-dimensional mathematical model established in the vehicle-mounted image stitching method in an embodiment of the present invention;
FIG. 3 is a schematic model diagram of a three-dimensional mathematical model established in the vehicle-mounted image stitching method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a fusion weight map in a vehicle-mounted image stitching method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a vehicle-mounted image stitching system in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an on-board image stitching device according to an embodiment of the present invention;
reference numerals
A1-bowl edge; A2-bowl bottom;
1-an image acquisition module; 2-a three-dimensional mathematical model construction module;
3-a data processing module; 4-an image stitching module;
100-a central control host; 200-vehicle ethernet.
Detailed Description
The vehicle-mounted image stitching method, system and apparatus are described in detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the present invention will become more apparent from the following description. It should be noted that the drawings are in a greatly simplified form and not to precise scale, and are provided merely to facilitate a convenient and clear description of the embodiments of the present invention. Furthermore, the structures illustrated in the drawings are often only part of the actual structures; in particular, the drawings may emphasize different aspects and may sometimes use different scales.
Fig. 1 is a schematic flow chart of a vehicle-mounted image stitching method in an embodiment of the present invention. In the present embodiment, the vehicle-mounted image stitching method of the present embodiment as shown in fig. 1 includes the following steps S10 to S50.
In step S10, at least two initial images are acquired. In this step, the at least two initial images are acquired by at least two image acquisition devices arranged around the vehicle.
The at least two image acquisition devices may be fisheye cameras. In a specific embodiment, four fisheye cameras may be provided, arranged at the front, rear, left and right of the vehicle body respectively, for example at the front of the vehicle, the rear of the vehicle, and the left and right rear-view mirrors, so as to capture images of the area around the vehicle in real time. The image content of the at least two initial images of the vehicle's surroundings may include a ground portion and an above-ground portion: the ground portion may include zebra crossings, road edges and the like, and the above-ground portion may include pedestrians, surrounding vehicles, traffic lights and the like.
In step S20, a three-dimensional mathematical model with a world coordinate system is constructed, and at least two of the initial images are sequentially mapped into the three-dimensional mathematical model to form at least two converted images, wherein the overlapping regions of two adjacent converted images are overlapped and the image contents are the same.
Fig. 2 is a schematic view of the construction equation of the three-dimensional mathematical model established in the vehicle-mounted image stitching method in an embodiment of the present invention. Fig. 3 is a schematic model diagram of that three-dimensional mathematical model. As shown in fig. 2 and 3, in the present embodiment the three-dimensional mathematical model is a three-dimensional bowl-shaped mathematical model whose construction equation is shown in fig. 2. X, Y, Z form the world coordinate system, where X0Y represents the ground, 0 represents the geometric center of the projection of the vehicle on the ground, 0Y represents the advancing direction of the vehicle, 0Z represents the rotation axis, and 0P represents the generatrix; the bowl-shaped curved surface is formed by rotating the generatrix around the rotation axis. The generatrix equation for constructing the three-dimensional bowl-shaped model is given as formula (1).
[Formula (1): the generatrix equation of the three-dimensional bowl-shaped model, reproduced only as an image in the original document.]
Wherein R is0Represents the radius of the bowl bottom A2, the radius R of the bowl bottom A20Radius R of said bowl bottom A2 in relation to vehicle size0Is typically about 100cm larger than one-half the size of the vehicle, in this embodiment, the radius R of the bowl bottom a20Is 250 cm-350 cm, preferably, the radius R of the bowl bottom A20The size of (2) is 300 cm; the units of the camera coordinate system and the world coordinate system are cm.
K is the adjustment coefficient of the bowl edge A1. In this embodiment, the relative size between the bowl edge A1 and the bowl bottom A2 is adjusted by the coefficient K: the larger the value of K, the larger the area corresponding to the bowl edge A1. If the bowl edge A1 area is too large and the bowl bottom A2 area too small, or the bowl bottom A2 area is too large and the bowl edge A1 area too small, the splicing effect is poor, so the adjustment coefficient K of the bowl edge A1 must be given a value in a proper range. In this embodiment, K ranges from 0.1 to 0.2, preferably 0.15.
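The bowl model can be sketched as a surface of revolution. The patent's actual generatrix equation (formula (1)) appears only as an image we cannot reproduce, so the profile below (a flat bottom of radius R0 with a quadratic rim scaled by K) is purely our own stand-in to illustrate the construction:

```python
import numpy as np

# Hypothetical bowl-shaped surface of revolution. The generatrix profile is
# our own assumption, not the patent's formula (1).
R0 = 300.0   # bowl-bottom radius in cm (patent: 250-350 cm, preferably 300)
K = 0.15     # bowl-edge adjustment coefficient (patent: 0.1-0.2, pref. 0.15)

def generatrix(r):
    """Assumed profile: height 0 on the flat bottom, rising past R0."""
    return np.where(r <= R0, 0.0, K * (r - R0) ** 2)

# Rotate the generatrix around the Z axis to sweep out the bowl surface.
r = np.linspace(0, 500, 50)            # radial samples, cm
phi = np.linspace(0, 2 * np.pi, 72)    # rotation angle around 0Z
R, PHI = np.meshgrid(r, phi)
X, Y, Z = R * np.cos(PHI), R * np.sin(PHI), generatrix(R)

print(Z.min(), Z.max() > 0)            # flat bottom at height 0, rim rises
```

A larger K steepens the rim, enlarging the bowl-edge area relative to the bowl bottom, which is the trade-off the embodiment tunes.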
Further, after the three-dimensional mathematical model with the world coordinate system is constructed, at least two initial images around the vehicle are sequentially mapped into the three-dimensional mathematical model to form at least two converted images, wherein the overlapping regions of adjacent converted images coincide and have the same image content. For example, the image acquired by the front-view camera and the image acquired by the right-view camera have an overlapping region with the same image content, such as a traffic light.
In step S30, a fusion weight map corresponding to each of the converted images is obtained, and one side of one of the two adjacent converted images is stretched toward the other until the image contents of the overlapping regions of the two adjacent converted images coincide.
In this embodiment, the method for obtaining the fusion weight map of each converted image includes: and acquiring an initial fusion weight map according to the splicing seam corresponding to each initial image, and updating the initial fusion weight map to acquire the fusion weight map.
Further, in this embodiment, the method for obtaining the initial fusion weight map according to the stitching seam corresponding to each of the initial images includes the following steps one to three.
In the first step, the position of the splicing seam L corresponding to each initial image is obtained. In this embodiment, with the position of the vehicle as the center, a point at one of the front, rear, left and right of the vehicle is taken as the starting point A1 of the splicing seam, and the line connecting the starting point A1 with the end point B1 on the edge of the initial image far from the vehicle is taken as the splicing seam L. In this embodiment, the position of the splicing seam L corresponding to each initial image is determined by the included angle θs between the splicing seam L and the X axis of the three-dimensional bowl-shaped mathematical model, where θs ranges from 40° to 50°.
In the second step, an image area corresponding to the initial fusion weight map is obtained according to the position of the splicing seam L corresponding to the initial image.
In this embodiment, the method for obtaining the image region corresponding to the initial fusion weight map may include the following steps.
Firstly, mapping a splicing seam L corresponding to the initial image into the three-dimensional mathematical model, and forming an initial image area with a model boundary and a model origin of the three-dimensional mathematical model.
Specifically, in this embodiment, the joint L is divided into a first segment L1 and a second segment L2, where the first segment L1 is a joint of a portion located on the ground, and the second segment L2 is a joint of a portion located in the air. The first section L1 is mapped to the bottom of the three-dimensional bowl-shaped mathematical model bowl, and the second section L2 is mapped to the curved part of the three-dimensional bowl-shaped mathematical model according to the generatrix equation of the three-dimensional bowl-shaped mathematical model. According to the mapping method, the left and right splicing seams L corresponding to the initial image are mapped into the three-dimensional bowl-shaped mathematical model. Then, the left and right two mapped splicing seams L, the model boundary of the three-dimensional bowl-shaped mathematical model, and the model origin constitute an initial image region, and in this embodiment, the model origin is a connecting line between two adjacent corners in the four corners of the vehicle.
And then, mapping each initial point in the initial image area to an image acquisition equipment model to obtain a plurality of mapping points, wherein the mapping points form an image area corresponding to the initial fusion weight map.
In step three, the weight of each mapping point in the image area is set to 1, so as to obtain the initial fusion weight map.
Fig. 4 is a schematic diagram of a weight map in the vehicle-mounted image stitching method according to an embodiment of the present invention. Further, in this embodiment, the method for updating the initial fusion weight map to obtain the fusion weight map includes: and calculating the weight of the overlapping area of the area corresponding to the overlapping area of the initial fusion weight map and the initial image according to a weight calculation formula, namely formula (2), and updating the weight of the area corresponding to the overlapping area of the initial fusion weight map and the initial image according to the weight of the overlapping area to obtain the fusion weight map.
In this embodiment, the weight calculation formula is: W = 0.25·θ + 0.5 - 0.25·θs (formula (2)),
wherein θ is the included angle between the X axis and the line connecting the origin of the three-dimensional mathematical model to a point in the region of the initial fusion weight map corresponding to the overlapping region of the initial image, and θs is the included angle between the splicing seam and the X axis. The weight values of the region of the initial fusion weight map corresponding to the overlapping region of the initial image are updated by this formula to obtain the fusion weight map shown in fig. 4. In fig. 4, the white part represents the region with weight 1, the black part represents the region with weight 0, and the gray part between black and white represents the overlap-region weights calculated by formula (2).
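A small sketch of formula (2) (our own illustration; the patent does not state the angular unit, so degrees are assumed here, which makes the weight ramp from 0 to 1 over a 4-degree band centred on the seam angle θs):

```python
import numpy as np

# Overlap weight per formula (2): W = 0.25*theta + 0.5 - 0.25*theta_s,
# assuming theta is measured in degrees (the unit is not stated).
def overlap_weight(theta_deg, theta_s_deg=45.0):
    w = 0.25 * theta_deg + 0.5 - 0.25 * theta_s_deg
    return np.clip(w, 0.0, 1.0)   # weights outside the blend band saturate

print(overlap_weight(45.0))  # 0.5 exactly on the seam
print(overlap_weight(47.0))  # 1.0 at one edge of the band
print(overlap_weight(43.0))  # 0.0 at the other edge
```

Under this assumption the seam itself gets weight 0.5 and the gray gradient in fig. 4 corresponds to the linear ramp across the band.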
Further, one of the two adjacent converted images is stretched toward the other until the image contents of the overlapping regions of the two adjacent converted images coincide. In this embodiment, after stretching, the image contents of the overlapping areas of two adjacent converted images are overlapped, so that the image contents of the points where the overlapping areas are fused at the time of subsequent fusion can be the same, thereby preventing the problem of poor splicing. The method of stretching the converted image is not described herein in any greater detail. In addition, in this embodiment, the order of obtaining the fusion weight map corresponding to each of the converted images and stretching two adjacent converted images is not particularly limited. The fusion weight map corresponding to each of the converted images may be obtained first, or two adjacent converted images may be stretched first.
In step S40, in the fused weight map corresponding to one of the two adjacent transformed images, the first weight value D1 corresponding to each first sampling point P1 located in the overlapping region of one of the two adjacent transformed images is searched to calculate the second weight value D2 of each second sampling point P2 located in the overlapping region of the other image and having the same content as the target image corresponding to the first sampling point P1, wherein the sum of the first weight value D1 and the second weight value D2 is 1. In this embodiment, since D1+ D2 is 1, the second weight value D2 is 1-D1.
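Step S40 can be sketched as a direct array lookup plus complement (array names invented for illustration):

```python
import numpy as np

# Look up the first weight value D1 for each first sampling point in the
# fusion weight map of one converted image, then derive the second weight
# value D2 = 1 - D1 for the matching sampling point in the neighbour.
fusion_weight_map = np.array([[0.9, 0.6],
                              [0.4, 0.1]])   # D1 per first sampling point

d1 = fusion_weight_map
d2 = 1.0 - d1                                # D2 for the second points

# The invariant that prevents abnormal fusion brightness:
print(np.allclose(d1 + d2, 1.0))
```

Because D2 is computed from the weight map that was actually sampled, the invariant holds even after one image has been stretched.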
According to the vehicle-mounted image stitching method, the three-dimensional mathematical model is constructed, the at least two acquired initial images are mapped into the three-dimensional mathematical model to form at least two converted images, and the overlapping areas of the two adjacent converted images are overlapped and the image contents are the same. And stretching one side of one of the two adjacent converted images towards the other until the image contents of the overlapping regions of the two adjacent converted images coincide. Then, according to the fusion weight map, a first weight value D1 corresponding to each first sampling point P1 in the overlapping area of one of the two adjacent conversion images is searched, and the first weight value D1 is subtracted by 1 to obtain a second weight value D2 corresponding to a second sampling point P2 in the overlapping area of the other one of the two adjacent conversion images, wherein the content of the second sampling point P2 is the same as that of the target image of the first sampling point P1. And then the weight sum of the same target image in two adjacent converted images is 1 when fusing, so as to prevent the problem of abnormal fusion brightness.
Further, the image stitching method of the embodiment is performed based on an OpenGL platform, wherein in OpenGL, the texture image adopts an RGBA format, and the fusion weight is stored in the a channel. During fusion, the other of the two adjacent converted images is stored in a buffer area to serve as a target image, one of the two adjacent converted images serves as a source image and is fused with the other of the two adjacent converted images, and then the fused image is displayed in display equipment, so that the spliced image is displayed on the display equipment after all the converted images are fused.
In this embodiment, the number of the obtained initial images is 2N, and the initial images are sequentially mapped to the three-dimensional mathematical model to form 2N sequentially-arranged transformation images, where N is greater than or equal to 1. Specifically, for example, when the number of image capturing devices is 4, the number of the captured initial images is 4, and the number of the converted images formed after mapping is 4. The forward view converted image corresponding to the forward view camera and the rear view converted image corresponding to the rear view camera may be used as a buffer area to be stored as a target image, and then the right view converted image corresponding to the right view camera and the left view converted image corresponding to the left view camera may be used as a source image to be fused with the forward view converted image and the rear view converted image.
In this embodiment, the method for fusing the adjacent overlapping regions according to the first weight value and the second weight value comprises: calculating the fused pixel values of the adjacent overlapping regions according to equation (3), so as to fuse the adjacent overlapping regions.
RGBResult = (1 - Adst) × RGBSrc + Adst × RGBDst    (3)
Wherein RGBResult represents the fused pixel value of the adjacent overlapping regions, RGBSrc represents the pixel value of the source image (i.e., one of the two adjacent converted images), RGBDst represents the pixel value of the target image (i.e., the other of the two adjacent converted images), and Adst is the fusion weight value of the target image (i.e., the second weight value).
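A per-pixel sketch of equation (3) in Python (illustrative only; in the patent this blend runs on the GPU rather than per pixel in software):

```python
def blend_pixel(rgb_src, rgb_dst, a_dst):
    """Equation (3): RGBResult = (1 - Adst) * RGBSrc + Adst * RGBDst.
    rgb_src / rgb_dst are (r, g, b) tuples with channels in [0, 1];
    a_dst is the target image's fusion weight (the second weight value)."""
    return tuple((1.0 - a_dst) * s + a_dst * d
                 for s, d in zip(rgb_src, rgb_dst))

# Source pixel red, target pixel blue, target weight 0.25
result = blend_pixel((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.25)
```

With a_dst = 0 the source pixel is kept unchanged, and with a_dst = 1 the target pixel wins; these are the two extremes at the edges of the overlap region.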
In step S50, the adjacent overlapping regions are fused according to the first weight value D1 and the second weight value D2 to generate a stitched image. The fusion method used to generate the stitched image has been described above and is not repeated here.
Further, before generating the stitched image, the method further comprises: rendering the converted images in a predetermined order. In this embodiment, the OpenGL platform performs the rendering, and the result is then shown on a display device, which may be a computer, a mobile phone, a tablet, or the like. Rendering the converted images in the predetermined order means first rendering the part of the converted image whose first weight value is not 1, and then rendering the part whose second weight value is 1.
Furthermore, when generating the stitched image, the method further comprises: setting the rendering weight value of the stitched image to 1. In this embodiment, if the rendering weight value of the stitched image is 1, the background fusion weight is 0 when the converted images are fused, which avoids abnormal platform rendering when the OpenGL platform is used for image stitching.
In this embodiment, the method for setting the rendering weight value of the stitched image to 1 comprises: calculating according to equation (4), so that the rendering weight value is 1.
Aresult = 1 × Asrc + 0 × Adst    (4)
Wherein Aresult represents the rendering weight value, Adst represents the first weight value, and Asrc represents the second weight value; the value of Asrc is 1, and Adst is not 1.
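Read together, equations (3) and (4) match a standard OpenGL separate blend configuration — color weighted by the destination alpha, alpha taken entirely from the source, i.e. `glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ONE, GL_ZERO)`. This mapping is our reading, not stated verbatim in the patent. A plain-Python simulation of that blend state:

```python
def blend_rgba(src, dst):
    """Simulate glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA,
                                    GL_ONE, GL_ZERO) on (r, g, b, a) pixels:
    color:  RGBResult = (1 - Adst) * RGBSrc + Adst * RGBDst   (equation (3))
    alpha:  Aresult   = 1 * Asrc  + 0 * Adst                  (equation (4))"""
    a_dst = dst[3]
    rgb = [(1.0 - a_dst) * s + a_dst * d for s, d in zip(src[:3], dst[:3])]
    return (rgb[0], rgb[1], rgb[2], src[3])  # alpha comes purely from the source

# Red source (Asrc = 1) over a half-weighted green target (Adst = 0.5)
out = blend_rgba((1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 0.5))
```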
FIG. 5 is a diagram of a vehicle-mounted image stitching system according to an embodiment of the present invention. As shown in fig. 5, this embodiment further discloses a vehicle-mounted image stitching system, where the vehicle-mounted image stitching system includes:
the image acquisition module 1 is used for acquiring at least two initial images;
the three-dimensional mathematical model building module 2 is used for building a three-dimensional mathematical model with a world coordinate system, and sequentially mapping at least two initial images into the three-dimensional mathematical model to form at least two conversion images, wherein the overlapping areas of two adjacent conversion images are overlapped and the image contents are the same;
the data processing module 3 is configured to obtain a fusion weight map corresponding to each of the conversion images, stretch one of the two adjacent conversion images toward the other conversion image until image contents of overlapping regions of the two adjacent conversion images coincide, and search, in the fusion weight map corresponding to one of the two adjacent conversion images, a first weight value corresponding to each first sampling point located in an overlapping region of one of the two adjacent conversion images, so as to calculate a second weight value corresponding to each second sampling point, which is located in an overlapping region of the other one of the two adjacent conversion images and is the same as a target image content corresponding to the first sampling point; wherein a sum of the first weight value and the second weight value is 1.
And the image splicing module 4 is configured to fuse the adjacent overlapping regions according to the first weight value and the second weight value to generate a spliced image.
FIG. 6 is a schematic diagram of a vehicle-mounted image stitching device according to an embodiment of the invention. As shown in FIG. 6, this embodiment further discloses a vehicle-mounted image stitching device, which comprises a central control host 100 and the vehicle-mounted image stitching system described above. The image acquisition module 1 comprises image capturing devices that are connected to the central control host and transmit the acquired initial images to it for image processing, so as to complete the image stitching; the three-dimensional mathematical model building module 2, the data processing module 3, and the image stitching module 4 are located in the central control host.
In the present embodiment, the image capturing devices 1 are fisheye cameras; there are 4 of them, mounted at the front, rear, left, and right of the vehicle body, respectively.
The above description concerns only the preferred embodiments of the present invention and is not intended to limit its scope; any variations and modifications made by those skilled in the art based on the above disclosure fall within the scope of the appended claims.

Claims (12)

1. A vehicle-mounted image stitching method is characterized by comprising the following steps:
acquiring at least two initial images;
constructing a three-dimensional mathematical model with a world coordinate system, and sequentially mapping at least two initial images into the three-dimensional mathematical model to form at least two converted images, wherein the overlapping areas of two adjacent converted images are overlapped and the image contents are the same;
obtaining a fusion weight map corresponding to each conversion image, and stretching one of two adjacent conversion images towards the other conversion image until the image contents of the overlapping area of the two adjacent conversion images are overlapped;
searching, in the fusion weight map corresponding to one of the two adjacent converted images, a first weight value corresponding to each first sampling point located in the overlapping region of that converted image, so as to calculate a second weight value corresponding to each second sampling point which is located in the overlapping region of the other of the two adjacent converted images and whose target image content is the same as that of the corresponding first sampling point; wherein the sum of the first weight value and the second weight value is 1;
and fusing the adjacent overlapping areas according to the first weight value and the second weight value to generate a spliced image.
2. The vehicle-mounted image stitching method according to claim 1, wherein the method for obtaining the fusion weight map corresponding to each converted image comprises the following steps:
and acquiring an initial fusion weight map according to the splicing seam corresponding to each initial image, and updating the initial fusion weight map to acquire the fusion weight map.
3. The vehicle-mounted image stitching method according to claim 2, wherein the method for obtaining the initial fusion weight map according to the stitching seam corresponding to each initial image comprises the following steps:
acquiring the position of a splicing seam corresponding to each initial image;
obtaining an image area corresponding to the initial fusion weight map according to the position of the splicing seam corresponding to the initial image;
and setting the weight of each mapping point in the image region corresponding to the initial fusion weight map to be 1 so as to obtain the initial fusion weight map.
4. The vehicle-mounted image stitching method according to claim 3, wherein the method for obtaining the image area corresponding to the initial fusion weight map according to the stitching seam position corresponding to the initial image comprises:
mapping the splicing seams corresponding to the initial images into the three-dimensional mathematical model, and forming an initial image area with the model boundary and the model origin of the three-dimensional mathematical model;
and mapping each initial point in the initial image area to an image acquisition equipment model to obtain a plurality of mapping points, wherein the mapping points form an image area corresponding to the initial fusion weight map.
5. The vehicle-mounted image stitching method according to claim 2, wherein the method of updating the initial fusion weight map to obtain the fusion weight map comprises:
and calculating the weight of the overlapping area of the initial fusion weight map and the corresponding area of the overlapping area of the initial image according to a weight calculation formula, and updating the weight of the corresponding area of the overlapping area of the initial fusion weight map and the initial image according to the weight of the overlapping area to obtain the fusion weight map.
6. The vehicle-mounted image stitching method according to claim 5, wherein the weight calculation formula is: W = 0.25θ + 0.5 - 0.25θs
wherein θ is the included angle between the X axis and the line connecting the origin of the three-dimensional mathematical model with a point on the region of the initial fusion weight map corresponding to the overlapping region of the initial image, and θs is the included angle between the splicing seam and the X axis.
7. The vehicle-mounted image stitching method according to claim 1, wherein the method of fusing adjacent overlapping regions according to the first weight value and the second weight value comprises:
calculating the fused pixel values of the adjacent overlapping regions according to the formula RGBResult = (1 - Adst) × RGBSrc + Adst × RGBDst, so as to fuse the adjacent overlapping regions;
wherein RGBResult represents the fused pixel value of the adjacent overlapping regions, RGBSrc represents the pixel value of one of the two adjacent converted images, RGBDst represents the pixel value of the other of the two adjacent converted images, and Adst represents the second weight value.
8. The vehicle-mounted image stitching method according to claim 1, further comprising:
rendering the converted images according to a preset sequence before generating the spliced images;
or when the spliced image is generated, the rendering weight value of the spliced image is 1.
9. The vehicle-mounted image stitching method according to claim 8, wherein the method for setting the rendering weight value of the stitched image to 1 comprises the following steps:
calculating the rendering weight value according to the formula Aresult = 1 × Asrc + 0 × Adst, so that the rendering weight value is 1;
wherein Aresult represents the rendering weight value, Adst represents the first weight value, and Asrc represents the second weight value; the value of Asrc is 1, and Adst is not 1.
10. The vehicle-mounted image stitching method according to claim 8, wherein the order of rendering the converted images in a predetermined order is:
first rendering the part of the converted image whose first weight value is not equal to 1, and then rendering the part of the converted image whose second weight value is equal to 1.
11. An on-vehicle image stitching system, comprising:
the image acquisition module is used for acquiring at least two initial images;
the three-dimensional mathematical model building module is used for building a three-dimensional mathematical model with a world coordinate system, and mapping at least two initial images into the three-dimensional mathematical model in sequence to form at least two conversion images, wherein the overlapping areas of two adjacent conversion images are overlapped and the image contents are the same;
the data processing module is used for obtaining a fusion weight map corresponding to each conversion image, stretching one of the two adjacent conversion images towards the other conversion image until the image contents of the overlapping regions of the two adjacent conversion images are overlapped, and searching a first weight value corresponding to each first sampling point in the overlapping region of one of the two adjacent conversion images in the fusion weight map corresponding to one of the two adjacent conversion images so as to calculate a second weight value corresponding to each second sampling point, which is the same as the target image content corresponding to the first sampling point, in the overlapping region of the other one of the two adjacent conversion images; wherein a sum of the first weight value and the second weight value is 1;
and the image splicing module is used for fusing the adjacent overlapping areas according to the first weight value and the second weight value so as to generate a spliced image.
12. A vehicle-mounted image stitching device, which is characterized by comprising a central control host and the vehicle-mounted image stitching system as claimed in claim 11;
the image acquisition module comprises image acquisition equipment, the image acquisition equipment is connected with the central control host, and transmits the acquired initial image to the central control host for image processing so as to complete image splicing;
the three-dimensional mathematical model building module, the data processing module and the image splicing module are arranged in the central control host.
CN202011211542.5A 2020-11-03 2020-11-03 Vehicle-mounted image stitching method, system and device Active CN112308985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011211542.5A CN112308985B (en) 2020-11-03 2020-11-03 Vehicle-mounted image stitching method, system and device

Publications (2)

Publication Number Publication Date
CN112308985A true CN112308985A (en) 2021-02-02
CN112308985B CN112308985B (en) 2024-02-02

Family

ID=74332715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011211542.5A Active CN112308985B (en) 2020-11-03 2020-11-03 Vehicle-mounted image stitching method, system and device

Country Status (1)

Country Link
CN (1) CN112308985B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150154776A1 (en) * 2013-12-03 2015-06-04 Huawei Technologies Co., Ltd. Image splicing method and apparatus
CN104732485A (en) * 2015-04-21 2015-06-24 深圳市深图医学影像设备有限公司 Method and system for splicing digital X-ray images
CN107784632A (en) * 2016-08-26 2018-03-09 南京理工大学 A kind of infrared panorama map generalization method based on infra-red thermal imaging system
CN108510445A (en) * 2018-03-30 2018-09-07 长沙全度影像科技有限公司 A kind of Panorama Mosaic method
CN111028190A (en) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111369486A (en) * 2020-04-01 2020-07-03 浙江大华技术股份有限公司 Image fusion processing method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LE YU: "Towards the automatic selection of optimal seam line locations when merging optical remote-sensing images", JOURNAL OF REMOTE SENSING, vol. 33, no. 4, pages 1000 - 1014 *
许德智: "基于权重量化与信息压缩的车载图像超分辨率重建", 计算机应用, vol. 39, no. 12, pages 3644 - 3649 *
谢晶梅: "图像拼接中权重的改进设计研究", 广东工业大学学报, vol. 34, no. 6, pages 49 - 53 *

Also Published As

Publication number Publication date
CN112308985B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN107792179B (en) A kind of parking guidance method based on vehicle-mounted viewing system
CN104851076B (en) Panoramic looking-around parking assisting system and camera installation method for commercial car
EP3049285B1 (en) Driver assistance system for displaying surroundings of a vehicle
CN110381255A (en) Using the Vehicular video monitoring system and method for 360 panoramic looking-around technologies
US8514282B2 (en) Vehicle periphery display device and method for vehicle periphery image
CN109087251B (en) Vehicle-mounted panoramic image display method and system
JP6213567B2 (en) Predicted course presentation device and predicted course presentation method
CN105321160B (en) The multi-camera calibration that 3 D stereo panorama is parked
CN112224132A (en) Vehicle panoramic all-around obstacle early warning method
CN109948398A (en) The image processing method and panorama parking apparatus that panorama is parked
KR20190047027A (en) How to provide a rearview mirror view of the vehicle's surroundings in the vehicle
CN110363085B (en) Method for realizing looking around of heavy articulated vehicle based on articulation angle compensation
CN106994936A (en) A kind of 3D panoramic parking assist systems
US20170024851A1 (en) Panel transform
EP3326146B1 (en) Rear cross traffic - quick looks
CN108174089B (en) Backing image splicing method and device based on binocular camera
JP2011039727A (en) Image display device for vehicle control, and method of the same
CN112308986B (en) Vehicle-mounted image stitching method, system and device
CN112308985B (en) Vehicle-mounted image stitching method, system and device
CN110400255B (en) Vehicle panoramic image generation method and system and vehicle
CN206436911U (en) Panorama reverse image processing unit
CN112308987A (en) Vehicle-mounted image splicing method, system and device
CN114734989A (en) Auxiliary parking device and method based on around vision
CN112308984B (en) Vehicle-mounted image stitching method, system and device
CN113362232A (en) Vehicle panoramic all-around image generation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant