CN115439326A - Image splicing optimization method and device in vehicle all-around viewing system - Google Patents

Image splicing optimization method and device in vehicle all-around viewing system

Info

Publication number
CN115439326A
CN115439326A (application CN202211104952.9A)
Authority
CN
China
Prior art keywords
view
target
overlapping
image
error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211104952.9A
Other languages
Chinese (zh)
Inventor
闫海龙
裴鹏飞
张晶
王帅炀
杨玉玲
朱海荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hopechart Iot Technology Co ltd
Original Assignee
Hangzhou Hopechart Iot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hopechart Iot Technology Co ltd filed Critical Hangzhou Hopechart Iot Technology Co ltd
Priority to CN202211104952.9A priority Critical patent/CN115439326A/en
Publication of CN115439326A publication Critical patent/CN115439326A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image splicing optimization method and device in a vehicle all-around viewing system, wherein the method comprises the following steps: performing feature extraction on the overlapping area images to obtain overlapping feature points corresponding to the overlapping area images; determining a first error of converting a first target view into a second target view based on first feature points in the first target view among the multiple views and the overlapping feature points corresponding to a target overlapping area image among the multiple overlapping area images; determining the first target view corresponding to the smallest first error as a standard view, and determining the first target view corresponding to the largest first error as an optimized view; perspective-transforming the optimized view into a first view based on the standard view; and stitching the first view and the standard view to obtain a panoramic view. The method can improve the panoramic stitching effect.

Description

Image splicing optimization method and device in vehicle all-around viewing system
Technical Field
The invention relates to the technical field of image splicing, in particular to an image splicing optimization method and device in a vehicle all-around viewing system.
Background
Panoramic image stitching technology is widely applied in the field of automotive electronics. In the related art, a panoramic look-around system comprises a plurality of cameras arranged around the vehicle, image acquisition equipment, a video synthesis/processing component, display equipment and the like. Image information around the vehicle is acquired by the cameras and input simultaneously into the video synthesis/processing component, which synthesizes the images into a panoramic image of the vehicle's surroundings without blind spots within the field of view through technologies such as distortion correction, perspective transformation, and stitching and fusion. However, a misalignment effect still exists between adjacent views stitched in this way, which degrades the panoramic stitching effect.
Disclosure of Invention
The invention provides an image splicing optimization method and device in a vehicle all-round system, which are used for solving the defect of poor panoramic splicing effect in the prior art and improving the panoramic splicing effect.
The invention provides an image splicing optimization method in a vehicle all-round viewing system, which comprises the following steps:
performing feature extraction on the multiple overlapping area images to obtain overlapping feature points corresponding to the overlapping area images; the overlapping area image is the image, in any one of the multiple views, of the region that overlaps with an adjacent view;
determining a first error of converting a first target view into a second target view based on first feature points in the first target view among the multiple views and the overlapping feature points corresponding to a target overlapping area image among the multiple overlapping area images; the second target view is adjacent to the first target view, and the target overlapping area image is the image of the overlapping area of the first target view and the second target view;
determining a first target view corresponding to the smallest first error as a standard view, and determining a first target view corresponding to the largest first error as an optimized view;
perspective-transforming the optimized view into a first view based on the standard view;
and splicing the first view and the standard view to acquire a panoramic view.
According to the image stitching optimization method in the vehicle all-round system provided by the invention, the step of performing feature extraction on the multiple overlapping area images to obtain the overlapping feature points corresponding to the overlapping area images comprises the following steps:
respectively inputting the multiple overlapping area images to a feature point detection model, and acquiring first feature points corresponding to the overlapping area images output by the feature point detection model;
and inputting the first feature point into a feature point matching model, and acquiring the overlapped feature point output by the feature point matching model.
According to the image stitching optimization method in the vehicle all-around system provided by the invention, the determining a first error of the transition from the first target view to the second target view based on a first feature point in a first target view in the multiple views and an overlapping feature point corresponding to a target overlapping area image in the multiple overlapping area images comprises:
acquiring perspective transformation points from first feature points in the first target view to overlapped feature points corresponding to the target overlapped area image;
determining the first error based on an error value between a first feature point in the first target view and the perspective transformation point.
According to the image stitching optimization method in the vehicle all-round system provided by the invention, the obtaining of the perspective transformation point from the first feature point in the first target view to the overlapping feature point corresponding to the target overlapping region image comprises the following steps:
acquiring a homography matrix from the first characteristic point to the overlapped characteristic points;
determining the perspective transformation point based on the homography matrix and the first feature point.
According to the image stitching optimization method in the vehicle all-round system provided by the invention, the perspective transformation of the optimized view into the first view based on the standard view comprises the following steps:
determining an optimal homography matrix from the optimized view to the standard view by adopting an iterative algorithm;
determining the first view based on the optimal homography matrix and the optimized view.
According to the image stitching optimization method in the vehicle all-round system provided by the invention, the determining of the optimal homography matrix from the optimized view to the standard view by adopting an iterative algorithm comprises the following steps:
acquiring multiple groups, each containing a target number of first overlapping feature points, from the target overlapping area image corresponding to the optimized view;
determining, based on the standard view, the homography matrix for mapping each group of first overlapping feature points to the standard view;
acquiring a second error corresponding to each homography matrix;
and under the condition that the second error is smaller than a target threshold value, determining a homography matrix corresponding to the second error as the optimal homography matrix.
The invention also provides an image splicing optimization device in the vehicle all-round system, which comprises the following components:
the first processing module is used for extracting the features of the images in the multiple overlapping areas and acquiring the overlapping feature points corresponding to the images in the overlapping areas; the overlapping region image is an image of an overlapping region with an adjacent view on any view in the multiple views;
the second processing module is used for determining a first error of converting a first target view into a second target view based on a first feature point in the first target view in the multiple views and an overlapping feature point corresponding to a target overlapping area image in the multiple overlapping area images; the second target view is adjacent to the first target view, and the target overlapping area image is an overlapping area of the first target view and the second target view;
the third processing module is used for determining the first target view corresponding to the minimum first error as a standard view and determining the first target view corresponding to the maximum first error as an optimized view;
a fourth processing module for perspectively transforming the optimized view into a first view based on the standard view;
and the fifth processing module is used for splicing the first view and the standard view to acquire a panoramic view.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the image stitching optimization method in the vehicle all-around system.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for image stitching optimization in a vehicle surround view system as described in any one of the above.
The invention also provides a computer program product comprising a computer program, wherein the computer program is used for realizing the image splicing optimization method in the vehicle all-around system when being executed by a processor.
According to the image splicing optimization method and device in the vehicle all-round viewing system, the overlapping feature points in the overlapping area images are extracted, the standard view and the optimized view are determined based on the first error computed from the overlapping feature points and the first feature points, and the optimized view is then perspective-transformed to generate the first view. This effectively reduces the error after the feature points in the overlapping area are mapped and improves the quality of the first view. The first view and the standard view are then stitched to obtain the panoramic view, which significantly reduces the misalignment between adjacent views and improves the stitching effect of the finally generated panoramic image.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flow chart of an image stitching optimization method in a vehicle all-round system provided by the invention;
FIG. 2 is a schematic diagram illustrating an image stitching optimization method in a vehicle all-round system according to the present invention;
FIG. 3 is a second schematic flowchart of the image stitching optimization method in the vehicle all-round system according to the present invention;
FIG. 4 is a schematic structural diagram of an image stitching optimization device in a vehicle all-round system provided by the invention;
fig. 5 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The image stitching optimization method in the vehicle all-round system of the present invention is described below with reference to fig. 1 to 3.
It should be noted that an execution subject of the image stitching optimization method in the vehicle all-round viewing system may be a vehicle, or may be a server in communication connection with the vehicle, or may be an image stitching optimization device in the vehicle all-round viewing system in communication connection with the vehicle, or may also be a terminal of a user, including but not limited to a mobile terminal such as a mobile phone, a watch, and a vehicle-mounted terminal, and a non-mobile terminal such as a PC terminal.
As shown in fig. 1, the image stitching optimization method in the vehicle all-round system includes: step 110, step 120, step 130, step 140 and step 150.
Step 110, performing feature extraction on the multiple overlapping area images to obtain overlapping feature points corresponding to the overlapping area images; the overlapping area image is the image, in any one of the multiple views, of the region that overlaps with an adjacent view;
in this step, the views are images acquired by an image sensor, with different views corresponding to different orientations.
In the actual implementation process, the image sensors can be cameras arranged in different areas of the front, the back, the left and the right of the vehicle so as to collect images in different directions; the multiple views include differently oriented images acquired by cameras of different regions.
Any two adjacent views have an overlapping area, and the overlapping area image is the image of the region in a view that overlaps with an adjacent view.
As shown in fig. 2, the plurality of image sensors includes: an image sensor C1 provided in front of the vehicle, an image sensor C3 provided in rear of the vehicle, an image sensor C2 provided in the right of the vehicle, and an image sensor C4 provided in the left of the vehicle; the plurality of views includes: a front view a acquired by the image sensor C1, a rear view B acquired by the image sensor C3, a left view C acquired by the image sensor C4, and a right view D acquired by the image sensor C2.
The overlapping area images may include: a front-left overlapping area image, a front-right overlapping area image, a rear-left overlapping area image, a rear-right overlapping area image, a left-front overlapping area image, a left-rear overlapping area image, a right-front overlapping area image, and a right-rear overlapping area image.
Wherein, the image of the overlapping area of the front view A and the left view C is the front left overlapping area image; an image of an overlapping area of the front view A and the right view D is a front right overlapping area image; the image of the overlapping area of the left view C and the front view A is the image of the left front overlapping area; and the image of the overlapping area of the left view C and the back view B is the image of the left back overlapping area.
The overlapping feature points represent pixel points at the same position that are captured, in an overlapping area image, by the image sensors corresponding to the adjacent views.
The overlapping feature points include: a description of the overlapping feature points and coordinates of the overlapping feature points.
In an actual implementation process, the overlapping feature points may be obtained by performing feature extraction on the overlapping region image, for example, by using a pre-trained neural network model to perform feature extraction on the overlapping region image.
The neural network model may be any structure or model in any form, and the invention is not limited thereto.
Step 120, determining a first error of converting the first target view into the second target view based on a first feature point in the first target view in the multiple views and an overlapping feature point corresponding to a target overlapping region image in the multiple overlapping region images; the second target view is adjacent to the first target view, and the target overlapping area image is the image of the overlapping area of the first target view and the second target view;
in this step, the first target view may be any of a plurality of views.
The second target view is any one of the plurality of views adjacent to the first target view.
The target overlapping area image is an image of an area where the first target view and the second target view overlap.
The first feature point is a feature point of a region in the first target view that overlaps with the second target view.
The first error is used to represent the degree of difference between the mapping points obtained after the first feature points in the first target view are mapped to the second target view and the overlapping feature points corresponding to those first feature points, i.e. the "cost" of converting the first target view into the second target view.
It will be appreciated that the smaller the first error, the more accurate the resulting mapped point.
In an actual implementation, the same method may be adopted to determine the first error corresponding to each of the multiple views.
Step 130, determining the first target view corresponding to the minimum first error as a standard view, and determining the first target view corresponding to the maximum first error as an optimized view;
in this step, the standard view is a view of the plurality of views that does not need to be mapped.
The optimized view is a view which needs to be mapped to the standard view to obtain a mapped view.
In some embodiments, the standard view comprises at least one of a plurality of views and the optimized view comprises at least one of a plurality of views.
The step is explained by taking four views of front, back, left and right as an example.
In some embodiments, the first errors corresponding to the respective views may be sorted according to size, and there are the following cases for different sorting orders:
1) If the first errors corresponding to the front view A and the rear view B are both smaller than the first errors corresponding to the left view C and the right view D, the front view A and the rear view B are determined as standard views, and the left view C and the right view D are determined as optimized views;
for example, a first error a corresponding to the front view A, a first error b corresponding to the rear view B, a first error c corresponding to the left view C and a first error d corresponding to the right view D are respectively obtained, where a < b < c < d; the front view A and the rear view B may then be determined as standard views, and the left view C and the right view D as optimized views.
2) If the first errors corresponding to the left view C and the right view D are both smaller than the first errors corresponding to the front view A and the rear view B, the left view C and the right view D are determined as standard views, and the front view A and the rear view B are determined as optimized views.
3) In all other cases, the view with the largest first error is used as the optimized view, and the other three views are used as standard views.
For example, a first error a corresponding to the front view A, a first error b corresponding to the rear view B, a first error c corresponding to the left view C and a first error d corresponding to the right view D are respectively obtained, where a is larger than b, c and d; the front view A may then be determined as the optimized view, and the rear view B, the left view C and the right view D as standard views.
In this step, by determining the view with the largest first error as the optimized view for mapping, the misalignment can be eliminated to the greatest extent, so as to improve the effect of the panorama obtained by subsequent stitching.
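As a concrete illustration of the three cases above, the following Python sketch selects the standard and optimized views from per-view first errors. The view names, the error dictionary and the helper function are illustrative assumptions, not part of the patent text.

```python
def select_views(errors):
    """errors: per-view first errors, e.g. {'front': a, 'rear': b, 'left': c, 'right': d}.
    Returns (standard_views, optimized_views) following cases 1)-3) above."""
    a, b = errors['front'], errors['rear']
    c, d = errors['left'], errors['right']
    if max(a, b) < min(c, d):                        # case 1: front/rear errors both smaller
        return ['front', 'rear'], ['left', 'right']
    if max(c, d) < min(a, b):                        # case 2: left/right errors both smaller
        return ['left', 'right'], ['front', 'rear']
    worst = max(errors, key=errors.get)              # case 3: only the largest-error view is optimized
    return [v for v in errors if v != worst], [worst]
```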
Step 140, perspective-transforming the optimized view into a first view based on the standard view;
in this step, the first view is the view generated by mapping the optimized view to the standard view.
One optimized view corresponds to one first view.
In actual implementation, the optimized view may be perspectively transformed into the first view using the homography matrix of the optimized view to the standard view.
For example, with continued reference to fig. 2, after determining front view a and back view B as standard views and left view C and right view D as optimized views, left view C is perspectively transformed into first view E using homography matrices of left view C to front view a and back view B, respectively;
and respectively adopting homography matrixes from the right view D to the front view A and the rear view B to perspectively transform the right view D into a first view F.
Step 150, stitching the first view and the standard view to obtain a panoramic view.
In this step, after the first view E corresponding to the perspective transformation of the left view C and the first view F corresponding to the perspective transformation of the right view D are obtained in step 140, the front view A, the rear view B, the first view E and the first view F are subjected to panorama stitching, and the panoramic view can be obtained.
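As a rough illustration only, the sketch below pastes the standard views and the transformed first views onto a single canvas. The placement offsets, canvas size and the non-black-pixel mask are simplifying assumptions; a real surround-view system would use its calibrated bird's-eye-view layout and blend the overlap regions.

```python
import numpy as np

def compose_panorama(views, placements, canvas_size):
    """views: dict name -> BGR image; placements: dict name -> (x, y) top-left offset;
    canvas_size: (height, width) of the output panorama."""
    canvas = np.zeros((*canvas_size, 3), dtype=np.uint8)
    for name, img in views.items():
        x, y = placements[name]
        h, w = img.shape[:2]
        region = canvas[y:y + h, x:x + w]
        mask = img.sum(axis=2) > 0          # keep only non-black pixels of the view
        region[mask] = img[mask]
    return canvas
```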
According to the image stitching optimization method in the vehicle all-round viewing system provided by the embodiment of the invention, the overlapping feature points in the overlapping area images are extracted, the standard view and the optimized view are determined based on the first error computed from the overlapping feature points and the first feature points, and the optimized view is then perspective-transformed to generate the first view. This effectively reduces the error after the feature points in the overlapping area are mapped and improves the quality of the first view. The first view and the standard view are then stitched to obtain the panoramic view, which significantly reduces the misalignment between adjacent views and improves the stitching effect of the finally generated panoramic image.
As shown in fig. 3, in some embodiments, step 110 may include:
respectively inputting the multiple overlapping area images into a feature point detection model, and acquiring first feature points corresponding to the overlapping area images output by the feature point detection model;
and inputting the first feature points into the feature point matching model, and acquiring the overlapped feature points output by the feature point matching model.
Optionally, the feature point detection model is used to extract feature points.
The feature point matching model is used for matching any two feature points from different overlapped region images so as to output feature points which are successfully matched.
The feature point detection model and the feature point matching model can both be neural network models, wherein the output of the feature point detection model is connected with the input of the feature point matching model.
Specifically, the feature point detection model may be a deep learning model such as a SuperPoint model, and the feature point matching model may be a SuperGlue model.
For example, with reference to fig. 2, an image of the overlapping area between adjacent views is captured and, after image preprocessing (such as normalization), is input into the feature point detection model SuperPoint, which outputs the information corresponding to the first feature points of the overlapping area image, such as the coordinates of the first feature points and the descriptor of each first feature point;
the descriptor is a 256-dimensional vector, each dimension representing some property of the first feature point (e.g., gradient, color, etc.).
The coordinates and descriptors of the first feature points are then input into the feature point matching model SuperGlue, which pairs first feature points according to the similarity of their descriptors and finally outputs the coordinates of the paired feature points between the two overlapping area images, namely the overlapping feature points.
In some embodiments, the sample overlap area image may be used as a sample, and the sample feature points corresponding to the sample overlap area image are used as sample labels to train the feature point detection model.
And the sample characteristic points are actual pixel points in the sample overlapping area image.
In some embodiments, the feature point matching model may be trained by using the sample feature point as a sample and using a sample overlap feature point corresponding to the sample feature point as a sample label.
The sample overlapping feature points are sample feature points existing in two adjacent sample overlapping area images in the sample feature points.
According to the image stitching optimization method in the vehicle all-round viewing system, the overlapping feature points of the overlapping area images of adjacent views are obtained by using the deep-learning-based feature point matching method (SuperPoint and SuperGlue), which provides high precision, high accuracy and strong learning capability.
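A minimal sketch of this detection-and-matching flow is given below. The `superpoint` and `superglue` callables are hypothetical wrappers around pretrained models (they are not standard library functions), and the preprocessing is limited to the normalization mentioned above.

```python
import numpy as np

def match_overlap_region(img_a, img_b, superpoint, superglue):
    """img_a, img_b: grayscale overlap-region crops from two adjacent views.
    superpoint(img) -> (keypoints (N, 2), descriptors (N, 256)); superglue(...) -> index pairs.
    Both callables are assumed wrappers around pretrained models."""
    a = img_a.astype(np.float32) / 255.0      # normalization as image preprocessing
    b = img_b.astype(np.float32) / 255.0
    kpts_a, desc_a = superpoint(a)
    kpts_b, desc_b = superpoint(b)
    matches = superglue(kpts_a, desc_a, kpts_b, desc_b)   # pairs matched by descriptor similarity
    pts_a = np.asarray([kpts_a[i] for i, j in matches], dtype=np.float32)
    pts_b = np.asarray([kpts_b[j] for i, j in matches], dtype=np.float32)
    return pts_a, pts_b                        # coordinates of the overlapping feature points
```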
In some embodiments, step 120 may include:
obtaining perspective transformation points from a first characteristic point in a first target view to an overlapped characteristic point corresponding to a target overlapped region image;
a first error is determined based on an error value between the first feature point and the perspective transformation point in the first target view.
Optionally, the perspective transformation point is a mapping point at which the first feature point maps onto the second target view.
The implementation of this step will be described below by taking the first target view as the front view a as an example.
For example, the first feature points of the front view A are taken as one point set, denoted P_front_1; the overlapping feature points, in the left view C and the right view D, of the overlapping area images shared with the front view A are taken as another point set, denoted P_left_right; the perspective transformation points from P_front_1 to P_left_right are denoted P_front_2.
Then the error value between P_front_1 and P_front_2 is calculated, and this error value is the first error a corresponding to the front view A.
In the same way, a first error c corresponding to the left-view to front-back view conversion, a first error b corresponding to the back-view to left-right view conversion and a first error d corresponding to the right-view to front-back view conversion can be calculated.
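The following sketch shows one way to compute such a first error with OpenCV, following the description above: a homography from the view's first feature points to the matched points in the adjacent views is estimated, the first feature points are perspective-transformed with it, and the mean displacement is taken as the error. The array shapes and the use of `cv2.findHomography` here are illustrative assumptions, not the patent's prescribed implementation.

```python
import cv2
import numpy as np

def first_error(pts_view, pts_adjacent):
    """pts_view: first feature points of the target view, shape (N, 2).
    pts_adjacent: matched overlapping feature points in the adjacent views, shape (N, 2)."""
    pts_view = np.asarray(pts_view, dtype=np.float64)
    pts_adjacent = np.asarray(pts_adjacent, dtype=np.float64)
    H, _ = cv2.findHomography(pts_view, pts_adjacent)             # e.g. P_front_1 -> P_left_right
    projected = cv2.perspectiveTransform(pts_view.reshape(-1, 1, 2), H).reshape(-1, 2)
    # Error value between the first feature points and their perspective transformation points.
    return float(np.mean(np.linalg.norm(projected - pts_view, axis=1)))
```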
In some embodiments, obtaining a perspective transformation point from a first feature point in the first target view to an overlapping feature point corresponding to the target overlapping region image may include:
acquiring a homography matrix from the first characteristic point to the overlapped characteristic points;
based on the homography matrix and the first feature points, perspective transformation points are determined.
Optionally, the homography matrix is a projection mapping of one plane to another plane.
In the actual implementation process, the implementation of this embodiment is explained by taking the front view A as the first target view as an example.
Taking the first feature points of the front view A as one point set, denoted P_front_1, and the overlapping feature points, in the left view C and the right view D, of the overlapping area images shared with the front view A as another point set, denoted P_left_right, P_front_2 is determined by the following formula:
P_front_2 = H * P_front_1
where P_front_2 is the set of perspective transformation points from P_front_1 to P_left_right, and H is the homography matrix from P_front_1 to P_left_right.
The determination of the homography matrix is explained in detail below.
The homography matrix is a 3 x 3 matrix, and the effect of perspective transformation is achieved by multiplying the homography matrix with the homogeneous coordinates. The perspective transformation includes rotation, translation, affine and other transformations, and the corresponding parts of the homography matrix encode these rotation, translation and affine components.
The principle of perspective transformation is shown as follows:
[x_2]   [h_11 h_12 h_13]   [x_1]
[y_2] = [h_21 h_22 h_23] * [y_1]
[ 1 ]   [h_31 h_32 h_33]   [ 1 ]
where (x_1, y_1, 1) represents the homogeneous coordinates of a feature point in the source image (i.e. the image before mapping, such as the target view); (x_2, y_2, 1) represents the homogeneous coordinates of the corresponding feature point in the target image (i.e. the image obtained after mapping); h_11, h_12, h_21 and h_22 represent the rotation and scaling transformations, h_13 and h_23 represent the translation transformation, h_31 and h_32 represent the spatial part of the perspective transformation, and h_33 is 1.
It should be noted that only the eight unknowns h_11 to h_32 need to be solved to obtain the homography matrix H; each pair of matching points provides two equations, so the homography matrix can be solved from only four pairs of matching points.
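As a small worked example of this point, OpenCV's `cv2.getPerspectiveTransform` solves the eight unknowns directly from exactly four matching point pairs; the coordinates below are made-up placeholders.

```python
import cv2
import numpy as np

# Four matching point pairs (placeholder coordinates) are enough to solve H.
src = np.float32([[10, 10], [300, 12], [305, 240], [8, 235]])    # points in the source view
dst = np.float32([[12, 15], [298, 10], [300, 250], [10, 245]])   # matched points in the target view
H = cv2.getPerspectiveTransform(src, dst)                        # 3x3 homography with h_33 = 1
print(H)
```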
According to the image stitching optimization method in the vehicle all-round viewing system, the perspective transformation points from the first feature points in the first target view to the overlapping feature points corresponding to the target overlapping area image are obtained, and the first error is then determined based on the error value between the first feature points in the first target view and the perspective transformation points; the subsequently determined standard view therefore has the smallest error, which reduces the error of subsequent image stitching and improves the stitching effect.
With continued reference to fig. 3, in some embodiments, step 140 may include:
determining an optimal homography matrix from an optimized view to a standard view by adopting an iterative algorithm;
a first view is determined based on the optimal homography matrix and the optimized view.
Optionally, the optimal homography matrix is a homography matrix capable of minimizing an error of the first view obtained after the optimized view is mapped to the standard view.
In the actual implementation process, a Random Sample Consensus (RANSAC) algorithm may be used: feature points are repeatedly selected at random, mismatched points are screened out, and the remaining points are regressed to obtain the optimal homography matrix.
Of course, in other embodiments, other methods may be used, such as determining the optimal homography matrix by using a neural network model, and the like, which is not limited in the present invention.
Based on the optimal homography matrix and the optimized view, the first view is determined; that is, the first view may be expressed as the product of the optimal homography matrix and the optimized view.
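In OpenCV terms, this "product" amounts to warping the optimized view with the optimal homography. A minimal sketch, assuming the output simply keeps the input image size:

```python
import cv2

def transform_to_first_view(optimized_view, H_opt):
    """Warp the optimized view with the optimal homography to obtain the first view."""
    h, w = optimized_view.shape[:2]
    return cv2.warpPerspective(optimized_view, H_opt, (w, h))
```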
In some embodiments, determining the optimal homography matrix of the optimized view to the standard view using an iterative algorithm may include:
acquiring multiple groups, each containing a target number of first overlapping feature points, from the target overlapping area image corresponding to the optimized view;
determining, based on the standard view, the homography matrix for mapping each group of first overlapping feature points to the standard view;
acquiring a second error corresponding to each homography matrix;
and under the condition that the second error is smaller than the target threshold, determining the homography matrix corresponding to the second error as the optimal homography matrix.
Alternatively, the target number may be customized based on the user, and the invention is not limited thereto.
The target threshold may be user-defined, and the invention is not limited thereto.
It should be noted that the first overlapping feature points corresponding to different groups should not be identical.
The first overlapping feature point is any of a plurality of overlapping feature points in the same overlapping region image.
The second error is used to represent the magnitude of the error between the mapping points obtained with the current homography matrix and the actual feature points.
For example, a target number of first overlapping feature points are randomly selected, and the homography matrix corresponding to this group of points is calculated;
then the other first overlapping feature points are substituted into this model to calculate their second errors, which are used to judge whether each of those points fits the model;
if the second error of a point is smaller than the target threshold, that first overlapping feature point is determined to be an inlier; otherwise it is determined to be an outlier;
the selection is iterated multiple times with different groups of first overlapping feature points to calculate a plurality of models, and the model with the most inliers is the optimal model (namely the optimal homography matrix) that best fits all the points.
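A hand-rolled sketch of this loop is shown below; in practice `cv2.findHomography(src, dst, cv2.RANSAC, threshold)` performs an equivalent search. The group size of four, the iteration count and the threshold value are illustrative choices, not values taken from the patent.

```python
import cv2
import numpy as np

def optimal_homography(src_pts, dst_pts, iterations=1000, threshold=3.0):
    """src_pts: first overlapping feature points of the optimized view, shape (N, 2).
    dst_pts: corresponding points in the standard view, shape (N, 2)."""
    src_pts = np.asarray(src_pts, dtype=np.float64)
    dst_pts = np.asarray(dst_pts, dtype=np.float64)
    best_H, best_inliers = None, -1
    n = len(src_pts)
    for _ in range(iterations):
        idx = np.random.choice(n, 4, replace=False)              # one group of feature points
        try:
            H = cv2.getPerspectiveTransform(np.float32(src_pts[idx]), np.float32(dst_pts[idx]))
        except cv2.error:
            continue                                             # skip degenerate samples
        proj = cv2.perspectiveTransform(src_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
        second_error = np.linalg.norm(proj - dst_pts, axis=1)    # per-point second error
        inliers = int(np.sum(second_error < threshold))          # points that fit this model
        if inliers > best_inliers:                               # keep the model with most inliers
            best_H, best_inliers = H, inliers
    return best_H
```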
According to the image stitching optimization method in the vehicle all-round viewing system, determining the optimal homography matrix from the optimized view to the standard view through the iterative algorithm screens out mismatched points, ensures that the determined homography matrix is optimal, and further improves the stitching effect.
The following describes the image stitching optimization device in the vehicle all-round viewing system provided by the present invention, and the image stitching optimization device in the vehicle all-round viewing system described below and the image stitching optimization method in the vehicle all-round viewing system described above can be referred to correspondingly.
As shown in fig. 4, the image stitching optimization apparatus in the vehicle all-round system includes: a first processing module 410, a second processing module 420, a third processing module 430, a fourth processing module 440, and a fifth processing module 450.
The first processing module 410 is configured to perform feature extraction on the multiple overlapping area images, and acquire overlapping feature points corresponding to the overlapping area images; the overlapping region image is an image of an overlapping region with an adjacent view on any view in the plurality of views;
a second processing module 420, configured to determine a first error of switching from the first target view to the second target view based on a first feature point in a first target view in the multiple views and an overlapping feature point corresponding to a target overlapping area image in the multiple overlapping area images; the second target view is adjacent to the first target view, and the target overlapping area image is an overlapping area of the first target view and the second target view;
the third processing module 430 is configured to determine the first target view corresponding to the smallest first error as a standard view, and determine the first target view corresponding to the largest first error as an optimized view;
a fourth processing module 440 for perspectively transforming the optimized view into the first view based on the standard view;
and a fifth processing module 450, configured to splice the first view and the standard view to obtain a panoramic view.
According to the image stitching optimization device in the vehicle all-round viewing system provided by the embodiment of the invention, the overlapping feature points in the overlapping area images are extracted, the standard view and the optimized view are determined based on the first error computed from the overlapping feature points and the first feature points, and the optimized view is then perspective-transformed to generate the first view. This effectively reduces the error after the feature points in the overlapping area are mapped and improves the quality of the first view. The first view and the standard view are then stitched to obtain the panoramic view, which significantly reduces the misalignment between adjacent views and improves the stitching effect of the finally generated panoramic image.
In some embodiments, the first processing module 410 may be further configured to:
respectively inputting the multiple overlapping area images into a feature point detection model, and acquiring first feature points corresponding to the overlapping area images output by the feature point detection model;
and inputting the first feature points into the feature point matching model, and acquiring the overlapped feature points output by the feature point matching model.
In some embodiments, the second processing module 420 may be further configured to:
obtaining perspective transformation points from a first characteristic point in a first target view to an overlapped characteristic point corresponding to a target overlapped region image;
a first error is determined based on an error value between the first feature point and the perspective transformation point in the first target view.
In some embodiments, the second processing module 420 may be further configured to:
acquiring a homography matrix from the first characteristic point to the overlapped characteristic points;
based on the homography matrix and the first feature points, perspective transformation points are determined.
In some embodiments, the fourth processing module 440 may further be configured to:
determining an optimal homography matrix from an optimized view to a standard view by adopting an iterative algorithm;
a first view is determined based on the optimal homography matrix and the optimized view.
In some embodiments, the fourth processing module 440 may further be configured to:
acquiring a plurality of groups of first overlapping feature points of the number of targets from the target overlapping area image corresponding to the optimized view;
determining, based on the standard view, the homography matrix for mapping each group of first overlapping feature points to the standard view;
acquiring a second error corresponding to each homography matrix;
and under the condition that the second error is smaller than the target threshold, determining the homography matrix corresponding to the second error as the optimal homography matrix.
Fig. 5 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 5: a processor (processor) 510, a communication Interface (Communications Interface) 520, a memory (memory) 530 and a communication bus 540, wherein the processor 510, the communication Interface 520 and the memory 530 communicate with each other via the communication bus 540. Processor 510 may invoke logic instructions in memory 530 to perform a method of image stitching optimization in a vehicle look-around system, the method comprising: performing feature extraction on the multiple overlapping area images to acquire overlapping feature points corresponding to the overlapping area images; the overlapping region image is an image of an overlapping region with an adjacent view on any view in the plurality of views; determining a first error of converting the first target view into the second target view based on a first feature point in the first target view in the multiple views and an overlapping feature point corresponding to a target overlapping region image in the multiple overlapping region images; the second target view is adjacent to the first target view, and the target overlapping area image is an overlapping area of the first target view and the second target view; determining a first target view corresponding to the minimum first error as a standard view, and determining a first target view corresponding to the maximum first error as an optimized view; transforming the optimized view perspective into a first view based on the standard view; and splicing the first view and the standard view to acquire a panoramic view.
Furthermore, the logic instructions in the memory 530 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions, which when executed by a computer, enable the computer to perform the image stitching optimization method in a vehicle all-round system provided by the above methods, the method comprising: extracting the features of the images in the overlapping areas to obtain overlapping feature points corresponding to the images in the overlapping areas; the overlapping region image is an image of an overlapping region with an adjacent view on any view in the plurality of views; determining a first error of converting the first target view into the second target view based on a first feature point in the first target view in the multiple views and an overlapping feature point corresponding to a target overlapping region image in the multiple overlapping region images; the second target view is adjacent to the first target view, and the target overlapping area image is an overlapping area of the first target view and the second target view; determining a first target view corresponding to the minimum first error as a standard view, and determining a first target view corresponding to the maximum first error as an optimized view; transforming the optimized view perspective into a first view based on the standard view; and splicing the first view and the standard view to acquire a panoramic view.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program, which when executed by a processor is implemented to perform the image stitching optimization method in the vehicle all around system provided above, the method comprising: performing feature extraction on the multiple overlapping area images to acquire overlapping feature points corresponding to the overlapping area images; the overlapping region image is an image of an overlapping region with an adjacent view on any view in the plurality of views; determining a first error of converting the first target view into the second target view based on a first feature point in the first target view in the multiple views and an overlapping feature point corresponding to a target overlapping region image in the multiple overlapping region images; the second target view is adjacent to the first target view, and the target overlapping area image is an overlapping area of the first target view and the second target view; determining a first target view corresponding to the minimum first error as a standard view, and determining a first target view corresponding to the maximum first error as an optimized view; transforming the optimized view perspective into a first view based on the standard view; and splicing the first view and the standard view to acquire a panoramic view.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, and not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An image stitching optimization method in a vehicle all-round system is characterized by comprising the following steps:
extracting the features of the images in the overlapping areas to obtain overlapping feature points corresponding to the images in the overlapping areas; the overlapping region image is an image of an overlapping region with an adjacent view on any view in the multiple views;
determining a first error of a transition from a first target view to a second target view based on a first feature point in the first target view in the multiple views and an overlapping feature point corresponding to a target overlapping region image in the multiple overlapping region images; the second target view is adjacent to the first target view, and the target overlapping area image is an overlapping area of the first target view and the second target view;
determining a first target view corresponding to the minimum first error as a standard view, and determining a first target view corresponding to the maximum first error as an optimized view;
based on the standard view, perspectively transforming the optimized view into a first view;
and splicing the first view and the standard view to acquire a panoramic view.
2. The method for optimizing image stitching in the vehicle all-round system according to claim 1, wherein the extracting features of the images in the plurality of overlapping regions to obtain the overlapping feature points corresponding to the images in the overlapping regions comprises:
respectively inputting the multiple overlapping area images to a feature point detection model, and acquiring first feature points corresponding to the overlapping area images output by the feature point detection model;
and inputting the first feature point into a feature point matching model, and acquiring the overlapped feature point output by the feature point matching model.
3. The method of claim 1, wherein the determining a first error for switching from the first target view to the second target view based on a first feature point in a first target view of the multiple views and an overlapping feature point corresponding to a target overlapping area image of the multiple overlapping area images comprises:
acquiring perspective transformation points from first feature points in the first target view to overlapped feature points corresponding to the target overlapped area image;
determining the first error based on an error value between a first feature point in the first target view and the perspective transformation point.
4. The method for optimizing image stitching in the vehicle all-round system according to claim 3, wherein the obtaining of the perspective transformation point from the first feature point in the first target view to the overlapping feature point corresponding to the target overlapping region image comprises:
acquiring a homography matrix from the first characteristic point to the overlapped characteristic points;
determining the perspective transformation point based on the homography matrix and the first feature point.
5. The method for image stitching optimization in a vehicle all around system according to any one of claims 1 to 4, wherein the perspectively transforming the optimized view into a first view based on the standard view comprises:
determining an optimal homography matrix from the optimized view to the standard view by adopting an iterative algorithm;
determining the first view based on the optimal homography matrix and the optimized view.
6. The method for optimizing image stitching in a vehicle all-round system according to claim 5, wherein the determining the optimal homography matrix from the optimized view to the standard view by using an iterative algorithm comprises:
acquiring a plurality of groups of first overlapping feature points with the number of targets from the target overlapping area image corresponding to the optimized view;
determining, based on the standard view, the homography matrix for mapping each group of first overlapping feature points to the standard view;
acquiring a second error corresponding to each homography matrix;
and determining the homography matrix corresponding to the second error as the optimal homography matrix under the condition that the second error is smaller than a target threshold.
7. An image stitching optimization device in a vehicle all-round system, comprising:
the first processing module is used for extracting the features of the images in the multiple overlapping areas and acquiring the overlapping feature points corresponding to the images in the overlapping areas; the overlapping region image is an image of an overlapping region with an adjacent view on any view in the multiple views;
a second processing module, configured to determine a first error of switching from a first target view to a second target view based on a first feature point in the first target view in the multiple views and an overlapping feature point corresponding to a target overlapping area image in the multiple overlapping area images; the second target view is adjacent to the first target view, and the target overlapping area image is an overlapping area of the first target view and the second target view;
the third processing module is used for determining a first target view corresponding to the minimum first error as a standard view and determining a first target view corresponding to the maximum first error as an optimized view;
a fourth processing module for perspectively transforming the optimized view into a first view based on the standard view;
and the fifth processing module is used for splicing the first view and the standard view to acquire a panoramic view.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements a method of image stitching optimization in a vehicle look-around system as claimed in any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the image stitching optimization method in the vehicle all-around system according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, wherein the computer program when executed by a processor implements a method of image stitching optimization in a vehicle look-around system as claimed in any one of claims 1 to 6.
CN202211104952.9A 2022-09-09 2022-09-09 Image splicing optimization method and device in vehicle all-around viewing system Pending CN115439326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211104952.9A CN115439326A (en) 2022-09-09 2022-09-09 Image splicing optimization method and device in vehicle all-around viewing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211104952.9A CN115439326A (en) 2022-09-09 2022-09-09 Image splicing optimization method and device in vehicle all-around viewing system

Publications (1)

Publication Number Publication Date
CN115439326A true CN115439326A (en) 2022-12-06

Family

ID=84247233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211104952.9A Pending CN115439326A (en) 2022-09-09 2022-09-09 Image splicing optimization method and device in vehicle all-around viewing system

Country Status (1)

Country Link
CN (1) CN115439326A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination