CN110400255B - Vehicle panoramic image generation method and system and vehicle - Google Patents

Vehicle panoramic image generation method and system and vehicle

Info

Publication number
CN110400255B
CN110400255B (application CN201810380394.6A)
Authority
CN
China
Prior art keywords
image
calibration
area
vehicle
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810380394.6A
Other languages
Chinese (zh)
Other versions
CN110400255A (en)
Inventor
陈敬濠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN201810380394.6A
Publication of CN110400255A
Application granted
Publication of CN110400255B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30248 — Vehicle exterior or interior
    • G06T2207/30252 — Vehicle exterior; Vicinity of vehicle

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a method and a system for generating a vehicle panoramic image, and a vehicle. The method for generating the vehicle panoramic image comprises the following steps: acquiring original images around the vehicle through a plurality of cameras; determining a plurality of calibration areas based on overlapping areas in the original images around the vehicle; acquiring image pixel point transformation mathematical models of a plurality of calibration points in the calibration areas; transforming the original images around the vehicle, which are acquired by the plurality of cameras, into target images respectively according to the image pixel point transformation mathematical models of the plurality of calibration points; and splicing the target images into a panoramic image. With the method and system for generating the vehicle panoramic image and the vehicle, the spliced images have a higher degree of fusion, the dislocation phenomenon is avoided, and the alignment effect is better.

Description

Vehicle panoramic image generation method and system and vehicle
Technical Field
The invention relates to the technical field of automobiles, in particular to a method and a system for generating a vehicle panoramic image and a vehicle.
Background
With the advent of the information age, automobiles are becoming more intelligent, which is mainly embodied in the vehicle-mounted terminals installed in vehicle cabs. A vehicle-mounted terminal can provide internet wireless communication, GPS navigation, audio and video entertainment and other functions, offering users a better driving experience. Among them, the vehicle panoramic image function is one of the most practical. It helps a user watch images around the vehicle on the vehicle display screen during parking, eliminating the blind areas around the vehicle and making parking more intuitive and convenient. However, a panoramic image is usually formed by splicing images collected by a plurality of cameras; during splicing, the common parts of the images shot by two adjacent cameras must undergo distortion correction and registration, which requires a calibration module. In the panoramic image, the calibration module may consist of a plurality of diamond-shaped areas. During calibration, the upper and lower vertices of each diamond are usually selected as standard points, and the lines connecting the upper and lower vertices of the diamonds are parallel, i.e., all vertical. When the images are spliced, each image needs to be rotated by a certain angle, which easily stretches the image in the horizontal direction, so that the matching is not accurate and the finally spliced image effect is poor.
Disclosure of Invention
The invention provides a method and a system for generating a vehicle panoramic image, and a vehicle, and aims to solve at least one of the technical problems described above.
The embodiment of the invention provides a method for generating a vehicle panoramic image, which comprises the following steps:
acquiring original images around the vehicle through a plurality of cameras;
determining a plurality of calibration areas based on the overlapping areas in the original images around the vehicle;
acquiring image pixel point transformation mathematical models of a plurality of calibration points in the calibration area;
transforming the original images around the vehicle, which are acquired by the plurality of cameras, into target images respectively according to the image pixel point transformation mathematical models of the plurality of calibration points; and
and splicing the target images into a panoramic image.
Optionally, the original images of the periphery of the vehicle include a front image, a rear image, a left image and a right image, and determining a plurality of calibration areas based on the overlapping areas in the original images around the vehicle includes:
acquiring a first overlapping area between the front image and the left image, and performing binarization processing on the first overlapping area to determine coordinate information of a first calibration area in the first overlapping area;
acquiring a second overlapping area between the front image and the right image, and performing binarization processing on the second overlapping area to determine coordinate information of a second calibration area in the second overlapping area;
acquiring a third overlapping area between the rear image and the left image, and performing binarization processing on the third overlapping area to determine coordinate information of a third calibration area in the third overlapping area;
and acquiring a fourth overlapping area between the rear image and the right image, and performing binarization processing on the fourth overlapping area to determine coordinate information of a fourth calibration area in the fourth overlapping area.
Optionally, obtaining an image pixel point transformation mathematical model of a plurality of calibration points in the calibration region includes:
determining a central point and at least two vertexes of the first calibration area as calibration points of the first calibration area, acquiring coordinates of the calibration points of the first calibration area, and generating a first image pixel point transformation mathematical model of a plurality of calibration points in the first calibration area according to the coordinates of the calibration points of the first calibration area;
determining a central point and at least two vertexes of the second calibration area as calibration points of the second calibration area, acquiring coordinates of the calibration points of the second calibration area, and generating a second image pixel point transformation mathematical model of a plurality of calibration points in the second calibration area according to the coordinates of the calibration points of the second calibration area;
determining a central point and at least two vertexes of the third calibration area as calibration points of the third calibration area, acquiring coordinates of the calibration points of the third calibration area, and generating a third image pixel point transformation mathematical model of a plurality of calibration points in the third calibration area according to the coordinates of the calibration points of the third calibration area;
determining a central point and at least two vertexes of the fourth calibration area as calibration points of the fourth calibration area, acquiring coordinates of the calibration points of the fourth calibration area, and generating a fourth image pixel point transformation mathematical model of a plurality of calibration points in the fourth calibration area according to the coordinates of the calibration points of the fourth calibration area.
Optionally, transforming the original images around the vehicle collected by the plurality of cameras into target images respectively according to the image pixel point transformation mathematical models of the plurality of calibration points includes:
converting the front image and the left image into a first image and a second image respectively by utilizing the first image pixel point transformation mathematical model, and splicing the first image and the second image at a preset angle to generate a first target image;
converting the front image and the right image into a third image and a fourth image respectively by utilizing the second image pixel point transformation mathematical model, and splicing the third image and the fourth image at a preset angle to generate a second target image;
converting the rear image and the left image into a fifth image and a sixth image respectively by utilizing the third image pixel point transformation mathematical model, and splicing the fifth image and the sixth image at a preset angle to generate a third target image;
and converting the rear image and the right image into a seventh image and an eighth image respectively by utilizing the fourth image pixel point transformation mathematical model, and splicing the seventh image and the eighth image at a preset angle to generate a fourth target image.
Optionally, stitching the target images into a panoramic image includes:
and splicing the first target image, the second target image, the third target image and the fourth target image to generate the panoramic image.
Optionally, the calibration area is a polygonal area.
Optionally, the method further comprises:
acquiring included angle information between the current position of the vehicle and a preset position;
and carrying out coordinate transformation on the coordinate information of the plurality of calibration areas according to the included angle information.
Another embodiment of the present invention provides a system for generating a panoramic image of a vehicle, including:
a plurality of cameras, wherein the plurality of cameras are used for acquiring original images around a vehicle;
the image receiving device is used for receiving original images of the periphery of the vehicle collected by the plurality of cameras;
the image transformation device is used for determining a plurality of calibration regions based on the overlapping regions in the original images around the vehicle, acquiring image pixel point transformation mathematical models of a plurality of calibration points in the calibration regions, and respectively transforming the original images around the vehicle collected by the plurality of cameras into target images according to the image pixel point transformation mathematical models of the plurality of calibration points; and
and the image splicing device is used for splicing the target images into panoramic images.
Optionally, the original images of the periphery of the vehicle include a front image, a rear image, a left image, and a right image, and the image transformation device is configured to:
acquiring a first overlapping area between the front image and the left image, and performing binarization processing on the first overlapping area to determine coordinate information of a first calibration area in the first overlapping area;
acquiring a second overlapping area between the front image and the right image, and performing binarization processing on the second overlapping area to determine coordinate information of a second calibration area in the second overlapping area;
acquiring a third overlapping area between the rear image and the left image, and performing binarization processing on the third overlapping area to determine coordinate information of a third calibration area in the third overlapping area;
and acquiring a fourth overlapping area between the rear image and the right image, and performing binarization processing on the fourth overlapping area to determine coordinate information of a fourth calibration area in the fourth overlapping area.
Optionally, the image transformation device is configured to:
determining a central point and at least two vertexes of the first calibration area as calibration points of the first calibration area, acquiring coordinates of the calibration points of the first calibration area, and generating a first image pixel point transformation mathematical model of a plurality of calibration points in the first calibration area according to the coordinates of the calibration points of the first calibration area;
determining a central point and at least two vertexes of the second calibration area as calibration points of the second calibration area, acquiring coordinates of the calibration points of the second calibration area, and generating a second image pixel point transformation mathematical model of a plurality of calibration points in the second calibration area according to the coordinates of the calibration points of the second calibration area;
determining a central point and at least two vertexes of the third calibration area as calibration points of the third calibration area, acquiring coordinates of the calibration points of the third calibration area, and generating a third image pixel point transformation mathematical model of a plurality of calibration points in the third calibration area according to the coordinates of the calibration points of the third calibration area;
determining a central point and at least two vertexes of the fourth calibration area as calibration points of the fourth calibration area, acquiring coordinates of the calibration points of the fourth calibration area, and generating a fourth image pixel point transformation mathematical model of a plurality of calibration points in the fourth calibration area according to the coordinates of the calibration points of the fourth calibration area.
Optionally, the image transformation device is configured to:
converting the front image and the left image into a first image and a second image respectively by utilizing the first image pixel point transformation mathematical model, and splicing the first image and the second image at a preset angle to generate a first target image;
converting the front image and the right image into a third image and a fourth image respectively by utilizing the second image pixel point transformation mathematical model, and splicing the third image and the fourth image at a preset angle to generate a second target image;
converting the rear image and the left image into a fifth image and a sixth image respectively by utilizing the third image pixel point transformation mathematical model, and splicing the fifth image and the sixth image at a preset angle to generate a third target image;
and converting the rear image and the right image into a seventh image and an eighth image respectively by utilizing the fourth image pixel point transformation mathematical model, and splicing the seventh image and the eighth image at a preset angle to generate a fourth target image.
Optionally, the image stitching device is configured to stitch the first target image, the second target image, the third target image and the fourth target image to generate the panoramic image.
Optionally, the calibration area is a polygonal area.
Optionally, the image transformation apparatus is further configured to:
and acquiring included angle information between the current position of the vehicle and a preset position, and performing coordinate transformation on the coordinate information of the plurality of calibration areas according to the included angle information.
The invention further provides a vehicle, which comprises the vehicle panoramic image generation system.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the method comprises the steps of acquiring original images around a vehicle through a plurality of cameras, determining a plurality of calibration areas based on overlapping areas in the original images around the vehicle, acquiring image pixel point transformation mathematical models of a plurality of calibration points in the calibration areas, respectively transforming the original images around the vehicle acquired by the plurality of cameras into target images according to the image pixel point transformation mathematical models of the plurality of calibration points, and splicing the target images into panoramic images, so that the spliced images are higher in fusion degree, the dislocation phenomenon is avoided, and the alignment effect is better.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a method for generating a panoramic image of a vehicle according to an embodiment of the present invention;
FIG. 2 is a schematic view showing the effect of image calibration of a vehicle in the related art;
FIG. 3 is a diagram illustrating the effect of determining a targeting region according to one embodiment of the present invention;
FIG. 4 is a diagram illustrating the effects of determining a mathematical model of a transformation, according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of the effect of coordinate transformation of a vehicle according to one embodiment of the invention;
fig. 6 is a block diagram of a system for generating a panoramic image of a vehicle according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a vehicle panoramic image generation method, a vehicle panoramic image generation system and a vehicle according to an embodiment of the present invention with reference to the drawings.
Fig. 1 is a flowchart of a method for generating a panoramic image of a vehicle according to an embodiment of the present invention.
As shown in fig. 1, the method for generating a vehicle panoramic image includes:
s101, acquiring original images of the periphery of the vehicle through a plurality of cameras.
At present, more and more automobiles are equipped with a vehicle panoramic image function. The function helps a user watch images around the vehicle on the vehicle display screen during parking, eliminating the blind areas around the vehicle and making parking more intuitive and convenient. Specifically, a plurality of cameras can be installed around the vehicle, images in different directions are collected by the cameras, and the images are then spliced to generate a panoramic image. As shown in fig. 2, the white rectangle at the center of the figure represents a vehicle, and six cameras are mounted on its periphery: front, rear, left front, left rear, right front and right rear. Images acquired by two adjacent cameras will usually have an overlapping area (stitching area), and a calibration area (white block in fig. 2) is selected in each overlapping area for calibration. During calibration, the upper and lower vertices of the calibration area are used as standard points, and the lines connecting the upper and lower vertices are parallel, i.e., all vertical in fig. 2. When the images are spliced, each image needs to be rotated by a certain angle, which easily stretches the image in the horizontal direction, so that the matching is not accurate and the finally spliced image effect is poor.
In order to solve the above problems, the present invention provides a method for generating a panoramic image of a vehicle, which can reduce the cost and improve the effect of generating a panoramic image.
In one embodiment of the invention, a plurality of cameras can be arranged around the vehicle, and original images around the vehicle are collected through the plurality of cameras. For example: arranging a camera in front of the vehicle to acquire images in front of the vehicle; arranging a camera behind the vehicle to acquire images behind the vehicle; arranging a camera on the left side of the vehicle to acquire an image of the left side of the vehicle; a camera is arranged on the right side of the vehicle to acquire images on the right side of the vehicle. Of course, the images in the four directions are not limited to being captured by four cameras, and may be captured using more cameras. The four cameras are adopted in the embodiment, so that the cost can be saved, and the calculation amount of processed images can be reduced.
S102, determining a plurality of calibration areas based on the overlapping areas in the original images around the vehicle.
The calibration area may be a polygonal area.
Specifically, a first overlapping area between the front image and the left image may be acquired, and binarization processing may be performed on the first overlapping area to determine coordinate information of a first calibration area in the first overlapping area, so as to determine the first calibration area. As shown in fig. 3, the diamond area at the upper left is the first calibration area.
Similarly, a second overlapping area between the front image and the right image can be obtained, and binarization processing is performed on the second overlapping area to determine coordinate information of a second calibration area in the second overlapping area, so that the second calibration area is determined. As shown in fig. 3, the diamond area at the upper right is the second calibration area.
Similarly, a third overlapping area between the rear image and the left image can be obtained, and binarization processing is performed on the third overlapping area to determine coordinate information of a third calibration area in the third overlapping area, so that the third calibration area is determined. As shown in fig. 3, the diamond-shaped area at the lower left is the third calibration area.
Similarly, a fourth overlapping area between the rear image and the right image can be obtained, and binarization processing is performed on the fourth overlapping area to determine coordinate information of a fourth calibration area in the fourth overlapping area, so that the fourth calibration area is determined. As shown in fig. 3, the diamond area at the lower right is the fourth calibration area.
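As a rough sketch of this binarization step (the function name, threshold value, and diamond test pattern are illustrative assumptions, not taken from the patent), the calibration marker's center point and upper and lower vertices could be located like this:

```python
import numpy as np

def find_calibration_area(overlap_gray, thresh=128):
    """Binarize an overlap region and locate the calibration marker.

    Returns the marker's center point and its top and bottom vertices
    as (row, col) pairs, mirroring the binarization step described
    above. All names and the threshold are illustrative assumptions.
    """
    binary = overlap_gray >= thresh              # binarization
    rows, cols = np.nonzero(binary)              # white-pixel coordinates
    if rows.size == 0:
        raise ValueError("no calibration marker found in overlap region")
    center = (rows.mean(), cols.mean())          # centroid of the marker
    top = (rows.min(), cols[rows.argmin()])      # upper vertex
    bottom = (rows.max(), cols[rows.argmax()])   # lower vertex
    return center, top, bottom

# Synthetic overlap region: a bright diamond on a dark background.
img = np.zeros((21, 21), dtype=np.uint8)
for r in range(21):
    half = 10 - abs(r - 10)                      # diamond half-width per row
    img[r, 10 - half:10 + half + 1] = 255
center, top, bottom = find_calibration_area(img)
print(center, top, bottom)
```

A production implementation would more likely use contour extraction on the thresholded image, but the centroid-plus-extrema idea is the same.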
S103, obtaining image pixel point transformation mathematical models of a plurality of calibration points in the calibration area.
Specifically, a center point and at least two vertexes of the first calibration region may be determined as calibration points of the first calibration region, coordinates of the calibration points of the first calibration region are obtained, and a first image pixel point transformation mathematical model of a plurality of calibration points in the first calibration region is generated according to the coordinates of the calibration points of the first calibration region.
Similarly, the central point and at least two vertexes of the second calibration region can be determined as calibration points of the second calibration region, coordinates of the calibration points of the second calibration region are obtained, and a second image pixel point transformation mathematical model of a plurality of calibration points in the second calibration region is generated according to the coordinates of the calibration points of the second calibration region.
Similarly, the central point and at least two vertexes of the third calibration region can be determined as calibration points of the third calibration region, coordinates of the calibration points of the third calibration region are obtained, and a third image pixel point transformation mathematical model of a plurality of calibration points in the third calibration region is generated according to the coordinates of the calibration points of the third calibration region.
Similarly, the center point and at least two vertexes of the fourth calibration region can be determined as the calibration points of the fourth calibration region, the coordinates of the calibration points of the fourth calibration region are obtained, and a fourth image pixel point transformation mathematical model of a plurality of calibration points in the fourth calibration region is generated according to the coordinates of the calibration points of the fourth calibration region.
The first calibration area on the upper left will be described in detail below as an example.
As shown in fig. 4, the white rectangle at the center of the drawing marks the center point of the vehicle, the length and width of the first calibration area are W and H respectively, the rotation angle of the first calibration area is θ, and the distance from the vertex A to the horizontal line through the center point, d_A, is known. Then the abscissa of point A is x_A = W/2 − d_A·tan θ and its ordinate is y_A = H/2 + d_A. The coordinates of point B can be obtained in the same way. Based on the coordinates of the center point, point A and point B, the corresponding transformation mathematical model can be determined; specifically, it can be expressed as formula one to formula three.
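The vertex relations x_A = W/2 − d_A·tan θ and y_A = H/2 + d_A can be checked numerically; the helper below is a sketch that assumes the coordinate origin sits at the vehicle center point:

```python
import math

def vertex_a(w, h, d_a, theta_deg):
    """Coordinates of vertex A of the calibration area, from the relations
    x_A = W/2 - d_A * tan(theta) and y_A = H/2 + d_A in the description.
    The origin at the vehicle center point is an assumption."""
    theta = math.radians(theta_deg)
    x_a = w / 2 - d_a * math.tan(theta)
    y_a = h / 2 + d_a
    return x_a, y_a

# With theta = 45 degrees, tan(theta) is 1, so x_A = W/2 - d_A.
print(vertex_a(100.0, 60.0, 10.0, 45.0))  # approximately (40.0, 40.0)
```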
The formula I is as follows: [rendered as an image in the original publication; not recoverable from this text]
The formula II is as follows: [rendered as an image in the original publication]
The formula III is as follows: [rendered as an image in the original publication]
wherein the symbols denote, in order: the abscissa before the transformation, the ordinate before the transformation, the transformed abscissa, the transformed ordinate, two error factors, the error function Δ, and the parameters.
To obtain a panoramic image with the best stitching effect, the images in the first overlapping area between the front image and the left image need to overlap exactly. That is, the stitching effect is best when the coordinate errors of the center point, point A and point B of the calibration area in the distortion-corrected front image and left image are smallest. The parameter a in formula one and formula two can therefore be adjusted by iterative correction until the error function Δ reaches its minimum, achieving the best stitching effect.
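As an illustration only of the "iterate the parameter until the error function Δ is minimal" idea — the patent's actual formulas one and two are images in the original publication and certainly differ — here is a toy scalar fit:

```python
import numpy as np

# Hypothetical stand-in for the patent's formulas: a single scale
# parameter `a` maps the front-image calibration points (center, A, B)
# toward the left-image ones. Points and targets are synthetic.
front_pts = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 1.0]])
left_pts = 1.5 * front_pts                     # synthetic targets

def delta(a):
    """Error function: sum of squared coordinate errors after transform."""
    return float(((a * front_pts - left_pts) ** 2).sum())

a, lr = 1.0, 0.05
for _ in range(200):                           # iterative correction of `a`
    grad = 2.0 * ((a * front_pts - left_pts) * front_pts).sum()
    a -= lr * grad                             # gradient step toward min delta
print(a, delta(a))
```

The real procedure would iterate several parameters of a distortion-correction model jointly, but the loop structure — adjust, evaluate Δ, repeat — is the same.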
Similarly, the second calibration region, the third calibration region and the fourth calibration region are processed in the above manner, and are not further described here.
And S104, converting the original images around the vehicle, which are acquired by the cameras, into target images respectively according to the image pixel point conversion mathematical model of the calibration points.
Specifically, a first image pixel point transformation mathematical model can be utilized to respectively convert the front image and the left image into a first image and a second image, and the first image and the second image are spliced at a preset angle to generate a first target image.
Similarly, the front image and the right image can be respectively converted into a third image and a fourth image by utilizing the second image pixel point transformation mathematical model, and the third image and the fourth image are spliced at a preset angle to generate a second target image.
Similarly, the third image pixel point transformation mathematical model can be utilized to respectively convert the rear image and the left image into a fifth image and a sixth image, and the fifth image and the sixth image are spliced at a preset angle to generate a third target image.
Similarly, the fourth image pixel point transformation mathematical model can be used for respectively converting the rear image and the right image into a seventh image and an eighth image, and the seventh image and the eighth image are spliced at a preset angle to generate a fourth target image.
The following description will proceed in detail by taking the front image and the left image as an example.
Firstly, the left image is rotated based on the transformation mathematical model, the center point, point A and point B of the calibration area in the left image are obtained through the transformation, and the three points are connected. Likewise, the center point, point A and point B of the calibration area in the front image are connected. The two images are then spliced so that the angle between the two connecting lines is 90 degrees. Further, the front image or the left image may be scaled equally so that their sizes match.
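A minimal sketch of the rotation used to make the two three-point connecting lines perpendicular (the points and angle below are illustrative, not taken from fig. 4):

```python
import math

def rotate(pt, angle, origin=(0.0, 0.0)):
    """Rotate a 2-D point about `origin` by `angle` radians (CCW)."""
    ox, oy = origin
    x, y = pt[0] - ox, pt[1] - oy
    c, s = math.cos(angle), math.sin(angle)
    return (ox + c * x - s * y, oy + s * x + c * y)

# Suppose the connecting line in both images initially lies along +x.
# Rotating the left image's points by -90 degrees makes its connecting
# line perpendicular to the front image's, as required when splicing.
front_line = [(0.0, 0.0), (1.0, 0.0)]
left_line = [rotate(p, math.radians(-90)) for p in front_line]
print(left_line)
```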
Similarly, images in other directions are processed in the same manner, and are not described herein again.
And S105, splicing the target images into a panoramic image.
Specifically, the first target image, the second target image, the third target image, and the fourth target image may be stitched to generate the panoramic image.
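As a toy layout of S105 (the patent's stitching at preset angles with seam fusion is more involved than this quadrant paste), the four target images can be composed onto a single canvas:

```python
import numpy as np

def stitch_panorama(tl, tr, bl, br):
    """Place four target images into the quadrants of one canvas.
    A naive layout sketch only; real stitching blends the seams."""
    top = np.hstack([tl, tr])
    bottom = np.hstack([bl, br])
    return np.vstack([top, bottom])

# Four flat 2x3 "target images" with distinct gray levels.
quads = [np.full((2, 3), v, dtype=np.uint8) for v in (10, 20, 30, 40)]
pano = stitch_panorama(*quads)
print(pano.shape)  # (4, 6)
```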
According to the method for generating the panoramic image of the vehicle, the original images around the vehicle are collected through the cameras, the calibration areas are determined based on the overlapping areas in the original images around the vehicle, the image pixel point transformation mathematical models of the calibration points in the calibration areas are obtained, the original images around the vehicle collected by the cameras are respectively transformed into the target images according to the image pixel point transformation mathematical models of the calibration points, and the target images are spliced into the panoramic image, so that the spliced images are high in fusion degree, the dislocation phenomenon is avoided, and the alignment effect is good.
In another embodiment of the invention, included angle information between the current position of the vehicle and a preset position can be further acquired, and coordinate transformation can then be performed on the coordinate information of the plurality of calibration areas according to the included angle information. In the previous embodiments, the vehicle was described as being located at the exact center of the figure, that is, at the preset position. In practice, the parking position of the vehicle may or may not coincide with the preset position; in other words, a certain included angle may exist between the current position of the vehicle and the preset position. Since the calibration areas correspond to the preset position, coordinate transformation is required before calibration if the vehicle is not parked at the preset position. As shown in fig. 5, the lower left corner of the preset position is the origin O, and the lower left corner of the current position of the actual vehicle is O'. The coordinate transformation between the two can be realized by formula four.
The formula four is as follows:
x = x'·cosθ - y'·sinθ + L1
y = x'·sinθ + y'·cosθ + L2
wherein x and y are the horizontal and vertical coordinates of a certain point in the coordinate system corresponding to the origin O, x' and y' are the horizontal and vertical coordinates of the same point in the coordinate system corresponding to the origin O', θ is the included angle between the two coordinate systems, and L1 and L2 are transformation parameters.
After the coordinate transformation, the step of determining the calibration area may be continued.
In practical application, the vehicle is therefore not required to coincide exactly with a preset fixed parking space; only a simple coordinate transformation is needed before the subsequent steps can continue, which makes the method more flexible and convenient.
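Formula four can be sketched as a small helper. This assumes the standard 2D rotation-plus-translation form implied by the variable definitions (θ the angle between the two coordinate systems, L1 and L2 the translation parameters); the function name and argument order are illustrative:

```python
import numpy as np

def to_preset_frame(x_p, y_p, theta_rad, l1, l2):
    """Map a point (x', y') given in the coordinate system of the
    vehicle's current position (origin O') into the coordinate system
    of the preset position (origin O), for frames related by a
    rotation theta and a translation (L1, L2)."""
    x = x_p * np.cos(theta_rad) - y_p * np.sin(theta_rad) + l1
    y = x_p * np.sin(theta_rad) + y_p * np.cos(theta_rad) + l2
    return x, y
```

With θ = 0 the mapping reduces to a pure shift by (L1, L2), matching the case where the vehicle is parked parallel to, but offset from, the preset position.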
In order to implement the above embodiments, the present invention further provides a system for generating a vehicle panoramic image. Fig. 6 is a block diagram of a system for generating a vehicle panoramic image according to an embodiment of the present invention; as shown in fig. 6, the system includes a plurality of cameras 610, an image receiving device 620, an image transformation device 630, and an image splicing device 640.
The cameras 610 are used for acquiring original images around the vehicle.
And the image receiving device 620 is used for receiving original images of the periphery of the vehicle collected by the plurality of cameras.
The image transformation device 630 is configured to determine a plurality of calibration regions based on an overlapping region in the original image around the vehicle, obtain image pixel transformation mathematical models of a plurality of calibration points in the calibration regions, and respectively transform the original image around the vehicle, which is acquired by the plurality of cameras, into a target image according to the image pixel transformation mathematical models of the plurality of calibration points.
And the image splicing device 640 is used for splicing the target images into the panoramic image.
It should be noted that the foregoing explanation of the method for generating a vehicle panoramic image is also applicable to the system for generating a vehicle panoramic image according to the embodiment of the present invention, and details not disclosed in the embodiment of the present invention are not repeated herein.
According to the system for generating the panoramic image of the vehicle, the original images around the vehicle are collected through the cameras, the calibration areas are determined based on the overlapping areas in the original images around the vehicle, the image pixel point transformation mathematical models of the calibration points in the calibration areas are obtained, the original images around the vehicle collected by the cameras are respectively transformed into the target images according to the image pixel point transformation mathematical models of the calibration points, and the target images are spliced into the panoramic image, so that the spliced images are high in fusion degree, the dislocation phenomenon is avoided, and the alignment effect is good.
In order to implement the above embodiment, the present invention further provides a vehicle having the system for generating a vehicle panoramic image according to the previous embodiment.
According to the vehicle provided by the embodiment of the invention, original images around the vehicle are acquired through the plurality of cameras, a plurality of calibration areas are determined based on the overlapping areas in the original images around the vehicle, image pixel point transformation mathematical models of a plurality of calibration points in the calibration areas are acquired, the original images around the vehicle acquired by the plurality of cameras are respectively transformed into target images according to the image pixel point transformation mathematical models of the plurality of calibration points, and the target images are spliced into the panoramic image, so that the spliced images are high in fusion degree, the dislocation phenomenon is avoided, and the alignment effect is good.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for realizing a logic function for a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware that is related to instructions of a program, and the program may be stored in a computer-readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (15)

1. A method for generating a vehicle panoramic image is characterized by comprising the following steps:
acquiring original images around the vehicle through a plurality of cameras;
determining a plurality of calibration areas based on the overlapping areas in the original images around the vehicle;
acquiring an image pixel point transformation mathematical model of a plurality of calibration points in the calibration area, wherein the plurality of calibration points comprise a central point and at least two vertexes of the calibration area, and the image pixel point transformation mathematical model is expressed as a formula I to a formula III:
the formula I is as follows:
[formula I: equation image not reproduced]
the formula II is as follows:
[formula II: equation image not reproduced]
the formula III is as follows:
[formula III: equation image not reproduced]
wherein x is the abscissa before transformation, y is the ordinate before transformation, x' is the transformed abscissa, y' is the transformed ordinate, k1 and k2 are error coefficients, f is an error function, and θ is a parameter;
transforming a mathematical model according to the image pixel points of the plurality of calibration points, and respectively transforming original images around the vehicle, which are acquired by the plurality of cameras, into target images; and
and splicing the target images into a panoramic image.
2. The method of claim 1, wherein the raw images of the surroundings of the vehicle include a front image, a rear image, a left image, and a right image, and wherein determining the plurality of calibration regions based on overlapping regions in the raw images of the surroundings of the vehicle comprises:
acquiring a first overlapping area between the front image and the left image, and performing binarization processing on the first overlapping area to determine coordinate information of a first calibration area in the first overlapping area;
acquiring a second overlapping area between the front image and the right image, and performing binarization processing on the second overlapping area to determine coordinate information of a second calibration area in the second overlapping area;
acquiring a third overlapping area between the rear image and the left image, and performing binarization processing on the third overlapping area to determine coordinate information of a third calibration area in the third overlapping area;
and acquiring a fourth overlapping area between the rear image and the right image, and performing binarization processing on the fourth overlapping area to determine coordinate information of a fourth calibration area in the fourth overlapping area.
3. The method of claim 2, wherein obtaining a mathematical model of image pixel point transformations for a plurality of calibration points in the calibration region comprises:
determining a central point and at least two vertexes of the first calibration area as calibration points of the first calibration area, acquiring coordinates of the calibration points of the first calibration area, and generating a first image pixel point transformation mathematical model of a plurality of calibration points in the first calibration area according to the coordinates of the calibration points of the first calibration area;
determining a central point and at least two vertexes of the second calibration area as calibration points of the second calibration area, acquiring coordinates of the calibration points of the second calibration area, and generating a second image pixel point transformation mathematical model of a plurality of calibration points in the second calibration area according to the coordinates of the calibration points of the second calibration area;
determining a central point and at least two vertexes of the third calibration area as calibration points of the third calibration area, acquiring coordinates of the calibration points of the third calibration area, and generating a third image pixel point transformation mathematical model of a plurality of calibration points in the third calibration area according to the coordinates of the calibration points of the third calibration area;
determining a central point and at least two vertexes of the fourth calibration area as calibration points of the fourth calibration area, acquiring coordinates of the calibration points of the fourth calibration area, and generating a fourth image pixel point transformation mathematical model of a plurality of calibration points in the fourth calibration area according to the coordinates of the calibration points of the fourth calibration area.
4. The method of claim 3, wherein transforming the original images of the vehicle surroundings collected by the plurality of cameras into the target images according to the mathematical model of image pixel point transformation of the plurality of calibration points comprises:
converting the front image and the left image into a first image and a second image respectively by utilizing the first image pixel point transformation mathematical model, and splicing the first image and the second image at a preset angle to generate a first target image;
converting the front image and the right image into a third image and a fourth image respectively by utilizing the second image pixel point transformation mathematical model, and splicing the third image and the fourth image at a preset angle to generate a second target image;
converting the rear image and the left image into a fifth image and a sixth image respectively by utilizing the third image pixel point transformation mathematical model, and splicing the fifth image and the sixth image at a preset angle to generate a third target image;
and converting the rear image and the right image into a seventh image and an eighth image respectively by utilizing the fourth image pixel point transformation mathematical model, and splicing the seventh image and the eighth image at a preset angle to generate a fourth target image.
5. The method of claim 4, wherein stitching the target images into a panoramic image comprises:
and splicing the first target image, the second target image, the third target image and the fourth target image to generate the panoramic image.
6. The method of claim 1, wherein the calibration area is a polygonal area.
7. The method of claim 2, further comprising:
acquiring included angle information between the current position of the vehicle and a preset position;
and carrying out coordinate transformation on the coordinate information of the plurality of calibration areas according to the included angle information.
8. A vehicle panoramic image generation system is characterized by comprising:
the system comprises a plurality of cameras, a plurality of image acquisition units and a plurality of image processing units, wherein the cameras are used for acquiring original images around a vehicle;
the image receiving device is used for receiving original images of the periphery of the vehicle collected by the plurality of cameras;
the image transformation device is used for determining a plurality of calibration regions based on the overlapping regions in the original images around the vehicle and acquiring image pixel point transformation mathematical models of a plurality of calibration points in the calibration regions, wherein the calibration points comprise a central point and at least two vertexes of the calibration regions, and the image pixel point transformation mathematical models are expressed as a formula I to a formula III:
the formula I is as follows:
[formula I: equation image not reproduced]
the formula II is as follows:
[formula II: equation image not reproduced]
the formula III is as follows:
[formula III: equation image not reproduced]
wherein x is the abscissa before transformation, y is the ordinate before transformation, x' is the transformed abscissa, y' is the transformed ordinate, k1 and k2 are error coefficients, f is an error function, and θ is a parameter; and the image transformation device is further configured to respectively transform the original images around the vehicle acquired by the plurality of cameras into target images according to the image pixel point transformation mathematical models of the plurality of calibration points; and
and the image splicing device is used for splicing the target images into panoramic images.
9. The system of claim 8, wherein the original images of the vehicle surroundings include a front image, a rear image, a left image, and a right image, and the image transformation means is configured to:
acquiring a first overlapping area between the front image and the left image, and performing binarization processing on the first overlapping area to determine coordinate information of a first calibration area in the first overlapping area;
acquiring a second overlapping area between the front image and the right image, and performing binarization processing on the second overlapping area to determine coordinate information of a second calibration area in the second overlapping area;
acquiring a third overlapping area between the rear image and the left image, and performing binarization processing on the third overlapping area to determine coordinate information of a third calibration area in the third overlapping area;
and acquiring a fourth overlapping area between the rear image and the right image, and performing binarization processing on the fourth overlapping area to determine coordinate information of a fourth calibration area in the fourth overlapping area.
10. The system of claim 9, wherein the image transformation means is for:
determining a central point and at least two vertexes of the first calibration area as calibration points of the first calibration area, acquiring coordinates of the calibration points of the first calibration area, and generating a first image pixel point transformation mathematical model of a plurality of calibration points in the first calibration area according to the coordinates of the calibration points of the first calibration area;
determining a central point and at least two vertexes of the second calibration area as calibration points of the second calibration area, acquiring coordinates of the calibration points of the second calibration area, and generating a second image pixel point transformation mathematical model of a plurality of calibration points in the second calibration area according to the coordinates of the calibration points of the second calibration area;
determining a central point and at least two vertexes of the third calibration area as calibration points of the third calibration area, acquiring coordinates of the calibration points of the third calibration area, and generating a third image pixel point transformation mathematical model of a plurality of calibration points in the third calibration area according to the coordinates of the calibration points of the third calibration area;
determining a central point and at least two vertexes of the fourth calibration area as calibration points of the fourth calibration area, acquiring coordinates of the calibration points of the fourth calibration area, and generating a fourth image pixel point transformation mathematical model of a plurality of calibration points in the fourth calibration area according to the coordinates of the calibration points of the fourth calibration area.
11. The system of claim 10, wherein the image transformation means is for:
converting the front image and the left image into a first image and a second image respectively by utilizing the first image pixel point transformation mathematical model, and splicing the first image and the second image at a preset angle to generate a first target image;
converting the front image and the right image into a third image and a fourth image respectively by utilizing the second image pixel point transformation mathematical model, and splicing the third image and the fourth image at a preset angle to generate a second target image;
converting the rear image and the left image into a fifth image and a sixth image respectively by utilizing the third image pixel point transformation mathematical model, and splicing the fifth image and the sixth image at a preset angle to generate a third target image;
and converting the rear image and the right image into a seventh image and an eighth image respectively by utilizing the fourth image pixel point transformation mathematical model, and splicing the seventh image and the eighth image at a preset angle to generate a fourth target image.
12. The system of claim 11, wherein the image splicing device is configured to:
splice the first target image, the second target image, the third target image, and the fourth target image to generate the panoramic image.
13. The system of claim 8, wherein the calibration area is a polygonal area.
14. The system of claim 9, wherein the image transformation device is further configured to:
and acquiring included angle information between the current position of the vehicle and a preset position, and performing coordinate transformation on the coordinate information of the plurality of calibration areas according to the included angle information.
15. A vehicle comprising a vehicle panoramic image generation system according to any one of claims 8 to 14.
CN201810380394.6A 2018-04-25 2018-04-25 Vehicle panoramic image generation method and system and vehicle Active CN110400255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810380394.6A CN110400255B (en) 2018-04-25 2018-04-25 Vehicle panoramic image generation method and system and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810380394.6A CN110400255B (en) 2018-04-25 2018-04-25 Vehicle panoramic image generation method and system and vehicle

Publications (2)

Publication Number Publication Date
CN110400255A CN110400255A (en) 2019-11-01
CN110400255B (en) 2022-03-15

Family

ID=68319973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810380394.6A Active CN110400255B (en) 2018-04-25 2018-04-25 Vehicle panoramic image generation method and system and vehicle

Country Status (1)

Country Link
CN (1) CN110400255B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956585B (en) * 2019-11-29 2020-09-15 深圳市英博超算科技有限公司 Panoramic image splicing method and device and computer readable storage medium
CN111428616B (en) * 2020-03-20 2023-05-23 东软睿驰汽车技术(沈阳)有限公司 Parking space detection method, device, equipment and storage medium
CN112380963B (en) * 2020-11-11 2024-05-31 东软睿驰汽车技术(沈阳)有限公司 Depth information determining method and device based on panoramic looking-around system
CN114399557A (en) * 2021-12-25 2022-04-26 深圳市全景达科技有限公司 Panoramic calibration method and system and computer equipment
CN115529414A (en) * 2022-08-31 2022-12-27 深圳市豪恩汽车电子装备股份有限公司 Image debugging device and method for motor vehicle panoramic image system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104751409A (en) * 2013-12-27 2015-07-01 比亚迪股份有限公司 Auto panoramic image calibration system, forming system and calibration method
CN105635551A (en) * 2014-10-29 2016-06-01 浙江大华技术股份有限公司 Method of dome camera for generating panoramic image, and dome camera

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9036000B1 (en) * 2011-09-27 2015-05-19 Google Inc. Street-level imagery acquisition and selection
CN203759754U (en) * 2014-04-03 2014-08-06 深圳市德赛微电子技术有限公司 Vehicular panoramic imaging system calibrator

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN104751409A (en) * 2013-12-27 2015-07-01 比亚迪股份有限公司 Auto panoramic image calibration system, forming system and calibration method
CN105635551A (en) * 2014-10-29 2016-06-01 浙江大华技术股份有限公司 Method of dome camera for generating panoramic image, and dome camera

Also Published As

Publication number Publication date
CN110400255A (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN110400255B (en) Vehicle panoramic image generation method and system and vehicle
CN106799993B (en) Streetscape acquisition method and system and vehicle
US10434877B2 (en) Driver-assistance method and a driver-assistance apparatus
CN109741455B (en) Vehicle-mounted stereoscopic panoramic display method, computer readable storage medium and system
CN109948398B (en) Image processing method for panoramic parking and panoramic parking device
JP5739584B2 (en) 3D image synthesizing apparatus and method for visualizing vehicle periphery
CN102045546B (en) Panoramic parking assist system
JP5455124B2 (en) Camera posture parameter estimation device
CN106856000B (en) Seamless splicing processing method and system for vehicle-mounted panoramic image
KR101339121B1 (en) An apparatus for generating around view image of vehicle using multi look-up table
CN111223038A (en) Automatic splicing method and display device for vehicle-mounted all-around images
TWI578271B (en) Dynamic image processing method and dynamic image processing system
CN110288527B (en) Panoramic aerial view generation method of vehicle-mounted panoramic camera
CN111768332B (en) Method for splicing vehicle-mounted panoramic real-time 3D panoramic images and image acquisition device
CN103065318B (en) The curved surface projection method and device of multiple-camera panorama system
CN110139084A (en) Vehicle periphery image treatment method and device
KR20090078463A (en) Distorted image correction apparatus and method
CN113658262B (en) Camera external parameter calibration method, device, system and storage medium
KR20110067437A (en) Apparatus and method for processing image obtained by a plurality of wide angle camera
CN110264395A (en) A kind of the camera lens scaling method and relevant apparatus of vehicle-mounted monocular panorama system
CN111652937A (en) Vehicle-mounted camera calibration method and device
CN115936995A (en) Panoramic splicing method for four-way fisheye cameras of vehicle
CN111815752A (en) Image processing method and device and electronic equipment
CN115439548A (en) Camera calibration method, image splicing method, device, medium, camera and vehicle
CN116228535A (en) Image processing method and device, electronic equipment and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant