CN112215917A - Vehicle-mounted panorama generation method, device and system - Google Patents

Vehicle-mounted panorama generation method, device and system

Info

Publication number
CN112215917A
Authority
CN
China
Prior art keywords
vehicle
image
spliced
images
mounted camera
Prior art date
Legal status
Pending
Application number
CN201910615707.6A
Other languages
Chinese (zh)
Inventor
王泽文
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910615707.6A
Publication of CN112215917A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/60 - Editing figures and text; Combining figures or text

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application provides a vehicle-mounted panorama generation method, device and system. The method comprises the following steps: acquiring images of the surroundings of the vehicle captured at the same time by each vehicle-mounted camera, wherein there are at least three vehicle-mounted cameras, arranged respectively on the left side, the right side and the tail of the vehicle; and splicing the acquired frames of images into a vehicle-mounted panorama and displaying it on the rearview mirror inside the vehicle. Because at least three vehicle-mounted cameras collect images of the left side, the right side and the rear of the vehicle, and the collected images are spliced into a single complete panorama with a large field of view that is output and displayed directly on the inside rearview mirror, the driver can observe the key blind zones around the vehicle (such as the A-pillar and B-pillar blind zones), which helps to ensure driving safety.

Description

Vehicle-mounted panorama generation method, device and system
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a system for generating a vehicle-mounted panorama.
Background
The vehicle rearview mirror is one of the important ways for a driver to indirectly obtain a field of view, but the view through the rearview mirror is sometimes blocked by objects such as rear passengers, the rear-seat headrests and the rear bumper, which prevents the driver from seeing the scene behind the vehicle.
At present, a single vehicle-mounted camera arranged at the tail of the vehicle is used to capture a view of the scene behind the vehicle, and the captured view is displayed on a display screen so that the driver can watch the scene behind the vehicle in real time.
However, the field of view that can be captured by a single vehicle-mounted camera at the tail of the vehicle is limited, and the driver still cannot observe the blind zones on the left and right sides of the vehicle, such as the A-pillar and B-pillar blind zones.
Disclosure of Invention
In view of this, the present application provides a vehicle-mounted panorama generation method to solve the problem of the limited field of view observable by the driver.
According to a first aspect of an embodiment of the present application, a vehicle-mounted panorama generation method is provided, where the method includes:
acquiring images around the vehicle, which are acquired by each vehicle-mounted camera at the same time, wherein the number of the vehicle-mounted cameras is at least three, and the vehicle-mounted cameras are respectively arranged on the left side and the right side of the vehicle and the tail of the vehicle;
and splicing the acquired frames of images into a vehicle-mounted panoramic image and displaying the panoramic image on a rearview mirror inside the vehicle.
According to a second aspect of embodiments of the present application, there is provided an in-vehicle panorama generating system, the system including:
the vehicle-mounted cameras are used for acquiring images around the vehicle, and at least three vehicle-mounted cameras are arranged on the left side and the right side of the vehicle and the tail of the vehicle respectively;
the processor is used for acquiring images around the vehicle, acquired by the vehicle-mounted camera at the same time, splicing the acquired images into a vehicle-mounted panoramic image and sending the vehicle-mounted panoramic image to a rearview mirror in the vehicle;
and the rearview mirror is used for displaying the vehicle-mounted panoramic image.
According to a third aspect of the embodiments of the present application, there is provided an in-vehicle panorama generating apparatus, the apparatus including:
the acquisition module is used for acquiring images around the vehicle, which are acquired by each vehicle-mounted camera at the same time, wherein the number of the vehicle-mounted cameras is at least three, and the vehicle-mounted cameras are respectively arranged on the left side and the right side of the vehicle and at the tail of the vehicle;
the splicing module is used for splicing the acquired images into a vehicle-mounted panoramic image;
and the display module is used for displaying the vehicle-mounted panoramic picture on a rearview mirror in the vehicle.
By applying the embodiments of the application, images of the surroundings of the vehicle captured at the same time by the vehicle-mounted cameras are acquired, where there are at least three vehicle-mounted cameras arranged on the left side, the right side and the tail of the vehicle respectively, and the acquired frames of images are then spliced into a vehicle-mounted panorama and displayed on the rearview mirror inside the vehicle.
Based on the above description, at least three vehicle-mounted cameras collect images of the left side, the right side and the rear of the vehicle, and the collected images are spliced into a single complete panorama with a relatively large field of view that is output and displayed directly on the inside rearview mirror, so that the driver can observe the key blind zones around the vehicle (such as the A-pillar and B-pillar blind zones), which helps to ensure driving safety.
Drawings
FIG. 1A is a block diagram of an in-vehicle panorama generation system according to an exemplary embodiment of the present application;
FIG. 1B is a schematic diagram of an onboard camera arrangement shown in the embodiment of FIG. 1A according to the present application;
FIG. 2A is a flowchart illustrating an embodiment of a method for generating a vehicle-mounted panorama according to an exemplary embodiment of the present application;
FIG. 2B is a diagram illustrating a relationship between a virtual viewpoint and a viewing plane according to the embodiment shown in FIG. 2A;
FIG. 2C is a schematic diagram illustrating an Euler angle rotation of the present application according to the embodiment of FIG. 2A;
fig. 2D is a schematic diagram of a splicing relationship of to-be-spliced subgraphs according to the embodiment shown in fig. 2A;
FIG. 2E is a graph illustrating the effect of stitching according to the embodiment of FIG. 2A;
FIG. 2F is a schematic diagram illustrating a process for generating a vehicle-mounted panorama according to the embodiment shown in FIG. 2A;
fig. 3 is a block diagram of an in-vehicle panorama generating apparatus according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
Because the approach currently adopted, in which a vehicle-mounted camera is arranged only at the tail of the vehicle, gives the driver a limited field of view, and in particular the A-pillar and B-pillar blind zones on the left and right sides of the vehicle cannot be observed, driving safety cannot be ensured by this approach.
To solve the above problem, the application provides a vehicle-mounted panorama generation method: images of the surroundings of the vehicle captured at the same time by the vehicle-mounted cameras are acquired, where there are at least three vehicle-mounted cameras arranged on the left side, the right side and the tail of the vehicle respectively, and the acquired frames of images are then spliced into a vehicle-mounted panorama and displayed on the rearview mirror inside the vehicle.
Based on the above description, at least three vehicle-mounted cameras collect images of the left side, the right side and the rear of the vehicle, and the collected images are spliced into a single complete panorama with a relatively large field of view that is output and displayed directly on the inside rearview mirror, so that the driver can observe the key blind zones around the vehicle (such as the A-pillar and B-pillar blind zones), which helps to ensure driving safety.
Fig. 1A is a block diagram of a vehicle-mounted panorama generating system according to an exemplary embodiment of the present application, which includes at least three vehicle-mounted cameras (the left, right and rear vehicle-mounted cameras shown in fig. 1A), a processor and a rearview mirror. The three vehicle-mounted cameras are arranged on the left side, the right side and the tail of the vehicle respectively and capture images of the left, the right and the rear of the vehicle, which together cover the surroundings of the vehicle. The processor acquires the images captured by the vehicle-mounted cameras at the same time, splices the acquired images into a vehicle-mounted panorama and sends it to the rearview mirror inside the vehicle. The rearview mirror displays the received vehicle-mounted panorama.
In order to enlarge the rear view observable by the driver, additional vehicle-mounted cameras, namely the left and right vehicle-mounted cameras, are mounted on the left and right sides of the vehicle to capture the key blind zones on both sides. In addition, displaying the spliced vehicle-mounted panorama on the inside rearview mirror matches the driver's habits better than displaying it on a separate display screen.
For example, the rear view mirror in the vehicle may be a streaming media rear view mirror, or may also be a vehicle data recorder, which is not limited in this application.
As shown in fig. 1B, the rear vehicle-mounted camera is disposed at the rear of the vehicle, the left vehicle-mounted camera is disposed on the left side rearview mirror of the vehicle, the right vehicle-mounted camera is disposed on the right side rearview mirror of the vehicle, images around the vehicle can be collected by the three vehicle-mounted cameras, and the spliced vehicle-mounted panoramic image is displayed by the streaming media rearview mirror in the vehicle.
Fig. 2A is a flowchart of an embodiment of a vehicle-mounted panorama generating method according to an exemplary embodiment of the present application, where the vehicle-mounted panorama generating method may be applied to a processor in the system shown in fig. 1A, as shown in fig. 2A, and the vehicle-mounted panorama generating method includes the following steps:
step 201: and acquiring images around the vehicle, which are acquired by the vehicle-mounted cameras at the same time.
In an embodiment, the image captured at the current moment may be read from each vehicle-mounted camera at a preset time interval, and the images read at the same moment are used as the images around the vehicle for subsequent splicing.
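To make the acquisition step concrete, the following is a minimal sketch (not the patent's implementation) of reading one frame from each of three cameras at approximately the same moment with OpenCV in Python; the device indices, the camera order and the helper name are assumptions for illustration.

```python
# Minimal sketch of step 201: grab one frame from each camera at (nearly) the same time.
# Device indices 0-2 and their left/rear/right ordering are illustrative assumptions.
import cv2

cams = [cv2.VideoCapture(i) for i in (0, 1, 2)]  # assumed order: left, rear, right

def grab_synchronized_frames():
    # grab() latches the next frame on every camera first, so the following
    # retrieve() calls return images captured as close in time as possible
    for cam in cams:
        cam.grab()
    frames = []
    for cam in cams:
        ok, frame = cam.retrieve()
        if not ok:
            return None
        frames.append(frame)
    return frames  # [left_image, rear_image, right_image]
```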
Step 202: and splicing the acquired images into a vehicle-mounted panoramic image to be displayed on a rearview mirror in the vehicle.
Before step 202 is executed, since the relative positions of the vehicle-mounted cameras mounted on the vehicle do not change over time, the splicing mapping table of each vehicle-mounted camera can be determined and stored in advance, that is, the coordinate position in the sub-image to be spliced that corresponds to each pixel point in the image captured by the vehicle-mounted camera is determined beforehand, which saves computation during subsequent image splicing and improves the generation efficiency of the vehicle-mounted panorama.
The splicing mapping table for each vehicle-mounted camera may be determined as follows: determine the internal parameters and external parameters of the vehicle-mounted camera; obtain the homography matrix corresponding to the vehicle-mounted camera from the internal and external parameters; obtain the perspective transformation matrix corresponding to the vehicle-mounted camera from a preset pitch angle, where the preset pitch angle refers to the viewing angle corresponding to the viewing plane onto which the image is projected in front of the human eye; and then, according to the homography matrix and the perspective transformation matrix, determine the coordinate position in the sub-image to be spliced that corresponds to each pixel point in the image captured by the vehicle-mounted camera, and store it in the splicing mapping table corresponding to that camera.
The homography matrix converts the image captured by the vehicle-mounted camera into a top view whose viewing plane is the ground, and the preset pitch angle converts the top view onto a viewing plane perpendicular to the ground whose viewpoint is behind the vehicle, which makes it convenient for the driver to observe the outside environment from a comfortable angle. As shown in fig. 2B, the virtual viewpoint is behind the vehicle, and the viewing plane perpendicular to the virtual optical axis of the virtual viewpoint is perpendicular to the ground, so the image can be converted onto the viewing plane with the rear of the vehicle as the viewpoint through the splicing mapping table obtained from the homography matrix and the perspective transformation matrix.
For example, the internal parameters are the fixed parameters of the camera itself, such as the focal length, the principal point coordinates and the distortion coefficients, while the external parameters describe the camera's pose, such as its yaw, roll and pitch angles, and therefore differ according to each camera's mounting orientation. The intrinsic matrix is obtained from the internal parameters and the extrinsic matrix from the external parameters; the product of the two is the homography matrix corresponding to the camera, through which the image captured by the camera can be projected onto the ground plane.
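As an illustration of the relationship just described, the sketch below assembles a ground-plane (z = 0) homography from an intrinsic matrix K and an extrinsic rotation R and translation t under the usual pinhole model. The patent only states that the homography is the product of the intrinsic and extrinsic matrices; the construction below is one common way to realise that, and every numeric value is a placeholder rather than a calibration result from the patent.

```python
# A sketch, under the pinhole / ground-plane (z = 0) model, of building the homography
# from intrinsics K and extrinsics (R, t). Placeholder values only.
import numpy as np

K = np.array([[800.0,   0.0, 640.0],    # fx,  0, cx
              [  0.0, 800.0, 360.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                            # camera rotation from extrinsic calibration
t = np.array([[0.0], [1.2], [0.0]])      # camera translation in metres (placeholder)

# For points on the plane z = 0, the 3x4 projection K[R | t] collapses to a 3x3
# homography built from the first two columns of R and the translation column.
H = K @ np.hstack([R[:, :1], R[:, 1:2], t])
H /= H[2, 2]                             # normalise the homogeneous scale
# Depending on the chosen convention, the image-to-ground projection is this matrix
# or its inverse; the patent does not fix the direction, so treat this as a sketch.
```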
The setting of the preset pitch angle is described in detail below:
as shown in fig. 2C, in euler angles, a commonly used rotation sequence is pitch-yaw-roll (pitch-yaw-roll), that is, a pitch angle pitch is obtained by rotating around an X axis, a roll angle roll is obtained by rotating around a Y axis after rotation, and a yaw angle yaw is obtained by rotating around a Z axis after rotation.
Rotating by the angle pitch around the X axis gives the rotation matrix:
$$PITCH = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(pitch) & -\sin(pitch) \\ 0 & \sin(pitch) & \cos(pitch) \end{bmatrix} \quad \text{(Formula 1)}$$
Rotating by the angle yaw around the Y axis gives the rotation matrix:
$$YAW = \begin{bmatrix} \cos(yaw) & 0 & \sin(yaw) \\ 0 & 1 & 0 \\ -\sin(yaw) & 0 & \cos(yaw) \end{bmatrix} \quad \text{(Formula 2)}$$
Rotating by the angle roll around the Z axis gives the rotation matrix:
$$ROLL = \begin{bmatrix} \cos(roll) & -\sin(roll) & 0 \\ \sin(roll) & \cos(roll) & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{(Formula 3)}$$
the perspective transformation matrix can be obtained from the above pitch-yaw-roll rotation sequence as follows:
T = PITCH × YAW × ROLL    (Formula 4)
The sub-image to be spliced obtained through the homography matrix is equivalent to projecting the image captured by the camera onto the ground. In order to project the sub-image to be spliced onto the viewing plane seen by the human eye, that is, a viewing plane whose viewpoint is behind the vehicle and which is perpendicular to the ground plane, in accordance with human perception, the sub-image to be spliced further needs to be rotated 90 degrees counterclockwise about the X axis. That is, the preset pitch angle is set to -90 degrees: substituting pitch = -90 degrees, yaw = 0 degrees and roll = 0 degrees into Formulas (1), (2) and (3) gives each element of the perspective transformation matrix T, which projects the image from the ground plane onto the viewing plane perpendicular to the ground plane.
It should be noted that the rotation part of the homography matrix is likewise obtained from Formula (4) above, except that the pitch, yaw and roll substituted into Formulas (1), (2) and (3) are the actual pitch, yaw and roll angles of the camera after it is mounted.
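For reference, Formulas (1) to (4) can be written out directly. The sketch below builds the three rotation matrices in numpy and substitutes pitch = -90 degrees, yaw = 0 and roll = 0 to obtain the perspective transformation matrix T described above; it is a worked example of the formulas, not code from the application.

```python
# Formulas (1)-(4) in numpy. With pitch = -90 deg, yaw = roll = 0, T rotates the
# ground-plane view onto a plane perpendicular to the ground.
import numpy as np

def pitch_matrix(a):   # rotation about the X axis, Formula (1)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def yaw_matrix(a):     # rotation about the Y axis, Formula (2)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def roll_matrix(a):    # rotation about the Z axis, Formula (3)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

pitch, yaw, roll = np.deg2rad(-90.0), 0.0, 0.0
T = pitch_matrix(pitch) @ yaw_matrix(yaw) @ roll_matrix(roll)   # Formula (4)
```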
For example, the formula for determining the coordinate position of each pixel point in the image corresponding to the sub-image to be stitched according to the homography matrix and the perspective transformation matrix may be:
$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \sim T \cdot H \cdot \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \quad \text{(Formula 5)}$$
where "∼" denotes equality up to the common homogeneous scale factor, (x, y) denotes the coordinates in the sub-image to be spliced, (x', y') denotes the coordinates in the original image with (x', y') ∈ [(0, 0), (m-1, n-1)], H denotes the homography matrix, and T denotes the perspective transformation matrix.
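The splicing mapping table itself can be illustrated as follows: for every destination pixel (x, y) of the sub-image to be spliced, the table stores the source coordinate (x', y') in the original camera image, so each incoming frame needs only one lookup pass. This is a hedged sketch using OpenCV's remap; the sub-image size is an assumed placeholder, not a value taken from the application.

```python
# Build the per-camera splicing mapping table implied by Formula (5) and apply it
# with a single cv2.remap call per frame. The destination size (w, h) is assumed.
import cv2
import numpy as np

def build_mapping_table(H, T, dst_size=(1280, 720)):
    w, h = dst_size
    M_inv = np.linalg.inv(T @ H)                    # destination -> source mapping
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    ones = np.ones_like(xs)
    dst = np.stack([xs, ys, ones], axis=-1).reshape(-1, 3).T.astype(np.float64)
    src = M_inv @ dst
    src /= src[2]                                   # homogeneous normalisation
    map_x = src[0].reshape(h, w).astype(np.float32)
    map_y = src[1].reshape(h, w).astype(np.float32)
    return map_x, map_y                             # the "splicing mapping table"

def warp_with_table(original, map_x, map_y):
    # Looks up every destination pixel's source coordinate and interpolates it.
    return cv2.remap(original, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```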
In an embodiment, after obtaining the stitching mapping table corresponding to each vehicle-mounted camera, the obtained frames of images may be stitched into a vehicle-mounted panorama, and the stitching process may be: and for each vehicle-mounted camera, converting the image acquired by the vehicle-mounted camera into sub-images to be spliced according to the splicing mapping table corresponding to the vehicle-mounted camera, and splicing the sub-images to be spliced obtained by conversion into a vehicle-mounted panoramic image.
The converted sub-images to be spliced and the vehicle-mounted panorama lie in the same image coordinate system, namely the coordinate system of the viewing plane corresponding to the preset pitch angle.
In an embodiment, the process of splicing the converted frames of sub-images to be spliced into the vehicle-mounted panorama is as follows: the frames of sub-images to be spliced are sorted in a specified order, the sorted frames are then merged in sequence, and the overlapping regions between them are fused to obtain the vehicle-mounted panorama. An overlapping region refers to a region formed by pixel points that have the same coordinates in two adjacent sub-images to be spliced.
For example, the arrangement order of the subgraphs to be spliced can be the order of the left side, the rear side and the right side of the vehicle, as shown in fig. 2D, the left sub-view is the subgraph to be spliced on the left side of the vehicle, the rear sub-view is the subgraph to be spliced on the rear side of the vehicle, and the right sub-view is the subgraph to be spliced on the right side of the vehicle.
It can be understood by those skilled in the art that the process of fusing overlapping regions can be implemented by related technologies, such as a weighted fusion method, which is not limited in the present application.
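As one concrete example of such weighted fusion, the sketch below blends two already-warped sub-images across their overlapping columns with a linear weight ramp. The assumption that both sub-images share the panorama coordinate system and size, the use of three-channel colour images, and the column indices are illustrative only.

```python
# Weighted (linear-ramp) fusion of the overlap between a left sub-image and a rear
# sub-image that are already in the common panorama coordinate system (H x W x 3).
import numpy as np

def blend_overlap(left_sub, rear_sub, overlap_start, overlap_end):
    out = left_sub.copy()
    width = overlap_end - overlap_start
    # weight falls from 1.0 (pure left sub-image) to 0.0 (pure rear sub-image)
    alpha = np.linspace(1.0, 0.0, width).reshape(1, width, 1)
    out[:, overlap_start:overlap_end] = (
        alpha * left_sub[:, overlap_start:overlap_end]
        + (1.0 - alpha) * rear_sub[:, overlap_start:overlap_end]
    ).astype(left_sub.dtype)
    out[:, overlap_end:] = rear_sub[:, overlap_end:]   # right of the overlap: rear view
    return out
```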
As shown in fig. 2E, a splicing effect diagram is obtained by splicing the subgraph to be spliced on the left side of the vehicle, the subgraph to be spliced on the rear side of the vehicle and the subgraph to be spliced on the right side of the vehicle with the rear side of the vehicle as a viewpoint.
Based on the above description, in existing implementations the driver views an image whose viewpoint is the camera lens, which makes it difficult to observe the outside environment from a comfortable angle. The present application transforms the viewpoint according to the driver's habits, that is, it transforms the viewing angle of the images so that they are projected onto a viewing plane that matches human perception before splicing, so the driver can view the outside environment from a comfortable angle.
As for the process from step 201 to step 202, as shown in fig. 2F, the vehicle-mounted panorama generating process is: before splicing, calibrating three vehicle-mounted cameras arranged on a vehicle to obtain respective corresponding splicing mapping tables, then converting images acquired by the three vehicle-mounted cameras by using the respective corresponding splicing mapping tables to obtain corresponding subgraphs to be spliced, and further splicing the three frames of subgraphs to be spliced according to the sequence of left, back and right to obtain a vehicle-mounted panorama.
It should be noted that, in generating the vehicle-mounted panorama, some detail in each frame of image used for splicing is inevitably lost. To let the driver see more detail, after the acquired frames of images have been spliced into the vehicle-mounted panorama and displayed on the rearview mirror inside the vehicle, spliced images emphasizing different angles can be switched in and displayed according to the user's needs, so that the driver can see more detail at a particular angle.
The switching display may be implemented as follows: when a first switching instruction indicating that the left side of the vehicle is to be displayed with emphasis is received, a spliced image of the image captured by the vehicle-mounted camera on the left side of the vehicle and the image captured by the vehicle-mounted camera at the tail of the vehicle is displayed on the rearview mirror; when a second switching instruction indicating that the right side of the vehicle is to be displayed with emphasis is received, a spliced image of the image captured by the vehicle-mounted camera on the right side of the vehicle and the image captured by the vehicle-mounted camera at the tail of the vehicle is displayed on the rearview mirror; and when a third switching instruction indicating that the blind zone under the vehicle is to be displayed with emphasis is received, the pixel information of the under-vehicle region in the vehicle-mounted panorama is determined from historical frames, the under-vehicle region is filled with this pixel information, and the filled vehicle-mounted panorama is displayed on the rearview mirror.
The spliced image emphasizing the right side of the vehicle is the spliced image of the sub-image to be spliced corresponding to the image captured by the vehicle-mounted camera on the right side of the vehicle and the sub-image to be spliced corresponding to the image captured by the vehicle-mounted camera at the tail of the vehicle; the spliced image emphasizing the left side of the vehicle is the spliced image of the sub-image to be spliced corresponding to the image captured by the vehicle-mounted camera on the left side of the vehicle and the sub-image to be spliced corresponding to the image captured by the vehicle-mounted camera at the tail of the vehicle.
For example, the first switching instruction may be generated when the vehicle turns left, the second switching instruction may be generated when the vehicle turns right, and the third switching instruction may be generated when the vehicle reverses. Of course, a fourth switching instruction may also be triggered to display the full vehicle-mounted panorama on the rearview mirror again when the vehicle is driving forward.
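A possible mapping from vehicle signals to the switching instructions described above is sketched below; the signal names and mode labels are hypothetical placeholders, not an interface defined by the application.

```python
# Hedged sketch: turn indicators and reverse gear select the display mode
# corresponding to the first, second, third and fourth switching instructions.
def select_display_mode(turning_left, turning_right, reversing):
    if turning_left:
        return "left_emphasis"      # first instruction: left + rear cameras spliced
    if turning_right:
        return "right_emphasis"     # second instruction: right + rear cameras spliced
    if reversing:
        return "underbody_filled"   # third instruction: fill the under-vehicle blind zone
    return "full_panorama"          # fourth instruction: normal three-camera panorama
```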
The process of determining the pixel information of the vehicle bottom area in the vehicle-mounted panorama according to the historical frame can be realized by adopting a related technology, and the application is not limited herein.
In the embodiments of the application, images of the surroundings of the vehicle captured at the same time by the vehicle-mounted cameras are acquired, where there are at least three vehicle-mounted cameras arranged on the left side, the right side and the tail of the vehicle respectively, and the acquired images are then spliced into a vehicle-mounted panorama and displayed on the rearview mirror inside the vehicle.
Based on the above description, at least three vehicle-mounted cameras collect images of the left side, the right side and the rear of the vehicle, and the collected images are spliced into a single complete panorama with a relatively large field of view that is output and displayed directly on the inside rearview mirror, so that the driver can observe the key blind zones around the vehicle (such as the A-pillar and B-pillar blind zones), which helps to ensure driving safety.
Fig. 3 is a block diagram of a vehicle-mounted panorama generating apparatus according to an exemplary embodiment of the present application, which may be applied to the processor in the system shown in fig. 1A. As shown in fig. 3, the apparatus includes:
the acquisition module 310 is configured to acquire images around the vehicle, which are acquired by each vehicle-mounted camera at the same time, where the number of the vehicle-mounted cameras is at least three, and the three vehicle-mounted cameras are respectively arranged at the left side, the right side and the tail of the vehicle;
the splicing module 320 is used for splicing the acquired frames of images into a vehicle-mounted panoramic image;
and the display module 330 is configured to display the vehicle-mounted panorama on a rearview mirror in the vehicle.
In an optional implementation manner, the stitching module 320 is specifically configured to, for each vehicle-mounted camera, convert an image acquired by the vehicle-mounted camera into a to-be-stitched sub-image according to a predetermined stitching mapping table corresponding to the vehicle-mounted camera, where the stitching mapping table records a coordinate position of each pixel point in the image corresponding to the to-be-stitched sub-image; splicing the frames of sub-images to be spliced obtained by conversion into a vehicle-mounted panoramic image; and the sub-images to be spliced of each frame are positioned in the same image coordinate system.
In an alternative implementation, the apparatus further comprises (not shown in fig. 3):
the splicing table establishing module is used for determining internal parameters and external parameters of the vehicle-mounted camera; acquiring a homography matrix corresponding to the vehicle-mounted camera according to the internal parameters and the external parameters; acquiring a perspective transformation matrix corresponding to the vehicle-mounted camera according to a preset pitch angle, wherein the preset pitch angle refers to a visual angle corresponding to a visual plane for projecting an image to the front of human eyes; and determining the coordinate position of each pixel point in the image acquired by the vehicle-mounted camera corresponding to the sub-image to be spliced according to the homography matrix and the perspective transformation matrix, and storing the coordinate position into a splicing mapping table corresponding to the vehicle-mounted camera.
In an optional implementation manner, the splicing module 320 is specifically configured to sort the frames of the sub-images to be spliced according to a specified order in a process of splicing the frames of the sub-images to be spliced obtained through conversion into the vehicle-mounted panoramic image; sequentially merging the sequenced frames of sub-images to be spliced and fusing overlapping areas in the frames of sub-images to be spliced to obtain a vehicle-mounted panoramic image; the overlapping area refers to an area formed by pixel points with the same coordinate information in two adjacent sub-images to be spliced.
In an alternative implementation, the apparatus further comprises (not shown in fig. 3):
the switching display module is configured to display, on the rearview mirror, a spliced image of the image captured by the vehicle-mounted camera on the left side of the vehicle and the image captured by the vehicle-mounted camera at the tail of the vehicle when a first switching instruction indicating that the left side of the vehicle is to be displayed with emphasis is received; to display, on the rearview mirror, a spliced image of the image captured by the vehicle-mounted camera on the right side of the vehicle and the image captured by the vehicle-mounted camera at the tail of the vehicle when a second switching instruction indicating that the right side of the vehicle is to be displayed with emphasis is received; and, when a third switching instruction indicating that the blind zone under the vehicle is to be displayed with emphasis is received, to determine the pixel information of the under-vehicle region in the vehicle-mounted panorama from historical frames, fill the under-vehicle region with this pixel information, and display the filled vehicle-mounted panorama on the rearview mirror.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A vehicle-mounted panorama generation method is characterized by comprising the following steps:
acquiring images around the vehicle, which are acquired by each vehicle-mounted camera at the same time, wherein the number of the vehicle-mounted cameras is at least three, and the vehicle-mounted cameras are respectively arranged on the left side and the right side of the vehicle and the tail of the vehicle;
and splicing the acquired frames of images into a vehicle-mounted panoramic image and displaying the panoramic image on a rearview mirror inside the vehicle.
2. The method of claim 1, wherein stitching the acquired frame images into a vehicle-mounted panorama comprises:
for each vehicle-mounted camera, converting an image acquired by the vehicle-mounted camera into a to-be-spliced sub-image according to a predetermined splicing mapping table corresponding to the vehicle-mounted camera, wherein the splicing mapping table records a coordinate position of each pixel point in the image corresponding to the to-be-spliced sub-image;
splicing the frames of sub-images to be spliced obtained by conversion into a vehicle-mounted panoramic image;
and the sub-images to be spliced of each frame are positioned in the same image coordinate system.
3. The method of claim 2, wherein the stitching mapping table corresponding to the vehicle-mounted camera is determined by:
determining internal parameters and external parameters of the vehicle-mounted camera;
acquiring a homography matrix corresponding to the vehicle-mounted camera according to the internal parameters and the external parameters;
acquiring a perspective transformation matrix corresponding to the vehicle-mounted camera according to a preset pitch angle, wherein the preset pitch angle refers to a visual angle corresponding to a visual plane for projecting an image to the front of human eyes;
and determining the coordinate position of each pixel point in the image acquired by the vehicle-mounted camera corresponding to the sub-image to be spliced according to the homography matrix and the perspective transformation matrix, and storing the coordinate position into a splicing mapping table corresponding to the vehicle-mounted camera.
4. The method of claim 2, wherein the splicing of the frames of the sub-images to be spliced obtained by conversion into the vehicle-mounted panorama comprises:
sequencing the sub-graphs to be spliced of each frame according to a specified sequence;
sequentially merging the sequenced frames of sub-images to be spliced and fusing overlapping areas in the frames of sub-images to be spliced to obtain a vehicle-mounted panoramic image;
the overlapping area refers to an area formed by pixel points with the same coordinate information in two adjacent sub-images to be spliced.
5. The method of claim 1, wherein after stitching the acquired frame images into a vehicle-mounted panorama and displaying the panorama on a rearview mirror inside the vehicle, the method further comprises:
when a first switching instruction indicating that the left side of the vehicle is to be displayed with emphasis is received, displaying, on the rearview mirror, a spliced image of an image captured by a vehicle-mounted camera located on the left side of the vehicle and an image captured by a vehicle-mounted camera located at the tail of the vehicle;
when a second switching instruction indicating that the right side of the vehicle is to be displayed with emphasis is received, displaying, on the rearview mirror, a spliced image of an image captured by the vehicle-mounted camera located on the right side of the vehicle and an image captured by the vehicle-mounted camera located at the tail of the vehicle;
when a third switching instruction indicating that the blind zone under the vehicle is to be displayed with emphasis is received, determining pixel information of the under-vehicle region in the vehicle-mounted panorama from historical frames, filling the under-vehicle region with the pixel information, and displaying the filled vehicle-mounted panorama on the rearview mirror.
6. An in-vehicle panorama generating system, characterized in that the system comprises:
the vehicle-mounted cameras are used for acquiring images around the vehicle, and at least three vehicle-mounted cameras are arranged on the left side and the right side of the vehicle and the tail of the vehicle respectively;
the processor is used for acquiring images around the vehicle, acquired by the vehicle-mounted camera at the same time, splicing the acquired images into a vehicle-mounted panoramic image and sending the vehicle-mounted panoramic image to a rearview mirror in the vehicle;
and the rearview mirror is used for displaying the vehicle-mounted panoramic image.
7. An on-vehicle panorama generating apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring images around the vehicle, which are acquired by each vehicle-mounted camera at the same time, wherein the number of the vehicle-mounted cameras is at least three, and the vehicle-mounted cameras are respectively arranged on the left side and the right side of the vehicle and at the tail of the vehicle;
the splicing module is used for splicing the acquired images into a vehicle-mounted panoramic image;
and the display module is used for displaying the vehicle-mounted panoramic picture on a rearview mirror in the vehicle.
8. The device according to claim 7, wherein the stitching module is specifically configured to, for each vehicle-mounted camera, convert an image acquired by the vehicle-mounted camera into a to-be-stitched sub-image according to a predetermined stitching mapping table corresponding to the vehicle-mounted camera, where the stitching mapping table records a coordinate position of each pixel point in the image corresponding to the to-be-stitched sub-image; splicing the frames of sub-images to be spliced obtained by conversion into a vehicle-mounted panoramic image; and the sub-images to be spliced of each frame are positioned in the same image coordinate system.
9. The apparatus of claim 8, further comprising:
the splicing table establishing module is used for determining internal parameters and external parameters of the vehicle-mounted camera; acquiring a homography matrix corresponding to the vehicle-mounted camera according to the internal parameters and the external parameters; acquiring a perspective transformation matrix corresponding to the vehicle-mounted camera according to a preset pitch angle, wherein the preset pitch angle refers to a visual angle corresponding to a visual plane for projecting an image to the front of human eyes; and determining the coordinate position of each pixel point in the image acquired by the vehicle-mounted camera corresponding to the sub-image to be spliced according to the homography matrix and the perspective transformation matrix, and storing the coordinate position into a splicing mapping table corresponding to the vehicle-mounted camera.
10. The apparatus of claim 7, further comprising:
the switching display module is used for displaying, on the rearview mirror, a spliced image of an image captured by a vehicle-mounted camera located on the left side of the vehicle and an image captured by a vehicle-mounted camera located at the tail of the vehicle when a first switching instruction indicating that the left side of the vehicle is to be displayed with emphasis is received; displaying, on the rearview mirror, a spliced image of an image captured by the vehicle-mounted camera located on the right side of the vehicle and an image captured by the vehicle-mounted camera located at the tail of the vehicle when a second switching instruction indicating that the right side of the vehicle is to be displayed with emphasis is received; and, when a third switching instruction indicating that the blind zone under the vehicle is to be displayed with emphasis is received, determining pixel information of the under-vehicle region in the vehicle-mounted panorama from historical frames, filling the under-vehicle region with the pixel information, and displaying the filled vehicle-mounted panorama on the rearview mirror.
CN201910615707.6A 2019-07-09 2019-07-09 Vehicle-mounted panorama generation method, device and system Pending CN112215917A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910615707.6A CN112215917A (en) 2019-07-09 2019-07-09 Vehicle-mounted panorama generation method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910615707.6A CN112215917A (en) 2019-07-09 2019-07-09 Vehicle-mounted panorama generation method, device and system

Publications (1)

Publication Number Publication Date
CN112215917A 2021-01-12

Family

ID=74048273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910615707.6A Pending CN112215917A (en) 2019-07-09 2019-07-09 Vehicle-mounted panorama generation method, device and system

Country Status (1)

Country Link
CN (1) CN112215917A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113879214A (en) * 2021-11-17 2022-01-04 阿维塔科技(重庆)有限公司 Display method of electronic rearview mirror, electronic rearview mirror display system and related equipment

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005039547A (en) * 2003-07-15 2005-02-10 Denso Corp Vehicular front view support device
JP2012124610A (en) * 2010-12-06 2012-06-28 Fujitsu Ten Ltd Image display system, image processing device, and image display method
CN103763517A (en) * 2014-03-03 2014-04-30 惠州华阳通用电子有限公司 Vehicle-mounted around view display method and system
CN104590118A (en) * 2014-11-28 2015-05-06 广东好帮手电子科技股份有限公司 Driving environment panoramic display system and method based on rearview mirror box
CN105894549A (en) * 2015-10-21 2016-08-24 乐卡汽车智能科技(北京)有限公司 Panorama assisted parking system and device and panorama image display method
CN107021015A (en) * 2015-11-08 2017-08-08 欧特明电子股份有限公司 System and method for image procossing
CN107274342A (en) * 2017-05-22 2017-10-20 纵目科技(上海)股份有限公司 A kind of underbody blind area fill method and system, storage medium, terminal device
CN107396057A (en) * 2017-08-22 2017-11-24 厦门纵目实业有限公司 A kind of joining method based on the visual angle of vehicle-mounted camera five splicing stereoscopic panoramic image
CN107730558A (en) * 2017-02-14 2018-02-23 上海大学 360 ° of vehicle-running recording systems and method based on two-way fish eye camera
US20180121742A1 (en) * 2016-11-02 2018-05-03 Lg Electronics Inc. Apparatus for providing around view image, and vehicle
US20180170259A1 (en) * 2016-12-21 2018-06-21 Toyota Jidosha Kabushiki Kaisha Vehicle periphery monitoring apparatus


Similar Documents

Publication Publication Date Title
CN110341597B (en) Vehicle-mounted panoramic video display system and method and vehicle-mounted controller
JP4596978B2 (en) Driving support system
CN104918825B (en) Backsight imaging system for vehicle
JP4642723B2 (en) Image generating apparatus and image generating method
CN107021015B (en) System and method for image processing
JP4934308B2 (en) Driving support system
JP3286306B2 (en) Image generation device and image generation method
US8754760B2 (en) Methods and apparatuses for informing an occupant of a vehicle of surroundings of the vehicle
JP4257356B2 (en) Image generating apparatus and image generating method
US7728879B2 (en) Image processor and visual field support device
CN112224132B (en) Vehicle panoramic all-around obstacle early warning method
US8773534B2 (en) Image processing apparatus, medium recording image processing program, and image processing method
CN104584541B (en) Image forming apparatus, image display system, parameter acquisition device, image generating method and parameter acquiring method
CN105894549A (en) Panorama assisted parking system and device and panorama image display method
JP5209578B2 (en) Image display device for vehicle
US20170116710A1 (en) Merging of Partial Images to Form an Image of Surroundings of a Mode of Transport
WO2000064175A1 (en) Image processing device and monitoring system
CN105321160B (en) The multi-camera calibration that 3 D stereo panorama is parked
CN106855999A (en) The generation method and device of automobile panoramic view picture
WO2014166449A1 (en) Panoramic video-based vehicle onboard navigation method and system, and storage medium
KR20100081964A (en) Around image generating method and apparatus
CN112825546A (en) Generating a composite image using an intermediate image surface
CN112184545A (en) Vehicle-mounted ring view generating method, device and system
CN110428361A (en) A kind of multiplex image acquisition method based on artificial intelligence
CN112215917A (en) Vehicle-mounted panorama generation method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination