CN111489288A - Image splicing method and device - Google Patents

Image splicing method and device

Info

Publication number
CN111489288A
CN111489288A (application CN201910082526.1A; granted publication CN111489288B)
Authority
CN
China
Prior art keywords
camera
coordinate system
coordinate
origin
world
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910082526.1A
Other languages
Chinese (zh)
Other versions
CN111489288B (en)
Inventor
李天威
吴林隆
徐抗
谢国富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Momenta Technology Co Ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co., Ltd.
Priority to CN201910082526.1A
Publication of CN111489288A
Application granted
Publication of CN111489288B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 3/00 — Geometric image transformations in the plane of the image
                    • G06T 3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
                        • G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
                • G06T 7/00 — Image analysis
                    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
                • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 — Image acquisition modality
                        • G06T 2207/10016 — Video; image sequence
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
        • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
                • Y02T 10/00 — Road transport of goods or passengers
                    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
                        • Y02T 10/40 — Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image splicing method and device. The method comprises: acquiring the original images captured by each of the vehicle's cameras, and determining the first coordinate of each pixel of each original image in the image pixel coordinate system; calculating the distance from the position corresponding to the first coordinate in the camera coordinate system to the origin of the camera coordinate system, based on a first mapping relation between the camera coordinate system and the world coordinate system and a second mapping relation between the camera's physical imaging plane and the camera normalization plane; obtaining, from that distance, the camera imaging model and the distortion model, a second coordinate of the first coordinate in the camera coordinate system; and determining the target coordinate of the first coordinate in the world coordinate system from the second coordinate and the first mapping relation, then splicing the original images based on the target coordinates to obtain a panoramic top view. With this technical scheme, a top-view mosaic can be generated quickly and effectively without separately calibrating a homography matrix for each camera.

Description

Image splicing method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a device for splicing images.
Background
At present, in vision-based SLAM (Simultaneous Localization And Mapping) schemes in the field of automatic driving, a lightweight approach is to generate a top-view mosaic from the raw images of a surround-view camera rig and use it for subsequent perception, positioning and mapping. Since the mosaic is the input to all subsequent operations and the unmanned vehicle is in actual motion, the mosaic must meet two requirements: (1) correctness — the mosaic must be a top view in the world coordinate system; and (2) dynamism — the mosaic must remain correct when the vehicle body undergoes a pose change.
The existing splicing scheme is divided into two stages. In the first stage, a calibration board is laid out to compute the homography matrix corresponding to each of the four cameras; in the second stage, while the unmanned vehicle moves, the four cameras' raw images are transformed onto the top-view plane through the previously computed homography matrices to complete the splicing. However, this solution presents two significant problems: 1. a dedicated early calibration stage is required to acquire the homography matrices; 2. each homography matrix is only valid for a road surface consistent with the calibration site — once the vehicle body pose changes during motion, for example when passing over a speed bump, an erroneous mosaic is output, which introduces large errors into subsequent positioning and mapping.
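For reference, the per-pixel operation of that conventional two-stage scheme amounts to mapping each raw pixel through a 3×3 homography. A minimal sketch — the matrix H below is a hypothetical placeholder, not a calibrated value from the patent:

```python
# Baseline two-stage stitching step: map a raw pixel (u, v) onto the
# top-view plane through a pre-calibrated homography H. H here is a
# hypothetical translation-only matrix, purely for illustration.

def apply_homography(H, u, v):
    """Apply a 3x3 homography (row-major nested lists) to pixel (u, v)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

H = [[1.0, 0.0, 10.0],
     [0.0, 1.0, 20.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 5.0, 5.0))  # (15.0, 25.0)
```

Because H is fixed at calibration time, any change in the camera-ground geometry invalidates it, which is exactly the limitation described above.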
Disclosure of Invention
The embodiment of the invention discloses an image splicing method and device, which can quickly and effectively generate a top-view mosaic without separately calibrating the homography matrix corresponding to each camera.
In a first aspect, an embodiment of the present invention discloses an image stitching method, including:
acquiring the original images captured by each camera of the vehicle, and, for any original image, determining a first coordinate of each pixel in the original image in an image pixel coordinate system;
calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system based on a first mapping relation between the camera coordinate system and a world coordinate system and a second mapping relation between a physical imaging plane of the camera and a camera normalization plane, wherein the physical imaging plane and the camera normalization plane are both established in the camera coordinate system, and their distances from the origin of the camera coordinate system are the camera focal length and a preset unit distance, respectively;
obtaining a corresponding second coordinate of the first coordinate in the camera coordinate system according to the distance, a preset imaging model and a preset distortion model of the camera;
determining a target coordinate of the first coordinate in the world coordinate system according to the second coordinate and the first mapping relation;
and splicing the original images based on corresponding target coordinates of different pixels in the original images in a world coordinate system to obtain a panoramic top view.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, according to a transformation matrix from a world coordinate system to a camera coordinate system, an origin mapping coordinate of an origin coordinate of the world coordinate system in the camera coordinate system is determined to establish a first mapping relationship between the world coordinate system and the camera coordinate system;
and obtaining a third coordinate of the first coordinate on a camera normalization plane according to the first coordinate and the parameters of the camera so as to establish a second mapping relation between the physical imaging plane of the camera and the camera normalization plane.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, calculating a distance from a corresponding position of the first coordinate in the camera coordinate system to an origin of the camera coordinate system based on a first mapping relationship between the camera coordinate system and the world coordinate system and a second mapping relationship between a physical imaging plane of the camera and a camera normalization plane, includes:
obtaining a first vector according to the origin mapping coordinate and the origin coordinate of the camera coordinate system, and obtaining a second vector according to the origin coordinate of the camera coordinate system and a third coordinate corresponding to the first coordinate in a camera normalization plane;
and calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system according to the trigonometric function relation between the first vector and the second vector.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, calculating a distance from a corresponding position of the first coordinate in the camera coordinate system to an origin of the camera coordinate system according to a trigonometric function relationship between the first vector and the second vector includes:
and calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system according to the following formula:
Figure BDA0001960813540000021
wherein | OP | is the distance between the corresponding position of the first coordinate in the camera coordinate system and the origin of the camera coordinate system, | OA | is the modulus of the first vector, | cos ∠ POP is the trigonometric function relationship between the first vector and the second vector, | u, v) is the first coordinate of the pixel in the image pixel coordinate system,
Figure BDA0001960813540000022
mapping coordinates for an origin of a world coordinate system in a camera coordinate system; (f)x,fy,cx,cy) Is an internal reference of the camera.
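As a hedged illustration of this distance computation, the following sketch implements the ray-ground intersection form: both A (the mapped world origin) and P lie on the ground plane, so the component of each vector along the ground normal is the camera height. All vectors are in camera coordinates, and the numeric values are illustrative assumptions, not values from the patent:

```python
import math

# Sketch of the |OP| computation via ray-ground intersection:
# |OP| = (OA . n_hat) / (unit(Op') . n_hat), with n_hat the unit ground
# normal. All vectors in camera coordinates; numbers are illustrative.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def unit(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def distance_op(OA, Op_prime, n_hat):
    """Distance from the camera origin O to the ground point P along ray Op'."""
    return dot(OA, n_hat) / dot(unit(Op_prime), n_hat)

# Camera 1.5 m above the ground, optical axis pointing at the ground:
n_hat = [0.0, 0.0, 1.0]
OA = [0.5, 0.2, 1.5]               # mapped world origin (OA . n_hat = height)
print(distance_op(OA, [0.0, 0.0, 1.0], n_hat))  # 1.5
```

A ray through the image center here hits the ground exactly one camera height away, as expected.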
As an optional implementation manner, in the first aspect of the embodiment of the present invention, determining the target coordinate of the first coordinate in the world coordinate system according to the second coordinate and the first mapping relationship includes:
and determining target coordinates of the first coordinates in a world coordinate system based on a transformation matrix between the world coordinate system and a camera coordinate system, wherein the transformation matrix is an external parameter of the camera.
As an alternative implementation manner, in the first aspect of the embodiment of the present invention, the transformation matrix is obtained by:
determining a vehicle pose based on data collected by a positioning sensor mounted on the vehicle;
and determining the transformation matrix according to the vehicle pose and the installation position of the camera in the vehicle.
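The two steps above — determining the vehicle pose and combining it with the camera's mounting position — can be sketched as composing 4×4 homogeneous transforms. The pose and mount values below are illustrative assumptions, not calibrated parameters:

```python
# Sketch: obtain the camera's pose in the world frame by chaining the
# vehicle pose with the camera's fixed mounting transform. Inverting the
# result yields the world->camera transformation matrix (the extrinsics).

def matmul4(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Vehicle pose in the world frame (here: translated 2 m along world x):
T_world_vehicle = [[1, 0, 0, 2.0],
                   [0, 1, 0, 0.0],
                   [0, 0, 1, 0.0],
                   [0, 0, 0, 1.0]]

# Camera mount relative to the vehicle (0.5 m above the vehicle origin):
T_vehicle_cam = [[1, 0, 0, 0.0],
                 [0, 1, 0, 0.0],
                 [0, 0, 1, 0.5],
                 [0, 0, 0, 1.0]]

T_world_cam = matmul4(T_world_vehicle, T_vehicle_cam)
print(T_world_cam[0][3], T_world_cam[2][3])  # 2.0 0.5
```

Updating T_world_vehicle every frame from the positioning sensor keeps the extrinsics consistent with the current vehicle pose, which is what makes the splicing robust to pose changes.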
In a second aspect, an embodiment of the present invention further provides an apparatus for stitching images, where the apparatus includes:
the first coordinate determination module is used for acquiring the original images captured by each camera of the vehicle and determining the first coordinate of each pixel of any original image in an image pixel coordinate system;
the distance calculation module is used for calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system based on a first mapping relation between the camera coordinate system and a world coordinate system and a second mapping relation between a physical imaging plane of the camera and a camera normalization plane, wherein the physical imaging plane and the camera normalization plane are both established under the camera coordinate system, and the distances from the origin of the camera coordinate system are respectively a camera focal length and a preset unit distance;
the second coordinate determination module is used for obtaining a corresponding second coordinate of the first coordinate in the camera coordinate system according to the distance, a preset imaging model and a preset distortion model of the camera;
the target coordinate determination module is used for determining a target coordinate of the first coordinate in the world coordinate system according to the second coordinate and the first mapping relation;
and the image splicing module is used for splicing the original images based on corresponding target coordinates of different pixels in the original images in a world coordinate system to obtain a panoramic top view.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, according to a transformation matrix from a world coordinate system to a camera coordinate system, determining corresponding origin mapping coordinates of origin coordinates of the world coordinate system in the camera coordinate system, so as to establish a first mapping relationship between the world coordinate system and the camera coordinate system;
and obtaining a third coordinate of the first coordinate on a camera normalization plane according to the first coordinate and the parameters of the camera so as to establish a second mapping relation between the physical imaging plane of the camera and the camera normalization plane.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the distance calculating module includes:
the vector determining unit is used for obtaining a first vector according to the origin mapping coordinate and the origin coordinate of the camera coordinate system, and obtaining a second vector according to the origin coordinate of the camera coordinate system and a third coordinate corresponding to the first coordinate in a camera normalization plane;
and the distance calculation unit is used for calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system according to the trigonometric function relation between the first vector and the second vector.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the distance calculating unit is specifically configured to:
calculate the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system according to the following formula:

$$|OP| = \frac{(\vec{OA}\cdot\hat{n})\,|\vec{Op'}|}{\vec{Op'}\cdot\hat{n}},\qquad \vec{Op'}=\left(\frac{u-c_x}{f_x},\ \frac{v-c_y}{f_y},\ 1\right)$$

wherein $|OP|$ is the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system; $\vec{OA}$ is the first vector, with modulus $|OA|$, and $\vec{Op'}$ is the second vector, the angle between the two vectors being $\angle POA$; $\hat{n}$ is the unit normal vector of the ground; $(u, v)$ is the first coordinate of the pixel in the image pixel coordinate system; $(x_A, y_A, z_A)$, the components of $\vec{OA}$, are the mapping coordinates of the origin of the world coordinate system in the camera coordinate system; and $(f_x, f_y, c_x, c_y)$ are the internal parameters of the camera.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the target coordinate determination module is specifically configured to:
and determining target coordinates of the first coordinates in a world coordinate system based on a transformation matrix between the world coordinate system and a camera coordinate system, wherein the transformation matrix is an external parameter of the camera.
As an alternative implementation manner, in the second aspect of the embodiment of the present invention, the transformation matrix is obtained by:
the vehicle pose determination module is used for determining the vehicle pose based on data collected by a positioning sensor installed on the vehicle;
and the change matrix determining module is used for determining the transformation matrix according to the vehicle pose and the installation position of the camera in the vehicle.
In a third aspect, an embodiment of the present invention further provides a vehicle-mounted terminal, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program codes stored in the memory to execute part or all of the steps of the image splicing method provided by any embodiment of the invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, which stores a computer program, where the computer program includes instructions for executing part or all of the steps of the image stitching method provided in any embodiment of the present invention.
In a fifth aspect, the embodiment of the present invention further provides a computer program product, which when run on a computer, causes the computer to execute part or all of the steps of the image stitching method provided in any embodiment of the present invention.
The technical scheme provided by the embodiment of the invention avoids calibrating homography matrices during image splicing. Specifically, with the camera coordinate system as reference, the camera imaging plane, the camera normalization plane, and the mapping relation between them are established, and a mapping relation between the world coordinate system and the camera coordinate system is established, so that all coordinate systems are converted into the camera coordinate system. After the first coordinate of each pixel of an original image captured by a camera is obtained, the second coordinate of each pixel in the camera coordinate system is determined, and the corresponding target coordinate in the world coordinate system is then computed through the mapping relation between the camera coordinate system and the world coordinate system. Once the first coordinates of the pixels of every original image have been converted into target coordinates in the world coordinate system, all the original images can be spliced to obtain a panoramic top view. This scheme avoids calibrating a homography matrix per camera at splicing time, and a panoramic top-view mosaic can be obtained quickly and accurately for different road surfaces. Using the panoramic top-view mosaic provided by the embodiment of the invention, the vehicle can be accurately positioned.
The points of the invention include:
1. By establishing the camera imaging plane, the camera normalization plane, and the mapping relation between them with the camera coordinate system as reference, and by establishing the mapping relation between the world coordinate system and the camera coordinate system, all coordinate systems are uniformly converted into the camera coordinate system, which avoids calibrating the corresponding homography matrices. This is one of the points of the invention.
2. When calculating the distance from the position corresponding to the first coordinate of the image pixel coordinate system in the camera coordinate system to the origin of the camera coordinate system, the following formula is adopted:

$$|OP| = \frac{(\vec{OA}\cdot\hat{n})\,|\vec{Op'}|}{\vec{Op'}\cdot\hat{n}},\qquad \vec{Op'}=\left(\frac{u-c_x}{f_x},\ \frac{v-c_y}{f_y},\ 1\right)$$

With this formula, combined with the camera's preset imaging model, the second coordinate of the first coordinate in the camera coordinate system can be determined. Since there is a mapping relation between the world coordinate system and the camera coordinate system, the second coordinate can then be converted into the target coordinate in the world coordinate system. This conversion method is simple and easy to implement, and has high accuracy and real-time performance; it is another point of the invention.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of an image stitching method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a coordinate system mapping relationship provided in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of an image stitching method according to an embodiment of the present invention. The method is applied to automatic driving and can be executed by an image stitching apparatus, which may be implemented in software and/or hardware and is generally integrated in a vehicle-mounted terminal such as a vehicle-mounted computer or a vehicle-mounted industrial control computer (IPC). As shown in fig. 1, the image stitching method provided in this embodiment specifically includes:
110. Obtain the original images captured by each camera of the vehicle and, for any original image, determine the first coordinate of each pixel in the image pixel coordinate system.
The cameras are generally installed at different positions of the vehicle to capture road information in different directions. Preferably, each camera can be respectively arranged in the front direction, the rear direction, the left direction and the right direction of the vehicle, and the view range of each camera at least comprises the ground below the camera.
In this embodiment, the camera is preferably a fisheye camera. The field of view (FOV) of a fisheye camera is large, so that the target image captured by a single fisheye camera covers as much of the vehicle's surroundings as possible, improving the completeness of observation and the accuracy of subsequent vehicle positioning. The cameras arranged in the four directions form a surround-view camera scheme, so that the vehicle-mounted terminal can acquire environmental information in all directions around the vehicle at once, and a local map constructed from a single acquisition of target images contains more information. In addition, the image data acquired by the four cameras has a certain redundancy: if one camera fails, the image data acquired by the other cameras can compensate, so the impact on map construction and positioning by the vehicle-mounted terminal is small.
In this embodiment, the images captured at the same moment by the cameras installed in the respective directions of the vehicle can be stitched, and the resulting top-view mosaic contains 360-degree environmental information centered on the vehicle. By identifying features in the top-view mosaic, the position information of each semantic feature can be obtained. However, each pixel in an original image collected by a camera is located by that image's pixel coordinate system, and the pixel coordinate systems of the images shot by different cameras are not the same coordinate system. Therefore, in order to observe the environmental information in different directions of the vehicle simultaneously, the first coordinate of each pixel in each original image needs to be converted into a common coordinate system, namely the world coordinate system discussed below. For an original image shot by any camera, once the target position in the world coordinate system corresponding to the first coordinate of each pixel in the image pixel coordinate system is determined, the images shot by all cameras can be spliced into a top view referenced to the world coordinate system.
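Under the assumptions of known intrinsics, a known world-to-camera transform, and a flat ground plane, the pixel-to-world conversion described in this step and the following ones can be sketched end to end. All numeric values below are illustrative, and the ray-ground intersection uses the plane-normal form:

```python
import math

# End-to-end sketch: a raw pixel (u, v) is back-projected to the camera
# normalization plane, scaled by the ray-ground distance, then transformed
# into the world frame. Intrinsics (fx, fy, cx, cy), rotation R, translation
# t, ground normal n_hat and mapped world origin OA are all assumptions.

def pixel_to_world(u, v, fx, fy, cx, cy, R, t, n_hat, OA):
    # Second mapping relation: pixel -> point p' on the normalization plane.
    p = [(u - cx) / fx, (v - cy) / fy, 1.0]
    norm = math.sqrt(sum(x * x for x in p))
    # Ray-ground intersection: |OP| = (OA . n_hat) / (unit(Op') . n_hat).
    d = sum(a * b for a, b in zip(OA, n_hat)) / (
        sum(x * b for x, b in zip(p, n_hat)) / norm)
    # Second coordinate: the pixel's position P in the camera frame.
    P_cam = [d * x / norm for x in p]
    # First mapping relation: P_world = R^T (P_cam - t), for P_cam = R P_world + t.
    diff = [P_cam[i] - t[i] for i in range(3)]
    return [sum(R[j][i] * diff[j] for j in range(3)) for i in range(3)]

# Camera 1.5 m above the world origin, looking straight down:
R = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]   # world->camera rotation (assumed)
t = [0.0, 0.0, 1.5]                        # world->camera translation
n_hat = [0.0, 0.0, -1.0]                   # ground normal in camera coords
OA = [0.0, 0.0, 1.5]                       # world origin in camera coords
# The principal-point pixel maps back to the world origin:
print(pixel_to_world(320, 240, 400.0, 400.0, 320.0, 240.0, R, t, n_hat, OA))
```

Under these assumptions, a pixel one focal length off the principal point (u = cx + fx) lands 1.5 m from the world origin along the world x axis; running this conversion for every pixel of every camera, then rendering the world-frame points, yields the top-view mosaic.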
120. And calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system based on a first mapping relation between the camera coordinate system and the world coordinate system and a second mapping relation between a physical imaging plane of the camera and a camera normalization plane.
First, the first mapping relationship in this embodiment is explained as follows:
in this embodiment, the first mapping relationship refers to a mapping relationship between a camera coordinate system and a world coordinate system. As an alternative, the first mapping relationship may be established by mapping any point in the world coordinate system to the camera coordinate system. In this embodiment, it is preferable that the origin of the world coordinate system is mapped to the camera coordinate system, and the mapping relationship between the origin of the world coordinate system and the camera coordinate system can be established by determining the coordinates of the origin of the world coordinate system in the camera coordinate system. The origin of the world coordinate system may be set at any position of the vehicle, for example, at the center of the vehicle body.
For example, the first mapping relationship between the world coordinate system and the camera coordinate system may be established by determining corresponding origin mapping coordinates of origin coordinates of the world coordinate system in the camera coordinate system according to a transformation matrix from the world coordinate system to the camera coordinate system. The transformation matrix is a position conversion relation between a camera coordinate system and a world coordinate system. As will be appreciated by those skilled in the art, the transformation matrix may be obtained by:
and determining the vehicle pose based on the data collected by the positioning sensor arranged on the vehicle, and determining a transformation matrix according to the vehicle pose and the installation position of the camera in the vehicle. The positioning sensor may be an image sensor, a gyroscope, a ground distance meter, or the like. The vehicle pose includes the position and attitude of the vehicle. In this embodiment, since the camera is mounted on the vehicle, the pose of the camera, that is, the phase position relationship of the camera with respect to the world coordinate system, that is, the transformation matrix can be determined according to the pose of the vehicle and the mounting position of the camera, and the transformation matrix can be used as an external parameter of the camera.
Preferably, the external parameters of the camera can be determined by a VO (Visual Odometry) method. The input of the VO method is the data collected by the sensors, and its output is the external parameters of the camera.
The second mapping relationship in this embodiment is explained below:
in this embodiment, the second mapping relationship refers to a mapping relationship between a physical imaging plane of the camera and a camera normalization plane. As an alternative implementation, the second mapping relationship may be established by mapping coordinates of any point in the world coordinate system in the camera physical imaging plane into the camera normalization plane, and determining coordinates of the point in the physical imaging plane in the camera normalization plane to establish the mapping relationship between the two.
The physical imaging plane of the camera is the imaging plane of, for example, a film negative, a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) sensor. In this embodiment, the physical imaging plane is established in the camera coordinate system at a distance of one focal length from the camera's optical center, along the negative direction of the Z axis. The camera normalization plane is also established in the camera coordinate system, along the positive direction of the Z axis at a preset unit distance from the optical center; the preset unit distance is preferably set to 1 meter. The camera coordinate system takes the optical center of the camera as its origin: within the plane through the optical center, the X axis points leftward, the Y axis points downward perpendicular to the X axis, and the Z axis points outward perpendicular to that plane.
Specifically, fig. 2 is a schematic diagram of a coordinate system mapping relationship according to an embodiment of the present invention. As shown in fig. 2, the origin of the world coordinate system is set at point A (an arbitrary point on the ground), the unit normal vector of the ground is $\hat{n}$, and the transformation matrix from the world coordinate system to the camera coordinate system is R. In addition, the direction vectors of the x, y and z axes of the camera coordinate system are $\hat{x}$, $\hat{y}$ and $\hat{z}$, respectively. The projection lengths of point A on the x, y and z axes of the camera coordinate system are:

$$x_A = \vec{OA}\cdot\hat{x},\qquad y_A = \vec{OA}\cdot\hat{y},\qquad z_A = \vec{OA}\cdot\hat{z}$$

Thus, the coordinates of point A in the camera coordinate system can be expressed as:

$$A = (x_A,\ y_A,\ z_A)$$

Thus, the first mapping relation between the world coordinate system and the camera coordinate system is established.
For any point P in the world coordinate system, its first coordinate in the image pixel coordinate system is (u, v) and its coordinate in the camera coordinate system is \(p_c\). According to the first coordinate and the intrinsic parameters \((f_x, f_y, c_x, c_y)\) of the camera, the third coordinate of the point P'' corresponding to the first coordinate on the camera normalization plane is obtained as:

\(P'' = \left( \dfrac{u - c_x}{f_x},\ \dfrac{v - c_y}{f_y},\ 1 \right)^T\)

Thus, a second mapping relation between the physical imaging plane of the camera and the camera normalization plane is established.
After the first mapping relationship and the second mapping relationship are established, the distance from the position corresponding to the first coordinate in the camera coordinate system to the origin of the camera coordinate system, that is, the distance |OP| in fig. 2, may be calculated based on the first mapping relationship and the second mapping relationship.
In this embodiment, the calculation of |OP| is unified in the camera coordinate system; that is, the coordinates of every point used in the calculation are its coordinates in the camera coordinate system. Of course, |OP| may also be calculated in another coordinate system, for example the world coordinate system, in which case the coordinates of each point are the corresponding coordinates in the world coordinate system.
In the present embodiment, the calculation can be performed based on the geometric relationship between OA and OP. As shown in fig. 2, various geometric relationships can be constructed between OA and OP; the present embodiment preferably performs the calculation using the trigonometric relationship among OA, OP and ∠POA.
Specifically, according to the origin mapping coordinate \(A_c\) of the world coordinate system origin in the camera coordinate system and the origin coordinate O of the camera coordinate system, a first vector \(\vec{OA}\) may be obtained; and according to the origin coordinate O of the camera coordinate system and the coordinate \(P''\) of the position corresponding to the first coordinate on the camera normalization plane, a second vector \(\vec{OP''}\) may be obtained. From the first vector \(\vec{OA}\) and the second vector \(\vec{OP''}\), cos∠POA can be calculated, specifically:

\(\cos\angle POA = \dfrac{\vec{OA} \cdot \vec{OP''}}{|\vec{OA}|\,|\vec{OP''}|}\)
wherein \(|\vec{OP''}|\) is the distance from the position corresponding to the first coordinate on the camera normalization plane to the origin of the camera coordinate system; \(|\vec{OA}|\) is the modulus of the first vector, representing the distance from the camera to the ground, which can be measured with a range finder; cos∠POA is the cosine of the angle between the first vector and the second vector; (u, v) is the first coordinate of the pixel in the image pixel coordinate system; \(A_c\) is the origin mapping coordinate of the world coordinate system origin in the camera coordinate system; and \((f_x, f_y, c_x, c_y)\) are the intrinsic parameters of the camera.
After cos∠POA is calculated, the distance from the position corresponding to the first coordinate in the camera coordinate system to the origin of the camera coordinate system can be calculated according to the trigonometric relationship among OA, OP and ∠POA:

\(|\vec{OP}| = \dfrac{|\vec{OA}|}{\cos\angle POA}\)
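The angle and range computation can be sketched as follows; the reconstruction assumes OA is taken perpendicular to the ground (so |OA| is the camera height), and the vectors below are hypothetical examples:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def range_to_ground_point(OA, OP2):
    """|OP| = |OA| / cos(angle POA), with OA the first vector (camera origin
    to the ground origin) and OP2 the second vector (camera origin to the
    point P'' on the normalization plane)."""
    cos_poa = dot(OA, OP2) / (norm(OA) * norm(OP2))
    return norm(OA) / cos_poa

# Hypothetical downward-looking camera 1.5 m above the ground.
OA = (0.0, 0.0, 1.5)
OP2 = (0.4, 0.225, 1.0)
d = range_to_ground_point(OA, OP2)
```

Geometrically, the ground point lies on the ray through P'', so for this example the range equals |OA| times |OP''|.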
130. And obtaining a second coordinate corresponding to the first coordinate in the camera coordinate system according to the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system, the preset imaging model and the preset distortion model of the camera.
Wherein, the preset imaging model of the camera is: \(Z\,p_{uv} = K\,p_c\). In this model, \(p_{uv}\) represents the coordinate, in the image pixel coordinate system, of any point in the world coordinate system; \(p_c\) represents the coordinate of that point in the camera coordinate system; and K represents the intrinsic matrix of the camera. The model represents the transformation of a pixel from the image pixel coordinate system into the camera coordinate system. Through the preset imaging model, the second coordinate \(p_c = (x, y, z)\) of the point P in the camera coordinate system can be obtained, and the distance |OP| from the second coordinate to the camera origin is expressed as:

\(|\vec{OP}| = \sqrt{x^2 + y^2 + z^2}\)
Further, in the present embodiment, the preset distortion model may be represented by the following equation system:

\(x_{distort} = \dfrac{u - c_x}{f_x},\quad y_{distort} = \dfrac{v - c_y}{f_y}\)

\(x' = g(x_{distort}, y_{distort}),\quad y' = l(x_{distort}, y_{distort})\)

wherein \((x_{distort}, y_{distort})\) represents the coordinate, on the physical imaging plane of the camera, corresponding to the point of the first coordinate; \(g(x_{distort}, y_{distort})\) and \(l(x_{distort}, y_{distort})\) are the inverse distortion functions for the x axis and the y axis in the camera coordinate system, respectively; and \((x', y')\) is the resulting coordinate on the camera normalization plane.
The first and second equations in the above equation system represent the mapping between the image pixel plane and the physical imaging plane of the camera, and the third and fourth equations represent the mapping between the physical imaging plane and the camera normalization plane. With the above formulas, the coordinates on the physical imaging plane can be inverse-distorted to obtain the corresponding coordinates on the camera normalization plane, that is, the third coordinate in the present embodiment. Since the camera normalization plane is established in the camera coordinate system at a distance of 1 meter from the plane of the camera optical center, the second coordinate of any point of the world coordinate system in the camera coordinate system can be obtained from the third coordinate, and the second coordinate can be expressed by the following formula:

\(p_c = |\vec{OP}| \cdot \dfrac{P''}{|\vec{OP''}|}\)
It should be noted that the inverse distortion model in the present embodiment is intended for non-pinhole cameras, such as fisheye cameras. If a pinhole camera is used to capture the picture, since the pinhole model has no distortion, the second coordinate corresponding to the first coordinate in the camera coordinate system can be obtained directly from the distance between the position corresponding to the first coordinate in the camera coordinate system and the origin of the camera coordinate system, together with the preset imaging model of the camera.
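Recovering the second coordinate from the range |OP| and the normalization-plane point can be sketched as below (all numeric values are hypothetical; for a fisheye camera the normalization-plane point would first be obtained via the inverse distortion functions):

```python
import math

def norm(a):
    return math.sqrt(sum(x * x for x in a))

def second_coordinate(dist_op, p_norm):
    """p_c = |OP| * P'' / |P''|: the 3-D point in the camera frame lies on
    the ray through the normalization-plane point P'' at range |OP|."""
    s = dist_op / norm(p_norm)
    return tuple(s * x for x in p_norm)

# Hypothetical values: a normalization-plane point and its range from the origin.
p_norm = (0.4, 0.225, 1.0)
dist_op = 1.5 * norm(p_norm)   # e.g. from the |OA| / cos(POA) step with |OA| = 1.5 m
p_c = second_coordinate(dist_op, p_norm)
```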
140. And determining the target coordinate of the first coordinate in the world coordinate system according to the second coordinate and the mapping relation between the world coordinate system and the camera coordinate system.
The mapping relation between the world coordinate system and the camera coordinate system can be represented by a transformation matrix comprising a rotation matrix R and a translation vector t; the camera coordinate system can be transformed into the world coordinate system through rotation and translation alone. The transformation matrix serves as the extrinsic parameters of the camera. For the determination of the transformation matrix, refer to the explanation of step 120, which is not repeated here.
In this embodiment, the target coordinate of the first coordinate in the world coordinate system may be determined based on a transformation matrix between the world coordinate system and a camera coordinate system.
150. And splicing the original images based on corresponding target coordinates of different pixels in the original images in a world coordinate system to obtain a panoramic top view.
For an original image acquired by any camera channel, by determining the coordinates (u, v) of each pixel in the image, the target coordinate \(p_w = (x_w, y_w, z_w)\) of each pixel in the world coordinate system can be obtained according to the above process. According to the preset size of the panoramic top view, \(x_w\), \(y_w\) and \(z_w\) in the world coordinate system can be scaled to a certain degree, so as to determine the position, in the panoramic top view, of the pixel corresponding to each point in the world coordinate system, thereby obtaining the panoramic top view.
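The last two steps, transforming the second coordinate into the world frame and placing it into the top view, can be sketched as follows (the convention p_c = R p_w + t for the extrinsics, the rotation, the mounting height and the pixels-per-metre scale are all assumptions for illustration):

```python
def cam_to_world(R, t, p_c):
    """Invert p_c = R @ p_w + t:  p_w = R^T @ (p_c - t)."""
    d = [p_c[i] - t[i] for i in range(3)]
    return tuple(sum(R[j][i] * d[j] for j in range(3)) for i in range(3))

def world_to_topview_pixel(x_w, y_w, scale, size):
    """Scale ground-plane world coordinates into the panoramic top view,
    with the world origin at the image centre (scale: pixels per metre)."""
    c = size // 2
    return (int(round(c + x_w * scale)), int(round(c - y_w * scale)))

# Hypothetical downward-looking camera 1.5 m above the world origin.
R = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
t = (0.0, 0.0, 1.5)                       # t = -R @ C_w for C_w = (0, 0, 1.5)
p_w = cam_to_world(R, t, (0.6, 0.3375, 1.5))
px = world_to_topview_pixel(p_w[0], p_w[1], 100.0, 1000)
```

As expected for a ground point, the recovered world coordinate has z close to zero, and the pixel position only depends on its x and y components.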
According to the technical scheme provided by this embodiment, by determining the first coordinate, in the image pixel coordinate system, of each pixel in an original image shot by a camera, the distance from the position corresponding to the first coordinate in the camera coordinate system to the origin of the camera coordinate system can be calculated based on the first mapping relation between the camera coordinate system and the world coordinate system and the second mapping relation between the physical imaging plane of the camera and the camera normalization plane. According to the distance, the preset imaging model and the preset distortion model of the camera, the second coordinate corresponding to the first coordinate in the camera coordinate system is obtained. According to the second coordinate and the transformation matrix between the world coordinate system and the camera coordinate system, the target coordinate corresponding to the first coordinate in the world coordinate system is determined. After the first coordinates of different pixels in the original images are converted into the corresponding target coordinates in the world coordinate system, the original images are stitched to obtain a panoramic top view. By adopting this technical scheme, calibration of a homography matrix for each camera is avoided during image stitching; for different road surfaces, the panoramic top-view stitched image can be obtained quickly and accurately, and the method is simple and highly practical.
Example two
Referring to fig. 3, fig. 3 is a schematic structural diagram of an image stitching apparatus according to an embodiment of the present invention. As shown in fig. 3, the apparatus includes: a first coordinate determination module 210, a distance calculation module 220, a second coordinate determination module 230, a target coordinate determination module 240, and an image stitching module 250.
The first coordinate determination module 210 is configured to acquire original images captured by each camera channel of a vehicle, and determine, for any one of the original images, a first coordinate of each pixel in the original image in an image pixel coordinate system;
the distance calculation module 220 is configured to calculate a distance from a corresponding position of the first coordinate in the camera coordinate system to an origin of the camera coordinate system based on a first mapping relationship between the camera coordinate system and a world coordinate system and a second mapping relationship between a physical imaging plane of the camera and a camera normalization plane, where the physical imaging plane and the camera normalization plane are both established in the camera coordinate system, and the distances from the origin of the camera coordinate system are a camera focal length and a preset unit distance, respectively;
a second coordinate determining module 230, configured to obtain a second coordinate, corresponding to the first coordinate in the camera coordinate system, according to the distance, a preset imaging model of the camera, and a preset distortion model;
a target coordinate determination module 240, configured to determine a target coordinate of the first coordinate in the world coordinate system according to the second coordinate and the first mapping relationship;
and the image stitching module 250 is configured to stitch the original images based on the target coordinates of the different pixels in the original images in the world coordinate system to obtain a panoramic top view.
According to the technical scheme provided by this embodiment, by determining the first coordinate, in the image pixel coordinate system, of each pixel in an original image shot by a camera, the distance from the position corresponding to the first coordinate in the camera coordinate system to the origin of the camera coordinate system can be calculated based on the first mapping relation between the camera coordinate system and the world coordinate system and the second mapping relation between the physical imaging plane of the camera and the camera normalization plane. According to the distance, the preset imaging model and the preset distortion model of the camera, the second coordinate corresponding to the first coordinate in the camera coordinate system is obtained. According to the second coordinate and the transformation matrix between the world coordinate system and the camera coordinate system, the target coordinate corresponding to the first coordinate in the world coordinate system is determined. After the first coordinates of different pixels in the original images are converted into the corresponding target coordinates in the world coordinate system, the original images are stitched to obtain a panoramic top view. By adopting this technical scheme, calibration of a homography matrix for each camera is avoided during image stitching; for different road surfaces, the panoramic top-view stitched image can be obtained quickly and accurately, and the method is simple and highly practical.
On the basis of the foregoing embodiment, in a second aspect of the embodiment of the present invention, according to a transformation matrix from a world coordinate system to a camera coordinate system, an origin mapping coordinate of an origin coordinate of the world coordinate system in the camera coordinate system is determined, so as to establish a first mapping relationship between the world coordinate system and the camera coordinate system;
and obtaining a third coordinate of the first coordinate on a camera normalization plane according to the first coordinate and the parameters of the camera so as to establish a second mapping relation between the physical imaging plane of the camera and the camera normalization plane.
On the basis of the above embodiment, in a second aspect of the embodiment of the present invention, the distance calculation module includes:
the vector determining unit is used for obtaining a first vector according to the origin mapping coordinate and the origin coordinate of the camera coordinate system, and obtaining a second vector according to the origin coordinate of the camera coordinate system and a third coordinate corresponding to the first coordinate in a camera normalization plane;
and the distance calculation unit is used for calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system according to the trigonometric function relation between the first vector and the second vector.
On the basis of the foregoing embodiment, in a second aspect of the embodiment of the present invention, the distance calculating unit is specifically configured to:
\(|\vec{OP}| = \dfrac{|\vec{OA}|}{\cos\angle POA}\)

wherein |OP| is the distance from the position corresponding to the first coordinate in the camera coordinate system to the origin of the camera coordinate system; |OA| is the modulus of the first vector; cos∠POA is the trigonometric relationship between the first vector and the second vector; (u, v) is the first coordinate of the pixel in the image pixel coordinate system; \(A_c\) is the origin mapping coordinate of the world coordinate system origin in the camera coordinate system; and \((f_x, f_y, c_x, c_y)\) are the intrinsic parameters of the camera.
On the basis of the foregoing embodiment, in a second aspect of the embodiment of the present invention, the target coordinate determination module is specifically configured to:
and determining target coordinates of the first coordinates in a world coordinate system based on a transformation matrix between the world coordinate system and a camera coordinate system, wherein the transformation matrix is an external parameter of the camera.
On the basis of the foregoing embodiments, in a second aspect of embodiments of the present invention, the transformation matrix is obtained by:
the vehicle pose determination module is used for determining the vehicle pose based on data collected by a positioning sensor installed on the vehicle;
and the transformation matrix determining module is used for determining the transformation matrix according to the vehicle pose and the installation position of the camera on the vehicle.
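A sketch of how such a transformation matrix could be composed from the vehicle pose and the camera mounting (the convention p_c = R p_w + t and every numeric value below are assumptions for illustration):

```python
def compose(Ra, ta, Rb, tb):
    """Compose rigid transforms: the result applies (Rb, tb) first, then
    (Ra, ta), i.e. p -> Ra @ (Rb @ p + tb) + ta."""
    R = [[sum(Ra[i][k] * Rb[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    t = tuple(sum(Ra[i][k] * tb[k] for k in range(3)) + ta[i] for i in range(3))
    return R, t

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Hypothetical poses: vehicle at (10, 5, 0) in the world; camera mounted
# 2 m ahead of and 1.2 m above the vehicle origin; rotations kept identity.
R_vw, t_vw = I3, (-10.0, -5.0, 0.0)           # world -> vehicle
R_cv, t_cv = I3, (-2.0, 0.0, -1.2)            # vehicle -> camera
R_cw, t_cw = compose(R_cv, t_cv, R_vw, t_vw)  # world -> camera extrinsics
```

The same composition would apply with real rotations estimated from a positioning sensor and the calibrated mounting pose of each camera channel.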
The image stitching device provided by the embodiment of the invention can execute the image stitching method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the executed method. For technical details not described in detail in the above embodiments, reference may be made to the image stitching method provided in any embodiment of the present invention.
EXAMPLE III
Referring to fig. 4, fig. 4 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. As shown in fig. 4, the in-vehicle terminal may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute the image stitching method provided by any embodiment of the present invention.
The embodiment of the invention also provides another vehicle-mounted terminal, which comprises a memory storing executable program code and a processor coupled to the memory; the processor calls the executable program code stored in the memory to execute the image stitching method provided by any embodiment of the invention.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute the image splicing method provided by any embodiment of the invention.
The embodiment of the invention discloses a computer program product, wherein when the computer program product runs on a computer, the computer is enabled to execute part or all of the steps of the image stitching method provided by any embodiment of the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer accessible memory. Based on such understanding, the technical solution of the present invention, which is a part of or contributes to the prior art in essence, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the above-described method of each embodiment of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other medium which can be used to carry or store data and which can be read by a computer.
The image stitching method and device disclosed by the embodiments of the present invention are described in detail above. Specific examples are applied herein to explain the principle and implementation of the present invention, and the description of the embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, for those skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation of the present invention.

Claims (10)

1. An image stitching method is characterized by comprising the following steps:
acquiring original images captured by each camera channel of a vehicle, and determining, for any original image, a first coordinate of each pixel in the original image in an image pixel coordinate system;
calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system based on a first mapping relation between the camera coordinate system and a world coordinate system and a second mapping relation between a physical imaging plane of the camera and a camera normalization plane, wherein the physical imaging plane and the camera normalization plane are both established under the camera coordinate system, and the distances from the origin of the camera coordinate system are respectively a camera focal length and a preset unit distance;
obtaining a corresponding second coordinate of the first coordinate in the camera coordinate system according to the distance, a preset imaging model and a preset distortion model of the camera;
determining a target coordinate of the first coordinate in the world coordinate system according to the second coordinate and the first mapping relation;
and splicing the original images based on corresponding target coordinates of different pixels in the original images in a world coordinate system to obtain a panoramic top view.
2. The method of claim 1, wherein the first mapping relationship and the second mapping relationship are obtained by:
determining an origin mapping coordinate corresponding to an origin coordinate of the world coordinate system in the camera coordinate system according to a transformation matrix from the world coordinate system to the camera coordinate system so as to establish a first mapping relation between the world coordinate system and the camera coordinate system;
and obtaining a third coordinate of the first coordinate on a camera normalization plane according to the first coordinate and the parameters of the camera so as to establish a second mapping relation between the physical imaging plane of the camera and the camera normalization plane.
3. The method of claim 2, wherein calculating the distance of the corresponding position of the first coordinate in the camera coordinate system from the origin of the camera coordinate system based on a first mapping relationship between the camera coordinate system and a world coordinate system and a second mapping relationship between a physical imaging plane of the camera and a camera normalization plane comprises:
obtaining a first vector according to the origin mapping coordinate and the origin coordinate of the camera coordinate system, and obtaining a second vector according to the origin coordinate of the camera coordinate system and a third coordinate corresponding to the first coordinate in a camera normalization plane;
and calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system according to the trigonometric function relation between the first vector and the second vector.
4. The method of claim 3, wherein calculating a distance from a corresponding position of the first coordinate in a camera coordinate system to an origin of the camera coordinate system according to a trigonometric functional relationship between the first vector and the second vector comprises:
and calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system according to the following formula:
\(|\vec{OP}| = \dfrac{|\vec{OA}|}{\cos\angle POA}\)

wherein |OP| is the distance from the position corresponding to the first coordinate in the camera coordinate system to the origin of the camera coordinate system; |OA| is the modulus of the first vector; cos∠POA is the trigonometric relationship between the first vector and the second vector; (u, v) is the first coordinate of the pixel in the image pixel coordinate system; \(A_c\) is the origin mapping coordinate of the world coordinate system origin in the camera coordinate system; and \((f_x, f_y, c_x, c_y)\) are the intrinsic parameters of the camera.
5. The method according to any one of claims 1-4, wherein determining the target coordinate of the first coordinate in the world coordinate system according to the second coordinate and the first mapping relationship comprises:
and determining target coordinates of the first coordinates in a world coordinate system based on a transformation matrix between the world coordinate system and a camera coordinate system, wherein the transformation matrix is an external parameter of the camera.
6. The method according to any of claims 1-5, wherein the transformation matrix is obtained by:
determining a vehicle pose based on data collected by a positioning sensor mounted on the vehicle;
and determining the transformation matrix according to the vehicle pose and the installation position of the camera in the vehicle.
7. An image stitching device applied to automatic driving is characterized by comprising:
the first coordinate determination module is used for acquiring original images acquired by all cameras of a vehicle and determining first coordinates of all pixels in any original image in an image pixel coordinate system;
the distance calculation module is used for calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system based on a first mapping relation between the camera coordinate system and a world coordinate system and a second mapping relation between a physical imaging plane of the camera and a camera normalization plane, wherein the physical imaging plane and the camera normalization plane are both established under the camera coordinate system, and the distances from the origin of the camera coordinate system are respectively a camera focal length and a preset unit distance;
the second coordinate determination module is used for obtaining a corresponding second coordinate of the first coordinate in the camera coordinate system according to the distance, a preset imaging model and a preset distortion model of the camera;
the target coordinate determination module is used for determining a target coordinate of the first coordinate in the world coordinate system according to the second coordinate and the first mapping relation;
and the image splicing module is used for splicing the original images based on corresponding target coordinates of different pixels in the original images in a world coordinate system to obtain a panoramic top view.
8. The apparatus of claim 7, wherein the first mapping relation and the second mapping relation are obtained by:
determining an origin mapping coordinate corresponding to an origin coordinate of the world coordinate system in the camera coordinate system according to a transformation matrix from the world coordinate system to the camera coordinate system so as to establish a first mapping relation between the world coordinate system and the camera coordinate system;
and obtaining a third coordinate of the first coordinate on a camera normalization plane according to the first coordinate and the parameters of the camera so as to establish a second mapping relation between the physical imaging plane of the camera and the camera normalization plane.
9. The apparatus of claim 8, wherein the distance calculation module comprises:
the vector determining unit is used for obtaining a first vector according to the origin mapping coordinate and the origin coordinate of the camera coordinate system, and obtaining a second vector according to the origin coordinate of the camera coordinate system and a third coordinate corresponding to the first coordinate in a camera normalization plane;
and the distance calculation unit is used for calculating the distance from the corresponding position of the first coordinate in the camera coordinate system to the origin of the camera coordinate system according to the trigonometric function relation between the first vector and the second vector.
10. The apparatus according to claim 9, wherein the distance calculation unit is specifically configured to:
\(|\vec{OP}| = \dfrac{|\vec{OA}|}{\cos\angle POA}\)

wherein |OP| is the distance from the position corresponding to the first coordinate in the camera coordinate system to the origin of the camera coordinate system; |OA| is the modulus of the first vector; cos∠POA is the trigonometric relationship between the first vector and the second vector; (u, v) is the first coordinate of the pixel in the image pixel coordinate system; \(A_c\) is the origin mapping coordinate of the world coordinate system origin in the camera coordinate system; and \((f_x, f_y, c_x, c_y)\) are the intrinsic parameters of the camera.
CN201910082526.1A 2019-01-28 2019-01-28 Image splicing method and device Active CN111489288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910082526.1A CN111489288B (en) 2019-01-28 2019-01-28 Image splicing method and device

Publications (2)

Publication Number Publication Date
CN111489288A true CN111489288A (en) 2020-08-04
CN111489288B CN111489288B (en) 2023-04-07

Family

ID=71796128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910082526.1A Active CN111489288B (en) 2019-01-28 2019-01-28 Image splicing method and device

Country Status (1)

Country Link
CN (1) CN111489288B (en)

WO2018157568A1 (en) * 2017-03-01 2018-09-07 北京大学深圳研究生院 Panoramic image mapping method
US20190012804A1 (en) * 2017-07-10 2019-01-10 Nokia Technologies Oy Methods and apparatuses for panoramic image processing

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132874A (en) * 2020-09-23 2020-12-25 西安邮电大学 Calibration-board-free different-source image registration method and device, electronic equipment and storage medium
CN112132874B (en) * 2020-09-23 2023-12-05 西安邮电大学 Calibration-plate-free heterogeneous image registration method and device, electronic equipment and storage medium
CN112102419A (en) * 2020-09-24 2020-12-18 烟台艾睿光电科技有限公司 Calibration method and system of dual-light imaging equipment and image registration method
CN112102419B (en) * 2020-09-24 2024-01-26 烟台艾睿光电科技有限公司 Dual-light imaging equipment calibration method and system and image registration method
CN112199754A (en) * 2020-10-30 2021-01-08 久瓴(江苏)数字智能科技有限公司 Coordinate positioning method and device, storage medium and electronic equipment
CN112199754B (en) * 2020-10-30 2023-05-09 久瓴(江苏)数字智能科技有限公司 Coordinate positioning method and device, storage medium and electronic equipment
CN112714282A (en) * 2020-12-22 2021-04-27 北京百度网讯科技有限公司 Image processing method, apparatus, device and program product in remote control
WO2022204855A1 (en) * 2021-03-29 2022-10-06 华为技术有限公司 Image processing method and related terminal device

Also Published As

Publication number Publication date
CN111489288B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111489288B (en) Image splicing method and device
CN110146869B (en) Method and device for determining coordinate system conversion parameters, electronic equipment and storage medium
US10594941B2 (en) Method and device of image processing and camera
JP4825980B2 (en) Calibration method for fisheye camera.
CN109300143B (en) Method, device and equipment for determining motion vector field, storage medium and vehicle
JP2019537023A (en) Positioning method and device
JP2020035447A (en) Object identification method, device, apparatus, vehicle and medium
CN110717861B (en) Image splicing method and device, electronic equipment and computer readable storage medium
US20100165105A1 (en) Vehicle-installed image processing apparatus and eye point conversion information generation method
CN111800589B (en) Image processing method, device and system and robot
CN108805938B (en) Detection method of optical anti-shake module, mobile terminal and storage medium
WO2020133172A1 (en) Image processing method, apparatus, and computer readable storage medium
CN112444242A (en) Pose optimization method and device
CN111415387A (en) Camera pose determining method and device, electronic equipment and storage medium
KR101890612B1 (en) Method and apparatus for detecting object using adaptive roi and classifier
CN111711756A (en) Image anti-shake method, electronic equipment and storage medium
CN112446917B (en) Gesture determination method and device
WO2018142533A1 (en) Position/orientation estimating device and position/orientation estimating method
CN113029128A (en) Visual navigation method and related device, mobile terminal and storage medium
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN113252066A (en) Method and device for calibrating parameters of odometer equipment, storage medium and electronic device
CN114821544B (en) Perception information generation method and device, vehicle, electronic equipment and storage medium
JP2005275789A (en) Three-dimensional structure extraction method
CN115147495A (en) Calibration method, device and system for vehicle-mounted system
CN112233185A (en) Camera calibration method, image registration method, camera device and storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220302

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Applicant after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: Room 28, 4 / F, block a, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing 100089

Applicant before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant