CN113068006A - Image presentation method and device - Google Patents
- Publication number
- CN113068006A CN113068006A CN202110280992.8A CN202110280992A CN113068006A CN 113068006 A CN113068006 A CN 113068006A CN 202110280992 A CN202110280992 A CN 202110280992A CN 113068006 A CN113068006 A CN 113068006A
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- image
- texture coordinate
- target
- reference device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/08—Projecting images onto non-planar surfaces, e.g. geodetic screens
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/08—Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
Embodiments of the present disclosure disclose an image presentation method and apparatus. One embodiment of the method comprises: acquiring images captured by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the image acquisition devices differ from one another, and the image acquisition ranges of every two adjacent image acquisition devices have an overlapping area; for each target device among the plurality of image acquisition devices, mapping the image captured by the target device into the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and presenting the plurality of images mapped into the image texture coordinate system of the reference device, wherein the plurality of images so mapped form a panoramic image.
Description
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to an image presenting method and device.
Background
A drive recorder is generally an electronic device that records driving data, such as video and/or audio, while a vehicle is running. Installed in a vehicle, it can provide evidence for traffic accidents and thereby support the tracing of accident liability.
In the related art, an electronic monitoring device for recording traffic data, such as a drive recorder, is generally more helpful for tracing traffic accidents when its monitoring field of view is larger. There is therefore a need in the related art for panoramic monitoring by such electronic monitoring devices.
Disclosure of Invention
The embodiment of the disclosure provides an image presenting method and device.
In a first aspect, an embodiment of the present disclosure provides an image presentation method, including: acquiring images captured by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the image acquisition devices differ from one another, and the image acquisition ranges of every two adjacent image acquisition devices have an overlapping area; for each target device among the plurality of image acquisition devices, mapping the image captured by the target device into the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and presenting the plurality of images mapped into the image texture coordinate system of the reference device, wherein the plurality of images so mapped form a panoramic image.
In some embodiments, presenting the plurality of images mapped in the image texture coordinate system of the reference device comprises: rendering the panoramic image according to a preset image rendering and displaying step, and presenting the new panoramic image obtained after rendering.
In some embodiments, the image rendering and displaying step comprises: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is a product of a predetermined model matrix, a view matrix and a projection matrix.
In some embodiments, there is one target device in the plurality of image capturing devices, and the reference device and the target device are two fisheye lenses mounted back-to-back.
In some embodiments, in response to the reference device and the target device both being fisheye lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by: determining a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device according to a first conversion parameter between the spherical coordinate system of the target device and the world coordinate system, a second conversion parameter between the spherical coordinate system of the reference device and the world coordinate system, and a preset parameter calculation formula; and determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameter, a third conversion parameter from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device, and a fourth conversion parameter from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
In some embodiments, the parameter calculation formula includes:
R = R₂ × R₁⁻¹
T = T₂ − R₂ × R₁⁻¹ × T₁
where R is the rotation matrix in the relative conversion parameters, R₂ is the rotation matrix in the second conversion parameter, R₁ is the rotation matrix in the first conversion parameter, R₁⁻¹ is the inverse of the rotation matrix in the first conversion parameter, T is the translation vector in the relative conversion parameters, T₁ is the translation vector in the first conversion parameter, and T₂ is the translation vector in the second conversion parameter.
In some embodiments, in response to the reference device and the target device both being planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by: determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
In a second aspect, an embodiment of the present disclosure provides an image presentation apparatus, including: an image acquisition unit configured to acquire images captured by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the image acquisition devices differ from one another, and the image acquisition ranges of every two adjacent image acquisition devices have an overlapping area; a coordinate mapping unit configured to, for each target device among the plurality of image acquisition devices, map the image captured by the target device into the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and an image rendering unit configured to present the plurality of images mapped into the image texture coordinate system of the reference device, wherein the plurality of images so mapped form a panoramic image.
In some embodiments, the image rendering unit is further configured to: render the panoramic image according to a preset image rendering and displaying step, and present the new panoramic image obtained after rendering.
In some embodiments, the image rendering and displaying step comprises: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is a product of a predetermined model matrix, a view matrix and a projection matrix.
In some embodiments, there is one target device in the plurality of image capturing devices, and the reference device and the target device are two fisheye lenses mounted back-to-back.
In some embodiments, in response to the reference device and the target device both being fisheye lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by: determining a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device according to a first conversion parameter between the spherical coordinate system of the target device and the world coordinate system, a second conversion parameter between the spherical coordinate system of the reference device and the world coordinate system, and a preset parameter calculation formula; and determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameter, a third conversion parameter from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device, and a fourth conversion parameter from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
In some embodiments, the parameter calculation formula includes:
R = R₂ × R₁⁻¹
T = T₂ − R₂ × R₁⁻¹ × T₁
where R is the rotation matrix in the relative conversion parameters, R₂ is the rotation matrix in the second conversion parameter, R₁ is the rotation matrix in the first conversion parameter, R₁⁻¹ is the inverse of the rotation matrix in the first conversion parameter, T is the translation vector in the relative conversion parameters, T₁ is the translation vector in the first conversion parameter, and T₂ is the translation vector in the second conversion parameter.
In some embodiments, in response to the reference device and the target device both being planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by: determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which when executed by a processor implements the method as described in any of the implementations of the first aspect.
According to the image presentation method and apparatus provided by the embodiments of the present disclosure, because the image acquisition ranges of the image acquisition devices differ and the acquisition ranges of every two adjacent devices overlap, the plurality of images captured by the plurality of image acquisition devices can be combined into a panoramic image; that is, panoramic-view monitoring can be achieved. In addition, when combining the acquired images into a panoramic image, all images are placed in the same coordinate system by mapping them into the image texture coordinate system of the reference device. Compared with stitching a plurality of images into a panoramic image as in the related art, obtaining the panoramic image by this mapping has lower computational complexity, which helps speed up data processing, so the panoramic image can be presented more quickly and its presentation efficiency improved. It should be noted that a panoramic image obtained by mapping all the images into the same coordinate system is generally a spherical-projection panoramic image.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of an image presentation method provided by embodiments of the present disclosure;
FIG. 2 is a schematic structural diagram of an image presentation apparatus provided in an embodiment of the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates the flow of one embodiment of an image presentation method according to the present disclosure. The image presentation method comprises the following steps:
First, images captured by a plurality of image acquisition devices are acquired, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the image acquisition devices differ from one another, and the image acquisition ranges of every two adjacent image acquisition devices have an overlapping area.
An image capturing apparatus is generally an apparatus having an image capturing function. The reference device is typically an image acquisition device used as a reference. The target device is typically an image capturing device other than the reference device among the plurality of image capturing devices described above. In practical application, one image capturing device may be selected from the plurality of image capturing devices as a reference device, and the selection manner of the reference device is not limited in this embodiment.
In this embodiment, the execution subject of the image presentation method may be a monitoring electronic device, such as a drive recorder. The execution subject can acquire images from each image acquisition device through a wired or wireless connection. In practice, if the execution subject is a drive recorder, the image acquisition devices may be devices carried by the drive recorder itself, or devices communicatively connected to it.
In practice, the image texture coordinate system of an image capturing device generally refers to the coordinate system in which the imaging plane of the image capturing device is located.
Here, for each target device, the execution subject may determine in advance the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device. The image captured by the target device can then be mapped into the image texture coordinate system of the reference device based on this mapping relationship. For example, suppose the mapping relationship is given by a rotation matrix R and a translation vector T from the image texture coordinate system of the target device to that of the reference device. Then, when the image captured by the target device is mapped into the image texture coordinate system of the reference device, an arbitrary image texture coordinate point M in the image maps to the coordinate N = R × M + T in the image texture coordinate system of the reference device.
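As a minimal sketch of the mapping N = R × M + T, assuming two-dimensional texture coordinates and hypothetical values for R and T (the patent does not fix concrete numbers):

```python
import numpy as np

def map_to_reference(point, R, T):
    """Map a texture coordinate from the target device's image texture
    coordinate system into the reference device's, via N = R @ M + T."""
    return R @ np.asarray(point, dtype=float) + T

# Hypothetical mapping: a 90-degree rotation plus a small translation.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
T = np.array([0.5, 0.0])

M = np.array([0.25, 0.75])
N = map_to_reference(M, R, T)   # -> [-0.25, 0.25]
```

In practice R and T would come from the predetermined mapping relationship between the two devices, and the mapping would be applied to every texture coordinate of the captured image.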
Finally, the plurality of images mapped into the image texture coordinate system of the reference device are presented; together they form a panoramic image. In practical applications, the panoramic image is usually a spherical-projection panoramic image.
Here, after all the images captured by the target devices have been mapped into the image texture coordinate system of the reference device, a panoramic image is obtained. The execution subject may then use existing or future technologies to visually present the panoramic image.
In the method provided by this embodiment, because the image acquisition ranges of the image acquisition devices differ and the acquisition ranges of every two adjacent devices overlap, the plurality of images captured by the plurality of image acquisition devices can be combined into a panoramic image; that is, panoramic-view monitoring can be achieved. In addition, when combining the acquired images into a panoramic image, all images are placed in the same coordinate system by mapping them into the image texture coordinate system of the reference device. Compared with stitching a plurality of images into a panoramic image as in the related art, obtaining the panoramic image by this mapping has lower computational complexity, which helps speed up data processing, so the panoramic image can be presented more quickly and its presentation efficiency improved.
In some optional implementations of this embodiment, presenting the plurality of images mapped in the image texture coordinate system of the reference device includes: rendering the panoramic image according to a preset image rendering and displaying step, and presenting the new panoramic image obtained after rendering. In practical applications, the new panoramic image obtained by rendering the panoramic image is usually a locally corrected image.
The image rendering and displaying step is generally a preset step that processes the image to make it suitable for display. As an example, the image rendering and displaying step may clip the panoramic image to a preset size. This implementation enables better presentation of the panoramic image.
Optionally, the image rendering and displaying step may include: performing a matrix product calculation on the panoramic image and the target matrix. Wherein the target matrix is a product of a predetermined model matrix, a view matrix and a projection matrix.
Here, the model matrix may be a preset matrix, typically used to transform object point coordinates from the object coordinate system to the world coordinate system; the object coordinate system is generally a coordinate system with the center point of the object as the origin. The view matrix may be a predetermined matrix, typically used to transform object point coordinates from the world coordinate system to the camera coordinate system; the camera coordinate system is generally a coordinate system with the optical center of the camera as the origin. The projection matrix may be a predetermined matrix, typically used to project object point coordinates from the camera coordinate system onto the screen. In practice, the model matrix and the view matrix may both be identity matrices, and the projection matrix may be chosen to select a field-of-view region with a 60-degree field angle. This implementation enables accurate presentation of the panoramic image and presentation of the field-of-view region expected by the user. In practical applications, the matrix product calculation on the panoramic image and the target matrix may be performed by multiplying each spherical vertex coordinate of the panoramic image by the target matrix.
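The rendering step described above can be sketched as follows. This is an illustrative sketch, assuming an OpenGL-style column-vector convention and a hypothetical perspective projection for a 60-degree field of view; the patent does not specify the matrix forms:

```python
import numpy as np

def target_matrix(model, view, projection):
    # The target matrix is the product of the model, view and projection matrices.
    return projection @ view @ model

def transform_vertex(vertex, mvp):
    """Apply the target (MVP) matrix to a homogeneous sphere-vertex coordinate."""
    v = mvp @ vertex
    return v / v[3]          # perspective divide

model = np.eye(4)            # identity, as in the embodiment above
view = np.eye(4)             # identity, as in the embodiment above

# Assumed OpenGL-style perspective projection with a 60-degree field angle.
fov, aspect, near, far = np.radians(60.0), 16 / 9, 0.1, 100.0
f = 1.0 / np.tan(fov / 2.0)
projection = np.array([
    [f / aspect, 0.0, 0.0, 0.0],
    [0.0, f, 0.0, 0.0],
    [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
    [0.0, 0.0, -1.0, 0.0],
])

vertex = np.array([0.0, 0.0, -5.0, 1.0])   # a sphere vertex in front of the camera
clip = transform_vertex(vertex, target_matrix(model, view, projection))
```

In a real renderer this product would be computed once per frame and applied to every spherical vertex coordinate of the panoramic image, typically in a vertex shader.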
In some optional implementations of this embodiment, there is one target device in the plurality of image capturing devices, and the reference device and the target device are two fisheye lenses installed back to back.
Here, the reference device and the target device are both fisheye lenses, mounted back to back. Because the angle of view of a fisheye lens is close to or equal to 180 degrees, the panoramic view can be monitored with the two fisheye lenses together. When the image acquisition devices are fisheye lenses, panoramic monitoring can thus be achieved with only two devices, reducing the number of devices that need to be installed and helping to save cost and installation space.
In some optional implementations of this embodiment, in response to that both the reference device and the target device are fisheye lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined as follows:
In the first step, a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device is determined according to a first conversion parameter between the spherical coordinate system of the target device and the world coordinate system, a second conversion parameter between the spherical coordinate system of the reference device and the world coordinate system, and a preset parameter calculation formula.
Here, for the fisheye lens, the spherical coordinate system is generally a coordinate system in which a lens curved surface of the fisheye lens is located. The first conversion parameter generally refers to a rotation matrix and a translation vector corresponding to a spherical coordinate system converted from a world coordinate system to a target device. The second conversion parameter generally refers to a rotation matrix and a translation vector corresponding to a spherical coordinate system converted from a world coordinate system to a reference device. The relative transformation parameters generally refer to a rotation matrix and a translation vector corresponding to the transformation from the spherical coordinate system of the target device to the spherical coordinate system of the reference device.
The above parameter calculation formula is usually a preset formula. Optionally, the parameter calculation formula may include:
R = R₂ × R₁⁻¹
T = T₂ − R₂ × R₁⁻¹ × T₁
where R is the rotation matrix in the relative conversion parameters, R₂ is the rotation matrix in the second conversion parameter, R₁ is the rotation matrix in the first conversion parameter, R₁⁻¹ is the inverse of the rotation matrix in the first conversion parameter, T is the translation vector in the relative conversion parameters, T₁ is the translation vector in the first conversion parameter, and T₂ is the translation vector in the second conversion parameter. Here, × denotes the matrix product.
According to the parameter calculation formula, the rotation matrix in the relative conversion parameter can be calculated by the product of the rotation matrix in the second conversion parameter and the inverse matrix of the rotation matrix in the first conversion parameter. The translation vector in the relative conversion parameter can be obtained by subtracting the product of the rotation matrix in the second conversion parameter, the inverse matrix of the rotation matrix in the first conversion parameter and the translation vector in the first conversion parameter from the translation vector in the second conversion parameter.
Here, for the above parameter calculation formula, the derivation process may be as follows:
Let the two fisheye lenses be A and B, and consider a point in the overlap region of A and B whose coordinate in the world coordinate system is X, whose coordinate projected into the spherical coordinate system of A is X₁, and whose coordinate projected into the spherical coordinate system of B is X₂. Let the conversion parameters from the world coordinate system to the spherical coordinate system of A be (R₁, T₁), where R₁ is a rotation matrix and T₁ is a translation vector, and let the conversion parameters from the world coordinate system to the spherical coordinate system of B be (R₂, T₂), where R₂ is a rotation matrix and T₂ is a translation vector.
Then X₁ = R₁ × X + T₁ and X₂ = R₂ × X + T₂, so X = R₁⁻¹ × (X₁ − T₁), and therefore X₂ = R₂ × R₁⁻¹ × X₁ + (T₂ − R₂ × R₁⁻¹ × T₁).
Thus the conversion parameters from the spherical coordinate system of A to the spherical coordinate system of B are (R, T), where R = R₂ × R₁⁻¹ and T = T₂ − R₂ × R₁⁻¹ × T₁.
and secondly, determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameters, a third conversion parameter from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device and a fourth conversion parameter from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
The third conversion parameter is generally a rotation matrix and a translation vector corresponding to the conversion from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device. The fourth conversion parameters are typically a rotation matrix and a translation vector corresponding to the conversion from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
Here, since the internal parameters of a camera can be uniquely determined once the camera is calibrated, the conversion parameters from the spherical coordinate system of the camera to its image texture coordinate system can also be uniquely determined. In practice, for a fisheye lens, the following imaging relations generally hold:
x′ = x/z, y′ = y/z
r² = x′² + y′², θ = atan(r)
θ′ = θ(1 + k₁θ² + k₂θ⁴ + k₃θ⁶ + k₄θ⁸)
x″ = (θ′/r)x′, y″ = (θ′/r)y′
u = fx·x″ + cx, v = fy·y″ + cy
where {X; Y; Z} are the coordinates of a point in space in the world coordinate system, {x; y; z} are the coordinates of the point projected into the spherical coordinate system via R and T, the rotation matrix and translation vector from the world coordinate system to the spherical coordinate system, and {u; v} are the coordinates mapped into the image texture coordinate system. (fx, fy, cx, cy) are the camera parameters and (k₁, k₂, k₃, k₄) are the distortion parameters; together they form the camera's internal parameters, which are usually obtained by calibrating the camera. The calibration method is not limited in this embodiment.
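These imaging relations resemble the OpenCV-style fisheye model. The sketch below assumes that model, with hypothetical intrinsic and distortion parameters standing in for real calibration output:

```python
import numpy as np

def fisheye_project(X, R, T, fx, fy, cx, cy, k1, k2, k3, k4):
    """Project a world point into the image texture coordinate system
    using the fisheye imaging relations above."""
    x, y, z = R @ np.asarray(X, dtype=float) + T   # world -> spherical (camera) frame
    a, b = x / z, y / z                            # normalized coordinates x', y'
    r = np.sqrt(a * a + b * b)
    theta = np.arctan(r)
    theta_d = theta * (1 + k1 * theta**2 + k2 * theta**4
                         + k3 * theta**6 + k4 * theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0      # guard the optical-axis case
    xd, yd = scale * a, scale * b                  # distorted x'', y''
    return fx * xd + cx, fy * yd + cy

# Hypothetical internal parameters from a calibration run.
fx = fy = 320.0
cx = cy = 320.0
k = (0.01, -0.002, 0.0, 0.0)

u, v = fisheye_project([0.0, 0.0, 1.0], np.eye(3), np.zeros(3), fx, fy, cx, cy, *k)
# A point on the optical axis lands at the principal point (cx, cy).
```

Combining this projection (the fourth conversion parameter), its counterpart for the reference lens (the third), and the relative conversion parameter yields the texture-to-texture mapping described above.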
Here, when the relative conversion parameter, the third conversion parameter, and the fourth conversion parameter are known, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device can be obtained.
In some optional implementation manners of this embodiment, if the reference device and the target device are both planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined as follows:
The mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
Here, a planar lens generally follows pinhole (linear) imaging, and the conversion relationship from the world coordinate system to the image texture coordinate system of the camera can be determined using existing technology or technology developed in the future. Therefore, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device can be determined based on the conversion parameters from the world coordinate system to the image texture coordinate system of the reference device and the conversion parameters from the world coordinate system to the image texture coordinate system of the target device. Here, the operation of determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is substantially the same as the operation of determining the relative conversion parameter between the spherical coordinate systems of the two fisheye lenses, and details are not repeated here.
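As an illustrative sketch (not the patent's exact construction): if the two planar-lens cameras are assumed to differ by a pure rotation, their world-to-texture conversion parameters compose into a single 3x3 homography H = K_ref · R_rel · K_tgt⁻¹ that maps target texture coordinates to reference texture coordinates. All names below are hypothetical:

```python
def matmul3(A, B):
    """3x3 matrix product over row-major nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def intrinsics(fx, fy, cx, cy):
    """Pinhole intrinsic matrix K built from the camera parameters."""
    return [[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]

def intrinsics_inv(fx, fy, cx, cy):
    # Closed-form inverse of the upper-triangular intrinsic matrix.
    return [[1.0 / fx, 0.0, -cx / fx],
            [0.0, 1.0 / fy, -cy / fy],
            [0.0, 0.0, 1.0]]

def texture_mapping(K_ref, R_rel, K_tgt_inv):
    """Homography H = K_ref * R_rel * K_tgt^-1 (pure-rotation assumption)."""
    return matmul3(K_ref, matmul3(R_rel, K_tgt_inv))

def apply_h(H, u, v):
    """Apply homography H to pixel (u, v) with homogeneous divide."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w
```

With identical intrinsics and an identity relative rotation, the homography reduces to the identity, so every target pixel maps to the same reference pixel, as expected.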
With further reference to fig. 2, as an implementation of the method shown in fig. 1, the present disclosure provides an embodiment of an image rendering apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which may specifically be applied to various electronic devices.
As shown in fig. 2, the image presenting apparatus of the present embodiment includes: the image acquisition unit 201 is configured to acquire images acquired by a plurality of image acquisition devices, wherein one image acquisition device of the plurality of image acquisition devices is a reference device, the image acquisition devices except the reference device are target devices, the image acquisition ranges of the image acquisition devices are different, and the image acquisition ranges of every two adjacent image acquisition devices have an overlapping area; a coordinate mapping unit 202 configured to map, for a target device of the plurality of image capturing devices, an image captured by the target device into an image texture coordinate system of a reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; an image rendering unit 203 configured to render a plurality of images mapped in an image texture coordinate system of the reference device, wherein the plurality of images mapped in the image texture coordinate system of the reference device are panoramic images.
In some optional implementations of this embodiment, presenting the plurality of images mapped in the image texture coordinate system of the reference device includes: performing image rendering on the panoramic image according to a preset image rendering and displaying step, and presenting the new panoramic image obtained after rendering.
In some optional implementations of this embodiment, the image rendering and displaying step includes: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is a product of a predetermined model matrix, a view matrix and a projection matrix.
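The matrix product step can be sketched as follows, assuming the common column-vector convention in which a vertex is transformed by projection · view · model (the function names and the translation example are illustrative, not from the patent):

```python
def matmul4(A, B):
    """4x4 matrix product over row-major nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def identity4():
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def translate4(tx, ty, tz):
    """Homogeneous translation matrix (column-vector convention)."""
    M = identity4()
    M[0][3], M[1][3], M[2][3] = tx, ty, tz
    return M

def target_matrix(model, view, projection):
    # Target matrix = projection * view * model.  Precomputing the
    # product lets every vertex of the panorama's render geometry be
    # transformed with a single matrix multiply.
    return matmul4(projection, matmul4(view, model))
```

With identity view and projection matrices, the target matrix equals the model matrix, which makes the composition order easy to verify.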
In some optional implementations of this embodiment, there is one target device in the plurality of image capturing devices, and the reference device and the target device are two fisheye lenses installed back to back.
In some optional implementations of this embodiment, in response to both the reference device and the target device being fisheye lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined as follows: determining a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device according to a first conversion parameter between the spherical coordinate system of the target device and the world coordinate system, a second conversion parameter between the spherical coordinate system of the reference device and the world coordinate system, and a preset parameter calculation formula; and determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameter, a third conversion parameter from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device, and a fourth conversion parameter from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
In some optional implementations of this embodiment, the parameter calculation formula includes:
R = R2·R1⁻¹, T = T2 - R2·R1⁻¹·T1

wherein R is the rotation matrix in the relative conversion parameters, R2 is the rotation matrix in the second conversion parameter, R1 is the rotation matrix in the first conversion parameter, R1⁻¹ is the inverse of the rotation matrix in the first conversion parameter, T is the translation vector in the relative conversion parameters, T1 is the translation vector in the first conversion parameter, and T2 is the translation vector in the second conversion parameter.
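Under the standard world-to-camera convention P_cam = R·P_world + T, the relative conversion parameters can be computed from the first and second conversion parameters as sketched below (function and variable names are illustrative assumptions, not the patent's API):

```python
def matmul3(A, B):
    """3x3 matrix product over row-major nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec3(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose3(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def relative_params(R1, T1, R2, T2):
    """Relative conversion parameters taking the target camera's spherical
    coordinate system into the reference camera's.  For a rotation matrix
    the inverse equals the transpose, so R1^-1 = R1^T."""
    R1_inv = transpose3(R1)
    R = matmul3(R2, R1_inv)                 # R = R2 * R1^-1
    RT1 = matvec3(R, T1)
    T = [T2[i] - RT1[i] for i in range(3)]  # T = T2 - R2 * R1^-1 * T1
    return R, T
```

For example, if both cameras share the same orientation and the target camera is offset by T1 = (1, 0, 0), the relative rotation is the identity and the relative translation is (-1, 0, 0).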
In some optional implementations of this embodiment, in response to both the reference device and the target device being planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined as follows: the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
According to the apparatus provided by the above embodiment of the present disclosure, since the image capturing ranges of the image capturing devices are different and the image capturing ranges of every two adjacent image capturing devices have an overlapping area, the images captured by the plurality of image capturing devices can be combined into a panoramic image; that is, monitoring of a panoramic view can be achieved. In addition, when combining the obtained images into a panoramic image, all of the images are mapped onto the same coordinate system, namely the image texture coordinate system of the reference device, to obtain the panoramic image. Compared with stitching a plurality of images together as in the related art, obtaining the panoramic image by this mapping method has lower computational complexity, which helps accelerate data processing, so that the panoramic image can be presented more quickly and the presentation efficiency of the panoramic image can be improved.
Referring now to FIG. 3, shown is a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device may include a processing device (e.g., a Central Processing Unit (CPU), a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage device 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic device are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, image capture device, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure. It should be noted that the computer readable medium of the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the steps of: acquiring images acquired by a plurality of image acquisition devices, wherein one image acquisition device in the plurality of image acquisition devices is a reference device, the image acquisition devices except the reference device are target devices, the image acquisition ranges of the image acquisition devices are different, and the image acquisition ranges of every two adjacent image acquisition devices have an overlapping area; aiming at a target device in a plurality of image acquisition devices, mapping an image acquired by the target device to an image texture coordinate system of a reference device based on a predetermined mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and presenting a plurality of images mapped in the image texture coordinate system of the reference device, wherein the plurality of images mapped in the image texture coordinate system of the reference device are panoramic images.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor comprises an image acquisition unit, a coordinate mapping unit and an image presentation unit. The names of these units do not in some cases constitute a limitation on the units themselves; for example, the image acquisition unit may also be described as a "unit that acquires images captured by a plurality of image capturing apparatuses".
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above features, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Claims (10)
1. An image rendering method, wherein the method comprises:
acquiring images acquired by a plurality of image acquisition devices, wherein one image acquisition device in the plurality of image acquisition devices is a reference device, the image acquisition devices except the reference device are target devices, the image acquisition ranges of the image acquisition devices are different, and the image acquisition ranges of every two adjacent image acquisition devices have an overlapping area;
for a target device in the plurality of image acquisition devices, mapping an image acquired by the target device to an image texture coordinate system of the reference device based on a predetermined mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device;
presenting a plurality of images mapped in an image texture coordinate system of the reference device, wherein the plurality of images mapped in the image texture coordinate system of the reference device are panoramic images.
2. The method of claim 1, wherein the rendering a plurality of images mapped in an image texture coordinate system of the reference device comprises:
performing image rendering on the panoramic image according to a preset image rendering and displaying step, and presenting the new panoramic image obtained after rendering.
3. The method of claim 2, wherein the image rendering and displaying step comprises: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is a product of a predetermined model matrix, a view matrix and a projection matrix.
4. The method of claim 1, wherein there is one target device in the plurality of image capture devices, and the reference device and the target device are two fisheye lenses mounted back-to-back.
5. The method according to one of claims 1 to 4, wherein, in response to the reference device and the target device both being fisheye lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by:
determining a relative conversion parameter between the spherical coordinate system of the reference equipment and the spherical coordinate system of the target equipment according to a first conversion parameter between the spherical coordinate system of the target equipment and the world coordinate system, a second conversion parameter between the spherical coordinate system of the reference equipment and the world coordinate system, and a preset parameter calculation formula;
and determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameter, a third conversion parameter from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device and a fourth conversion parameter from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
6. The method of claim 5, wherein the parameter calculation formula comprises:
R = R2·R1⁻¹, T = T2 - R2·R1⁻¹·T1

wherein R is the rotation matrix in the relative conversion parameters, R2 is the rotation matrix in the second conversion parameter, R1 is the rotation matrix in the first conversion parameter, R1⁻¹ is the inverse of the rotation matrix in the first conversion parameter, T is the translation vector in the relative conversion parameters, T1 is the translation vector in the first conversion parameter, and T2 is the translation vector in the second conversion parameter.
7. The method of claim 1, wherein in response to both the reference device and the target device being planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by:
determining the mapping relationship between the image texture coordinate system of the target equipment and the image texture coordinate system of the reference equipment according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference equipment and the conversion parameters between the world coordinate system and the image texture coordinate system of the target equipment.
8. An image rendering apparatus, wherein the apparatus comprises:
the image acquisition unit is configured to acquire images acquired by a plurality of image acquisition devices, wherein one image acquisition device in the plurality of image acquisition devices is a reference device, the image acquisition devices except the reference device are target devices, the image acquisition ranges of the image acquisition devices are different, and the image acquisition ranges of every two adjacent image acquisition devices have an overlapping area;
the coordinate mapping unit is configured to map an image acquired by a target device in the plurality of image acquisition devices to an image texture coordinate system of the reference device based on a predetermined mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device;
an image rendering unit configured to render a plurality of images mapped in an image texture coordinate system of the reference device, wherein the plurality of images mapped in the image texture coordinate system of the reference device are panoramic images.
9. An electronic device, comprising:
one or more processors; and
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110280992.8A CN113068006B (en) | 2021-03-16 | 2021-03-16 | Image presentation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113068006A true CN113068006A (en) | 2021-07-02 |
CN113068006B CN113068006B (en) | 2023-05-26 |
Family
ID=76560874
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110280992.8A Active CN113068006B (en) | 2021-03-16 | 2021-03-16 | Image presentation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113068006B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101710932A (en) * | 2009-12-21 | 2010-05-19 | 深圳华为通信技术有限公司 | Image stitching method and device |
CN105741341A (en) * | 2016-01-27 | 2016-07-06 | 桂林长海发展有限责任公司 | Three-dimensional space environment imaging system and method |
CN106157304A (en) * | 2016-07-01 | 2016-11-23 | 成都通甲优博科技有限责任公司 | A kind of Panoramagram montage method based on multiple cameras and system |
CN106355550A (en) * | 2016-10-31 | 2017-01-25 | 微景天下(北京)科技有限公司 | Image stitching system and image stitching method |
CN106651808A (en) * | 2016-12-29 | 2017-05-10 | 北京爱奇艺科技有限公司 | Fisheye image conversion method and device |
CN111540022A (en) * | 2020-05-14 | 2020-08-14 | 深圳市艾为智能有限公司 | Image uniformization method based on virtual camera |
CN111754381A (en) * | 2019-03-26 | 2020-10-09 | 华为技术有限公司 | Graphics rendering method, apparatus, and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113068006B (en) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111127563A (en) | Combined calibration method and device, electronic equipment and storage medium | |
CN109741388B (en) | Method and apparatus for generating a binocular depth estimation model | |
CN107516294B (en) | Method and device for splicing images | |
CN109961522B (en) | Image projection method, device, equipment and storage medium | |
CN112073748B (en) | Panoramic video processing method and device and storage medium | |
CN110728622B (en) | Fisheye image processing method, device, electronic equipment and computer readable medium | |
CN110781823B (en) | Screen recording detection method and device, readable medium and electronic equipment | |
CN110062157B (en) | Method and device for rendering image, electronic equipment and computer readable storage medium | |
CN109829447B (en) | Method and device for determining a three-dimensional frame of a vehicle | |
CN111325792B (en) | Method, apparatus, device and medium for determining camera pose | |
CN113256742B (en) | Interface display method and device, electronic equipment and computer readable medium | |
CN111766951A (en) | Image display method and apparatus, computer system, and computer-readable storage medium | |
CN112001912A (en) | Object detection method and device, computer system and readable storage medium | |
CN114125411B (en) | Projection device correction method, projection device correction device, storage medium and projection device | |
CN115409696A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN109816791B (en) | Method and apparatus for generating information | |
CN113068006B (en) | Image presentation method and device | |
CN109688381B (en) | VR monitoring method, device, equipment and storage medium | |
CN109840059B (en) | Method and apparatus for displaying image | |
KR102076635B1 (en) | Apparatus and method for generating panorama image for scattered fixed cameras | |
CN111383337B (en) | Method and device for identifying objects | |
CN115170395A (en) | Panoramic image stitching method, panoramic image stitching device, electronic equipment, panoramic image stitching medium and program product | |
CN111263115B (en) | Method, apparatus, electronic device, and computer-readable medium for presenting images | |
CN111540009A (en) | Method, apparatus, electronic device, and medium for generating detection information | |
CN115760964B (en) | Method and equipment for acquiring screen position information of target object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||