CN113068006B - Image presentation method and device - Google Patents

Image presentation method and device

Info

Publication number
CN113068006B
Authority
CN
China
Prior art keywords
coordinate system
image
reference device
texture coordinate
target device
Prior art date
Legal status
Active
Application number
CN202110280992.8A
Other languages
Chinese (zh)
Other versions
CN113068006A (en)
Inventor
田池
谭文胜
周德志
Current Assignee
Zhuhai Vyagoo Technology Co ltd
Original Assignee
Zhuhai Vyagoo Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Vyagoo Technology Co ltd
Priority to CN202110280992.8A
Publication of CN113068006A
Application granted
Publication of CN113068006B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G06T 3/08
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00: Registering or indicating the working of vehicles
    • G07C 5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time

Abstract

The embodiments of the present disclosure disclose an image presentation method and device. One embodiment of the method comprises: acquiring images acquired by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the respective image acquisition devices differ, and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area; for each target device of the plurality of image acquisition devices, mapping the image acquired by the target device under the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and presenting the plurality of images mapped under the image texture coordinate system of the reference device, wherein the plurality of images mapped under the image texture coordinate system of the reference device constitute a panoramic image.

Description

Image presentation method and device
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to an image presentation method and apparatus.
Background
A vehicle recorder is generally an electronic device that records driving data, such as images and/or sounds, while a vehicle is being driven. Installing a vehicle recorder in a vehicle makes it possible to provide evidence in the event of a traffic accident and thus to determine responsibility for the accident.
In the related art, an electronic monitoring device for monitoring traffic data, such as a vehicle recorder, benefits from a larger monitoring field of view, which makes it easier to trace responsibility for traffic accidents. Accordingly, there is a need in the related art for panoramic monitoring by electronic monitoring devices.
Disclosure of Invention
The embodiment of the disclosure provides an image presentation method and device.
In a first aspect, embodiments of the present disclosure provide an image presentation method, the method including: acquiring images acquired by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the respective image acquisition devices differ, and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area; for each target device of the plurality of image acquisition devices, mapping the image acquired by the target device under the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and presenting the plurality of images mapped under the image texture coordinate system of the reference device, wherein the plurality of images mapped under the image texture coordinate system of the reference device constitute a panoramic image.
In some embodiments, presenting the plurality of images mapped under the image texture coordinate system of the reference device includes: rendering the panoramic image according to a preset image rendering and display step, and presenting the new panoramic image obtained after rendering.
In some embodiments, the image rendering and display step includes: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is the product of a predetermined model matrix, view matrix, and projection matrix.
In some embodiments, there is one target device among the plurality of image acquisition devices, and the reference device and the target device are two fisheye lenses mounted back-to-back.
In some embodiments, in response to the reference device and the target device being fisheye lenses, a mapping relationship between an image texture coordinate system of the target device and an image texture coordinate system of the reference device is determined by: determining a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device according to a first conversion parameter between the world coordinate system and the spherical coordinate system of the target device, a second conversion parameter between the world coordinate system and the spherical coordinate system of the reference device and a preset parameter calculation formula; and determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameters, the third conversion parameters from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device and the fourth conversion parameters from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
In some embodiments, the parameter calculation formula includes:

R = R_2 × R_1^(-1)

T = T_2 - R_2 × R_1^(-1) × T_1

where R is the rotation matrix in the relative conversion parameters, R_2 is the rotation matrix in the second conversion parameters, R_1 is the rotation matrix in the first conversion parameters, R_1^(-1) is the inverse of the rotation matrix in the first conversion parameters, T is the translation vector in the relative conversion parameters, T_1 is the translation vector in the first conversion parameters, and T_2 is the translation vector in the second conversion parameters.
In some embodiments, in response to the reference device and the target device both being planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by: determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
In a second aspect, embodiments of the present disclosure provide an image presentation apparatus, the apparatus comprising: an image acquisition unit configured to acquire images acquired by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the respective image acquisition devices differ, and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area; a coordinate mapping unit configured to map, for each target device of the plurality of image acquisition devices, the image acquired by the target device under the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and an image presentation unit configured to present the plurality of images mapped under the image texture coordinate system of the reference device, wherein the plurality of images mapped under the image texture coordinate system of the reference device constitute a panoramic image.
In some embodiments, the image presentation unit is further configured to: render the panoramic image according to a preset image rendering and display step, and present the new panoramic image obtained after rendering.
In some embodiments, the image rendering and display step includes: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is the product of a predetermined model matrix, view matrix, and projection matrix.
In some embodiments, there is one target device among the plurality of image acquisition devices, and the reference device and the target device are two fisheye lenses mounted back-to-back.
In some embodiments, in response to the reference device and the target device being fisheye lenses, a mapping relationship between an image texture coordinate system of the target device and an image texture coordinate system of the reference device is determined by: determining a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device according to a first conversion parameter between the world coordinate system and the spherical coordinate system of the target device, a second conversion parameter between the world coordinate system and the spherical coordinate system of the reference device and a preset parameter calculation formula; and determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameters, the third conversion parameters from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device and the fourth conversion parameters from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
In some embodiments, the parameter calculation formula includes:

R = R_2 × R_1^(-1)

T = T_2 - R_2 × R_1^(-1) × T_1

where R is the rotation matrix in the relative conversion parameters, R_2 is the rotation matrix in the second conversion parameters, R_1 is the rotation matrix in the first conversion parameters, R_1^(-1) is the inverse of the rotation matrix in the first conversion parameters, T is the translation vector in the relative conversion parameters, T_1 is the translation vector in the first conversion parameters, and T_2 is the translation vector in the second conversion parameters.
In some embodiments, in response to the reference device and the target device both being planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by: determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
In a third aspect, embodiments of the present disclosure provide an electronic device comprising: one or more processors; a storage device having one or more programs stored thereon; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the implementations of the first aspect.
According to the image presentation method and device provided by the embodiments of the present disclosure, since the image acquisition ranges of the respective image acquisition devices differ and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area, the plurality of images acquired by the plurality of image acquisition devices can be combined into a panoramic image; that is, panoramic monitoring can be realized. In addition, when combining the acquired images into a panoramic image, all images are mapped onto the same coordinate system by mapping each image under the image texture coordinate system of the reference device. Compared with stitching a plurality of images together as in the related art, obtaining the panoramic image by mapping has lower computational complexity and helps accelerate data processing, so the panoramic image can be presented faster and the presentation efficiency is improved. It should be noted that a panoramic image obtained by mapping all images onto the same coordinate system is typically a spherical-projection panoramic image.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is a flow chart of an image presentation method provided by an embodiment of the present disclosure;
fig. 2 is a schematic structural view of an image presentation apparatus provided in an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows the flow of one embodiment of an image presentation method according to the present disclosure. The image presentation method comprises the following steps:
step 101, acquiring images acquired by a plurality of image acquisition devices.
One image acquisition device of the plurality of image acquisition devices is a reference device, the image acquisition devices except the reference device are target devices, the image acquisition ranges of the image acquisition devices are different, and the image acquisition ranges of every two image acquisition devices which are adjacently distributed have an overlapping area.
An image acquisition device is typically a device having an image acquisition function. The reference device is typically the image acquisition device used as a reference. The target devices are typically the image acquisition devices other than the reference device among the plurality of image acquisition devices described above. In practical applications, one image acquisition device may be selected from the plurality of image acquisition devices as the reference device; this embodiment does not limit how the reference device is selected.
In this embodiment, the execution body of the image presentation method may be a monitoring-type electronic device, such as a vehicle recorder. The execution body may acquire the images from each image acquisition device via a wired or wireless connection. In practice, if the execution body is a vehicle recorder, the image acquisition devices may be built into the vehicle recorder or communicatively connected to it.
Step 102, for a target device in the plurality of image acquisition devices, mapping an image acquired by the target device to an image texture coordinate system of a reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device.
In practice, the image texture coordinate system of an image acquisition device generally refers to the coordinate system in which the imaging plane of the image acquisition device is located.
Here, the above-described execution body may determine in advance, for each target device, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device. The image acquired by the target device can then be mapped under the image texture coordinate system of the reference device based on this mapping relationship. For example, suppose the mapping relationship is given by a rotation matrix R and a translation vector T from the image texture coordinate system of the target device to the image texture coordinate system of the reference device. Then, when the image acquired by the target device is mapped under the image texture coordinate system of the reference device, any image texture coordinate point M in the image maps to the point N = R × M + T in the image texture coordinate system of the reference device.
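By way of illustration only (this sketch is not part of the patent text; the array shapes and the example values of R and T are our own assumptions), the point-wise mapping N = R × M + T can be applied to a batch of texture coordinates with a few lines of numpy:

```python
import numpy as np

def map_to_reference(points, R, T):
    """Map texture coordinates M (shape (n, 2)) of the target device to
    N = R @ M + T in the reference device's texture coordinate system."""
    points = np.asarray(points, dtype=np.float64)
    return points @ R.T + T  # row-vector form of N = R x M + T

# Hypothetical example: identity rotation, shift right by 0.25.
R = np.eye(2)
T = np.array([0.25, 0.0])
pts = np.array([[0.1, 0.2], [0.5, 0.5]])
print(map_to_reference(pts, R, T))  # [[0.35 0.2 ] [0.75 0.5 ]]
```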
Step 103, presenting a plurality of images mapped under an image texture coordinate system of a reference device.
Wherein the plurality of images mapped under the image texture coordinate system of the reference device are panoramic images. In practice, the panoramic image is typically a spherically projected panoramic image.
Here, after the images acquired by all the target devices are mapped under the image texture coordinate system of the reference device, a panoramic image is obtained. After obtaining the panoramic image, the execution body may visually present it using existing technology or technology developed in the future.
According to the method provided by this embodiment, since the image acquisition ranges of the respective image acquisition devices differ and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area, the plurality of images acquired by the plurality of image acquisition devices can be combined into a panoramic image; that is, panoramic monitoring can be realized. In addition, when combining the acquired images into a panoramic image, all images are mapped onto the same coordinate system by mapping each image under the image texture coordinate system of the reference device. Compared with stitching a plurality of images together as in the related art, obtaining the panoramic image by mapping has lower computational complexity and helps accelerate data processing, so the panoramic image can be presented faster and the presentation efficiency is improved.
In some optional implementations of this embodiment, presenting the plurality of images mapped under the image texture coordinate system of the reference device includes: rendering the panoramic image according to a preset image rendering and display step, and presenting the new panoramic image obtained after rendering. In practical applications, the new panoramic image obtained by rendering the panoramic image is usually a locally corrected image.
The image rendering and display step is generally a preset step for processing an image so that it is suitable for display. As an example, the image rendering and display step may crop the panoramic image to a preset size. This implementation enables better presentation of the panoramic image.
Optionally, the image rendering and display step may include: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is the product of a predetermined model matrix, view matrix, and projection matrix.
Here, the model matrix may be a preset matrix. A model matrix is typically used to transform object point coordinates from the object coordinate system to the world coordinate system, where the object coordinate system usually takes the center point of the object as its origin. The view matrix may be a preset matrix. A view matrix is typically used to transform object point coordinates from the world coordinate system to the camera coordinate system, where the camera coordinate system usually takes the optical center of the camera as its origin. The projection matrix may be a predetermined matrix. A projection matrix is typically used to project object point coordinates from the camera coordinate system onto the screen. In practice, the model matrix and the view matrix may each be an identity matrix, and the projection matrix may be a matrix that selects a field-of-view region with a field angle of 60 degrees. This implementation enables accurate presentation of the panoramic image and presentation of the field-of-view region expected by the user. In practical applications, the matrix product calculation may be performed by computing the matrix product of each spherical vertex coordinate of the panoramic image with the target matrix.
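As a minimal sketch of this rendering step (not taken from the patent; the perspective() helper is a standard OpenGL-style projection, and the aspect ratio and clipping planes are our assumptions), the target matrix can be composed and applied to a spherical vertex as follows:

```python
import numpy as np

def perspective(fov_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

model = np.eye(4)                       # identity, as in the text above
view = np.eye(4)                        # identity, as in the text above
projection = perspective(60.0, 16 / 9, 0.1, 100.0)  # 60-degree field angle
target = projection @ view @ model      # the "target matrix"

# Apply the target matrix to one spherical vertex (homogeneous coordinates).
vertex = np.array([0.0, 0.0, -1.0, 1.0])
clip = target @ vertex
print(clip[:3] / clip[3])               # normalized device coordinates
```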
In some optional implementations of this embodiment, there is one target device among the plurality of image acquisition devices, and the reference device and the target device are two fisheye lenses mounted back-to-back.
Here, the reference device and the target device are both fisheye lenses, mounted back-to-back. Since the viewing angle of a fisheye lens is close to or equal to 180 degrees, panoramic monitoring can be achieved with the two fisheye lenses. When the image acquisition devices are fisheye lenses, only two image acquisition devices are needed to monitor the panorama, which reduces the number of image acquisition devices to be installed and saves cost and installation space.
In some optional implementations of this embodiment, in response to the reference device and the target device being fisheye lenses, a mapping relationship between an image texture coordinate system of the target device and an image texture coordinate system of the reference device is determined by:
step one, determining a relative conversion parameter between a spherical coordinate system of the reference device and a spherical coordinate system of the target device according to a first conversion parameter between the world coordinate system and the spherical coordinate system of the target device, a second conversion parameter between the world coordinate system and the spherical coordinate system of the reference device and a preset parameter calculation formula.
Here, for a fisheye lens, the spherical coordinate system is generally the coordinate system in which the lens curved surface of the fisheye lens is located. The first transformation parameters generally refer to a rotation matrix and translation vector corresponding to a spherical coordinate system transformed from the world coordinate system to the target device. The second transformation parameter generally refers to a rotation matrix and translation vector corresponding to the spherical coordinate system transformed from the world coordinate system to the reference device. The relative transformation parameters generally refer to a rotation matrix and translation vector corresponding to the transformation from the spherical coordinate system of the target device to the spherical coordinate system of the reference device.
The parameter calculation formula is generally a preset formula. Optionally, the above parameter calculation formula may include:

R = R_2 × R_1^(-1)

T = T_2 - R_2 × R_1^(-1) × T_1

where R is the rotation matrix in the relative conversion parameters, R_2 is the rotation matrix in the second conversion parameters, R_1 is the rotation matrix in the first conversion parameters, R_1^(-1) is the inverse of the rotation matrix in the first conversion parameters, T is the translation vector in the relative conversion parameters, T_1 is the translation vector in the first conversion parameters, T_2 is the translation vector in the second conversion parameters, and × denotes the matrix product.
According to the parameter calculation formula, the rotation matrix in the relative conversion parameter can be calculated by multiplying the rotation matrix in the second conversion parameter by the inverse matrix of the rotation matrix in the first conversion parameter. The translation vector in the relative conversion parameter may be obtained by subtracting the product of the rotation matrix in the second conversion parameter, the inverse of the rotation matrix in the first conversion parameter, and the translation vector in the first conversion parameter from the translation vector in the second conversion parameter.
Here, for the above parameter calculation formula, the derivation process may be as follows:
Suppose the two fisheye lenses are A and B, and a point lies in the overlapping area of A and B. Let the coordinate of this point in the world coordinate system be X, its coordinate projected into the spherical coordinate system of A be X_1, and its coordinate projected into the spherical coordinate system of B be X_2. Let the conversion parameters from the world coordinate system to the spherical coordinate system of A be (R_1, T_1), where R_1 is a rotation matrix and T_1 is a translation vector, and the conversion parameters from the world coordinate system to the spherical coordinate system of B be (R_2, T_2), where R_2 is a rotation matrix and T_2 is a translation vector.

Then X_1 = R_1 × X + T_1 and X_2 = R_2 × X + T_2, from which it follows that:

X_2 = R_2 × R_1^(-1) × (X_1 - T_1) + T_2 = R_2 × R_1^(-1) × X_1 + (T_2 - R_2 × R_1^(-1) × T_1)

In this way, the conversion parameters from the spherical coordinate system of A to the spherical coordinate system of B are obtained as (R, T), where:

R = R_2 × R_1^(-1), T = T_2 - R_2 × R_1^(-1) × T_1
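A small numpy check of this derivation (an illustration we add here, not code from the patent): given world-to-A extrinsics (R_1, T_1) and world-to-B extrinsics (R_2, T_2), the A-to-B transform computed as above maps X_1 to X_2 exactly:

```python
import numpy as np

def relative_transform(R1, T1, R2, T2):
    """A-to-B transform: R = R2 @ inv(R1), T = T2 - R2 @ inv(R1) @ T1."""
    R1_inv = np.linalg.inv(R1)  # for a rotation matrix, inv equals transpose
    return R2 @ R1_inv, T2 - R2 @ R1_inv @ T1

rng = np.random.default_rng(0)

def random_rotation():
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.linalg.det(q))  # force det = +1

R1, T1 = random_rotation(), rng.normal(size=3)
R2, T2 = random_rotation(), rng.normal(size=3)
X = rng.normal(size=3)                # a world point in the overlap area
X1, X2 = R1 @ X + T1, R2 @ X + T2     # its spherical coordinates in A and B
R, T = relative_transform(R1, T1, R2, T2)
assert np.allclose(X2, R @ X1 + T)    # X2 = R x X1 + T holds
```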
and step two, determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameter, the third conversion parameter from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device and the fourth conversion parameter from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
Wherein the third conversion parameter is usually a rotation matrix and a translation vector corresponding to the spherical coordinate system of the reference device to the image texture coordinate system of the reference device. The fourth transformation parameter is typically a rotation matrix and translation vector corresponding to the transformation from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
Here, once a camera is calibrated, its internal parameters are uniquely determined, so the conversion parameters corresponding to the conversion from the camera's spherical coordinate system to its image texture coordinate system can also be uniquely determined. In practice, for fisheye lenses, the following imaging relationship typically holds:

(x; y; z) = R × (X; Y; Z) + T

a = x/z, b = y/z

r^2 = a^2 + b^2, θ = atan(r)

θ′ = θ(1 + k_1 θ^2 + k_2 θ^4 + k_3 θ^6 + k_4 θ^8)

x′ = (θ′/r) × a, y′ = (θ′/r) × b

u = f_x × x′ + c_x, v = f_y × y′ + c_y

where {X; Y; Z} is the coordinate of a point in space under the world coordinate system, {x; y; z} is the coordinate of the point projected under the spherical coordinate system, {u; v} is the coordinate mapped under the image texture coordinate system, and R and T are the rotation matrix and translation vector, respectively, corresponding to the transformation from the world coordinate system to the spherical coordinate system. (f_x, f_y, c_x, c_y) are the camera parameters and (k_1, k_2, k_3, k_4) are the distortion parameters; the camera parameters and the distortion parameters together form the camera's internal parameters, which are generally obtained by calibrating the camera. This embodiment does not limit the manner of calibrating the camera.
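As a runnable sketch of this imaging relationship (the equidistant fisheye model as used in OpenCV; the intrinsic values below are made-up placeholders, not calibration results from the patent):

```python
import numpy as np

def fisheye_project(X_world, R, T, f, c, k):
    """Project a world point to the image texture coordinate system of a
    fisheye lens, following the imaging relationship above."""
    x, y, z = R @ X_world + T              # world -> spherical coordinate system
    a, b = x / z, y / z
    r = np.hypot(a, b)
    theta = np.arctan(r)
    theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4
                         + k[2] * theta**6 + k[3] * theta**8)
    scale = theta_d / r if r > 1e-12 else 1.0  # guard the on-axis case r ~ 0
    x_p, y_p = scale * a, scale * b
    return f[0] * x_p + c[0], f[1] * y_p + c[1]   # (u, v)

# Hypothetical intrinsics for a 1280x720 sensor, zero distortion.
R, T = np.eye(3), np.zeros(3)
u, v = fisheye_project(np.array([0.2, -0.1, 1.0]), R, T,
                       f=(400.0, 400.0), c=(640.0, 360.0), k=(0.0, 0.0, 0.0, 0.0))
print(u, v)  # pixel coordinates in the image texture coordinate system
```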
Here, when the relative conversion parameter, the third conversion parameter, and the fourth conversion parameter are known, a mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device may be obtained.
In some optional implementations of this embodiment, if the reference device and the target device are both planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined as follows:
and determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
Here, a planar lens typically follows the pinhole (linear) imaging model, so the conversion relationship between the world coordinate system and the camera's image texture coordinate system can be determined using existing technology or technology developed in the future. The mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device can then be determined from the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device. The operation of determining this mapping relationship is substantially the same as the operation of determining the relative conversion parameters between the spherical coordinate systems of the two fisheye lenses described above, and is not repeated here.
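One possible realization of this planar-lens mapping (our sketch, not the patent's: it additionally assumes the two pinhole cameras share a projection center, i.e. a pure rotation between them, so the mapping between texture coordinate systems reduces to a 3x3 homography):

```python
import numpy as np

def texture_mapping_homography(K_ref, R_ref, K_tgt, R_tgt):
    """Compose target -> world -> reference: H = K_ref R_ref R_tgt^T K_tgt^-1.
    K_* are 3x3 intrinsic matrices; R_* are world-to-camera rotations."""
    return K_ref @ R_ref @ R_tgt.T @ np.linalg.inv(K_tgt)

def apply_homography(H, uv):
    """Map a texture coordinate (u, v) of the target device into the
    reference device's image texture coordinate system."""
    p = H @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

# Hypothetical intrinsics/rotations for illustration.
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
H = texture_mapping_homography(K, np.eye(3), K, np.eye(3))
print(apply_homography(H, (100.0, 200.0)))  # identity setup -> (100, 200)
```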
With further reference to fig. 2, as an implementation of the method shown in fig. 1, the present disclosure provides an embodiment of an image presentation apparatus, which corresponds to the method embodiment shown in fig. 1 and is particularly applicable in various electronic devices.
As shown in fig. 2, the image presentation apparatus of this embodiment includes: an image acquisition unit 201 configured to acquire images acquired by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the respective image acquisition devices differ, and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area; a coordinate mapping unit 202 configured to map, for each target device of the plurality of image acquisition devices, the image acquired by the target device under the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and an image presentation unit 203 configured to present the plurality of images mapped under the image texture coordinate system of the reference device, wherein the plurality of images mapped under the image texture coordinate system of the reference device constitute a panoramic image.
In some optional implementations of this embodiment, presenting the plurality of images mapped under the image texture coordinate system of the reference device includes: rendering the panoramic image according to a preset image rendering and display step, and presenting the new panoramic image obtained after rendering.
In some optional implementations of this embodiment, the image rendering and display step includes: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is the product of a predetermined model matrix, view matrix, and projection matrix.
In some optional implementations of this embodiment, there is one target device among the plurality of image acquisition devices, and the reference device and the target device are two fisheye lenses mounted back-to-back.
In some optional implementations of this embodiment, in response to the reference device and the target device being fisheye lenses, a mapping relationship between an image texture coordinate system of the target device and an image texture coordinate system of the reference device is determined by: determining a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device according to a first conversion parameter between the world coordinate system and the spherical coordinate system of the target device, a second conversion parameter between the world coordinate system and the spherical coordinate system of the reference device and a preset parameter calculation formula; and determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameters, the third conversion parameters from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device and the fourth conversion parameters from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
In some optional implementations of this embodiment, the parameter calculation formula includes:

R = R_2 × R_1^(-1)

T = T_2 - R_2 × R_1^(-1) × T_1

where R is the rotation matrix in the relative conversion parameters, R_2 is the rotation matrix in the second conversion parameters, R_1 is the rotation matrix in the first conversion parameters, R_1^(-1) is the inverse of the rotation matrix in the first conversion parameters, T is the translation vector in the relative conversion parameters, T_1 is the translation vector in the first conversion parameters, and T_2 is the translation vector in the second conversion parameters.
In some optional implementations of this embodiment, in response to the reference device and the target device both being planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by: determining the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
According to the device provided by the embodiments of the present disclosure, since the image acquisition ranges of the respective image acquisition devices differ and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area, the plurality of images acquired by the plurality of image acquisition devices can be combined into a panoramic image; that is, panoramic monitoring can be realized. In addition, when combining the acquired images into a panoramic image, all images are mapped onto the same coordinate system by mapping each image under the image texture coordinate system of the reference device. Compared with stitching a plurality of images together as in the related art, obtaining the panoramic image by mapping has lower computational complexity and helps accelerate data processing, so the panoramic image can be presented faster and the presentation efficiency is improved.
Referring now to fig. 3, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device may include a processing means (e.g., a Central Processing Unit (CPU), a graphics processor, etc.) 301 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic device are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, image capture device, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device 309, or installed from a storage device 308, or installed from a ROM 302. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 301. It should be noted that the computer readable medium of the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire images acquired by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the respective image acquisition devices differ, and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area; for each target device of the plurality of image acquisition devices, map the image acquired by the target device under the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device; and present the plurality of images mapped under the image texture coordinate system of the reference device, wherein the plurality of images mapped under the image texture coordinate system of the reference device constitute a panoramic image.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, for example described as: a processor including an image acquisition unit, a coordinate mapping unit, and an image presentation unit. The names of these units do not in some cases limit the units themselves; for example, the image acquisition unit may also be described as "a unit that acquires images acquired by a plurality of image acquisition devices".
The foregoing description covers only the preferred embodiments of the present disclosure and explains the technical principles employed. Those skilled in the art will appreciate that the scope of the invention referred to in this disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.

Claims (7)

1. An image presentation method, wherein the method comprises:
acquiring images acquired by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the respective image acquisition devices differ, and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area;
for each target device of the plurality of image acquisition devices, mapping the image acquired by the target device under the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device;
presenting the plurality of images mapped under the image texture coordinate system of the reference device, wherein the plurality of images mapped under the image texture coordinate system of the reference device constitute a panoramic image, the panoramic image is rendered according to a preset image rendering and display step, and the new panoramic image obtained after rendering is presented, the image rendering and display step comprising: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is the product of a predetermined model matrix, a view matrix, and a projection matrix, and the new panoramic image is used to present a field-of-view region expected by the user;
wherein, in response to the reference device and the target device being fish-eye lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by the following method: determining a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device according to a first conversion parameter between the world coordinate system and the spherical coordinate system of the target device, a second conversion parameter between the world coordinate system and the spherical coordinate system of the reference device and a preset parameter calculation formula; and determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameter, the third conversion parameter from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device and the fourth conversion parameter from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
2. The method of claim 1, wherein there is one target device among the plurality of image acquisition devices, and the reference device and the target device are two fisheye lenses mounted back-to-back.
3. The method of claim 1, wherein the parameter calculation formula comprises:

R = R_2 × R_1^(-1)

T = T_2 - R_2 × R_1^(-1) × T_1

where R is the rotation matrix in the relative conversion parameters, R_2 is the rotation matrix in the second conversion parameters, R_1 is the rotation matrix in the first conversion parameters, R_1^(-1) is the inverse of the rotation matrix in the first conversion parameters, T is the translation vector in the relative conversion parameters, T_1 is the translation vector in the first conversion parameters, and T_2 is the translation vector in the second conversion parameters.
4. The method of claim 1, wherein, in response to the reference device and the target device both being planar lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by:
and determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the conversion parameters between the world coordinate system and the image texture coordinate system of the reference device and the conversion parameters between the world coordinate system and the image texture coordinate system of the target device.
5. An image presentation apparatus, wherein the apparatus comprises:
an image acquisition unit configured to acquire images acquired by a plurality of image acquisition devices, wherein one of the plurality of image acquisition devices is a reference device, the image acquisition devices other than the reference device are target devices, the image acquisition ranges of the respective image acquisition devices differ, and the image acquisition ranges of every two adjacently distributed image acquisition devices have an overlapping area;
a coordinate mapping unit configured to map, for each target device of the plurality of image acquisition devices, the image acquired by the target device under the image texture coordinate system of the reference device based on a predetermined mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device;
an image presentation unit configured to present the plurality of images mapped under the image texture coordinate system of the reference device, wherein the plurality of images mapped under the image texture coordinate system of the reference device constitute a panoramic image, the panoramic image is rendered according to a preset image rendering and display step, and the new panoramic image obtained after rendering is presented, the image rendering and display step comprising: performing a matrix product calculation on the panoramic image and a target matrix, wherein the target matrix is the product of a predetermined model matrix, a view matrix, and a projection matrix, and the new panoramic image is used to present a field-of-view region expected by the user;
wherein, in response to the reference device and the target device being fish-eye lenses, the mapping relationship between the image texture coordinate system of the target device and the image texture coordinate system of the reference device is determined by the following method: determining a relative conversion parameter between the spherical coordinate system of the reference device and the spherical coordinate system of the target device according to a first conversion parameter between the world coordinate system and the spherical coordinate system of the target device, a second conversion parameter between the world coordinate system and the spherical coordinate system of the reference device and a preset parameter calculation formula; and determining the mapping relation between the image texture coordinate system of the target device and the image texture coordinate system of the reference device according to the relative conversion parameter, the third conversion parameter from the spherical coordinate system of the reference device to the image texture coordinate system of the reference device and the fourth conversion parameter from the spherical coordinate system of the target device to the image texture coordinate system of the target device.
6. An electronic device, comprising:
one or more processors; and
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
7. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-4.
CN202110280992.8A 2021-03-16 2021-03-16 Image presentation method and device Active CN113068006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110280992.8A CN113068006B (en) 2021-03-16 2021-03-16 Image presentation method and device


Publications (2)

Publication Number Publication Date
CN113068006A (en) 2021-07-02
CN113068006B (en) 2023-05-26

Family

ID=76560874


Country Status (1)

Country Link
CN (1) CN113068006B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710932B (en) * 2009-12-21 2011-06-22 华为终端有限公司 Image stitching method and device
CN105741341B (en) * 2016-01-27 2018-11-06 桂林长海发展有限责任公司 A kind of three-dimensional space environment imaging system and method
CN106157304A (en) * 2016-07-01 2016-11-23 成都通甲优博科技有限责任公司 A kind of Panoramagram montage method based on multiple cameras and system
CN106355550B (en) * 2016-10-31 2024-04-09 河北鼎联科技有限公司 Image stitching system and image stitching method
CN106651808B (en) * 2016-12-29 2020-05-08 北京爱奇艺科技有限公司 Fisheye diagram conversion method and device
CN111754381A (en) * 2019-03-26 2020-10-09 华为技术有限公司 Graphics rendering method, apparatus, and computer-readable storage medium
CN111540022B (en) * 2020-05-14 2024-04-19 深圳市艾为智能有限公司 Image unification method based on virtual camera

Also Published As

Publication number Publication date
CN113068006A (en) 2021-07-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant