CN114549289A - Image processing method, image processing device, electronic equipment and computer storage medium


Info

Publication number
CN114549289A
Authority
CN
China
Prior art keywords: coordinate, spherical, fisheye, longitude, image
Prior art date
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202210145911.8A
Other languages
Chinese (zh)
Inventors
杨振伟, 陈东生, 黄寅涛, 韩殿飞, 赵汉玥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202210145911.8A
Publication of CN114549289A
Legal status: Withdrawn

Classifications

    • G06T3/067

Abstract

The embodiments of the disclosure provide an image processing method, an image processing apparatus, an electronic device, and a computer storage medium. The method includes: acquiring a fisheye image captured by a fisheye camera; converting the fisheye image into a spherical image in a spherical coordinate system according to a longitude-latitude expansion map corresponding to the fisheye image; converting the spherical image into a panoramic image in a screen coordinate system; determining, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image; determining the coordinates of the target pixel point in the fisheye image according to the longitude and latitude coordinates of the target spherical coordinate point and the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map; and processing the panoramic image according to the coordinates of the target pixel point in the fisheye image.

Description

Image processing method, image processing device, electronic equipment and computer storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer storage medium.
Background
The fisheye camera has the characteristics of wide field angle range and rich perception information, and has wide application in the fields of video monitoring, automatic driving, video conference, robot navigation and the like.
In the related art, a fisheye image captured by a fisheye camera can be converted into a panoramic image; however, how to determine, for a pixel point in the panoramic image, its coordinate position in the fisheye image remains a technical problem to be solved.
Disclosure of Invention
The embodiment of the disclosure provides a technical scheme of image processing.
The embodiment of the present disclosure provides an image processing method, including:
acquiring a fisheye image captured by a fisheye camera;
converting the fisheye image into a spherical image in a spherical coordinate system according to a longitude-latitude expansion map corresponding to the fisheye image;
converting the spherical image into a panoramic image in a screen coordinate system;
determining, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image; the target pixel point is any pixel point in the panoramic image;
determining the coordinates of the target pixel point in the fisheye image according to the longitude and latitude coordinates of the target spherical coordinate point and the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map;
and processing the panoramic image according to the coordinates of the target pixel point in the fisheye image.
In some embodiments, the determining the coordinates of the target pixel point in the fisheye image according to the longitude and latitude coordinates of the target spherical coordinate point and the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map includes:
determining, according to the longitude and latitude coordinates of the target spherical coordinate point, a coordinate point corresponding to the target spherical coordinate point in the longitude-latitude expansion map;
and determining the coordinates of the target pixel point in the fisheye image according to the coordinate point corresponding to the target spherical coordinate point in the longitude-latitude expansion map and the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map.
In this way, the coordinates of the target pixel point in the fisheye image can be accurately determined from the coordinate point corresponding to the target spherical coordinate point in the longitude-latitude expansion map and the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map.
In some embodiments, the method further comprises:
determining a connecting line passing through the origin of the spherical coordinate system and the target spherical coordinate point;
and determining the longitude and latitude coordinates of the target spherical coordinate point according to the angle between the connecting line and the coordinate axis of the spherical coordinate system.
It can be seen that the longitude and latitude coordinates of the target spherical coordinate point can be accurately determined based on the angle between the connecting line and the coordinate axes of the spherical coordinate system.
In some embodiments, the determining the longitude and latitude coordinates of the target spherical coordinate point according to an angle between the connection line and a coordinate axis of the spherical coordinate system includes:
determining the latitude of the target spherical coordinate point according to the angle between the connecting line and the y axis of the spherical coordinate system; determining the longitude of the target spherical coordinate point according to the angle between the projection of the connecting line on the horizontal plane and the z axis of the spherical coordinate system; the y axis is a coordinate axis in the vertical direction, and the z axis is a coordinate axis in the horizontal direction.
It can be understood that the latitude of the target spherical coordinate point can be determined accurately according to the angle between the connecting line and the coordinate axis in the vertical direction of the spherical coordinate system, and the longitude of the target spherical coordinate point can be determined accurately according to the angle between the connecting line and the coordinate axis in the horizontal direction of the spherical coordinate system.
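As an illustrative aid (not part of the patent), the geometric relationship above can be sketched in Python; the function name and the handling of the radius are our own assumptions:

```python
import math

def sphere_point_to_lat_lon(x, y, z):
    """Latitude/longitude of a spherical coordinate point, following the
    convention described above: y is the vertical axis and z a horizontal
    axis. Latitude comes from the angle between the connecting line (origin
    to point) and the y axis; longitude from the angle between the line's
    projection on the horizontal plane and the z axis."""
    r = math.sqrt(x * x + y * y + z * z)
    # Angle with the vertical y axis; shifting by pi/2 places the pole at +90 deg.
    lat = math.pi / 2 - math.acos(y / r)
    # Angle of the (x, z) projection measured from the z axis.
    lon = math.atan2(x, z)
    return lat, lon
```

For a point on the positive y axis this yields a latitude of 90°, consistent with y being the vertical axis.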
In some embodiments, the converting the spherical image into a panoramic image in a screen coordinate system includes:
converting the spherical image into the panoramic image according to a preset model view projection matrix and a viewport transformation matrix;
correspondingly, the determining, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image includes:
and determining, in the spherical image, the target spherical coordinate point corresponding to the target pixel point according to the model view projection matrix and the viewport transformation matrix. In this way, the target spherical coordinate point corresponding to the target pixel point can be determined directly and conveniently from the preset model view projection matrix and viewport transformation matrix, which is simple and easy to implement.
In some embodiments, said determining a target spherical coordinate point in said spherical image corresponding to said target pixel point from said model view projection matrix and said viewport transformation matrix comprises:
determining, according to the viewport transformation matrix, a coordinate point corresponding to the target pixel point in normalized device coordinates (NDC);
and determining a target spherical coordinate point corresponding to the target pixel point according to the coordinate point corresponding to the target pixel point in the NDC and the model view projection matrix.
It can be seen that, since the viewport transformation matrix can convert the image of the NDC into a panoramic image in a screen coordinate system, a coordinate point corresponding to the target pixel point can be accurately determined in the NDC according to the viewport transformation matrix; according to the model view projection matrix, the coordinate mapping relation between the spherical image and the image in the NDC can be accurately determined, so that the target spherical coordinate point corresponding to the target pixel point can be accurately determined according to the coordinate point corresponding to the target pixel point in the NDC and the model view projection matrix.
In some embodiments, the determining, in NDC, a coordinate point corresponding to the target pixel point according to the viewport transformation matrix includes:
determining, for each pixel point in the panoramic image in the screen coordinate system, a virtual coordinate value in the direction perpendicular to the screen;
and determining a coordinate point corresponding to the target pixel point in the NDC according to the viewport transformation matrix, the coordinate of the target pixel point in the screen coordinate system and the virtual coordinate value corresponding to the target pixel point.
In the embodiment of the disclosure, two coordinate values corresponding to the target pixel point in the NDC can be obtained according to the viewport transformation matrix and the coordinate of the target pixel point in the screen coordinate system, and then, in combination with the virtual coordinate value corresponding to the target pixel point, the coordinate point corresponding to the target pixel point can be determined in the NDC relatively easily.
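The inverse viewport step described above might look like the following sketch (assuming, hypothetically, a viewport covering the whole screen with its origin at the lower-left corner; the `z_virtual` parameter stands in for the virtual coordinate value perpendicular to the screen):

```python
def screen_to_ndc(sx, sy, width, height, z_virtual=0.0):
    """Invert a standard viewport transform to recover the two NDC
    coordinate values from a screen coordinate, then attach the assumed
    virtual depth coordinate that the viewport transform discarded."""
    x_ndc = 2.0 * sx / width - 1.0   # map [0, width]  back to [-1, 1]
    y_ndc = 2.0 * sy / height - 1.0  # map [0, height] back to [-1, 1]
    return (x_ndc, y_ndc, z_virtual)
```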
In some embodiments, after the fisheye image captured by the fisheye camera is acquired, the method further includes:
in the case that the longitude-latitude expansion map corresponding to the fisheye image is a rectangular map, determining the coordinate mapping relationship between the longitude-latitude expansion map and the fisheye image according to the following steps: mapping the longitude-latitude expansion map onto a three-dimensional sphere to obtain a three-dimensional spherical image; establishing a coordinate mapping relationship between the three-dimensional spherical image and the fisheye image according to the parameters of the fisheye camera; and determining the coordinate mapping relationship between the longitude-latitude expansion map and the fisheye image from the coordinate mapping relationship between the longitude-latitude expansion map and the three-dimensional spherical image and the coordinate mapping relationship between the three-dimensional spherical image and the fisheye image;
and expanding the fisheye image according to the coordinate mapping relationship between the longitude-latitude expansion map and the fisheye image to obtain the corresponding longitude-latitude expansion map.
In this way, the fisheye image can be expanded directly and accurately according to the predetermined coordinate mapping relationship between the longitude-latitude expansion map and the fisheye image.
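To make the sphere-to-fisheye step concrete, the sketch below maps a longitude/latitude on the sphere to fisheye pixel coordinates under an assumed equidistant projection model (r = f·θ). The patent only requires "the parameters of the fisheye camera", so the model, function name, and parameters here are illustrative:

```python
import math

def latlon_to_fisheye(lon, lat, cx, cy, f):
    """Map a longitude/latitude on the (hemi)sphere to fisheye pixel
    coordinates, assuming an equidistant fisheye projection r = f * theta
    with the optical axis passing through the image point (cx, cy).
    This is one common fisheye model, used here purely for illustration."""
    # Direction on the unit sphere (y vertical, z along the optical axis).
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    theta = math.acos(max(-1.0, min(1.0, z)))  # angle from the optical axis
    r = f * theta                              # equidistant projection radius
    phi = math.atan2(y, x)                     # azimuth in the image plane
    return (cx + r * math.cos(phi), cy + r * math.sin(phi))
```

A point on the optical axis (lon = lat = 0) lands on the image center, and points 90° off-axis land on the rim circle of radius f·π/2, matching the near-180° field of view discussed below.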
An embodiment of the present disclosure further provides an image processing apparatus, including: an acquisition module, a coordinate conversion module, a coordinate determination module and a processing module, wherein,
the acquisition module is configured to acquire a fisheye image captured by the fisheye camera;
the coordinate conversion module is configured to convert the fisheye image into a spherical image in a spherical coordinate system according to the longitude-latitude expansion map corresponding to the fisheye image, and to convert the spherical image into a panoramic image in a screen coordinate system;
the coordinate determination module is configured to determine, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image, and to determine the coordinates of the target pixel point in the fisheye image according to the longitude and latitude coordinates of the target spherical coordinate point and the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map; the target pixel point is any pixel point in the panoramic image;
and the processing module is configured to process the panoramic image according to the coordinates of the target pixel point in the fisheye image.
The disclosed embodiments also provide an electronic device comprising a processor and a memory for storing a computer program capable of running on the processor; wherein
the processor is configured to run the computer program to perform any one of the image processing methods described above.
The disclosed embodiments also provide a computer storage medium having a computer program stored thereon, which when executed by a processor implements any of the image processing methods described above.
In the image processing method, the image processing apparatus, the electronic device, and the computer storage medium provided by the embodiments of the present disclosure, the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map and the coordinate mapping relationship between the spherical image and the panoramic image can be determined while the panoramic image is generated. From these two mapping relationships, the coordinates in the fisheye image of any pixel point in the panoramic image can be determined more accurately, which is beneficial to improving the accuracy of processing the panoramic image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of an image processing method of an embodiment of the present disclosure;
fig. 2 is a schematic view of the imaging ranges of fisheye images captured by a binocular fisheye camera in an embodiment of the disclosure;
FIG. 3 is a schematic diagram illustrating a fisheye image expanded into a rectangular image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a complete sphere in a spherical coordinate system according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a process for expanding a fisheye image according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram illustrating the principle of determining the coordinate mapping relationship between a three-dimensional spherical image and a fisheye image in an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart illustrating the process of determining a target spherical coordinate point corresponding to a target pixel point in an embodiment of the present disclosure;
fig. 8 is a schematic flowchart of determining a coordinate point corresponding to a target pixel point in NDC according to an embodiment of the present disclosure;
fig. 9 is a schematic flow chart illustrating the process of determining the coordinates of a target pixel point in the fisheye image according to an embodiment of the present disclosure;
FIG. 10 is a schematic flowchart illustrating a process for determining longitude and latitude coordinates of a target spherical coordinate point according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram illustrating a geometric relationship for determining latitude and longitude coordinates of a target spherical coordinate point in an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The present disclosure will be described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the examples provided herein are merely illustrative of the present disclosure and are not intended to limit the present disclosure. In addition, the embodiments provided below are some embodiments for implementing the disclosure, not all embodiments for implementing the disclosure, and the technical solutions described in the embodiments of the disclosure may be implemented in any combination without conflict.
It should be noted that, in the embodiments of the present disclosure, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a method or apparatus including a series of elements includes not only the explicitly recited elements but also other elements not explicitly listed or inherent to the method or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other elements (e.g., steps in a method or elements in a device, such as portions of circuitry, processors, programs, software, etc.) in the method or device that includes the element.
For example, the image processing method provided by the embodiment of the present disclosure includes a series of steps, but the image processing method provided by the embodiment of the present disclosure is not limited to the described steps, and similarly, the image processing apparatus provided by the embodiment of the present disclosure includes a series of modules, but the apparatus provided by the embodiment of the present disclosure is not limited to include the explicitly described modules, and may also include modules that are required to be configured to acquire related information or perform processing based on the information.
The disclosed embodiments may be implemented in electronic devices that are comprised of terminals and/or servers and are operational with numerous other general purpose or special purpose computing system environments or configurations. Here, the terminal may be a thin client, a thick client, a hand-held or laptop device, a microprocessor-based system, a set-top box, a programmable consumer electronics, a network personal computer, a small computer system, etc., and the server may be a server computer system, a small computer system, a mainframe computer system, a distributed cloud computing environment including any of the above, etc.
Electronic devices such as terminals, servers, etc. may include program modules for executing instructions. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the disclosure, and as shown in fig. 1, the flowchart may include:
step 101: and acquiring a fish eye pattern acquired by the fish eye camera.
In the disclosed embodiment, the fisheye image may be captured by a fisheye camera, the lens focal length Of the fisheye camera tends to be short, typically 16mm or less, and the Field Of View (FOV) is close to or equal to 180 °. In order to make the lens have a larger shooting angle of view because the shorter the focal length is, the larger the angle of view is, the front lens of the fish-eye camera has a very short diameter and is parabolic and convex toward the front of the lens, and is similar to the fish eye, so the fish-eye camera is called as a fish-eye camera.
Illustratively, the fisheye camera may be a monocular fisheye camera or a binocular fisheye camera; taking a binocular fisheye camera as an example, in fig. 2, a circle 201 in the left half and a circle 202 in the right half respectively represent imaging ranges of two fisheye images captured by the binocular fisheye camera.
Step 102: and (4) converting the fisheye diagram into a spherical image under a spherical coordinate system according to the longitude and latitude expansion diagram corresponding to the fisheye diagram.
In the embodiment of the disclosure, the longitude and latitude development image can be a rectangular image or other two-dimensional plane images convenient for determining longitude and latitude coordinates; for example, the coordinate mapping relationship between the fisheye diagram and the longitude and latitude expansion diagram may be predetermined according to internal parameters of the fisheye camera, and then the fisheye diagram may be expanded according to the coordinate mapping relationship between the fisheye diagram and the longitude and latitude expansion diagram to obtain the longitude and latitude expansion diagram corresponding to the fisheye diagram.
For example, referring to fig. 3, in the case that the longitude and latitude expansion maps are rectangular maps for two fisheye maps acquired by the binocular fisheye camera, the pixel value of each coordinate of the longitude and latitude expansion maps may be replaced with the pixel value of the corresponding coordinate on the fisheye map according to the coordinate mapping relationship between the fisheye maps and the longitude and latitude expansion maps, so that the pixel value filling is completed in the entire rectangular area of the longitude and latitude expansion maps. In fig. 3, for the fisheye diagram in the left half circle 201, after the fisheye diagram is expanded, a first rectangular area 301 can be obtained; for the fisheye diagram in the right half circle 201, after the fisheye diagram is expanded, a second rectangular area 302 can be obtained.
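The pixel-value replacement described above could be sketched as follows; `mapping` is a hypothetical stand-in for the predetermined coordinate mapping relationship, and nearest-neighbour sampling is assumed for brevity:

```python
import numpy as np

def expand_fisheye(fisheye, mapping, out_h, out_w):
    """Fill an out_h x out_w longitude-latitude expansion map by copying,
    for every output pixel, the pixel value at the mapped fisheye
    coordinate. `mapping(u, v)` returns the fisheye (x, y) coordinate
    corresponding to expansion-map coordinate (u, v)."""
    out = np.zeros((out_h, out_w, fisheye.shape[2]), dtype=fisheye.dtype)
    h, w = fisheye.shape[:2]
    for v in range(out_h):
        for u in range(out_w):
            fx, fy = mapping(u, v)
            xi, yi = int(round(fx)), int(round(fy))
            if 0 <= xi < w and 0 <= yi < h:  # coords outside the image stay black
                out[v, u] = fisheye[yi, xi]
    return out
```

In practice this per-pixel loop would be vectorized or precomputed once as a lookup table, since the mapping does not change between frames.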
In the embodiment of the disclosure, after the longitude-latitude expansion map corresponding to the fisheye image is obtained, the fisheye image can be converted into the spherical image in the spherical coordinate system according to the longitude and latitude coordinates of each point in the expansion map.
For example, the conversion of the fisheye image into a spherical image in a spherical coordinate system may be implemented as follows: for a fisheye image captured by a monocular fisheye camera, the longitude-latitude expansion map and the hemisphere corresponding to the fisheye image can be divided along lines of longitude and latitude in the same way, an area position mapping relationship can be established between the expansion map and the hemisphere according to equal longitudes and latitudes, and the image of each area in the expansion map can then be pasted onto the corresponding area of the hemisphere according to this mapping relationship. In the embodiment of the disclosure, when a monocular fisheye camera is used to capture the fisheye image, the spherical image in the spherical coordinate system is a hemispherical image.
Because the hemisphere is a curved surface while the longitude-latitude expansion map is a two-dimensional plane, the division along lines of longitude and latitude needs to produce areas small enough that each area of the hemisphere can be approximated as a small plane, which makes the subsequent pasting onto the three-dimensional sphere convenient to implement.
In one example, the longitude and latitude expansion map and the hemisphere may be divided into a plurality of quadrilateral areas according to the width of 160 parts and the height of 80 parts, and each quadrilateral area is formed by connecting lines between four longitude and latitude intersection points. In the longitude and latitude development image and the three-dimensional spherical surface which are divided by the longitude and latitude lines, the mapping relation of the quadrilateral areas with the same longitude and latitude positions can be established, so that the image of each quadrilateral area in the longitude and latitude development image can be pasted in the corresponding area of the hemispherical surface according to the mapping relation of the quadrilateral areas.
For example, in the case of mapping each quadrilateral region of the hemispherical surface, the longitude and latitude expansion map and each quadrilateral region of the hemispherical surface may be divided into two triangles by a straight line, so that mapping of each triangular region in the hemispherical surface is realized according to the mapping relationship between the longitude and latitude expansion map and the triangular region in the hemispherical surface.
For example, in order to paste the image of each area in the longitude and latitude expansion map on the corresponding area of the hemisphere, texture coordinates may be determined for the longitude and latitude expansion map and each area of the hemisphere, so that the pasting of each area of the hemisphere is performed according to the mapping relationship between the longitude and latitude expansion map and the texture coordinates of each point of each area of the hemisphere.
It should be noted that the above description only exemplarily describes the mapping manner for each area of the hemispherical surface, and in the embodiment of the present disclosure, other manners may also be adopted to implement the mapping for each area of the hemispherical surface.
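As one illustrative realisation of the subdivision described above (the function name, vertex layout, and axis convention are our own assumptions), a hemisphere mesh with per-vertex texture coordinates might be generated as:

```python
import math

def build_hemisphere_mesh(n_lon=160, n_lat=80, radius=1.0):
    """Cover a hemisphere with n_lon x n_lat quadrilateral cells bounded by
    lines of longitude and latitude, then cut each quad into two triangles.
    Each vertex carries the texture coordinate (u, v) of the matching point
    in the longitude-latitude expansion map."""
    def vertex(i, j):
        lon = math.pi * i / n_lon          # hemisphere: longitude in [0, pi]
        lat = math.pi * (j / n_lat - 0.5)  # latitude in [-pi/2, pi/2]
        x = radius * math.cos(lat) * math.cos(lon)
        y = radius * math.sin(lat)
        z = radius * math.cos(lat) * math.sin(lon)
        return (x, y, z, i / n_lon, j / n_lat)  # position + texture coords
    triangles = []
    for i in range(n_lon):
        for j in range(n_lat):
            a, b = vertex(i, j), vertex(i + 1, j)
            c, d = vertex(i + 1, j + 1), vertex(i, j + 1)
            triangles.append((a, b, c))  # quad split along one diagonal
            triangles.append((a, c, d))
    return triangles
```

With the 160 × 80 subdivision mentioned above this yields 25,600 triangles, each small enough to be treated as approximately planar.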
For the two fisheye images captured by a binocular fisheye camera, each fisheye image can be pasted onto a hemisphere in the manner described above; after the two hemispheres are pasted, they can be stitched together to obtain a complete spherical image. In the embodiment of the disclosure, when a binocular fisheye camera is used to capture two fisheye images, the spherical image in the spherical coordinate system is a complete spherical image.
Exemplarily, fig. 4 shows a complete sphere in a spherical coordinate system in the embodiment of the present disclosure; the circle 201 in the left half of fig. 2 corresponds to the rear hemisphere in fig. 4, and the circle 202 in the right half of fig. 2 corresponds to the front hemisphere in fig. 4.
Step 103: and converting the spherical image into a panoramic image in a screen coordinate system.
Illustratively, the spherical image can be converted into a panoramic image in a screen coordinate system according to a preset model view projection matrix and a viewport transformation matrix.
In the embodiment of the present disclosure, the spherical coordinate system may be used as a local coordinate system. The local coordinate system is a coordinate system with the center of the object as its origin; operations such as rotation and translation of the object are performed with respect to the local coordinate system, so when the object model is rotated or translated, the local coordinate system rotates or translates correspondingly. Illustratively, the local coordinate system is a coordinate system with the center of the fisheye camera as the origin, and the orientation of the fisheye camera is the direction in which an observer views the panoramic image on the screen. The default orientation of the fisheye camera may be set according to the needs of the observer, for example, a direction perpendicular to the screen and pointing to the front hemisphere in fig. 4, or a direction perpendicular to the screen and pointing to the rear hemisphere in fig. 4.
Here, the model view projection matrix is the matrix obtained by multiplying the model matrix, the view matrix, and the projection matrix. The model matrix is used to convert the spherical image from the local coordinate system to the world coordinate system; for example, when the local coordinate system is a coordinate system with the center of the fisheye camera as the origin, the model matrix is an identity matrix. The view matrix is used to convert images from the world coordinate system to the camera coordinate system; for example, the view matrix may be a 4 × 4 matrix characterizing parameters such as the camera coordinate point and the camera orientation. The projection matrix is used to convert an image from the camera coordinate system into the clip coordinate system, where the clip coordinate system refers to the space coordinate system obtained by transforming the camera coordinate system through the projection matrix; for example, the projection matrix may be a 4 × 4 matrix characterizing either a perspective projection or an orthogonal projection.
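A minimal sketch of composing such a model view projection matrix under the usual OpenGL conventions (the perspective parameters, the identity view matrix standing for a camera at the origin, and the multiplication order are common practice, not values prescribed by the patent):

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(fov_y / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0  # perspective divide picks up -z
    return m

# Identity model matrix: the local (spherical) coordinate system already has
# its origin at the fisheye camera center, as described above.
model = np.eye(4)
view = np.eye(4)  # placeholder view matrix: camera at the origin
proj = perspective(np.radians(90.0), 16.0 / 9.0, 0.1, 100.0)
mvp = proj @ view @ model  # conventional right-to-left multiplication order
```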
In the embodiment of the disclosure, the image in the clip coordinate system is the image in NDC; the viewport transformation matrix is used to convert the image in NDC into the panoramic image in the screen coordinate system.
In practical applications, the model matrix, the view matrix, the projection matrix, and the viewport transformation matrix may be set based on the Open Graphics Library (OpenGL) standard. OpenGL is a cross-language, cross-platform Application Programming Interface (API) for rendering 2D and 3D vector graphics; the interface consists of nearly 350 different function calls used to draw complex three-dimensional scenes from simple primitives. In some embodiments, the conversion from the spherical image to the panoramic image may be implemented using the OpenGL standard; that is, once the spherical image, the model matrix, the view matrix, the projection matrix, and the viewport transformation matrix are obtained, the rendering of the panoramic image may be implemented with OpenGL.
In some embodiments, referring to steps 101 to 103, for each frame of fisheye image captured by the fisheye camera, a corresponding frame of panoramic image may be generated, and thus, for at least two frames of images captured consecutively by the fisheye camera, a panoramic video may be generated, where each frame of image in the panoramic video is a panoramic image. In some embodiments, after the panoramic video is generated, the panoramic video may be played.
Step 104: determining a target spherical coordinate point corresponding to a target pixel point in the panoramic image in the spherical image; the target pixel point is any one pixel point in the panoramic image.
It is understood that, by executing step 103, a conversion relationship between the spherical image and the panoramic image in the screen coordinate system may be determined, that is, a coordinate mapping relationship between the spherical image and the panoramic image in the screen coordinate system may be determined, and a target spherical coordinate point corresponding to the target pixel point may be determined according to the coordinate mapping relationship between the spherical image and the panoramic image in the screen coordinate system.
Step 105: and determining the coordinates of the target pixel points in the fisheye diagram according to the longitude and latitude coordinates of the target spherical coordinate points and the coordinate mapping relation between the fisheye diagram and the longitude and latitude expansion diagram.
Step 106: and processing the panoramic image according to the coordinates of the target pixel points in the fisheye diagram.
The following describes an implementation of processing a panoramic image by means of several examples.
Example 1: in the case of displaying the panoramic image, the coordinates of the target pixel point in the fisheye diagram are presented in the panoramic image; the target pixel point may be a pixel point selected by an observer by clicking on the panoramic image.
Example 2: and under the condition of playing the panoramic video, the coordinates of the target pixel points in the fisheye diagram are presented in any frame of panoramic image of the panoramic video.
Example 3: under the condition that the panoramic image is processed by adopting the computer vision processing technology, coordinates of target pixel points in the fisheye diagram are used as label information used by the computer vision processing technology; here, an implementation of processing the panoramic image by using the computer vision processing technology may be: and carrying out target identification or target ranging on the panoramic image by adopting a computer vision processing technology.
The above-mentioned contents are only a few examples of processing a panoramic image, and the implementation of processing a panoramic image in the embodiment of the present disclosure is not limited thereto.
In practical applications, the steps 101 to 105 may be implemented by a Processor in an electronic Device, where the Processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor.
Therefore, according to the coordinate mapping relationship between the fisheye diagram and the longitude and latitude expansion diagram and the coordinate mapping relationship between the spherical image and the panoramic image, the coordinates of any pixel point in the panoramic image can be accurately determined in the fisheye diagram, improving the accuracy of processing the panoramic image.
In some embodiments of the present disclosure, referring to fig. 5, the process of expanding the fisheye diagram may include:
step 501: under the condition that the longitude and latitude expansion map corresponding to the fisheye map is a rectangular map, determining the coordinate mapping relation between the longitude and latitude expansion map and the fisheye map according to the following steps: mapping the longitude and latitude expansion map onto a three-dimensional spherical surface to obtain a three-dimensional spherical surface image; establishing a coordinate mapping relation between the three-dimensional spherical image and the fisheye image according to the parameters of the fisheye camera; and determining the coordinate mapping relation between the longitude and latitude expansion map and the fisheye map according to the coordinate mapping relation between the longitude and latitude expansion map and the three-dimensional spherical image and the coordinate mapping relation between the three-dimensional spherical image and the fisheye map.
The principle of determining the coordinate mapping relationship between the three-dimensional spherical image and the fisheye diagram is exemplarily described below with reference to fig. 6.
Mapping the longitude and latitude expansion map onto a three-dimensional spherical surface yields a three-dimensional spherical image; any point in the longitude and latitude expansion map is mapped to a coordinate point on the three-dimensional spherical image, which is denoted as N.
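A minimal sketch of this mapping, assuming an equirectangular expansion map whose width spans longitude [−π, π] and whose height spans latitude [−π/2, π/2] (axis convention as in fig. 11: y vertical, x and z horizontal):

```python
import numpy as np

def expansion_to_sphere(u, v, width, height, radius=1.0):
    """Map a pixel (u, v) of a width x height longitude-latitude expansion map
    to a point on a three-dimensional sphere.
    Assumed layout: longitude varies along the width, latitude along the height."""
    lon = (u / width - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi]
    lat = (0.5 - v / height) * np.pi        # latitude in [-pi/2, pi/2]
    x = radius * np.cos(lat) * np.sin(lon)
    y = radius * np.sin(lat)                # y is the vertical axis
    z = radius * np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])
```

The center pixel of the map lands on the sphere point directly in front of the camera, and the top row collapses to the north pole, as expected for an equirectangular layout.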
Using the initial extrinsic parameters of the fisheye camera, the coordinate point P_w on the three-dimensional spherical image is converted into the camera coordinate system, obtaining the coordinate point P_c in the camera coordinate system.
Referring to fig. 6, after the coordinate point P_c in the camera coordinate system is obtained, the coordinate point N on the three-dimensional spherical image is projected onto the normalized spherical surface to obtain a coordinate point P_s on the normalized spherical surface, and projected onto the normalization plane to obtain a coordinate point P_n on the normalization plane.
And carrying out distortion correction and smoothing treatment on the projection image obtained by projecting to the normalization plane to obtain a fisheye diagram.
In the embodiment of the disclosure, the coordinate point P_c in the camera coordinate system can be recorded as (x, y, z), the coordinates of the corresponding point in the fisheye diagram can be recorded as (u, v), and the coordinates (u, v) can be calculated according to formula (1) and formula (2):

u = f_x · x / (z + ξ · √(x² + y² + z²)) + c_x    (1)

v = f_y · y / (z + ξ · √(x² + y² + z²)) + c_y    (2)

wherein f_x represents the focal length of the fisheye camera in the width direction, f_y represents the focal length of the fisheye camera in the height direction, c_x represents the abscissa value of the optical center position of the fisheye camera, c_y represents the ordinate value of the optical center position of the fisheye camera, and ξ is the distortion parameter of the fisheye camera (the Xi in fig. 6); the parameters f_x, f_y, c_x, c_y and ξ can be obtained by calibrating the fisheye camera.
It can be seen that, referring to formula (1) and formula (2), the coordinate mapping relationship between the image in the camera coordinate system and the fisheye diagram can be determined; the coordinate mapping relationship between the three-dimensional spherical image and the fisheye diagram can then be determined by combining this with the coordinate mapping relationship between the three-dimensional spherical image and the image in the camera coordinate system.
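The projection of formulas (1) and (2) can be sketched as follows; this assumes the unified camera model, in which a camera-frame point is normalized onto the unit sphere and shifted by ξ before a pinhole projection, and any concrete calibration values used with the sketch are purely illustrative:

```python
import numpy as np

def camera_to_fisheye(x, y, z, fx, fy, cx, cy, xi):
    """Project a camera-frame point P_c = (x, y, z) to fisheye pixel (u, v)
    per formulas (1) and (2): divide by (z + xi * |P_c|), scale by the focal
    lengths, and shift by the optical center."""
    d = np.sqrt(x * x + y * y + z * z)  # distance to the origin, |P_c|
    u = fx * x / (z + xi * d) + cx
    v = fy * y / (z + xi * d) + cy
    return u, v
```

With ξ = 0 the formulas reduce to an ordinary pinhole projection, which is a quick sanity check for a calibrated parameter set.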
Step 502: and according to the coordinate mapping relation between the longitude and latitude expansion map and the fisheye map, performing expansion processing on the fisheye map to obtain a corresponding longitude and latitude expansion map.
Therefore, the fisheye diagram can be directly and accurately unfolded according to the predetermined coordinate mapping relation between the longitude and latitude unfolding diagram and the fisheye diagram.
In some embodiments of the present disclosure, in the spherical image, determining the target spherical coordinate point corresponding to the target pixel point in the panoramic image is implemented by: determining a target spherical coordinate point corresponding to the target pixel point in the spherical image according to the model view projection matrix and the viewport transformation matrix; therefore, the target spherical coordinate point corresponding to the target pixel point can be directly determined conveniently according to the preset model view projection matrix and the view port transformation matrix, and the method has the characteristics of simplicity and easiness in implementation.
In some embodiments of the present disclosure, referring to fig. 7, a process of determining a target spherical coordinate point corresponding to a target pixel point in a spherical image according to a model view projection matrix and a viewport transformation matrix may include:
step 701: according to the viewport transformation matrix, coordinate points corresponding to the target pixel points are determined in the NDC.
In the embodiment of the present disclosure, since the viewport transformation matrix may convert an image of the NDC into a panoramic image in a screen coordinate system, according to the viewport transformation matrix, a coordinate mapping relationship between the image of the NDC and the panoramic image in the screen coordinate system may be determined, and thus, a coordinate point corresponding to the target pixel point may be determined in the NDC.
Step 702: and determining a target spherical coordinate point corresponding to the target pixel point according to the coordinate point corresponding to the target pixel point in the NDC and the model view projection matrix.
It can be seen that, since the viewport transformation matrix can convert the image of the NDC into a panoramic image in a screen coordinate system, a coordinate point corresponding to the target pixel point can be accurately determined in the NDC according to the viewport transformation matrix; according to the model view projection matrix, the coordinate mapping relation between the spherical image and the image in the NDC can be accurately determined, so that the target spherical coordinate point corresponding to the target pixel point can be accurately determined according to the coordinate point corresponding to the target pixel point in the NDC and the model view projection matrix.
In some embodiments of the present disclosure, referring to fig. 8, a process of determining a coordinate point corresponding to a target pixel point in NDC according to a viewport transformation matrix may include:
step 801: and determining a virtual coordinate value vertical to the screen direction aiming at each pixel point in the panoramic image under the screen coordinate system.
In the embodiment of the present disclosure, since the coordinates of the pixels in the screen coordinate system are two-dimensional coordinates, and the coordinates of the midpoint of the NDC are three-dimensional coordinates, in order to determine the coordinate point corresponding to the pixel in the screen coordinate system in the NDC, a virtual coordinate value perpendicular to the screen direction needs to be determined for each pixel in the panoramic image in the screen coordinate system.
Exemplarily, the virtual coordinate values set for different pixel points may be the same or different; for example, the virtual coordinate values set for each pixel point in the screen coordinate system are all 1.0.
Step 802: and determining a coordinate point corresponding to the target pixel point in the NDC according to the viewport transformation matrix, the coordinates of the target pixel point in the screen coordinate system and the virtual coordinate value corresponding to the target pixel point.
Exemplarily, an initial three-dimensional coordinate corresponding to a target pixel point can be determined according to a coordinate of the target pixel point in a screen coordinate system and a virtual coordinate value corresponding to the target pixel point; then, a coordinate point corresponding to the target pixel point may be determined in the NDC according to the viewport transformation matrix and the initial three-dimensional coordinates corresponding to the target pixel point.
Illustratively, the origin of the screen coordinate system is the top-left vertex, and the coordinates of the target pixel point in the screen coordinate system can be denoted as P(x0, y0); the initial three-dimensional coordinate corresponding to the target pixel point can be denoted as P′(x0, ViewPortHeight − y0, z0), wherein ViewPortHeight represents the screen height in the screen coordinate system, and z0 represents the virtual coordinate value set for the target pixel point.
After the initial three-dimensional coordinate P 'corresponding to the target pixel point is determined, the inverse matrix of the viewport transformation matrix VP may be multiplied by the initial three-dimensional coordinate P', so as to obtain the coordinate of the coordinate point corresponding to the target pixel point in the NDC.
Further, the coordinate P_ball of the target spherical coordinate point corresponding to the target pixel point can be calculated according to formula (3):

P_ball = inverse(mvp) * inverse(VP) * P′    (3)

wherein inverse() represents the matrix inverse, and mvp represents the model view projection matrix.
In the embodiment of the disclosure, two coordinate values corresponding to the target pixel point in the NDC can be obtained according to the viewport transformation matrix and the coordinate of the target pixel point in the screen coordinate system, and then, in combination with the virtual coordinate value corresponding to the target pixel point, the coordinate point corresponding to the target pixel point can be determined in the NDC relatively easily.
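The unprojection described above (screen pixel → initial three-dimensional coordinate P′ → NDC → spherical point, per formula (3)) can be sketched as:

```python
import numpy as np

def screen_to_sphere(x0, y0, z0, vp, mvp, viewport_height):
    """Recover the spherical coordinate point for screen pixel (x0, y0),
    following formula (3): P_ball = inverse(mvp) * inverse(VP) * P'.
    z0 is the virtual coordinate value assigned perpendicular to the screen."""
    # Flip y because the screen coordinate origin is the top-left vertex.
    p_prime = np.array([x0, viewport_height - y0, z0, 1.0])
    return np.linalg.inv(mvp) @ np.linalg.inv(vp) @ p_prime
```

With identity matrices for both mvp and VP, the result is simply P′ itself, which makes the y-flip and the role of the virtual coordinate easy to verify in isolation.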
In some embodiments of the present disclosure, referring to fig. 9, a process of determining coordinates of a target pixel point in a fisheye diagram according to a longitude and latitude coordinate of a target spherical coordinate point and a coordinate mapping relationship between the fisheye diagram and a longitude and latitude expansion diagram may include:
step 901: and determining the coordinate points corresponding to the target spherical coordinate points in the longitude and latitude expansion map according to the longitude and latitude coordinates of the target spherical coordinate points.
In the embodiment of the disclosure, the coordinate point corresponding to the target spherical coordinate point in the longitude and latitude expansion map can be determined from the fact that a point on the spherical image and its corresponding point in the longitude and latitude expansion map share the same longitude and latitude coordinates.
Step 902: and determining the coordinates of the target pixel points in the fisheye diagram according to the coordinate points corresponding to the target spherical coordinate points in the longitude and latitude expansion diagram and the coordinate mapping relation between the fisheye diagram and the longitude and latitude expansion diagram.
Illustratively, if the coordinate of the target spherical coordinate point is P_ball and the coordinate of the corresponding coordinate point in the longitude and latitude expansion diagram is P_unfold, then the coordinate P_fish of the target pixel point in the fisheye diagram can be calculated according to formula (4):

P_fish = fun(P_unfold)    (4)

wherein fun() represents the coordinate mapping function between the fisheye diagram and the longitude and latitude expansion diagram.
Therefore, the coordinates of the target pixel points in the fisheye diagram can be accurately determined according to the coordinate points corresponding to the target spherical coordinate points in the longitude and latitude expansion diagram and the coordinate mapping relation between the fisheye diagram and the longitude and latitude expansion diagram.
In some embodiments of the present disclosure, referring to fig. 10, the process of determining the longitude and latitude coordinates of the target spherical coordinate point may include:
step 1001: and determining a connecting line passing through the origin of the spherical coordinate system and the target spherical coordinate point.
In the embodiment of the present disclosure, a connection line passing through the origin of the spherical coordinate system and the target spherical coordinate point may be a straight line or a ray.
Step 1002: and determining the longitude and latitude coordinates of the target spherical coordinate point according to the angle between the connecting line and the coordinate axis of the spherical coordinate system.
Here, the coordinate axes of the spherical coordinate system may include an x-axis, a y-axis, and a z-axis, and referring to fig. 11, the origin of the spherical coordinate system is O, the y-axis is a coordinate axis in a vertical direction, and the x-axis and the z-axis are two mutually perpendicular coordinate axes in a horizontal direction.
Illustratively, the implementation manner of this step may be: determining the latitude of the target spherical coordinate point according to the angle between the connecting line and the y axis of the spherical coordinate system; and determining the longitude of the target spherical coordinate point according to the angle between the projection of the connecting line on the horizontal plane and the z axis of the spherical coordinate system.
Illustratively, referring to FIG. 11, from the origin O of the spherical coordinate system, a ray is issued that passes through the target spherical coordinate point, intersecting the sphere at point A.
The latitude of point A can be calculated from the included angle β between the ray OA and the y axis and determined as the latitude of the target spherical coordinate point; the longitude of point A can be calculated from the angle α between the projection of ray OA on the horizontal plane and the z axis and determined as the longitude of the target spherical coordinate point.
It can be understood that, according to the angle between the connecting line and the coordinate axes of the spherical coordinate system, the longitude and latitude of the target spherical coordinate point can be accurately determined.
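A sketch of steps 1001 to 1002, computing the latitude from the angle β with the y axis and the longitude from the angle α between the horizontal projection and the z axis (axis convention as in fig. 11):

```python
import numpy as np

def sphere_to_lat_lon(p):
    """Latitude/longitude of a spherical point, per the geometry of fig. 11:
    latitude from the angle beta between ray OA and the y axis,
    longitude from the angle alpha between OA's horizontal projection and the z axis."""
    x, y, z = p
    r = np.linalg.norm(p)
    beta = np.arccos(y / r)      # angle with the y (vertical) axis
    lat = np.pi / 2 - beta       # latitude measured from the equator
    lon = np.arctan2(x, z)       # angle of horizontal projection from the z axis
    return lat, lon
```

A point on the z axis sits on the equator at longitude 0, while a point on the y axis is the pole at latitude π/2, matching the figure.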
The image processing method provided by the embodiment of the disclosure can be applied to scenes such as panoramic video playing, video interaction and the like, and the use scenes of the embodiment of the disclosure are exemplarily described below.
After fisheye images are continuously shot by a fisheye camera, referring to steps 101 to 103, a panoramic video can be generated from at least two frames continuously collected by the fisheye camera, and the panoramic video can be played on a mobile phone or another playing device. Suppose the shooting target of the fisheye camera is a cabinet containing a device B; after a user clicks the image area of device B in a certain frame of the panoramic video, the coordinates of device B in the fisheye diagram are determined by adopting the technical scheme of the embodiment of the disclosure. When the panoramic video is transmitted to the electronic device of another user, the coordinates of device B in the fisheye diagram may be transmitted together as tag information; when the other user's electronic device performs video processing on the panoramic video, the video processing result may be optimized with reference to the tag information. For example, in a scene in which 3D modeling is performed using the panoramic video, target recognition needs to be performed on each frame of the panoramic video; after the position information of device B is determined by target recognition, the position information can be optimized using the tag information.
It will be understood by those of skill in the art that in the above method of the present embodiment, the order of writing the steps does not imply a strict order of execution and does not impose any limitations on the implementation, as the order of execution of the steps should be determined by their function and possibly inherent logic.
On the basis of the image processing method proposed by the foregoing embodiment, an embodiment of the present disclosure proposes an image processing apparatus.
Fig. 12 is a schematic diagram illustrating a composition structure of an image processing apparatus according to an embodiment of the disclosure, as shown in fig. 12, the apparatus may include an obtaining module 121, a coordinate conversion module 122, a coordinate determination module 123, and a processing module 124, wherein,
the acquisition module 121 is configured to acquire a fisheye pattern acquired by the fisheye camera;
the coordinate conversion module 122 is configured to convert the fisheye diagram into a spherical image in a spherical coordinate system according to a longitude and latitude expansion diagram corresponding to the fisheye diagram; converting the spherical image into a panoramic image under a screen coordinate system;
a coordinate determination module 123, configured to determine, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image; determining the coordinates of the target pixel points in the fisheye diagram according to the longitude and latitude coordinates of the target spherical coordinate points and the coordinate mapping relation between the fisheye diagram and the longitude and latitude development diagram; the target pixel point is any one pixel point in the panoramic image;
and the processing module 124 is configured to process the panoramic image according to the coordinates of the target pixel point in the fisheye diagram.
In some embodiments, the coordinate determining module 123 is configured to determine the coordinates of the target pixel point in the fisheye diagram according to the longitude and latitude coordinates of the target spherical coordinate point and the coordinate mapping relationship between the fisheye diagram and the longitude and latitude expansion diagram, including:
determining a coordinate point corresponding to the target spherical coordinate point in the longitude and latitude expansion map according to the longitude and latitude coordinates of the target spherical coordinate point;
and determining the coordinates of the target pixel points in the fisheye diagram according to the coordinate points corresponding to the target spherical coordinate points in the longitude and latitude expansion diagram and the coordinate mapping relation between the fisheye diagram and the longitude and latitude expansion diagram.
In some embodiments, the coordinate determination module 123 is further configured to determine a connection line passing through an origin of the spherical coordinate system and the target spherical coordinate point; and determining the longitude and latitude coordinates of the target spherical coordinate point according to the angle between the connecting line and the coordinate axis of the spherical coordinate system.
In some embodiments, the coordinate determining module 123 is configured to determine the longitude and latitude coordinates of the target spherical coordinate point according to an angle between the connection line and a coordinate axis of the spherical coordinate system, and includes:
determining the latitude of the target spherical coordinate point according to the angle between the connecting line and the y axis of the spherical coordinate system; determining the longitude of the target spherical coordinate point according to the angle between the projection of the connecting line on the horizontal plane and the z axis of the spherical coordinate system; the y axis is a coordinate axis in the vertical direction, and the z axis is a coordinate axis in the horizontal direction.
In some embodiments, the coordinate transformation module 122 is configured to transform the spherical image into a panoramic image in a screen coordinate system, and includes:
converting the spherical image into the panoramic image according to a preset model view projection matrix and a viewport transformation matrix;
correspondingly, the determining, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image includes:
and determining a target spherical coordinate point corresponding to the target pixel point in the spherical image according to the model view projection matrix and the viewport transformation matrix.
In some embodiments, the coordinate transformation module 122, configured to determine a target spherical coordinate point in the spherical image corresponding to the target pixel point according to the model view projection matrix and the viewport transformation matrix, includes:
determining a coordinate point corresponding to the target pixel point in the NDC according to the viewport transformation matrix;
and determining a target spherical coordinate point corresponding to the target pixel point according to the coordinate point corresponding to the target pixel point in the NDC and the model view projection matrix.
In some embodiments, the coordinate transformation module 122, configured to determine a coordinate point corresponding to the target pixel point in NDC according to the viewport transformation matrix, includes:
determining a virtual coordinate value vertical to the screen direction for each pixel point in the panoramic image under the screen coordinate system;
and determining a coordinate point corresponding to the target pixel point in the NDC according to the viewport transformation matrix, the coordinate of the target pixel point in the screen coordinate system and the virtual coordinate value corresponding to the target pixel point.
In some embodiments, the obtaining module 121 is further configured to, after obtaining the fisheye pattern acquired by the fisheye camera, determine, when a longitude and latitude expansion map corresponding to the fisheye pattern is a rectangular map, a coordinate mapping relationship between the longitude and latitude expansion map and the fisheye pattern according to the following steps: mapping the longitude and latitude expansion map to a three-dimensional spherical surface to obtain a three-dimensional spherical surface image; establishing a coordinate mapping relation between the three-dimensional spherical image and the fisheye image according to the parameters of the fisheye camera; determining the coordinate mapping relation between the longitude and latitude expansion map and the fisheye pattern according to the coordinate mapping relation between the longitude and latitude expansion map and the three-dimensional spherical image and the coordinate mapping relation between the three-dimensional spherical image and the fisheye pattern;
the obtaining module 121 is further configured to perform expansion processing on the fisheye diagram according to the coordinate mapping relationship between the longitude and latitude expansion diagram and the fisheye diagram to obtain a corresponding longitude and latitude expansion diagram.
In practical applications, the obtaining module 121, the coordinate converting module 122, the coordinate determining module 123, and the processing module 124 may all be implemented by a processor in an electronic device, where the processor may be at least one of an ASIC, a DSP, a DSPD, a PLD, an FPGA, a CPU, a controller, a microcontroller, and a microprocessor.
In addition, each functional module in this embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the part of the technical solution of this embodiment that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to execute all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Specifically, the computer program instructions corresponding to an image processing method in the present embodiment may be stored on a storage medium such as an optical disc, a hard disk, a usb disk, or the like, and when the computer program instructions corresponding to an image processing method in the storage medium are read or executed by an electronic device, any one of the image processing methods of the foregoing embodiments is implemented.
Based on the same technical concept as the foregoing embodiments, referring to fig. 13, an electronic device 13 provided by the embodiment of the present disclosure is shown, which may include: a memory 131 and a processor 132; wherein,
the memory 131 for storing computer programs and data;
the processor 132 is configured to execute the computer program stored in the memory to implement any one of the image processing methods of the foregoing embodiments.
In practical applications, the memory 131 may be a volatile memory such as a RAM, or a non-volatile memory such as a ROM, a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD), or a combination of the above types of memories, and provides instructions and data to the processor 132.
The processor 132 may be at least one of ASIC, DSP, DSPD, PLD, FPGA, CPU, controller, microcontroller, and microprocessor. It is understood that the electronic devices for implementing the above-described processor functions may be other devices, and the embodiments of the present disclosure are not particularly limited.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight the differences between the embodiments; the same or similar parts may be referred to each other and are not repeated herein for brevity.
The methods disclosed in the method embodiments provided by the present disclosure may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in the various product embodiments provided by the disclosure may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the various method or apparatus embodiments provided by the present disclosure may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, it is not limited to those embodiments, which are illustrative rather than restrictive; it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. An image processing method, characterized in that the method comprises:
acquiring a fisheye image captured by a fisheye camera;
converting the fisheye image into a spherical image in a spherical coordinate system according to a longitude-latitude expansion map corresponding to the fisheye image;
converting the spherical image into a panoramic image in a screen coordinate system;
determining, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image, wherein the target pixel point is any pixel point in the panoramic image;
determining the coordinates of the target pixel point in the fisheye image according to the longitude-latitude coordinates of the target spherical coordinate point and a coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map;
and processing the panoramic image according to the coordinates of the target pixel point in the fisheye image.
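The claimed method is an inverse per-pixel mapping: each panorama pixel is traced back through the sphere to a source pixel in the fisheye image. A minimal Python sketch of that control flow, with hypothetical callback names standing in for the coordinate mappings detailed in the later claims:

```python
def render_panorama(pano_w, pano_h, screen_to_sphere, latlon_to_fisheye_px, fisheye):
    """Inverse-mapping skeleton of the claimed method: for every panoramic
    pixel, find its spherical point (as latitude/longitude), then look up
    the source pixel in the fisheye image via the coordinate mapping."""
    pano = [[None] * pano_w for _ in range(pano_h)]
    for v in range(pano_h):
        for u in range(pano_w):
            lat, lon = screen_to_sphere(u, v)        # screen -> sphere (claims 5-7)
            fx, fy = latlon_to_fisheye_px(lat, lon)  # sphere -> fisheye (claims 2, 8)
            pano[v][u] = fisheye[fy][fx]             # sample the fisheye image
    return pano
```

The two callbacks are assumptions for illustration; the patent realizes them via the model-view-projection/viewport matrices and the camera-parameter mapping of claims 5 to 8.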
2. The method of claim 1, characterized in that the determining the coordinates of the target pixel point in the fisheye image according to the longitude-latitude coordinates of the target spherical coordinate point and the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map comprises:
determining, according to the longitude-latitude coordinates of the target spherical coordinate point, a coordinate point corresponding to the target spherical coordinate point in the longitude-latitude expansion map;
and determining the coordinates of the target pixel point in the fisheye image according to the coordinate point corresponding to the target spherical coordinate point in the longitude-latitude expansion map and the coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
determining a line passing through the origin of the spherical coordinate system and the target spherical coordinate point;
and determining the longitude-latitude coordinates of the target spherical coordinate point according to the angles between the line and the coordinate axes of the spherical coordinate system.
4. The method of claim 3, characterized in that the determining the longitude-latitude coordinates of the target spherical coordinate point according to the angles between the line and the coordinate axes of the spherical coordinate system comprises:
determining the latitude of the target spherical coordinate point according to the angle between the line and the y axis of the spherical coordinate system, and determining the longitude of the target spherical coordinate point according to the angle between the projection of the line onto the horizontal plane and the z axis of the spherical coordinate system, wherein the y axis is the coordinate axis in the vertical direction and the z axis is a coordinate axis in the horizontal direction.
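Under one common axis convention (y up, z forward on the horizontal plane), the latitude/longitude construction of claims 3 and 4 can be sketched as follows; the convention and the function name are assumptions for illustration, not taken from the patent:

```python
import math

def latlon_from_point(x, y, z):
    """Latitude from the angle between the origin-to-point line and the
    vertical y axis; longitude from the angle between the line's projection
    on the horizontal plane and the z axis (assumed convention)."""
    r = math.sqrt(x * x + y * y + z * z)
    lat = math.pi / 2 - math.acos(y / r)  # 0 at the equator, +pi/2 at the pole
    lon = math.atan2(x, z)                # signed angle of the projection vs. z
    return lat, lon
```

For example, the north pole (0, 1, 0) maps to latitude pi/2, and a point on the z axis maps to latitude 0, longitude 0.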
5. The method according to any one of claims 1 to 4, characterized in that the converting the spherical image into a panoramic image in a screen coordinate system comprises:
converting the spherical image into the panoramic image according to a preset model-view-projection matrix and a viewport transformation matrix;
and correspondingly, the determining, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image comprises:
determining the target spherical coordinate point corresponding to the target pixel point in the spherical image according to the model-view-projection matrix and the viewport transformation matrix.
6. The method of claim 5, characterized in that the determining the target spherical coordinate point corresponding to the target pixel point in the spherical image according to the model-view-projection matrix and the viewport transformation matrix comprises:
determining, according to the viewport transformation matrix, a coordinate point corresponding to the target pixel point in a normalized device coordinate (NDC) system;
and determining the target spherical coordinate point corresponding to the target pixel point according to the coordinate point corresponding to the target pixel point in the NDC system and the model-view-projection matrix.
7. The method of claim 6, characterized in that the determining, according to the viewport transformation matrix, a coordinate point corresponding to the target pixel point in the NDC system comprises:
determining, for each pixel point of the panoramic image in the screen coordinate system, a virtual coordinate value in the direction perpendicular to the screen;
and determining the coordinate point corresponding to the target pixel point in the NDC system according to the viewport transformation matrix, the coordinates of the target pixel point in the screen coordinate system, and the virtual coordinate value corresponding to the target pixel point.
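The screen-to-NDC step of claims 6 and 7 amounts to inverting the viewport transform. A minimal sketch, assuming an OpenGL-style full-window viewport with a bottom-left origin and the virtual perpendicular coordinate (depth) in [0, 1] — both assumptions, since the patent does not fix these conventions:

```python
def screen_to_ndc(sx, sy, depth, width, height):
    """Invert a standard viewport transform (bottom-left origin, full-window
    viewport) to recover normalized device coordinates in [-1, 1]; `depth`
    is the virtual coordinate assigned along the direction perpendicular
    to the screen, assumed to lie in [0, 1]."""
    nx = 2.0 * sx / width - 1.0
    ny = 2.0 * sy / height - 1.0
    nz = 2.0 * depth - 1.0
    return nx, ny, nz
```

The screen center with mid-range depth maps to the NDC origin, and the resulting point can then be unprojected through the inverse model-view-projection matrix to reach the sphere.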
8. The method according to any one of claims 1 to 7, characterized in that, after the fisheye image captured by the fisheye camera is acquired, the method further comprises:
in a case where the longitude-latitude expansion map corresponding to the fisheye image is a rectangular map, determining the coordinate mapping relationship between the longitude-latitude expansion map and the fisheye image by: mapping the longitude-latitude expansion map onto a three-dimensional sphere to obtain a three-dimensional spherical image; establishing a coordinate mapping relationship between the three-dimensional spherical image and the fisheye image according to parameters of the fisheye camera; and determining the coordinate mapping relationship between the longitude-latitude expansion map and the fisheye image according to the coordinate mapping relationship between the longitude-latitude expansion map and the three-dimensional spherical image and the coordinate mapping relationship between the three-dimensional spherical image and the fisheye image;
and expanding the fisheye image according to the coordinate mapping relationship between the longitude-latitude expansion map and the fisheye image to obtain the corresponding longitude-latitude expansion map.
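The sphere-to-fisheye mapping of claim 8 depends on the fisheye camera's parameters. A minimal sketch under an assumed equidistant projection model (r = f·θ) with the optical axis along +z; the model choice, axis convention, and function name are illustrative assumptions — a real camera needs calibrated intrinsics and distortion terms:

```python
import math

def latlon_to_fisheye(lon, lat, f, cx, cy):
    """Map a latitude/longitude point on the unit sphere into a fisheye
    image under an assumed equidistant model (r = f * theta), optical axis
    along +z; real cameras need calibrated intrinsics and distortion."""
    x = math.cos(lat) * math.sin(lon)          # point on the unit sphere (y up)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    theta = math.acos(max(-1.0, min(1.0, z)))  # angle from the optical axis
    r = f * theta                              # equidistant projection radius
    phi = math.atan2(y, x)                     # azimuth in the image plane
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```

The forward optical axis maps to the image center (cx, cy), and points 90 degrees off-axis land at radius f·π/2, which is the usual behavior of the equidistant model.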
9. An image processing apparatus, characterized in that the apparatus comprises an acquisition module, a coordinate conversion module, a coordinate determination module, and a processing module, wherein:
the acquisition module is configured to acquire a fisheye image captured by a fisheye camera;
the coordinate conversion module is configured to convert the fisheye image into a spherical image in a spherical coordinate system according to a longitude-latitude expansion map corresponding to the fisheye image, and to convert the spherical image into a panoramic image in a screen coordinate system;
the coordinate determination module is configured to determine, in the spherical image, a target spherical coordinate point corresponding to a target pixel point in the panoramic image, and to determine the coordinates of the target pixel point in the fisheye image according to the longitude-latitude coordinates of the target spherical coordinate point and a coordinate mapping relationship between the fisheye image and the longitude-latitude expansion map, wherein the target pixel point is any pixel point in the panoramic image;
and the processing module is configured to process the panoramic image according to the coordinates of the target pixel point in the fisheye image.
10. An electronic device, characterized by comprising a processor and a memory for storing a computer program operable on the processor, wherein:
the processor is configured to run the computer program to perform the method of any one of claims 1 to 8.
11. A computer storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 8.
CN202210145911.8A 2022-02-17 2022-02-17 Image processing method, image processing device, electronic equipment and computer storage medium Withdrawn CN114549289A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210145911.8A CN114549289A (en) 2022-02-17 2022-02-17 Image processing method, image processing device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210145911.8A CN114549289A (en) 2022-02-17 2022-02-17 Image processing method, image processing device, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN114549289A (en) 2022-05-27

Family

ID=81675281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210145911.8A Withdrawn CN114549289A (en) 2022-02-17 2022-02-17 Image processing method, image processing device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN114549289A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115018904A (en) * 2022-06-02 2022-09-06 如你所视(北京)科技有限公司 Mask generation method and device for panoramic image
CN115018904B (en) * 2022-06-02 2023-10-20 如你所视(北京)科技有限公司 Method and device for generating mask of panoramic image
CN116431095A (en) * 2023-03-23 2023-07-14 北京凯视达科技股份有限公司 Panoramic display method, panoramic display device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Bogdan et al. DeepCalib: A deep learning approach for automatic intrinsic calibration of wide field-of-view cameras
CN108335353B (en) Three-dimensional reconstruction method, device and system of dynamic scene, server and medium
CN111857329B (en) Method, device and equipment for calculating fixation point
CN108876926B (en) Navigation method and system in panoramic scene and AR/VR client equipment
CN107169924B (en) Method and system for establishing three-dimensional panoramic image
JP6764995B2 (en) Panorama image compression method and equipment
US20190012804A1 (en) Methods and apparatuses for panoramic image processing
WO2017152803A1 (en) Image processing method and device
EP3134868A1 (en) Generation and use of a 3d radon image
CN114549289A (en) Image processing method, image processing device, electronic equipment and computer storage medium
CN113345063B (en) PBR three-dimensional reconstruction method, system and computer storage medium based on deep learning
CN111161398B (en) Image generation method, device, equipment and storage medium
CN110648274B (en) Method and device for generating fisheye image
CN105809729B (en) A kind of spherical panorama rendering method of virtual scene
US11922568B2 (en) Finite aperture omni-directional stereo light transport
CN110544278B (en) Rigid body motion capture method and device and AGV pose capture system
WO2017113729A1 (en) 360-degree image loading method and loading module, and mobile terminal
CN114004890B (en) Attitude determination method and apparatus, electronic device, and storage medium
CN114511447A (en) Image processing method, device, equipment and computer storage medium
CN113132708B (en) Method and apparatus for acquiring three-dimensional scene image using fisheye camera, device and medium
CN113096008A (en) Panoramic picture display method, display device and storage medium
US11288774B2 (en) Image processing method and apparatus, storage medium, and electronic apparatus
CN110163922B (en) Fisheye camera calibration system, fisheye camera calibration method, fisheye camera calibration device, electronic equipment and storage medium
Popovic et al. Design and implementation of real-time multi-sensor vision systems
WO2018150086A2 (en) Methods and apparatuses for determining positions of multi-directional image capture apparatuses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220527