CN112489114A - Image conversion method and device, computer readable storage medium and electronic equipment - Google Patents
- Publication number
- CN112489114A (application CN202011344168.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- coordinate system
- mapped
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the present disclosure provide an image conversion method and an image conversion apparatus. The method includes: performing distortion correction on an original image, based on a first distortion parameter and first intrinsic parameters of the original camera used to capture it, to obtain a corrected image; determining an image to be mapped based on the corrected image; mapping the image to be mapped into the two-dimensional image-plane coordinate system of a target camera, based on the first intrinsics of the original camera and second intrinsics of the target camera, to obtain a mapped image; and applying distortion restoration to the mapped image, based on a second distortion parameter and the second intrinsics of the target camera, to obtain a target image. Using only camera intrinsics and distortion coefficients, the disclosed embodiments can generate images of a different style that simulate capture by another camera, in a simple and efficient manner. Because the pixels of the generated image are converted directly from pixels of the original image, the result stays close to the real scene.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image conversion method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Different cameras produce different imaging characteristics because their parameters (intrinsics, distortion coefficients, and so on) differ. Deep-learning-based image processing algorithms depend strongly on their training data set: if the camera used to collect data in the actual scene differs substantially (in intrinsics, distortion coefficients, etc.) from the camera used to collect the training data, the model's predictions in the actual scene are generally poor. In practice, many different cameras may be used in actual scenes, and adapting a model to such a scene normally requires re-collecting a batch of data with the actual camera and retraining, while the cost of data acquisition (manpower, material, time, and so on) is typically high. How to make effective use of existing data to generate data as if captured by the actual camera is therefore a problem well worth solving.
At present, methods for generating data under different cameras generally rely on deep learning, for example using generative adversarial networks to perform style transfer and produce data that fits the actual scene.
Disclosure of Invention
The embodiment of the disclosure provides an image conversion method, an image conversion device, a computer-readable storage medium and an electronic device.
An embodiment of the present disclosure provides an image conversion method, including: performing distortion correction on an original image, based on a first distortion parameter and first intrinsic parameters of the original camera used to capture it, to obtain a corrected image; determining an image to be mapped based on the corrected image; mapping the image to be mapped into the two-dimensional image-plane coordinate system of a target camera, based on the first intrinsics of the original camera and second intrinsics of the target camera, to obtain a mapped image; and applying distortion restoration to the mapped image, based on a second distortion parameter and the second intrinsics of the target camera, to obtain a target image.
According to another aspect of an embodiment of the present disclosure, there is provided an image conversion apparatus, including: a correction module, configured to perform distortion correction on an original image based on a first distortion parameter and first intrinsic parameters of the original camera used to capture it, to obtain a corrected image; a determining module, configured to determine an image to be mapped based on the corrected image; a mapping module, configured to map the image to be mapped into the two-dimensional image-plane coordinate system of a target camera based on the first intrinsics of the original camera and second intrinsics of the target camera, to obtain a mapped image; and a restoration module, configured to apply distortion restoration to the mapped image based on a second distortion parameter and the second intrinsics of the target camera, to obtain a target image.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-described image conversion method.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device, including: a processor; and a memory for storing processor-executable instructions; the processor is configured to read the executable instructions from the memory and execute them to implement the image conversion method described above.
With the image conversion method, apparatus, computer-readable storage medium, and electronic device provided by the embodiments of the present disclosure, a corrected image is obtained by performing distortion correction on the original image; an image to be mapped is then determined from the corrected image; the image to be mapped is mapped into the two-dimensional image-plane coordinate system of the target camera, based on the first intrinsics of the original camera and the second intrinsics of the target camera, yielding a mapped image; and finally distortion restoration is applied to the mapped image, based on the second distortion parameter and second intrinsics of the target camera, yielding the target image. Images of a different style, simulating capture by another camera, can thus be obtained from camera intrinsics and distortion coefficients alone, which is simpler and more efficient than learning-based generation. Because the pixels of the generated image are converted directly from pixels of the original image, the result stays close to the real scene.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a system diagram to which the present disclosure is applicable.
Fig. 2 is a flowchart illustrating an image conversion method according to an exemplary embodiment of the disclosure.
Fig. 3 is a flowchart illustrating an image conversion method according to another exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating an image conversion method according to another exemplary embodiment of the present disclosure.
Fig. 5 is a schematic structural diagram of an image conversion device according to an exemplary embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an image conversion device according to another exemplary embodiment of the present disclosure.
Fig. 7 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those skilled in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another; they carry no particular technical meaning and imply no necessary logical order between the elements.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" in the present disclosure generally indicates an "or" relationship between the preceding and following objects.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, and servers, which are operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with such electronic devices include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
To generate images with different shooting styles, commonly used methods train a model with deep learning, for example generative adversarial networks that perform style transfer to produce data fitting the actual scene. However, such methods are strongly affected by model accuracy, and because the pixel values of the output image are synthesized by a network, details of the image are easily distorted.
Exemplary System
Fig. 1 illustrates an exemplary system architecture 100 to which an image conversion method or an image conversion apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, system architecture 100 may include terminal device 101, network 102, and server 103. Network 102 is the medium used to provide communication links between terminal devices 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use terminal device 101 to interact with server 103 over network 102 to receive or send messages and the like. Various communication client applications, such as an image processing application, a human-computer interaction application, a search application, a web browser application, an instant messaging tool, and the like, may be installed on the terminal device 101.
The terminal device 101 may be various electronic devices including, but not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a fixed terminal such as a digital TV, a desktop computer, etc.
The server 103 may be a server that provides various services, such as a background image processing server that performs shooting style conversion on an image uploaded by the terminal apparatus 101. The background image processing server may process the received image to obtain a processing result (e.g., an image simulating the shooting style of another camera).
It should be noted that the image conversion method provided by the embodiment of the present disclosure may be executed by the server 103 or the terminal device 101, and accordingly, the image conversion apparatus may be disposed in the server 103 or the terminal device 101.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the original image for conversion does not need to be acquired from a remote place, the above system architecture may not include a network, and only include a server or a terminal device.
Exemplary method
Fig. 2 is a flowchart illustrating an image conversion method according to an exemplary embodiment of the disclosure. The embodiment can be applied to an electronic device (such as the terminal device 101 or the server 103 shown in fig. 1), and as shown in fig. 2, the method includes the following steps:
Step 201: perform distortion correction on the original image, based on a first distortion parameter and first intrinsic parameters of the original camera used to capture the original image, to obtain a corrected image.

In this embodiment, the electronic device may perform distortion correction on the original image based on the first distortion parameter and first intrinsics of the original camera used to capture it, obtaining a corrected image. The first distortion parameter may be obtained by calibrating the distortion parameters of the original camera in advance.
Specifically, camera distortion includes radial distortion and tangential distortion; the combined distortion model is:

u_d = u·(1 + k_1·r^2 + k_2·r^4 + k_3·r^6) + 2·p_1·u·v + p_2·(r^2 + 2·u^2)   (1)

v_d = v·(1 + k_1·r^2 + k_2·r^4 + k_3·r^6) + p_1·(r^2 + 2·v^2) + 2·p_2·u·v   (2)

where r^2 = u^2 + v^2, (k_1, k_2, p_1, p_2, k_3) are the distortion parameters of the camera, and f_x, f_y (used to normalize the pixel coordinates) come from the first intrinsics. (u_d, v_d) are the normalized coordinates of a point in the distorted image, and (u, v) are the normalized coordinates of the corresponding point in the undistorted image. Equations (1) and (2) convert coordinates from the undistorted image to the distorted image; the electronic device may therefore perform distortion correction on the original image by inverting equations (1) and (2).
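For illustration, equations (1) and (2) can be applied directly in code. The sketch below is not part of the patent; the function name and the normalized-coordinate convention are assumptions. It computes the distorted normalized coordinates from the undistorted ones:

```python
def distort_normalized(u, v, dist):
    """Apply the radial + tangential model of equations (1) and (2).

    u, v : normalized, undistorted image coordinates
    dist : (k1, k2, p1, p2, k3) distortion parameters
    Returns the distorted normalized coordinates (u_d, v_d).
    """
    k1, k2, p1, p2, k3 = dist
    r2 = u * u + v * v
    # radial term 1 + k1 r^2 + k2 r^4 + k3 r^6
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    u_d = u * radial + 2 * p1 * u * v + p2 * (r2 + 2 * u * u)
    v_d = v * radial + p1 * (r2 + 2 * v * v) + 2 * p2 * u * v
    return u_d, v_d
```

Distortion correction, as described above, requires the inverse of this mapping, which has no closed form and is typically solved iteratively.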
Step 202: determine an image to be mapped based on the corrected image.

In this embodiment, the electronic device may determine the image to be mapped based on the corrected image. The image to be mapped is the image that will be mapped into the two-dimensional image-plane coordinate system of the other camera. Optionally, the corrected image may be used directly as the image to be mapped, or the corrected image may first be processed in some other way (for example, by converting the camera shooting angle) to obtain the image to be mapped.
Step 203: map the image to be mapped into the two-dimensional image-plane coordinate system of the target camera, based on the first intrinsics of the original camera and the second intrinsics of the target camera, to obtain a mapped image.
In this embodiment, the electronic device may map the image to be mapped into the two-dimensional image-plane coordinate system of the target camera based on the first intrinsics of the original camera and the second intrinsics of the target camera, obtaining the mapped image.

The first and second intrinsics may be obtained by calibrating the original camera and the target camera with an existing intrinsic calibration method. The target camera is a camera whose parameters (intrinsics, distortion parameters, and so on) differ from those of the original camera; because its parameters differ, the style of the images it captures differs from that of the original camera. For example, the original camera may use a normal lens and the target camera a fisheye lens.

The first intrinsics characterize the mapping between the three-dimensional local coordinate system of the original camera and its two-dimensional image-plane coordinate system; the second intrinsics characterize the mapping between the three-dimensional local coordinate system of the target camera and its two-dimensional image-plane coordinate system. The purpose of this embodiment is to convert the original image into another style, simulating an image captured by the target camera at the camera position and pose corresponding to the image to be mapped.

Generally, when the position and pose of the target camera are the same as those of the original camera, the pixel points of the image to be mapped can be converted into the three-dimensional local coordinate system based on the first intrinsics, and then converted into the two-dimensional image-plane coordinate system based on the second intrinsics, yielding the mapped image.
Step 204: apply distortion restoration to the mapped image, based on the second distortion parameter and second intrinsics of the target camera, to obtain the target image.
In this embodiment, the electronic device may apply distortion restoration to the mapped image based on the second distortion parameter and second intrinsics of the target camera, obtaining the target image. Specifically, the electronic device may apply the target camera's distortion to the mapped image using equations (1) and (2) above.
With the method provided by this embodiment of the present disclosure, distortion correction is performed on the original image to obtain a corrected image; an image to be mapped is determined from the corrected image; the image to be mapped is mapped into the two-dimensional image-plane coordinate system of the target camera based on the first intrinsics of the original camera and the second intrinsics of the target camera, yielding a mapped image; and distortion restoration is applied to the mapped image based on the second distortion parameter and second intrinsics of the target camera, yielding the target image. Images of a different style, simulating capture by another camera, can thus be obtained from camera intrinsics and distortion coefficients alone, in a simple and efficient manner. Because the pixels of the generated image are converted directly from pixels of the original image, the result stays close to the real scene.
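The four steps above can be sketched at the level of a single pixel coordinate. This is a minimal illustration, not the patent's implementation; the function names, the skew-free intrinsic-matrix layout [[f_x, 0, c_x], [0, f_y, c_y], [0, 0, 1]], and the fixed-point inversion of equations (1)-(2) are all assumptions:

```python
def undistort_normalized(ud, vd, dist, iters=10):
    """Invert equations (1)-(2) by fixed-point iteration (a common approach;
    convergence is assumed for moderate distortion)."""
    k1, k2, p1, p2, k3 = dist
    u, v = ud, vd
    for _ in range(iters):
        r2 = u * u + v * v
        radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        du = 2 * p1 * u * v + p2 * (r2 + 2 * u * u)
        dv = p1 * (r2 + 2 * v * v) + 2 * p2 * u * v
        u = (ud - du) / radial
        v = (vd - dv) / radial
    return u, v

def convert_pixel(px, py, K_o, dist_o, K_d, dist_d):
    """Map one pixel of the original image to the target camera's image plane
    (steps 201, 203 and 204, with the corrected image used directly in step 202)."""
    # Step 201: normalize with the first intrinsics and correct the distortion.
    u_d = (px - K_o[0][2]) / K_o[0][0]
    v_d = (py - K_o[1][2]) / K_o[1][1]
    u, v = undistort_normalized(u_d, v_d, dist_o)
    # Step 204: re-apply the target camera's distortion in normalized coordinates...
    k1, k2, p1, p2, k3 = dist_d
    r2 = u * u + v * v
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    ud2 = u * radial + 2 * p1 * u * v + p2 * (r2 + 2 * u * u)
    vd2 = v * radial + p1 * (r2 + 2 * v * v) + 2 * p2 * u * v
    # ...then (step 203) project with the second intrinsics K_d.
    return K_d[0][0] * ud2 + K_d[0][2], K_d[1][1] * vd2 + K_d[1][2]
```

A full image conversion would apply this mapping over all pixel coordinates (in practice with inverse mapping and interpolation, e.g. a remap operation).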
In some alternative implementations, step 203 may be performed as follows:
First, a first mapping relationship between the two-dimensional image-plane coordinate system of the original camera and the three-dimensional local coordinate system of the original camera is determined based on the first intrinsics.
Specifically, the camera imaging (projection) formula is:

Z·[u, v, 1]^T = K·[X, Y, Z]^T   (3)

where [X, Y, Z]^T are the coordinates of a point in the camera's three-dimensional local coordinate system, [u, v]^T are its coordinates in the image's two-dimensional coordinate system, and K is the camera intrinsic matrix.

Equation (3) converts coordinates from the camera's three-dimensional local coordinate system to the two-dimensional image-plane coordinate system; from equation (3), the conversion from the two-dimensional image-plane coordinates of the image to be mapped to the three-dimensional local coordinate system of the original camera is therefore:

[X, Y, Z]^T = Z·K_o^{-1}·[u_o, v_o, 1]^T   (4)

where K_o denotes the first intrinsic matrix, K_o^{-1} represents the first mapping relationship, and [u_o, v_o]^T are the coordinates of a pixel point in the image to be mapped.
Then, based on the second intrinsics, a second mapping relationship between the three-dimensional local coordinate system of the original camera and the two-dimensional image-plane coordinate system of the target camera is determined.
Specifically, according to equation (3), the conversion from the three-dimensional local coordinate system of the target camera to the two-dimensional image-plane coordinate system of the mapped image is:

Z·[u_d, v_d, 1]^T = K_d·[X, Y, Z]^T   (5)

where K_d is the second intrinsic matrix and represents the second mapping relationship, and [u_d, v_d]^T are the coordinates of a pixel point in the mapped image.
Finally, the image to be mapped is mapped into the two-dimensional image-plane coordinate system of the target camera based on the first and second mapping relationships, yielding the mapped image.
Specifically, combining equations (4) and (5) gives the coordinate mapping relationship between the image to be mapped and the mapped image:

[u_d, v_d, 1]^T = K_d·K_o^{-1}·[u_o, v_o, 1]^T   (6)

Substituting the first and second intrinsics into equation (6) and applying it to every pixel yields the mapped image.
In this implementation, the coordinate mapping relationship between the image to be mapped and the mapped image is determined by the first and second intrinsics, and every pixel point of the image to be mapped can be mapped directly into the mapped image. This improves the efficiency of generating the mapped image and, because all pixel points of the mapped image come from the image to be mapped, reduces the risk of image distortion.
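Equation (6) can be illustrated for a single pixel as follows (an illustrative sketch, not the patent's code; the function name and the skew-free intrinsic-matrix layout are assumptions — with zero skew, K_o^{-1} reduces to normalization and K_d to re-projection):

```python
def map_point(p, K_o, K_d):
    """Apply equation (6): [u_d, v_d, 1]^T = K_d · K_o^{-1} · [u_o, v_o, 1]^T.

    Each K is given as [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] (no skew assumed).
    """
    u_o, v_o = p
    # K_o^{-1} applied to a pixel: normalize by the first intrinsics
    x = (u_o - K_o[0][2]) / K_o[0][0]
    y = (v_o - K_o[1][2]) / K_o[1][1]
    # K_d applied to the normalized point: project with the second intrinsics
    u_d = K_d[0][0] * x + K_d[0][2]
    v_d = K_d[1][1] * y + K_d[1][2]
    return u_d, v_d
```

For example, with a target camera of half the focal length and half the principal-point offsets, every pixel of the image to be mapped lands at half its offset from the principal point.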
In some alternative implementations, step 202 may be performed as follows:
The corrected image is converted into the image to be mapped in the three-dimensional local coordinate system of the original camera, based on the first intrinsics and a preset camera rotation angle. The camera rotation angle may be a pre-specified angle and generally includes a pitch angle (pitch), a yaw angle (yaw), and a roll angle (roll); it describes the change of the shooting angle relative to the shooting angle of the corrected image.

Generally, the pixel points of the corrected image are converted into the three-dimensional local coordinate system of the original camera based on the first intrinsics, an affine transformation is applied in that coordinate system to rotate the pixel points, and the points are then mapped back into the two-dimensional coordinate system, yielding the image to be mapped.
In this implementation, converting the shooting angle of the corrected image makes it possible to generate images under different shooting angles, so that a target camera with parameters different from the original camera's can be simulated at various shooting angles. Target images differing more substantially from the original image can thus be obtained, broadening the range of application of the image conversion.
In some optional implementations, as shown in fig. 3, in the above optional embodiment, the corrected image may be converted into the image to be mapped based on the first internal reference and the preset camera rotation angle according to the following steps:
Step 2021: extract a first pixel point set containing a preset number of pixel points from the corrected image.

The preset number may be set arbitrarily; for example, it may be 4. The electronic device may extract the first pixel point set in various ways, such as random extraction or extraction at designated positions.
Optionally, the electronic device may extract from the corrected image a first pixel point set containing a preset number of non-collinear points. A non-collinear first pixel point set fully represents the two-dimensional characteristics of the image and improves the accuracy of the subsequent affine transformation.
Step 2022: convert the first pixel point set into a second pixel point set based on the first intrinsics and the preset camera rotation angle.

Generally, the electronic device may convert the first pixel point set into the three-dimensional local coordinate system of the original camera based on the first intrinsics, apply an affine transformation in that coordinate system to rotate the points, and then map them back into the two-dimensional coordinate system, obtaining the second pixel point set.
Specifically, the coordinate mapping relationship may be obtained based on the following formula:

f = K·R_z·R_y·R_x·K^{-1}   (7)

where f represents the coordinate mapping relationship, i.e. [u', v', 1]^T = f·[u, v, 1]^T, with [u', v', 1]^T the homogeneous two-dimensional coordinates of the third pixel point set and [u, v, 1]^T those of the second pixel point set; K is the first intrinsic matrix, and R_z, R_y, R_x are rotation matrices:

R_x = [[1, 0, 0], [0, cos(pitch), -sin(pitch)], [0, sin(pitch), cos(pitch)]]
R_y = [[cos(yaw), 0, sin(yaw)], [0, 1, 0], [-sin(yaw), 0, cos(yaw)]]
R_z = [[cos(roll), -sin(roll), 0], [sin(roll), cos(roll), 0], [0, 0, 1]]

R_x represents a rotation about the x-axis, R_y a rotation about the y-axis, and R_z a rotation about the z-axis; pitch, yaw, and roll in the rotation matrices are the camera rotation angles.
Specifically, based on the above f, the coordinates of each pixel in the corrected image may be converted, and finally the image to be mapped is obtained.
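The mapping f = K · R_z · R_y · R_x · K⁻¹ can be sketched in numpy as follows. This is an illustration, not the patent's code; the axis conventions (pitch about x, yaw about y, roll about z) and the example intrinsic matrix are assumptions.

```python
import numpy as np

def rotation_from_angles(pitch, yaw, roll):
    """Compose R_z · R_y · R_x from the camera rotation angles (radians)."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def view_angle_map(K, pitch, yaw, roll):
    """Build the 3x3 mapping f = K · Rz · Ry · Rx · K^-1."""
    return K @ rotation_from_angles(pitch, yaw, roll) @ np.linalg.inv(K)

def apply_map(f, pts):
    """Apply f to Nx2 pixel coordinates via homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ f.T
    return q[:, :2] / q[:, 2:3]  # perspective division back to 2D

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
```

Applying `apply_map` to every pixel coordinate of the corrected image (and resampling) yields the image to be mapped; with all three angles zero, f reduces to the identity and the image is unchanged.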
In the method provided by the embodiment corresponding to fig. 3, the first pixel point set is extracted from the corrected image and the second pixel point set is obtained based on the first internal reference and the camera rotation angle, yielding the coordinate mapping relationship between the two pixel point sets. The coordinate mapping relationship between the images can therefore be determined from a small number of pixel points, which reduces the amount of calculation and improves image conversion efficiency.
In some alternative implementations, as shown in fig. 4, step 2022 may include the following steps:
Specifically, based on the following expression (8), the first set of pixel points may be converted into a first set of spatial points in a three-dimensional local coordinate system.
Here, K_o is the first internal reference, and the two-dimensional coordinates are those of the first pixel point set.
Specifically, the affine transformation formula is shown as follows:
R_x, R_y, and R_z are the rotation matrices described in step 2023 above. The coordinates of the first set of spatial points may be affine transformed according to equation (9).
Specifically, based on the above formula (3), a second set of spatial points in the three-dimensional local coordinate system of the camera may be converted to a second set of pixel points in the two-dimensional image plane coordinate system.
According to this implementation, the first pixel point set is converted into the three-dimensional local coordinate system for affine transformation and then converted back from the three-dimensional local coordinate system into the two-dimensional coordinate system. The sight angle conversion can thus be performed accurately on the first pixel point set, which improves the accuracy of the coordinate mapping relationship determined between the corrected image and the image to be mapped.
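The three conversion subunits can be sketched as the following numpy decomposition. This is an illustration under the assumption that pixels are lifted onto the z = 1 plane of the camera's local frame; the function names and example values are ours, not the patent's.

```python
import numpy as np

def lift_to_3d(K, pts):
    """First subunit: pixel coords -> 3D local coords (on the z = 1 plane) via K^-1."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return ph @ np.linalg.inv(K).T

def rotate_points(pts3d, R):
    """Second subunit: affine (rotation) transform in the 3D local frame."""
    return pts3d @ R.T

def project_to_2d(K, pts3d):
    """Third subunit: 3D local coords -> pixel coords via K, with perspective division."""
    q = pts3d @ K.T
    return q[:, :2] / q[:, 2:3]

# Hypothetical intrinsics and a 30-degree rotation about the optical (z) axis.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
t = np.pi / 6
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0, 0.0, 1.0]])

pts = np.array([[100.0, 50.0], [320.0, 240.0]])
stepwise = project_to_2d(K, rotate_points(lift_to_3d(K, pts), Rz))
```

The stepwise result coincides with applying the single composed mapping K · R · K⁻¹, which is why the coordinate mapping relationship can also be expressed as one 3×3 matrix.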
Exemplary devices
Fig. 5 is a schematic structural diagram of an image conversion device according to an exemplary embodiment of the present disclosure. The present embodiment can be applied to an electronic device, as shown in fig. 5, the image conversion apparatus includes: the correction module 501 is configured to perform distortion correction processing on an original image based on a first distortion parameter and a first internal parameter of an original camera used for acquiring the original image, so as to obtain a corrected image; a determining module 502, configured to determine an image to be mapped based on the rectified image; the mapping module 503 is configured to map the image to be mapped to a two-dimensional image plane coordinate system of the target camera based on a first internal reference of the original camera and a second internal reference of the target camera, so as to obtain a mapped image; and a restoring module 504, configured to perform distortion restoring processing on the mapped image based on the second distortion parameter and the second internal parameter of the target camera, so as to obtain a target image.
In this embodiment, the correcting module 501 may perform distortion correction processing on the original image based on a first distortion parameter and a first internal parameter of an original camera used for acquiring the original image, so as to obtain a corrected image. The first distortion parameter may be obtained by calibrating a distortion parameter of the original camera in advance.
In this embodiment, the determining module 502 may determine the image to be mapped based on the rectified image.
The image to be mapped is the image that is mapped to the two-dimensional image plane coordinates of another camera. Optionally, the corrected image may be directly determined as the image to be mapped, or the corrected image may be processed in another manner (for example, camera shooting angle conversion) to obtain the image to be mapped.
In this embodiment, the mapping module 503 may map the image to be mapped to the two-dimensional image plane coordinate system of the target camera based on the first internal reference of the original camera and the second internal reference of the target camera, so as to obtain a mapped image.
The first internal reference and the second internal reference may be obtained by performing internal reference calibration on the original camera and the target camera based on an existing internal reference calibration method. The target camera is a camera having different parameters (including internal parameters, distortion parameters, etc.) from the original camera, and since the parameters of the target camera are different from those of the original camera, the style of the image taken by the target camera is different from that of the image taken by the original camera. For example, the original camera is a normal lens camera and the target camera is a fisheye lens camera.
The first internal reference is used for representing the mapping relation between the three-dimensional local coordinate system of the original camera and the two-dimensional image plane coordinate system of the original camera, and the second internal reference is used for representing the mapping relation between the three-dimensional local coordinate system of the target camera and the two-dimensional image plane coordinate system of the target camera. Since the purpose of this embodiment is to convert the style of the original image into another style, it can be simulated that the image is taken with the target camera at the camera position and posture corresponding to the image to be mapped.
Generally, based on the first internal reference, the pixel points in the image to be mapped are converted into the three-dimensional local coordinate system, and then based on the second internal reference, the pixel points in the three-dimensional local coordinate system are converted into the two-dimensional image plane coordinate system, so as to obtain the mapped image.
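A hedged numpy sketch of this lift-then-project remapping between the two cameras (the intrinsic values below are invented for illustration; real values come from calibration):

```python
import numpy as np

def remap_intrinsics(K_src, K_dst, pts):
    """Map Nx2 pixel coords from the source camera's image plane to the target's:
    lift into the shared 3D local frame with K_src^-1, then project with K_dst."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ (K_dst @ np.linalg.inv(K_src)).T
    return q[:, :2] / q[:, 2:3]

# Hypothetical first (original) and second (target) internal references.
K_o = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
K_t = np.array([[400.0, 0.0, 640.0], [0.0, 400.0, 360.0], [0.0, 0.0, 1.0]])
```

The principal point of the original camera lands on the principal point of the target camera, and a shorter target focal length shrinks offsets around it; this change of scale and center is part of what gives the mapped image its different style.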
In this embodiment, the restoring module 504 may perform distortion restoring processing on the mapped image based on the second distortion parameter and the second internal parameter of the target camera to obtain the target image. Specifically, the restoration module 504 may perform distortion restoration processing on the mapping image based on the above equations (1) and (2).
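Equations (1) and (2) are not reproduced in this excerpt, so the sketch below substitutes the common two-term radial (Brown-Conrady) model to illustrate what distortion-restoring processing typically looks like; the model choice and coefficient values are assumptions, not the patent's formulas.

```python
import numpy as np

def apply_radial_distortion(K, k1, k2, pts):
    """Distort Nx2 pixel coords: normalize with the intrinsics, scale the
    normalized coords by 1 + k1*r^2 + k2*r^4, and map back to pixels."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (pts[:, 0] - cx) / fx
    y = (pts[:, 1] - cy) / fy
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.stack([fx * x * s + cx, fy * y * s + cy], axis=1)

# Hypothetical second internal reference of the target camera.
K_t = np.array([[400.0, 0.0, 640.0], [0.0, 400.0, 360.0], [0.0, 0.0, 1.0]])
```

With negative k1 (barrel distortion, as in a fisheye lens) points are pulled toward the principal point, which itself stays fixed.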
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image conversion apparatus according to another exemplary embodiment of the present disclosure.
In some optional implementations, the mapping module 503 includes: a first determining unit 5031, configured to determine, based on the first internal reference, a first mapping relationship between a two-dimensional image plane coordinate system of the original camera and a local three-dimensional coordinate system of the original camera; a second determining unit 5032, configured to determine, based on the second internal reference, a second mapping relationship between the local three-dimensional coordinate system of the original camera and the two-dimensional image plane coordinate system of the target camera; the mapping unit 5033 is configured to map the image to be mapped to a two-dimensional image plane coordinate system of the target camera based on the first mapping relationship and the second mapping relationship to obtain a mapped image.
In some optional implementations, the determining module 502 is further configured to: and converting the corrected image into an image to be mapped based on the first internal reference and a preset camera rotation angle in a three-dimensional local coordinate system of the original camera.
In some optional implementations, the determining module 502 includes: an extracting unit 5021, configured to extract a first pixel point set including a preset number of points from the rectified image; a first conversion unit 5022, configured to convert the first pixel point set into a second pixel point set based on the first internal reference and the camera rotation angle in the two-dimensional image plane coordinate system of the corrected image; a third determining unit 5023, configured to determine a coordinate mapping relationship between the first pixel point set and the second pixel point set in the two-dimensional image plane coordinate system of the corrected image; the second conversion unit 5024 is configured to convert the rectified image into an image to be mapped according to the coordinate mapping relationship.
In some optional implementations, the first conversion unit 5022 includes: a first conversion subunit 50221, configured to convert the first pixel point set into a first spatial point set in a three-dimensional local coordinate system based on the first internal reference; the second conversion subunit 50222 is configured to perform affine transformation on the coordinates of the first spatial point set based on the camera rotation angle to obtain a second spatial point set; a third converting subunit 50223, configured to convert the second spatial point set to a second pixel point set in the two-dimensional image plane coordinate system based on the second internal reference.
In some optional implementations, the extraction unit 5021 is further to: a first set of pixel points comprising a predetermined number of non-collinear points is extracted from the rectified image.
The image conversion apparatus provided by the above embodiment of the present disclosure performs distortion correction processing on an original image to obtain a corrected image, determines an image to be mapped based on the corrected image, maps the image to be mapped to the two-dimensional image plane coordinate system of the target camera based on the first internal reference of the original camera and the second internal reference of the target camera to obtain a mapped image, and finally performs distortion reduction processing on the mapped image based on the second distortion parameter and the second internal reference of the target camera to obtain the target image. An image in a different style, simulating a shot by another camera, can thus be obtained from the camera internal references and distortion coefficients alone, in a simpler and more efficient implementation. Because the pixels of the generated image are converted from the pixels of the original image, the generated image is closer to the real scene.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 7. The electronic device may be either or both of the terminal device 101 and the server 103 as shown in fig. 1, or a stand-alone device separate from them, which may communicate with the terminal device 101 and the server 103 to receive the collected input signals therefrom.
FIG. 7 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 7, the electronic device 700 includes one or more processors 701 and memory 702.
The processor 701 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 700 to perform desired functions.
Memory 702 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on a computer-readable storage medium and executed by the processor 701 to implement the image conversion methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents, such as an original image and a target image, may also be stored in the computer-readable storage medium.
In one example, the electronic device 700 may further include: an input device 703 and an output device 704, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the terminal device 101 or the server 103, the input device 703 may be a camera, a mouse, a keyboard, or the like, and is used for inputting an image. When the electronic device is a stand-alone device, the input means 703 may be a communication network connector for receiving input images from the terminal device 101 and the server 103.
The output device 704 may output various information including the generated target image to the outside. The output devices 704 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 700 relevant to the present disclosure are shown in fig. 7, omitting components such as buses, input/output interfaces, and the like. In addition, electronic device 700 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the image conversion method according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the image conversion method according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
A computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (10)
1. An image conversion method, comprising:
based on a first distortion parameter and a first internal parameter of an original camera used for collecting an original image, carrying out distortion correction processing on the original image to obtain a corrected image;
determining an image to be mapped based on the corrected image;
mapping the image to be mapped to a two-dimensional image plane coordinate system of the target camera based on the first internal reference of the original camera and the second internal reference of the target camera to obtain a mapped image;
and carrying out distortion reduction processing on the mapping image based on the second distortion parameter and the second internal parameter of the target camera to obtain a target image.
2. The method of claim 1, wherein the mapping the image to be mapped to a two-dimensional image plane coordinate system of the target camera based on the first internal reference of the original camera and the second internal reference of the target camera to obtain a mapping image comprises:
determining a first mapping relation between a two-dimensional image plane coordinate system of the original camera and a local three-dimensional coordinate system of the original camera based on the first internal reference;
determining a second mapping relation between the local three-dimensional coordinate system of the original camera and the two-dimensional image plane coordinate system of the target camera based on the second internal reference;
and mapping the image to be mapped to a two-dimensional image plane coordinate system of the target camera based on the first mapping relation and the second mapping relation to obtain a mapped image.
3. The method of claim 1, wherein the determining an image to be mapped based on the rectified image comprises:
and converting the corrected image into an image to be mapped based on the first internal reference and a preset camera rotation angle in the three-dimensional local coordinate system of the original camera.
4. The method of claim 3, wherein the converting the rectified image into an image to be mapped based on the first internal parameter and a preset camera rotation angle comprises:
extracting a first pixel point set containing a preset number of points from the corrected image;
converting the first pixel point set into a second pixel point set based on the first internal reference and the camera rotation angle under a two-dimensional image plane coordinate system of the corrected image;
determining a coordinate mapping relation of the first pixel point set and the second pixel point set under a two-dimensional image plane coordinate system of the corrected image;
and converting the corrected image into an image to be mapped according to the coordinate mapping relation.
5. The method of claim 4, wherein said converting the first set of pixel points to a second set of pixel points based on the first internal reference and the camera rotation angle comprises:
converting the first set of pixel points into a first set of spatial points in the three-dimensional local coordinate system based on the first internal reference;
performing affine transformation on the coordinates of the first space point set based on the camera rotation angle to obtain a second space point set;
based on the second internal reference, converting the second set of spatial points to a second set of pixel points in the two-dimensional image plane coordinate system.
6. The method according to claim 4 or 5, wherein said extracting from said rectified image a first set of pixel points comprising a preset number of points comprises:
a first set of pixel points comprising a preset number of non-collinear points is extracted from the rectified image.
7. An image conversion apparatus comprising:
the correction module is used for carrying out distortion correction processing on the original image based on a first distortion parameter and a first internal parameter of an original camera used for collecting the original image to obtain a corrected image;
the determining module is used for determining an image to be mapped based on the corrected image;
the mapping module is used for mapping the image to be mapped to a two-dimensional image plane coordinate system of the target camera based on the first internal reference of the original camera and the second internal reference of the target camera to obtain a mapped image;
and the restoration module is used for carrying out distortion restoration processing on the mapping image based on the second distortion parameter and the second internal parameter of the target camera to obtain a target image.
8. The apparatus of claim 7, wherein the mapping module comprises:
the first determining unit is used for determining a first mapping relation between a two-dimensional image plane coordinate system of the original camera and a local three-dimensional coordinate system of the original camera based on the first internal reference;
a second determining unit, configured to determine, based on the second internal reference, a second mapping relationship between the local three-dimensional coordinate system of the original camera and the two-dimensional image plane coordinate system of the target camera;
and the mapping unit is used for mapping the image to be mapped to a two-dimensional image plane coordinate system of the target camera based on the first mapping relation and the second mapping relation to obtain a mapping image.
9. A computer-readable storage medium, the storage medium storing a computer program for performing the method of any of the preceding claims 1-6.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011344168.6A CN112489114B (en) | 2020-11-25 | 2020-11-25 | Image conversion method, image conversion device, computer readable storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112489114A true CN112489114A (en) | 2021-03-12 |
CN112489114B CN112489114B (en) | 2024-05-10 |
Family
ID=74934976
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011344168.6A Active CN112489114B (en) | 2020-11-25 | 2020-11-25 | Image conversion method, image conversion device, computer readable storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112489114B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112884769A (en) * | 2021-04-12 | 2021-06-01 | 深圳中科飞测科技股份有限公司 | Image processing method, image processing apparatus, optical system, and computer-readable storage medium |
CN113055601A (en) * | 2021-03-30 | 2021-06-29 | Oppo广东移动通信有限公司 | Binocular shooting correction method, binocular shooting correction device and electronic equipment |
CN113067984A (en) * | 2021-03-30 | 2021-07-02 | Oppo广东移动通信有限公司 | Binocular shooting correction method, binocular shooting correction device and electronic equipment |
CN113096192A (en) * | 2021-04-25 | 2021-07-09 | 西安四维图新信息技术有限公司 | Image sensor internal reference calibration method, device, equipment and storage medium |
CN114821497A (en) * | 2022-02-24 | 2022-07-29 | 广州文远知行科技有限公司 | Method, device and equipment for determining position of target object and storage medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002158915A (en) * | 2000-09-05 | 2002-05-31 | Inst Of Physical & Chemical Res | Correction method of lens distortion of image |
CN104363986A (en) * | 2014-10-31 | 2015-02-18 | 华为技术有限公司 | Image processing method and device |
US20150254818A1 (en) * | 2014-03-10 | 2015-09-10 | Omnivision Technologies, Inc. | Image Transformation And Multi-View Output Systems And Methods |
CN107424126A (en) * | 2017-05-26 | 2017-12-01 | 广州视源电子科技股份有限公司 | Method for correcting image, device, equipment, system and picture pick-up device and display device |
US20180160045A1 (en) * | 2015-07-31 | 2018-06-07 | SZ DJI Technology Co., Ltd. | Method and device of image processing and camera |
US20180255282A1 (en) * | 2017-03-03 | 2018-09-06 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for generating image information |
CN109155822A (en) * | 2017-11-28 | 2019-01-04 | 深圳市大疆创新科技有限公司 | Image processing method and device |
WO2019041651A1 (en) * | 2017-08-30 | 2019-03-07 | 广州视源电子科技股份有限公司 | Image correction method, apparatus and device, and computer readable storage medium |
US20190132584A1 (en) * | 2017-11-01 | 2019-05-02 | Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. | Method and device for calibration |
CN110517202A (en) * | 2019-08-30 | 2019-11-29 | 的卢技术有限公司 | A kind of vehicle body camera calibration method and its caliberating device |
CN111028290A (en) * | 2019-11-26 | 2020-04-17 | 北京光年无限科技有限公司 | Graph processing method and device for picture book reading robot |
CN111047651A (en) * | 2019-12-12 | 2020-04-21 | 中航华东光电有限公司 | Method for correcting distorted image |
CN111539880A (en) * | 2020-04-15 | 2020-08-14 | 上海摩象网络科技有限公司 | Image processing method and device and handheld camera |
CN111951193A (en) * | 2020-08-21 | 2020-11-17 | 安谋科技(中国)有限公司 | Method and apparatus for correcting horizontal distortion of image |
Non-Patent Citations (1)
Title |
---|
Zhu Yao; Wang Zhile; He Lei; Zhou Chenghao: "Line-scan camera calibration and distortion correction method", Spacecraft Recovery & Remote Sensing (航天返回与遥感), no. 06 *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113055601A (en) * | 2021-03-30 | 2021-06-29 | Oppo广东移动通信有限公司 | Binocular shooting correction method, binocular shooting correction device and electronic equipment |
CN113067984A (en) * | 2021-03-30 | 2021-07-02 | Oppo广东移动通信有限公司 | Binocular shooting correction method, binocular shooting correction device and electronic equipment |
CN113067984B (en) * | 2021-03-30 | 2023-01-17 | Oppo广东移动通信有限公司 | Binocular shooting correction method, binocular shooting correction device and electronic equipment |
CN112884769A (en) * | 2021-04-12 | 2021-06-01 | 深圳中科飞测科技股份有限公司 | Image processing method, image processing apparatus, optical system, and computer-readable storage medium |
CN113096192A (en) * | 2021-04-25 | 2021-07-09 | 西安四维图新信息技术有限公司 | Image sensor internal reference calibration method, device, equipment and storage medium |
CN113096192B (en) * | 2021-04-25 | 2024-05-07 | 西安四维图新信息技术有限公司 | Image sensor internal parameter calibration method, device, equipment and storage medium |
CN114821497A (en) * | 2022-02-24 | 2022-07-29 | 广州文远知行科技有限公司 | Method, device and equipment for determining position of target object and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112489114B (en) | 2024-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112489114B (en) | Image conversion method, image conversion device, computer readable storage medium and electronic equipment | |
CN110288547A (en) | Method and apparatus for generating image denoising model | |
CN114399597B (en) | Method and device for constructing scene space model and storage medium | |
CN111429354B (en) | Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment | |
US11816865B2 (en) | Extrinsic camera parameter calibration method, extrinsic camera parameter calibration apparatus, and extrinsic camera parameter calibration system | |
CN111428805B (en) | Method for detecting salient object, model, storage medium and electronic device | |
CN111402404B (en) | Panorama complementing method and device, computer readable storage medium and electronic equipment | |
CN111432119B (en) | Image shooting method and device, computer readable storage medium and electronic equipment | |
CN111612842B (en) | Method and device for generating pose estimation model | |
CN112085775B (en) | Image processing method, device, terminal and storage medium | |
CN111784776B (en) | Visual positioning method and device, computer readable medium and electronic equipment | |
CN111429501A (en) | Depth map prediction model generation method and device and depth map prediction method and device | |
CN111325798A (en) | Camera model correction method and device, AR implementation equipment and readable storage medium | |
CN111402136B (en) | Panorama generation method and device, computer readable storage medium and electronic equipment | |
CN114047823A (en) | Three-dimensional model display method, computer-readable storage medium and electronic device | |
CN114742722A (en) | Document correction method, device, electronic equipment and storage medium | |
WO2022262273A1 (en) | Optical center alignment test method and apparatus, and storage medium and electronic device | |
CN114882465A (en) | Visual perception method and device, storage medium and electronic equipment | |
CN111445518A (en) | Image conversion method and device, depth map prediction method and device | |
CN117894038A (en) | Method and device for generating object gesture in image | |
CN110390731B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN111429353A (en) | Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment | |
CN112465716A (en) | Image conversion method and device, computer readable storage medium and electronic equipment | |
CN114241029B (en) | Image three-dimensional reconstruction method and device | |
CN115393423A (en) | Target detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||