CN113538587A - Camera coordinate transformation method, terminal and storage medium - Google Patents

Camera coordinate transformation method, terminal and storage medium

Info

Publication number
CN113538587A
CN113538587A (application CN202010298442.4A)
Authority
CN
China
Prior art keywords
camera
transformation
coordinate system
points
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010298442.4A
Other languages
Chinese (zh)
Inventor
赵国如
张宇
梁升云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010298442.4A priority Critical patent/CN113538587A/en
Priority to PCT/CN2020/139233 priority patent/WO2021208486A1/en
Publication of CN113538587A publication Critical patent/CN113538587A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 — Stereo camera calibration
    • G06T 3/00 — Geometric image transformation in the plane of the image
    • G06T 3/06
    • G06T 2200/00 — Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 — involving 3D image data
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image
    • G06T 2207/10012 — Stereo images

Abstract

The application relates to a camera coordinate transformation method, a terminal and a storage medium. The method comprises the following steps: step a: performing projection transformation on an original image acquired by a camera; step b: performing viewport transformation on the projection-transformed image, coarsely mapping all 3D points of the projected object in the visual scene onto a two-dimensional plane to obtain the mapping points of the projected object in the screen coordinate system; step c: adjusting the mapping points until they coincide with the 3D points in the original image, and taking the viewpoint and field-angle parameters at the moment of coincidence as the fixed parameters of the camera coordinate transformation. The method is simple to operate: even when the camera's intrinsic parameters are unknown or its extrinsic parameters are inaccurate, the 3D coordinate system can still be transformed into the screen/pixel coordinate system with good results, which addresses the inaccuracy of coordinate transformation based on conventional camera calibration and its dependence on the camera's intrinsic and extrinsic parameters.

Description

Camera coordinate transformation method, terminal and storage medium
Technical Field
The present application relates to the field of machine vision technologies, and in particular, to a method for transforming coordinates of a camera, a terminal, and a storage medium.
Background
Camera coordinate transformation is inseparable from camera calibration. In image measurement and machine vision applications, in order to determine the correlation between the three-dimensional geometric position of a point on the surface of an object in space and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. In most cases these parameters can only be obtained through experiment and calculation, and the process of solving for them is camera calibration. The calibration method currently most widely applied and of higher accuracy is Zhang's calibration method [Zhang Z. Flexible camera calibration by viewing a plane from unknown orientations [C]// Proceedings of the Seventh IEEE International Conference on Computer Vision. IEEE, 1999].
The patent "A 3D model transformation system and method" (application No. 201310450994.2) proposes a method for converting 2D planar images into corresponding 3D model images; the patent "Camera calibration" (application No. 201580041718.8) proposes an image calibration technique that can determine the position of the camera; and the patent "Camera calibration method and device" (application No. 201710254363.1) discloses a camera calibration method and device in the field of computer vision.
Performing a coordinate transformation operation generally requires internal and external parameters of the camera in order to calculate a transformation relationship between several coordinate systems, as shown in fig. 1, which is a schematic diagram of four coordinate systems. The conversion calculation process between 4 coordinate systems involved in the camera coordinate transformation is as follows:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad (1)$$

In Eq. (1), $\begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}$ is the camera extrinsic matrix, and the 3×4 matrix containing $f_x$, $f_y$, $u_0$, $v_0$ is the camera intrinsic matrix; the lens distortion coefficients are ignored. Here $(u, v)$ are coordinates in the pixel coordinate system and $(X_w, Y_w, Z_w)$ are coordinates in the world coordinate system, so the formula as a whole is the conversion between the world coordinate system (3D coordinates) and the pixel coordinate system (2D coordinates).
It can be seen that the conversion depends on the intrinsic and extrinsic parameters; for a camera whose intrinsic parameters are unknown and which cannot be calibrated, the coordinate transformation cannot be completed. In addition, graphics-based coordinate transformation still requires the viewing-zone center and the field-of-view angle, as well as the lens parameters of the camera, and these parameters are not easy to obtain.
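As an illustration of the conversion in Eq. (1), the following sketch projects a world point into pixel coordinates. The intrinsic values (focal lengths, principal point) and the pose are made-up example numbers, not parameters from this patent:

```python
import numpy as np

def world_to_pixel(Xw, K, R, t):
    """Project a 3D world point to pixel coordinates per Eq. (1).

    K is the 3x3 intrinsic matrix, R (3x3) and t (3,) the extrinsics.
    Lens distortion is ignored, as in the text.
    """
    Xc = R @ np.asarray(Xw, dtype=float) + t   # world frame -> camera frame
    uvw = K @ Xc                               # camera frame -> image plane (homogeneous)
    return uvw[:2] / uvw[2]                    # perspective divide by Z_c

# Illustrative intrinsics/extrinsics (assumed values)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

u, v = world_to_pixel([0.0, 0.0, 0.0], K, R, t)
# a point on the optical axis projects to the principal point (320, 240)
```

Note how the unknowns the patent works around — the entries of K and of [R|t] — all appear explicitly here, which is why the conversion fails when they are unavailable.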
Disclosure of Invention
The present application provides a camera coordinate transformation method, a terminal and a storage medium, which aim to solve at least one of the above technical problems in the prior art to a certain extent.
In order to solve the above problems, the present application provides the following technical solutions:
a camera coordinate transformation method, comprising the steps of:
step a: performing projection transformation on an original image acquired by a camera;
step b: performing viewport transformation on the projection-transformed image, and coarsely mapping all 3D points of the projected object in the visual scene onto a two-dimensional plane to obtain the mapping points of the projected object in the screen coordinate system;
step c: adjusting the mapping points until they coincide with the 3D points in the original image, and taking the viewpoint and field-angle parameters at the moment of coincidence as the fixed parameters of the camera coordinate transformation.
The technical solution adopted in the embodiments of the present application further includes: in step a, the original image is a video image, and the video image is not less than a set number of frames, or contains a set amount of image information at different positions.
The technical scheme adopted by the embodiment of the application further comprises the following steps: before carrying out projection transformation on an original image acquired by a camera, the method further comprises the following steps:
and converting the world coordinate system of the original image.
The technical scheme adopted by the embodiment of the application further comprises the following steps: the world coordinate system conversion is specifically as follows:
and converting the original image from a three-dimensional object coordinate system to a world coordinate system by adopting three-dimensional geometric transformation.
The technical scheme adopted by the embodiment of the application further comprises the following steps: in the step a, the projective transformation specifically includes:
and performing projection transformation on the image converted by the world coordinate system by adopting one-point perspective.
The technical scheme adopted by the embodiment of the application further comprises the following steps: in step c, before the adjusting the mapping point, the method further includes:
and connecting all the mapping points according to the projection object structure and displaying the mapping points on the original image to obtain the two-dimensional display of the projection object in a screen coordinate system.
The technical scheme adopted by the embodiment of the application further comprises the following steps: in the step c, the adjusting the mapping point specifically includes:
the positions, the viewpoints and the parameters of the field angles of the mapping points of the multi-frame images at different positions are respectively adjusted through integral translation and scaling.
The technical scheme adopted by the embodiment of the application further comprises the following steps: in the step c, the adjusting the mapping point further includes:
and judging whether the superposed three-dimensional coordinates of the images are matched, and if not, re-adjusting the mapping points.
The embodiment of the application adopts another technical scheme that: a terminal comprising a processor, a memory coupled to the processor, wherein,
the memory stores program instructions for implementing the camera coordinate transformation method;
the processor is to execute the program instructions stored by the memory to control camera coordinate transformation.
The embodiment of the application adopts another technical scheme that: a storage medium storing program instructions executable by a processor for performing the camera coordinate transformation method.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: in the camera coordinate transformation method, terminal and storage medium, following the principle of first projection transformation and rough mapping, then precise adjustment, all 3D points are mapped onto the two-dimensional plane, and the mapping points are then moved to the positions of the actual points until they coincide; the field-of-view angle and viewing-zone center at that moment are the fixed values of the camera. The method is simple to operate: even when the camera's intrinsic parameters are unknown or its extrinsic parameters are inaccurate, the 3D coordinate system can still be transformed into the screen/pixel coordinate system with good results, addressing the inaccuracy of coordinate transformation based on traditional camera calibration and its dependence on the camera's intrinsic and extrinsic parameters.
Drawings
FIG. 1 is a schematic diagram of four coordinate systems;
FIG. 2 is a flow chart of a method of camera coordinate transformation according to an embodiment of the present application;
FIG. 3 is a schematic view of a skeletal model;
FIG. 4 is a schematic diagram of a projective transformation according to an embodiment of the present application;
FIG. 5 is a side view of a one-point perspective transformation of an embodiment of the present application;
FIG. 6 is a diagram illustrating rough mapping results according to an embodiment of the present application;
FIG. 7 is a diagram illustrating a mapping point adjustment process according to an embodiment of the present application;
FIG. 8 is a graph showing the experimental results of the examples of the present application;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a storage medium according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Specifically, please refer to fig. 2, which is a flowchart of a method for transforming camera coordinates according to an embodiment of the present application. The camera coordinate transformation method of the embodiment of the application comprises the following steps:
step 100: acquiring an original image acquired by a single camera;
in step 100, the acquired original image is a video image acquired by a camera with three-dimensional coordinates, fixed posture, unknown internal parameters and inaccurate external parameters in a world coordinate system. In order to ensure that the viewing zone center and the viewing angle information are the same at different positions of the viewing zone, the collected video image cannot be less than the set frame number (about 200 frames are taken as an example in the present application), or the video image comprises a large amount of image information at different positions. For convenience of illustration, the following embodiment takes the conversion from a 3D coordinate system to a 2D coordinate system (it is understood that the two coordinate systems can be converted to each other in the same coordinate system, and the principle is the same), where each frame of image includes 23 points, and constitutes a skeleton model of a human body, and the skeleton model is schematically shown in fig. 3, so as to project three-dimensional data onto a two-dimensional image to form human body key point data.
Step 200: converting the original image from a three-dimensional object coordinate system to a world coordinate system by adopting three-dimensional geometric transformation;
in step 200, three-dimensional geometric transformation comprises translation, rotation, scaling and the like; the three-dimensional geometric transformation matrix is:
$$T = \begin{bmatrix} a & b & c & p \\ d & e & f & q \\ h & i & j & r \\ k & m & n & s \end{bmatrix} \qquad (2)$$

In Eq. (2), the parameters a, b, c, d, e, f, h, i, j produce the rotation, scaling, shear and reflection transformations; p, q, r are the main parameters of perspective projection; k, m, n are the main parameters of the translation transformation; and s is the parameter of overall scaling.
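The geometric transformation of step 200 amounts to multiplying homogeneous point coordinates by a 4×4 matrix. A minimal sketch, assuming the row-vector convention implied by the parameter layout of Eq. (2) (translation k, m, n in the last row):

```python
import numpy as np

def transform_points(points, T):
    """Apply a 4x4 homogeneous transform to Nx3 points.

    Row-vector convention: p' = [x, y, z, 1] @ T, followed by the
    homogeneous divide, matching the matrix layout of Eq. (2).
    """
    pts = np.asarray(points, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # append w = 1
    out = homo @ T
    return out[:, :3] / out[:, 3:4]                  # divide by w

# Translation by (1, 2, 3): the parameters k, m, n occupy the last row
T = np.eye(4)
T[3, :3] = [1.0, 2.0, 3.0]
moved = transform_points([[0.0, 0.0, 0.0]], T)  # -> [[1., 2., 3.]]
```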
Step 300: performing projection transformation on the transformed image by adopting one-point perspective;
in step 300, since the camera lens is parallel to a plane (xoz plane) and is orthogonal to only one axis (y axis), the present application performs projection transformation on an image using one-point perspective. As shown in fig. 4 and 5, fig. 4 is a schematic view of projective transformation, and fig. 5 is a side view of one-point perspective transformation. It is understood that there are different numbers of points for different projection objects, and the following is a detailed description of the projection transformation performed in this embodiment.
From the similar-triangle relationship of Fig. 5, with the viewpoint on the y axis at distance d from a projection plane located at $y = y_0$, a point $P(x, y, z)$ projects to $P'(x', z')$:

$$x' = \frac{(d - y_0)\,x}{d - y},\qquad z' = \frac{(d - y_0)\,z}{d - y} \qquad (3)$$

If O and O′ coincide with each other, the projection plane becomes the XOZ plane ($y_0 = 0$), and Eq. (3) can be simplified to Eq. (4) below:

$$x' = \frac{d\,x}{d - y},\qquad z' = \frac{d\,z}{d - y} \qquad (4)$$
the other projection directions are the same, and are not described herein again.
Step 400: carrying out Viewport transformation on the projection plane by using a Viewport () function, roughly mapping all 3D points of a projection object in the view into a two-dimensional plane, and obtaining the mapping point coordinates and position information of the projection object in a screen coordinate system;
in step 400, all 3D points of the projected object need to be mapped to a two-dimensional plane for subsequent fine adjustment.
Step 500: connecting all mapping points according to the structure of the projection object and displaying the mapping points on the original image to obtain two-dimensional display of the projection object in a screen coordinate system;
in step 500, as shown in fig. 6, a rough mapping result diagram is obtained. It can be seen that the mapping point after rough mapping does not coincide with the 3D point of the projection object in the original image due to lack of camera parameters and the like, but it can be seen from the position and angle relationship between the mapping point and the 3D point that the overall direction and structure of the mapping point and the 3D point are the same, so that the mapping point can coincide with the 3D point by adjusting parameters such as the position relationship, the zoom relationship, the angle relationship, the size of the angle of view, and the like.
Step 600: adjusting parameters such as the position, the viewpoint and the field angle of the mapping point through operations such as integral translation and scaling, and enabling the mapping point to coincide with the 3D point in the original image;
in step 600, the adjustment mode may be manual adjustment or setting related transformation and keys in the program. Fig. 7 is a schematic diagram illustrating a mapping point adjustment process. It can be understood that, one adjustment is only performed on the first frame image in the video image, the previous degree of engagement is high, but a large deviation exists after the position of the subsequent frame image is obviously changed, so that the adjustment needs to be performed on a plurality of frame images in different positions to find a proper parameter configuration.
Step 700: judging whether the three-dimensional coordinates of the superposed images are matched, if not, executing the step 600 again, otherwise, executing the step 800;
step 800: parameters such as a viewpoint and a field angle at the time of image superimposition are fixed parameters for the 3D coordinate system conversion by the camera.
In step 800, a single picture or a small number of consecutive pictures cannot determine the position and mapping relationship at once, because mappings onto the two-dimensional plane at different angles and sizes may produce the same result. The acquired video therefore must not be less than the set number of frames, ensuring that the viewing-zone center and field-of-view information are the same at different positions of the viewing zone; when the mapping relationships at multiple positions are consistent, the camera parameters are fixed.
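The all-positions consistency requirement of steps 700–800 can be sketched as a simple check; the pixel tolerance `tol` is an assumed threshold, not a value from the patent:

```python
import numpy as np

def parameters_consistent(frames_mapped, frames_actual, tol=2.0):
    """Accept a viewpoint/field-angle configuration only if the mapped
    points match the actual points in every frame.

    A single frame can be fitted by many parameter choices, so all
    positions must agree (mean error within tol pixels) before the
    camera parameters are fixed.
    """
    for mapped, actual in zip(frames_mapped, frames_actual):
        diff = np.asarray(mapped, dtype=float) - np.asarray(actual, dtype=float)
        if np.linalg.norm(diff, axis=1).mean() > tol:
            return False
    return True
```

With such a check, the adjustment loop of step 600 is repeated until the configuration passes on every sampled frame, at which point the parameters are stored as the camera's fixed values.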
In order to verify the feasibility and effectiveness of the present application, it was verified through experiments; Fig. 8 shows the experimental results. The test results show that the method achieves a high degree of coincidence and can meet expectations and requirements.
Please refer to fig. 9, which is a schematic diagram of a terminal structure according to an embodiment of the present application. The terminal 50 comprises a processor 51, a memory 52 coupled to the processor 51.
The memory 52 stores program instructions for implementing the camera coordinate transformation method described above.
The processor 51 is operative to execute program instructions stored by the memory 52 to control the camera coordinate transformation.
The processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip having signal processing capabilities. The processor 51 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Please refer to fig. 10, which is a schematic structural diagram of a storage medium according to an embodiment of the present application. The storage medium of the embodiment of the present application stores a program file 61 capable of implementing all of the methods described above. The program file 61 may be stored in the storage medium in the form of a software product and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, or terminal devices such as a computer, a server, a mobile phone, or a tablet.
According to the camera coordinate transformation method, terminal and storage medium, following the principle of first projection transformation and rough mapping, then precise adjustment, all 3D points are mapped onto the two-dimensional plane, and the mapping points are then moved to the positions of the actual points until they coincide; the field-of-view angle and viewing-zone center at that moment are the fixed values of the camera. The method is simple to operate: even when the camera's intrinsic parameters are unknown or its extrinsic parameters are inaccurate (its position fuzzy), the 3D coordinate system can still be transformed into the screen/pixel coordinate system with good results, addressing the inaccuracy of coordinate transformation based on traditional camera calibration and its dependence on the camera's intrinsic and extrinsic parameters.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for transforming coordinates of a camera, comprising the steps of:
step a: carrying out projection transformation on an original image acquired by a camera;
step b: carrying out viewport transformation on the image after projection transformation, and roughly mapping all 3D points of a projection object in a visual scene into a two-dimensional plane to obtain mapping points of the projection object under a screen coordinate system;
step c: and adjusting the mapping points to enable the mapping points to be overlapped with the 3D points in the original image, and taking the viewpoint and the field angle parameters during the overlapping as fixed parameters of the coordinate transformation of the camera.
2. The method of claim 1, wherein in the step a, the original image is a video image, and the video image is not less than a set number of frames, or the video image includes a set number of image information of different positions.
3. The method for transforming coordinates of a camera according to claim 2, wherein before the step a, the step of projectively transforming the original image captured by the camera further comprises:
and converting the world coordinate system of the original image.
4. The camera coordinate transformation method of claim 3, wherein the world coordinate system transformation is specifically:
and converting the original image from a three-dimensional object coordinate system to a world coordinate system by adopting three-dimensional geometric transformation.
5. The method of claim 4, wherein in step a, the projective transformation is specifically:
and performing projection transformation on the image converted by the world coordinate system by adopting one-point perspective.
6. The method for transforming coordinates of a camera according to claim 1, wherein before the adjusting the mapping points in the step c, the method further comprises:
and connecting all the mapping points according to the projection object structure and displaying the mapping points on the original image to obtain the two-dimensional display of the projection object in a screen coordinate system.
7. The method for transforming coordinates of a camera according to claim 6, wherein in the step c, the adjusting the mapping points comprises:
the positions, the viewpoints and the parameters of the field angles of the mapping points of the multi-frame images at different positions are respectively adjusted through integral translation and scaling.
8. The method of claim 7, wherein the step c of adjusting the mapping points further comprises:
and judging whether the superposed three-dimensional coordinates of the images are matched, and if not, re-adjusting the mapping points.
9. A terminal, comprising a processor, a memory coupled to the processor, wherein,
the memory stores program instructions for implementing the camera coordinate transformation method of any of claims 1-8;
the processor is to execute the program instructions stored by the memory to control camera coordinate transformation.
10. A storage medium storing program instructions executable by a processor to perform the camera coordinate transformation method of any one of claims 1 to 8.
CN202010298442.4A 2020-04-16 2020-04-16 Camera coordinate transformation method, terminal and storage medium Pending CN113538587A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010298442.4A CN113538587A (en) 2020-04-16 2020-04-16 Camera coordinate transformation method, terminal and storage medium
PCT/CN2020/139233 WO2021208486A1 (en) 2020-04-16 2020-12-25 Camera coordinate transformation method, terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010298442.4A CN113538587A (en) 2020-04-16 2020-04-16 Camera coordinate transformation method, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN113538587A true CN113538587A (en) 2021-10-22

Family

ID=78084737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010298442.4A Pending CN113538587A (en) 2020-04-16 2020-04-16 Camera coordinate transformation method, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN113538587A (en)
WO (1) WO2021208486A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114259272B (en) * 2021-12-15 2024-02-23 中国人民解放军火箭军特色医学中心 External visual intervention hemostasis device and method
CN114309934A (en) * 2021-12-29 2022-04-12 北京航星机器制造有限公司 Automatic laser welding method for frame skin box body structure
CN114170326B (en) * 2022-02-09 2022-05-20 北京芯海视界三维科技有限公司 Method and device for acquiring origin of camera coordinate system
CN114640833A (en) * 2022-03-11 2022-06-17 峰米(重庆)创新科技有限公司 Projection picture adjusting method and device, electronic equipment and storage medium
CN116051713B (en) * 2022-08-04 2023-10-31 荣耀终端有限公司 Rendering method, electronic device, and computer-readable storage medium
CN116051647A (en) * 2022-08-08 2023-05-02 荣耀终端有限公司 Camera calibration method and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150302575A1 (en) * 2014-04-17 2015-10-22 Siemens Aktiengesellschaft Sun location prediction in image space with astronomical almanac-based calibration using ground based camera
CN110097574A (en) * 2019-04-24 2019-08-06 南京邮电大学 A kind of real-time pose estimation method of known rigid body
CN110517202B (en) * 2019-08-30 2023-07-28 的卢技术有限公司 Car body camera calibration method and calibration device thereof
CN110728715B (en) * 2019-09-06 2023-04-25 南京工程学院 Intelligent inspection robot camera angle self-adaptive adjustment method
CN110675436A (en) * 2019-09-09 2020-01-10 中国科学院微小卫星创新研究院 Laser radar and stereoscopic vision registration method based on 3D feature points
CN110880185B (en) * 2019-11-08 2022-08-12 南京理工大学 High-precision dynamic real-time 360-degree all-dimensional point cloud acquisition method based on fringe projection

Also Published As

Publication number Publication date
WO2021208486A1 (en) 2021-10-21

Similar Documents

Publication Publication Date Title
CN113538587A (en) Camera coordinate transformation method, terminal and storage medium
JP6330987B2 (en) Image processing apparatus, image processing method, and storage medium
US11039121B2 (en) Calibration apparatus, chart for calibration, chart pattern generation apparatus, and calibration method
CN111750820B (en) Image positioning method and system
TWI574224B (en) An image processing apparatus, an image processing method, a video product processing apparatus, a recording medium, and an image display apparatus
US11568516B2 (en) Depth-based image stitching for handling parallax
EP2328125A1 (en) Image splicing method and device
US9230330B2 (en) Three dimensional sensing method and three dimensional sensing apparatus
CN110070598B (en) Mobile terminal for 3D scanning reconstruction and 3D scanning reconstruction method thereof
US20140118557A1 (en) Method and apparatus for providing camera calibration
CN108629756B (en) Kinectv2 depth image invalid point repairing method
KR20150120066A (en) System for distortion correction and calibration using pattern projection, and method using the same
US11282232B2 (en) Camera calibration using depth data
US20120162220A1 (en) Three-dimensional model creation system
JP2019532531A (en) Panorama image compression method and apparatus
US9183634B2 (en) Image processing apparatus and image processing method
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
CN109495733B (en) Three-dimensional image reconstruction method, device and non-transitory computer readable storage medium thereof
CN113256718B (en) Positioning method and device, equipment and storage medium
CN113643414A (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN109661815A (en) There are the robust disparity estimations in the case where the significant Strength Changes of camera array
TWI615808B (en) Image processing method for immediately producing panoramic images
CN115086625A (en) Correction method, device and system of projection picture, correction equipment and projection equipment
TWI731430B (en) Information display method and information display system
CN111489384B (en) Method, device, system and medium for evaluating shielding based on mutual viewing angle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination