CN111726602B - Matching method of different-field-angle camera-imaging system, system and computing system thereof


Info

Publication number: CN111726602B
Authority: CN (China)
Prior art keywords: camera, data, imaging, image, plane
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN201910220302.2A
Other languages: Chinese (zh)
Other versions: CN111726602A
Inventors: 丁建雄, 王城特, 张本好
Current and original assignee: Sunny Optical Zhejiang Research Institute Co Ltd
Application CN201910220302.2A filed by Sunny Optical Zhejiang Research Institute Co Ltd; published as CN111726602A, granted as CN111726602B.


Abstract

A matching method for a camera-display system with different field angles, together with a matching system and a computing system, is used to match a camera system and a display system whose field angles differ. The matching method comprises the following steps: acquiring first camera data collected by the camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system; calibrating the first camera data based on the depth information with an image calibration model to obtain second camera data matched to the display system; and mapping the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain display data, so that the image displayed by the display system based on the display data is not distorted in the depth direction.

Description

Matching method of different-field-angle camera-imaging system, system and computing system thereof
Technical Field
The invention relates to the technical field of optical imaging, and in particular to a matching method for a camera-display system with different field angles, together with a matching system and a computing system thereof.
Background
In the design of existing camera-display systems, parameter matching is usually addressed at the hardware level: the optical parameters used for capture and for display are kept consistent, which avoids parameter mismatch to the greatest extent. In addition, since most existing display systems (such as monitors) are two-dimensional and in many cases ignore actual depth changes, only resolution matching, or at most aspect-ratio matching, is required; this is why the market has settled on aspect ratios such as 4:3 and 16:9 and image and video standards such as 960P and 1080P.
In most cases, however, the parameters (such as the field angle) of the camera system differ from those of the display system, so the image acquired by the camera system and the image to be displayed by the display system differ in their parameters, causing various matching difficulties. Requiring the parameters of different systems to be calibrated into exact consistency is difficult or even impossible. For display systems such as AR (augmented reality), VR (virtual reality) and 3D movies in particular, the displayed images usually carry depth information and are three-dimensional; accordingly, the image obtained by the camera system also needs to be three-dimensional. Therefore, for a camera system and a display system with different field angles, if the compression method for two-dimensional images is applied directly to a three-dimensional image, the three-dimensional image displayed by the display system will be severely distorted (i.e., stretched or compressed at different depths), which greatly degrades the user experience.
Disclosure of Invention
An object of the present invention is to provide a matching method for a camera-display system with different field angles, together with a matching system and a computing system, which quantitatively solve the problem of image matching between camera systems and display systems whose field angles differ, and help the display system present images taken by camera systems with different field angles in a suitable form.
Another object of the present invention is to provide a matching method for a camera-display system with different field angles, a matching system and a computing system thereof, which require no hardware matching between the camera system and the display system, and can therefore greatly improve the production efficiency of the hardware devices.
Another object of the present invention is to provide a matching method for a camera-display system with different field angles, a matching system and a computing system thereof, which match the camera system and the display system algorithmically, are simple to operate and low in cost, and help improve matching efficiency and matching effect.
Another object of the present invention is to provide a matching method for a camera-display system with different field angles, a matching system and a computing system thereof, which ensure that the image displayed by the display system is not distorted (i.e., neither stretched nor compressed in the depth direction), thereby helping to improve the user experience.
Another object of the present invention is to provide a matching method for a camera-display system with different field angles, a matching system and a computing system thereof, wherein, in an embodiment of the present invention, the matching method matches images obtained by camera systems with different field angles into the display system by compressing or stretching them, so that the display system displays the images substantially completely while guaranteeing that no depth distortion occurs.
Another object of the present invention is to provide a matching method for a camera-display system with different field angles, a matching system and a computing system thereof, wherein, in an embodiment of the present invention, the matching method matches images obtained by camera systems with different field angles directly into the display system under the condition that no depth distortion occurs, so that the display system displays the images partially.
Another object of the present invention is to provide a matching method for a camera-display system with different field angles, a matching system and a computing system thereof, which can be applied to various types of camera systems and display systems, are not tied to any particular system, and are easy to replicate and popularize at scale.
To achieve at least one of the above objects and other objects and advantages, the present invention provides a matching method for a camera-display system with different field angles, for matching a camera system and a display system whose field angles differ, comprising the steps of:
acquiring first camera data collected by the camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system;
calibrating the first camera data based on the depth information with an image calibration model to obtain second camera data matched to the display system; and
mapping the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain display data, so that the image displayed by the display system based on the display data is not distorted in the depth direction.
In an embodiment of the invention, the image calibration model is
$$h'' = h \cdot \frac{Z_1\tan\theta + \Delta X\tan\theta'}{(Z_1 + \Delta X)\tan\theta}$$
wherein $h''$ is the imaging height on the imaging surface of the camera system based on the second camera data; $h$ is the imaging height on the imaging surface of the camera system based on the first camera data; $\Delta X$ is the depth difference between the plane to be calibrated of the camera system and the reference datum plane of the camera system; $\theta'$ is the half field angle of the display system; $Z_1$ is the depth value of the reference datum plane of the camera system; and $\theta$ is the half field angle of the camera system.
In an embodiment of the present invention, the step of calibrating the first camera data based on the depth information with an image calibration model to obtain the second camera data matched to the display system comprises the steps of:
selecting a reference datum plane according to the focusing depth of the camera system;
judging whether the depth value of the plane to be calibrated of the camera system is greater than the depth value of the reference datum plane of the camera system;
when the depth value of the plane to be calibrated of the camera system is greater than the depth value of the reference datum plane of the camera system, cropping, according to the image calibration model, the image data corresponding to the calibration plane of the display system out of the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data; and
when the depth value of the plane to be calibrated of the camera system is not greater than the depth value of the reference datum plane of the camera system, augmenting, according to the image calibration model, the image data corresponding to the calibration plane of the display system from the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data.
In an embodiment of the invention, the depth value of the reference datum plane of the camera system is the average depth value within the focusing depth range of the camera system.
In an embodiment of the present invention, the predetermined mapping ratio is
$$\frac{h'}{h''} = \frac{f'\tan\theta'\,(Z_1 + \Delta X)}{f\,(Z_1\tan\theta + \Delta X\tan\theta')}$$
wherein $f'$ is the focal length of the display system; $f$ is the focal length of the camera system; $\Delta X$ is the depth difference between the plane to be calibrated of the camera system and the reference datum plane of the camera system; $\theta'$ is the half field angle of the display system; $Z_1$ is the depth value of the reference datum plane of the camera system; and $\theta$ is the half field angle of the camera system.
According to another aspect of the present invention, there is further provided a matching method for a camera-display system with different field angles, for matching a camera system and a display system whose field angles differ, comprising the steps of:
acquiring first camera data collected by the camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system;
cropping image data matched to the display system out of the first camera data with an image cropping model, to serve as second camera data, wherein the field angle of the display system is smaller than the field angle of the camera system; and
mapping the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain display data, so that the image displayed by the display system based on the display data is not distorted in the depth direction.
In an embodiment of the present invention, the image cropping model is:
$$h'' = h \cdot \frac{\tan\theta'}{\tan\theta}, \quad \theta > \theta'$$
wherein $h''$ is the imaging height on the imaging surface of the camera system based on the second camera data; $\theta'$ is the half field angle of the display system; $h$ is the imaging height on the imaging surface of the camera system based on the first camera data; and $\theta$ is the half field angle of the camera system.
In an embodiment of the present invention, the predetermined mapping ratio is:
$$\frac{h'}{h''} = \frac{f'}{f}$$
wherein $f'$ is the focal length of the display system and $f$ is the focal length of the camera system.
According to another aspect of the present invention, there is provided a matching system for a camera-display system with different field angles, for matching a camera system and a display system whose field angles differ, comprising:
an acquisition module for acquiring first camera data collected by the camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system;
an image calibration module for calibrating the first camera data based on the depth information with an image calibration model to obtain second camera data matched to the display system; and
a mapping module for mapping the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain display data, so that the image displayed by the display system based on the display data is not distorted in the depth direction.
In an embodiment of the invention, the image calibration model is
$$h'' = h \cdot \frac{Z_1\tan\theta + \Delta X\tan\theta'}{(Z_1 + \Delta X)\tan\theta}$$
wherein $h''$ is the imaging height on the imaging surface of the camera system based on the second camera data; $h$ is the imaging height on the imaging surface of the camera system based on the first camera data; $\Delta X$ is the depth difference between the plane to be calibrated of the camera system and the reference datum plane of the camera system; $\theta'$ is the half field angle of the display system; $Z_1$ is the depth value of the reference datum plane of the camera system; and $\theta$ is the half field angle of the camera system.
In an embodiment of the present invention, the image calibration module comprises a selection module, a judgment module, a cropping module and an augmentation module, wherein the selection module is configured to select a reference datum plane according to the focusing depth of the camera system; the judgment module is configured to judge whether the depth value of the plane to be calibrated of the camera system is greater than the depth value of the reference datum plane of the camera system; the cropping module is configured to crop, according to the image calibration model, the image data corresponding to the calibration plane of the display system out of the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data, when the depth value of the plane to be calibrated is greater than the depth value of the reference datum plane; and the augmentation module is configured to augment, according to the image calibration model, the image data corresponding to the calibration plane of the display system from the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data, when the depth value of the plane to be calibrated is not greater than the depth value of the reference datum plane.
According to another aspect of the present invention, there is provided a matching system for a camera-display system with different field angles, for matching a camera system and a display system whose field angles differ, comprising:
an acquisition module for acquiring first camera data collected by the camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system;
an image cropping module for cropping image data matched to the display system out of the first camera data with an image cropping model, to serve as second camera data, wherein the field angle of the display system is smaller than the field angle of the camera system; and
a mapping module for mapping the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain display data, so that the image displayed by the display system based on the display data is not distorted in the depth direction.
In an embodiment of the present invention, the image cropping model is:
$$h'' = h \cdot \frac{\tan\theta'}{\tan\theta}, \quad \theta > \theta'$$
wherein $h''$ is the imaging height on the imaging surface of the camera system based on the second camera data; $\theta'$ is the half field angle of the display system; $h$ is the imaging height on the imaging surface of the camera system based on the first camera data; and $\theta$ is the half field angle of the camera system.
According to another aspect of the present invention, there is also provided a computing system, comprising:
a logic machine for executing instructions; and
a storage machine, wherein the storage machine is configured to store machine-readable instructions executable by the logic machine to implement any of the above matching methods for the camera-display system with different field angles.
According to another aspect of the present invention, there is also provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a computing apparatus, are operable to execute any of the above matching methods for the camera-display system with different field angles.
Further objects and advantages of the invention will be fully apparent from the ensuing description and drawings.
These and other objects, features and advantages of the present invention will become more fully apparent from the following detailed description, the accompanying drawings and the claims.
Drawings
Fig. 1 shows a schematic diagram of the principle of pinhole imaging.
Fig. 2 is a schematic diagram illustrating a conventional matching method for a camera-display system with different field angles.
Fig. 3A is a flowchart of the matching method for the camera-display system with different field angles according to the first embodiment of the present invention.
Fig. 3B is a flowchart of the calibration step in the matching method for the camera-display system with different field angles according to the first embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating the matching method for the camera-display system with different field angles according to the first embodiment of the present invention.
Fig. 5 is a block diagram of the matching system for the camera-display system with different field angles according to the first embodiment of the present invention.
Fig. 6 is a flowchart of the matching method for the camera-display system with different field angles according to the second embodiment of the present invention.
Fig. 7 is a schematic diagram illustrating the matching method for the camera-display system with different field angles according to the second embodiment of the present invention.
Fig. 8 is a block diagram of the matching system for the camera-display system with different field angles according to the second embodiment of the present invention.
Fig. 9 is a block diagram of a computing system according to an embodiment of the present invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
In the present invention, the terms "a" and "an" in the claims and the description should be understood to mean "one or more"; that is, in one embodiment an element may be one in number, while in another embodiment it may be plural. The terms "a" and "an" are not to be construed as limiting the quantity unless the number of such elements is explicitly recited as one in the present disclosure.
In the description of the present invention, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless explicitly stated or limited otherwise, the term "connected" is to be interpreted broadly: the connection may be fixed, detachable or integral; mechanical or electrical; and direct, or indirect through an intermediary. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
As is well known, camera systems such as cameras and display systems such as projectors both basically employ the pinhole imaging model. As shown in Fig. 1, taking a monocular camera as an example, let the camera coordinate system be the space coordinate system O-XYZ, the pixel-plane coordinate system be the plane coordinate system o-xy, and the distance between the pixel plane and the camera optical center (i.e., the focal length) be f; then a real-world space point P(X, Y, Z) is imaged through the aperture, and its mapped point on the pixel plane is p(x, y, f).
Therefore, from the trigonometric relationship:
$$\frac{x}{X} = \frac{y}{Y} = \frac{f}{Z}$$
Rearranging gives:
$$x = \frac{fX}{Z}, \qquad y = \frac{fY}{Z}$$
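As a minimal illustration of this projection (the function name and numbers below are our own, not from the patent), the mapping can be coded directly:

```python
def project(point, f):
    """Pinhole projection of a camera-space point P = (X, Y, Z) onto the
    pixel plane at distance f from the optical center: x = f*X/Z, y = f*Y/Z."""
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

# A point 2 m ahead and 0.5 m off-axis, imaged with a 4 mm focal length,
# lands 1 mm off the pixel-plane center:
x, y = project((0.5, 0.0, 2.0), 0.004)   # -> (0.001, 0.0)
```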
if the field angle of the imaging system is the same as the field angle of the visualization system, the image obtained by the imaging system can be directly compressed or stretched according to the relationship between the focal length f of the imaging system and the focal length f' of the visualization system without generating the distortion problem in the depth direction. However, the field angle of the imaging system is usually larger than that of the visualization system (of course, in some examples, the field angle of the imaging system may also be smaller than that of the visualization system), and if the image obtained by the imaging system is processed by a direct compression or stretching method, a distortion problem in the depth direction inevitably occurs.
Illustratively, as shown in Fig. 2, take the case where the field angle 2θ of the camera system is larger than the field angle 2θ' of the display system, and let $P_1$ and $P_2$ be the two boundary points on the field-angle boundary of the camera system at depth distances $Z_1$ and $Z_2$, respectively. If the direct compression method is used to match the camera system to the display system, i.e., the image height h on the imaging surface 100 of the camera system is directly compressed to the image height h' on the display surface 200 of the display system so that the image on the imaging surface 100 is mapped directly onto the display surface 200, then the points $P_1$ and $P_2$ are in effect moved to points $P'_1$ and $P'_2$; that is, the image formed by $P_1$ and $P_2$ on the imaging surface 100 of the camera system corresponds to the image formed by $P'_1$ and $P'_2$ on the display surface 200 of the display system, where $P'_1$ and $P'_2$ lie on the field-angle boundary of the display system at depth distances $Z'_1$ and $Z'_2$, and $P'_1$ and $P'_2$ are equal in height to $P_1$ and $P_2$, respectively.
Then, according to the triangle relationship: $Z_1\tan\theta = Z'_1\tan\theta'$ and $Z_2\tan\theta = Z'_2\tan\theta'$.
Finishing to obtain:
Figure BDA0002003393310000081
From this, letting $\Delta X$ denote the depth distance between $P_1$ and $P_2$, and $\Delta X'$ the depth distance between $P'_1$ and $P'_2$:
$$\Delta X' = Z'_2 - Z'_1 = (Z_2 - Z_1)\,\frac{\tan\theta}{\tan\theta'} = \Delta X\,\frac{\tan\theta}{\tan\theta'}$$
It is noted that since the field angle of the camera system is larger than that of the display system, i.e., θ > θ', we have ΔX' > ΔX, which means that the direct compression method stretches the image displayed by the display system in the depth direction. Similarly, when the field angle of the camera system is smaller than that of the display system, i.e., θ < θ', we have ΔX' < ΔX, and the direct compression method compresses the displayed image in the depth direction. Therefore, the present invention performs the necessary algorithmic image calibration in the depth direction, so that a camera system and a display system with different field angles can be matched without distortion in the depth direction.
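As a purely illustrative calculation (the angles here are assumptions, not taken from the patent): with $\theta = 45^\circ$ and $\theta' = 30^\circ$,
$$\Delta X' = \Delta X\,\frac{\tan 45^\circ}{\tan 30^\circ} \approx 1.73\,\Delta X$$
so a 1 m depth interval in the captured scene would be displayed as roughly 1.73 m, a clearly visible stretch in the depth direction.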
Referring to Figs. 3A to 5, a matching method for a camera-display system with different field angles, and the corresponding matching system, according to a first embodiment of the present invention are illustrated; they quantitatively solve the problem of image matching between camera systems and display systems whose field angles differ, and help the display system present, substantially completely, the images captured by camera systems with different field angles. Specifically, as shown in Fig. 3A, the matching method comprises the steps of:
S110: acquiring first camera data collected by a camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system;
S120: calibrating the first camera data based on the depth information with an image calibration model to obtain second camera data matched to a display system, wherein the field angle of the display system differs from the field angle of the camera system; and
S130: mapping the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain the display data of the display system, so that the image displayed by the display system based on the display data is not distorted in the depth direction.
Specifically, in the matching method of the present invention, the image calibration model is:
$$h'' = h \cdot \frac{Z_1\tan\theta + \Delta X\tan\theta'}{(Z_1 + \Delta X)\tan\theta}$$
wherein $h''$ is the imaging height on the imaging surface of the camera system based on the second camera data; $h$ is the imaging height on the imaging surface of the camera system based on the first camera data; $\Delta X$ is the depth difference between the plane to be calibrated of the camera system and the reference datum plane of the camera system; $\theta'$ is the half field angle of the display system; $Z_1$ is the depth value of the reference datum plane of the camera system; and $\theta$ is the half field angle of the camera system. It is to be understood that the first and second camera data of the camera system contain not only color information (such as RGB data) corresponding to each pixel point on the imaging surface of the camera system, but also the depth information corresponding to each pixel point.
It is worth mentioning that, since the first camera data of the camera system contains image data at different depths, directly mapping the image data of all depths into the display system by the direct compression method inevitably produces distortion in the depth direction (as shown in Fig. 2). Therefore, to avoid such distortion, a dedicated image calibration model is required to calibrate the first camera data into second camera data matched to the display system, so that once the second camera data is directly mapped to the display system to obtain the display data, the image displayed by the display system based on the display data undergoes no change in the depth direction.
Specifically, taking a monocular camera system and a monocular display system as an example, as shown in Fig. 4, let point O be the common optical center of the camera system and the display system, θ' the half field angle of the display system, and θ the half field angle of the camera system. Point $P_1$ is an edge of the field of view at the reference datum plane 101 of the camera system; the depth value of the reference datum plane 101 is $Z_1$, so the half frame at the reference datum plane 101 is $L_1 = Z_1\tan\theta$. Point $P_2$ is an edge of the field of view at the plane to be calibrated 102 of the camera system; the depth value of the plane to be calibrated 102 is $Z_2$, so the half frame at the plane to be calibrated 102 is $L_2 = Z_2\tan\theta$. $\Delta X$ is the depth difference between the plane to be calibrated 102 and the reference datum plane 101, i.e., $\Delta X = Z_2 - Z_1$, so the half frame at the plane to be calibrated 102 is:
$$L_2 = Z_1\tan\theta + \Delta X\tan\theta \tag{1}$$
Furthermore, point $P'_1$ is an edge of the field of view at the matching reference plane 201 of the display system; the depth value of the matching reference plane 201 is $Z'_1$, so the half frame at the matching reference plane 201 is $L'_1 = Z'_1\tan\theta'$. Point $P'_2$ is an edge of the field of view at the direct matching plane 202 of the display system; the depth value of the direct matching plane 202 is $Z'_2$, so the half frame at the direct matching plane 202 is $L'_2 = Z'_2\tan\theta'$. Point $P'_3$ is an edge of the field of view at the calibration plane 203 of the display system; the depth value of the calibration plane 203 is $Z'_3$, so the half frame at the calibration plane 203 is $L'_3 = Z'_3\tan\theta'$.
From Fig. 4 and the geometric relationships, it is easy to see that the depth difference $\Delta X' = Z'_2 - Z'_1$ between the direct matching plane 202 and the matching reference plane 201 of the display system is not equal to the depth difference $\Delta X = Z_2 - Z_1$ between the plane to be calibrated 102 and the reference datum plane 101 of the camera system. This means that if the display system and the camera system are matched by the direct compression method, the image displayed by the display system is inevitably distorted in the depth direction relative to the image taken by the camera system (stretching distortion when θ > θ'; compression distortion when θ < θ').
To ensure that the image displayed by the display system is not distorted in the depth direction, the depth difference $\Delta X'' = Z'_3 - Z'_1$ between the calibration plane 203 and the matching reference plane 201 of the display system must equal the depth difference $\Delta X = Z_2 - Z_1$ between the plane to be calibrated 102 and the reference datum plane 101 of the camera system, i.e., $\Delta X'' = \Delta X$. In addition, since the present invention matches the reference datum plane 101 to the matching reference plane 201 by the direct compression method, the half frame at the reference datum plane 101 equals the half frame at the matching reference plane 201, i.e., $L_1 = L'_1$ (the frame at the reference plane is unchanged). Thus, from Fig. 4 and the geometric relationships, the half frame at the calibration plane is $L'_3 = L'_1 + \Delta X''\tan\theta' = L_1 + \Delta X\tan\theta'$,
namely:
$$L'_3 = Z_1\tan\theta + \Delta X\tan\theta' \tag{2}$$
In addition, point $P_3$ lies on the plane to be calibrated 102 of the camera system and corresponds to point $P'_3$, so the height of $P_3$ equals the half frame at the calibration plane, $L'_3 = Z'_3\tan\theta'$. In the camera system, point $P_2$ corresponds to point $p_2$ on the imaging surface 100 of the camera system in the first camera data, and point $P_3$ corresponds to point $p_3$ on the imaging surface 100 of the camera system in the second camera data. Therefore, by similar triangles:
$$\frac{h''}{h} = \frac{L'_3}{L_2} \tag{3}$$
wherein $h''$ is the imaging height on the imaging surface 100 of the camera system based on the second camera data, and $h$ is the imaging height on the imaging surface 100 of the camera system based on the first camera data.
Notably, point $p_2$ corresponds to the boundary point of the first camera data, so the imaging height on the imaging surface 100 of the camera system based on the first camera data is the height h of point $p_2$; if the first camera data is processed with point $p_3$ as the boundary to obtain the second camera data, then $p_3$ corresponds to the boundary point of the second camera data, and the imaging height on the imaging surface 100 of the camera system based on the second camera data is the height of point $p_3$. Therefore, combining formula (1), formula (2) and formula (3), the image calibration model is:
$$h'' = h \cdot \frac{Z_1\tan\theta + \Delta X\tan\theta'}{(Z_1 + \Delta X)\tan\theta}$$
wherein $h''$ is the imaging height on the imaging surface 100 of the camera system based on the second camera data; $h$ is the imaging height on the imaging surface 100 of the camera system based on the first camera data; $\Delta X$ is the depth difference between the plane to be calibrated of the camera system and the reference datum plane of the camera system; $\theta'$ is the half field angle of the display system; $Z_1$ is the depth value of the reference datum plane of the camera system; and $\theta$ is the half field angle of the camera system.
It should be noted that after the first camera data of the camera system is calibrated with the image calibration model to obtain the second camera data, the second camera data is directly mapped to the display surface 200 of the display system according to the predetermined mapping ratio to obtain the corresponding display data. In this way, the image displayed by the display system based on the display data is not distorted in the depth direction relative to the image captured by the camera system, which helps improve the user experience. In other words, the depth difference between the images displayed by the display system on the calibration plane and on the matching reference plane equals the depth difference between the objects photographed by the camera system at the plane to be calibrated and at the reference datum plane, so that, after the display system is matched to the camera system by this matching method, the displayed image is not distorted in the depth direction compared with the photographed scene.
Furthermore, from the image calibration model and $h = f\tan\theta$, it can be obtained that:
$$h'' = f\,\frac{Z_1\tan\theta + \Delta X\tan\theta'}{Z_1 + \Delta X} \tag{4}$$
wherein $f$ is the focal length of the camera system.
Due to point p'3Corresponding to the boundary point of the development data, so that the imaging height on the development plane 200 of the development system based on the development data is the point p2Height of (a):
h'=f'tanθ' (5)
wherein f' is the focal length of the visualization system.
Therefore, combining equation (4) and equation (5) may obtain the predetermined mapping ratio:
Figure BDA0002003393310000123
wherein f' is a focal length based on the visualization system; f is the focal length of the camera system; Δ X is a depth difference between a plane to be calibrated of the camera system and a reference datum plane of the camera system; θ' is the half field angle of the visualization system; z1A depth value of the reference datum of the camera system; and theta is the half field angle of the camera system.
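Continuing the illustrative sketch above (names hypothetical), the mapping ratio can be written as:

```python
import math

def mapping_ratio(f, f_p, theta, theta_p, z1, dx):
    """h'/h'': scale from the calibrated height on the imaging surface of the
    camera system to the display surface, per the predetermined mapping ratio."""
    return (f_p * math.tan(theta_p) * (z1 + dx)) / (
        f * (z1 * math.tan(theta) + dx * math.tan(theta_p))
    )
```

One can check algebraically that h · calibration_scale(...) · mapping_ratio(...) = f'·tan θ' for every depth layer; that is, every calibrated layer lands on the same display frame height h' = f'·tan θ', which is exactly what formula (5) requires.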
It is worth mentioning that, from the image calibration model and Fig. 4: when the field angle of the camera system is larger than that of the display system (θ > θ'), if the depth difference between the plane to be calibrated and the reference datum plane is greater than zero (ΔX > 0), then h'' < h; that is, the imaging height on the imaging surface 100 of the camera system based on the second camera data is smaller than that based on the first camera data, so a correspondingly proportioned region must be cropped out of the first camera data according to the image calibration model (i.e., the area beyond the imaging height h'' in the first camera data is removed) to serve as the second camera data, which can then be matched to the display system without depth distortion. If instead ΔX < 0, then h'' > h; the imaging height based on the second camera data is larger than that based on the first camera data, so the first camera data must be augmented in the corresponding proportion according to the image calibration model (i.e., the area beyond the imaging height h in the first camera data is left blank) to serve as the second camera data, which can then be matched to the display system without depth distortion.
Similarly, when the field angle of the camera system is smaller than that of the display system (θ < θ'), if ΔX > 0 then h'' > h, and the first camera data must be augmented in the corresponding proportion according to the image calibration model to serve as the second camera data; if ΔX < 0 then h'' < h, and a correspondingly proportioned region must be cropped out of the first camera data according to the image calibration model to serve as the second camera data. In both cases the second camera data can then be matched to the display system without depth distortion.
Exemplarily, taking the case where the field angle of the camera system is larger than that of the display system, as shown in Fig. 3B, step S120 of the matching method comprises the steps of (a sketch of this decision logic follows the list):
S121: selecting a reference datum plane according to the focusing depth of the camera system;
S122: judging whether the depth value of the plane to be calibrated of the camera system is greater than the depth value of the reference datum plane of the camera system;
S123: when the depth value of the plane to be calibrated of the camera system is greater than the depth value of the reference datum plane, cropping, according to the image calibration model, the image data corresponding to the calibration plane of the display system out of the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data; and
S124: when the depth value of the plane to be calibrated of the camera system is not greater than the depth value of the reference datum plane, augmenting, according to the image calibration model, the image data corresponding to the calibration plane of the display system from the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data.
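An illustrative sketch of steps S123/S124 (a simplified interpretation with hypothetical names; it assumes the optical axis passes through the image center and that the first camera data has already been sliced into per-depth layers):

```python
import numpy as np

def calibrate_layer(layer_rgb, scale):
    """Crop (scale < 1) or blank-pad (scale >= 1) one depth layer about the
    image center; `scale` is h''/h from the image calibration model."""
    h, w = layer_rgb.shape[:2]
    if scale < 1.0:
        # S123: keep only the central region within the new imaging height h''
        ch, cw = int(round(h * scale)), int(round(w * scale))
        top, left = (h - ch) // 2, (w - cw) // 2
        return layer_rgb[top:top + ch, left:left + cw]
    # S124: leave the area beyond the original imaging height h blank
    ph, pw = int(round(h * scale)), int(round(w * scale))
    out = np.zeros((ph, pw) + layer_rgb.shape[2:], dtype=layer_rgb.dtype)
    top, left = (ph - h) // 2, (pw - w) // 2
    out[top:top + h, left:left + w] = layer_rgb
    return out
```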
It is to be noted that, in step S121, the depth value of the reference datum plane is preferably implemented as the average depth value of the focusing depth range of the camera system. For example, if the focusing depth of the camera system is in the range of 4-5 m, i.e., the scene shot by the camera system lies 4-5 m away, the depth value of the reference datum plane is preferably equal to 4.5 m, so that the distortion of the image is minimized. Of course, in other examples of the present invention, the depth value of the reference datum plane may also be implemented as another depth value within the focusing depth range of the camera system, such as the minimum or maximum depth value, and may even be implemented as a depth value outside the focusing depth range, which is not detailed here.
More specifically, in step S123, as shown in Fig. 4, if the depth value $Z_2$ of the plane to be calibrated 102 of the camera system is greater than the depth value $Z_1$ of the reference datum plane 101, i.e., $\Delta X = Z_2 - Z_1 > 0$, then the half frame $L'_3 = Z'_3\tan\theta'$ at the calibration plane of the display system is smaller than the half frame $L_2 = Z_2\tan\theta$ at the plane to be calibrated 102 of the camera system. Therefore, the image corresponding to the plane to be calibrated 102 in the first camera data is cropped according to the ratio of the half frame at the calibration plane to the half frame at the plane to be calibrated 102 to obtain the second camera data; the second camera data is then mapped to the display system and displayed on the calibration plane, so that the image at the plane to be calibrated 102 of the camera system is not distorted in the direction perpendicular to the depth direction.
Similarly, in step S124, if the depth value $Z_2$ of the plane to be calibrated 102 of the camera system is not greater than the depth value $Z_1$ of the reference datum plane 101, i.e., $\Delta X = Z_2 - Z_1 \le 0$, then the half frame $L'_3 = Z'_3\tan\theta'$ at the calibration plane of the display system is larger than the half frame $L_2 = Z_2\tan\theta$ at the plane to be calibrated 102 of the camera system. Therefore, the image corresponding to the plane to be calibrated 102 in the first camera data is augmented according to the ratio of the half frame at the calibration plane to the half frame at the plane to be calibrated 102 to obtain the second camera data; the second camera data is then mapped to the display system and displayed on the calibration plane, so that the image at the plane to be calibrated 102 of the camera system is not distorted in the direction perpendicular to the depth direction.
In particular, the present invention can augment the first camera data by leaving blanks, which not only ensures that the image at the plane to be calibrated 102 of the camera system is not distorted in the direction perpendicular to the depth direction, but also lets the images on calibration planes behind the current one show through the blank area. Of course, in other examples of the present invention, the blank area on the calibration plane may also be filled in from the preceding and following frames, making the image on the calibration plane more reasonable and smooth.
It is to be noted that, in consideration of occlusion relationships, the matching method of the present invention may, after matching each plane to be calibrated of the camera system to its calibration plane of the display system, divide the depth range at predetermined intervals according to the depth values of the planes to be calibrated, and then render the image layer by layer from small depth values to large ones; overlapped portions then need not be rendered and displayed again, which helps avoid or reduce repeated rendering and greatly increases rendering speed.
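A minimal sketch of this front-to-back, occlusion-aware rendering (data layout and names are hypothetical; it assumes a non-empty list of layers already mapped onto one common display frame):

```python
import numpy as np

def render_layers(layers):
    """Composite calibrated depth layers front to back.

    layers -- list of (depth_value, rgb, mask) tuples on one display frame;
              mask is a boolean H x W array marking pixels the layer occupies.
    """
    layers = sorted(layers, key=lambda t: t[0])   # small depth values first
    h, w = layers[0][1].shape[:2]
    out = np.zeros((h, w, 3), dtype=np.uint8)
    covered = np.zeros((h, w), dtype=bool)
    for _, rgb, mask in layers:
        visible = mask & ~covered                 # skip already-occluded pixels
        out[visible] = rgb[visible]
        covered |= visible
    return out
```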
According to another aspect of the present invention, the above first embodiment further provides a matching system for a camera-display system with different field angles, which matches a camera system and a display system whose field angles differ, so that the display system can display, substantially completely and without distortion in the depth direction, the image captured by the camera system.
Specifically, as shown in Fig. 5, the matching system 30 comprises an acquisition module 31, an image calibration module 32 and a mapping module 33, which are communicably connected in sequence. The acquisition module 31 is configured to acquire first camera data collected by a camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system. The image calibration module 32 is configured to calibrate the first camera data based on the depth information with an image calibration model to obtain second camera data matched to a display system, wherein the field angle of the display system differs from the field angle of the camera system. The mapping module 33 is configured to map the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain the display data of the display system, so that the image displayed by the display system is not distorted in the depth direction.
In an example of the present invention, as shown in Fig. 5, the image calibration module 32 of the matching system 30 comprises a selection module 321, a judgment module 322, a cropping module 323 and an augmentation module 324, wherein the selection module 321 is configured to select a reference datum plane according to the focusing depth of the camera system; the judgment module 322 is configured to judge whether the depth value of the plane to be calibrated of the camera system is greater than the depth value of the reference datum plane of the camera system; the cropping module 323 is configured to crop, according to the image calibration model, the image data corresponding to the calibration plane of the display system out of the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data, when the depth value of the plane to be calibrated is greater than the depth value of the reference datum plane; and the augmentation module 324 is configured to augment, according to the image calibration model, the image data corresponding to the calibration plane of the display system from the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data, when the depth value of the plane to be calibrated is not greater than the depth value of the reference datum plane.
It should be noted that although in most scenes the display system needs to display the image captured by the camera system completely, in some specific scenes this is unnecessary. For example, after an environment image with a large field angle is obtained by a wide-angle camera, a user wearing a VR or AR device only needs to view the portion within the device's field of view; the VR or AR device then need not display the image completely, but only a part of it. In such scenarios, the present invention need not adopt the matching method of the first embodiment; instead, an image cropping method can realize the matching of the camera-display system with different field angles.
Referring to Figs. 6 to 8, a matching method for a camera-display system with different field angles according to a second embodiment of the present invention is illustrated; it quantitatively solves the problem of image matching between a camera system and a display system whose field angles differ, so that the display system can partially present the image taken by the camera system without distortion in the depth direction.
Specifically, as shown in Fig. 6, the matching method comprises the steps of:
S210: acquiring first camera data collected by a camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system;
S220: cropping image data matched to the display system out of the first camera data with an image cropping model, to serve as second camera data, wherein the field angle of the display system is smaller than the field angle of the camera system; and
S230: mapping the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain the display data of the display system, so that the image displayed by the display system based on the display data is not distorted in the depth direction.
More specifically, in the matching method of the second embodiment of the present invention, the image cropping model may be implemented as:
$$h'' = h \cdot \frac{\tan\theta'}{\tan\theta}, \quad \theta > \theta'$$
wherein $h''$ is the imaging height on the imaging surface of the camera system based on the second camera data; $\theta'$ is the half field angle of the display system; $h$ is the imaging height on the imaging surface of the camera system based on the first camera data; and $\theta$ is the half field angle of the camera system.
After the second camera data matched to the display system is cropped out of the first camera data through the image cropping model, the second camera data is mapped to the display surface of the display system according to the predetermined mapping ratio to obtain the display data. In this way, with the matching method of the second embodiment of the present invention, the image data matching the display surface of the display system can be cropped directly out of the image data on the imaging surface of the camera system, without any distortion occurring in the depth direction.
Illustratively, as shown in Fig. 7, point O is the common optical center of the camera system and the display system, θ' is the half field angle of the display system, and f' is the focal length of the display system; θ is the half field angle of the camera system, and f is the focal length of the camera system. Point $P_1$ is at the edge of the field of view of the camera system at a reference datum plane, and point $p_1$ is the image point on the imaging surface 100 of the camera system corresponding to $P_1$. Point $P_2$ is at the edge of the field of view of the display system at the reference datum plane, point $p_2$ is the image point on the imaging surface 100 of the camera system corresponding to $P_2$, and point $p'_2$ is the image point on the display surface 200 of the display system corresponding to $P_2$.
Then, according to the trigonometric relationship, the image height cropped from the imaging surface 100 of the camera system and the imaging height on the imaging surface 100 of the camera system must satisfy:
$$\frac{h''}{h} = \frac{\tan\theta'}{\tan\theta}$$
Rearranging gives the image cropping model:
$$h'' = h \cdot \frac{\tan\theta'}{\tan\theta}, \quad \theta > \theta'$$
Further, since the imaging height on the imaging surface 100 of the camera system is $h = f\tan\theta$, it follows that $h'' = f\tan\theta'$. Therefore, from the imaging height $h' = f'\tan\theta'$ on the display surface 200 of the display system, the predetermined mapping ratio is obtained as:
$$\frac{h'}{h''} = \frac{f'}{f}$$
wherein $f'$ is the focal length of the display system and $f$ is the focal length of the camera system.
It should be noted that, since the field angle of the camera system is larger than that of the display system (θ > θ'), the display system cannot display the complete image captured by the camera system. Instead, the image matching the display system is cropped directly out of the image captured by the camera system through the image cropping model; at the reference datum plane of the camera system, this is equivalent to cutting a frame bounded by point $P_2$ out of the frame bounded by point $P_1$, and the result suffers no stretching or compression distortion in the depth direction.
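A minimal sketch of this cropping (function and parameter names are hypothetical; it assumes a pinhole camera with the optical axis at the image center):

```python
import math

def crop_to_display_fov(image, theta, theta_p):
    """Center-crop the camera image to the display field of view.

    image   -- H x W (x C) array from the camera system
    theta   -- half field angle of the camera system (radians), theta > theta_p
    theta_p -- half field angle of the display system (radians)
    """
    ratio = math.tan(theta_p) / math.tan(theta)   # h'' = h * tan(theta')/tan(theta)
    h, w = image.shape[:2]
    ch, cw = int(round(h * ratio)), int(round(w * ratio))
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]
```

The cropped data is then simply rescaled by f'/f onto the display surface, per the predetermined mapping ratio above.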
According to another aspect of the present invention, the second embodiment of the present invention further provides a matching system for a camera-display system with different field angles, which matches a camera system and a display system whose field angles differ, so that the display system can partially display the image captured by the camera system without distortion in the depth direction.
Specifically, as shown in Fig. 8, the matching system 40 comprises an acquisition module 41, an image cropping module 42 and a mapping module 43, which are communicably connected in sequence. The acquisition module 41 is configured to acquire first camera data collected by a camera system, wherein the first camera data comprises image data with depth information corresponding to each pixel point on the imaging surface of the camera system. The image cropping module 42 is configured to crop image data matched to the display system out of the first camera data with an image cropping model, to serve as second camera data, wherein the field angle of the display system is smaller than that of the camera system. The mapping module 43 is configured to map the second camera data to the display surface of the display system according to a predetermined mapping ratio to obtain the display data of the display system, so that the image displayed by the display system based on the display data is not distorted in the depth direction.
In one example of the present invention, the image interception model is

h″ = h × (tan θ′ / tan θ), θ > θ′

wherein: h″ is the imaging height, based on the second camera data, on the image pickup surface of the camera system; f′ is the focal length of the visualization system; θ′ is the half field angle of the visualization system; h is the imaging height of the first camera data on the image pickup surface of the camera system; f is the focal length of the camera system; and θ is the half field angle of the camera system.
Illustrative Computing System
FIG. 9 illustrates, in simplified form, a non-limiting embodiment of a computing system 900 that can perform one or more of the methods and processes described above. The computing system 900 may take the form of: one or more head mounted display devices, or one or more devices cooperating with a head mounted display device (e.g., personal computers, server computers, tablet computers, home entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phones), and/or other computing devices).
As shown in fig. 9, the computing system 900 includes a logic machine 901 and a storage machine 902, wherein the logic machine 901 is configured to execute instructions; the storage machine 902 is configured to store machine readable instructions executable by the logic machine 901 to implement any of the above-described matching methods for the different field angle camera-visualization system.
Of course, the computing system 900 may optionally include a display subsystem 903, an input subsystem 904, a communication subsystem 905, and/or other components not shown in fig. 9.
The logic machine 901 includes one or more physical devices configured to execute instructions. For example, the logic machine 901 may be configured to execute instructions that are part of: one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, implement a technical effect, or otherwise arrive at a desired result.
The logic machine 901 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine 901 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic machine 901 may be single core or multicore, and the instructions executed thereon may be configured for serial, parallel, and/or distributed processing. The various components of the logic machine 901 may optionally be distributed over two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic machine 901 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
The storage machine 902 comprises one or more physical devices configured to hold machine-readable instructions executable by the logic machine 901 to implement the methods and processes described herein. In implementing these methods and processes, the state of the storage machine 902 may be transformed (e.g., to hold different data).
The storage machine 902 may include removable and/or built-in devices. The storage machine 902 may include optical memory (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The storage machine 902 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It is understood that the storage machine 902 includes one or more physical devices. However, aspects of the instructions described herein may alternatively be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a limited period of time.
Aspects of the logic machine 901 and the storage machine 902 may be integrated together into one or more hardware logic components. These hardware logic components may include, for example, Field Programmable Gate Arrays (FPGAs), program and application specific integrated circuits (PASIC/ASIC), program and application specific standard products (PSSP/ASSP), system on a chip (SOC), and Complex Programmable Logic Devices (CPLDs).
Notably, when the computing system 900 includes the display subsystem 903, the display subsystem 903 can be used to present a visual representation of data held by the storage machine 902. The visual representation may take the form of a Graphical User Interface (GUI). As the herein described methods and processes change the data held by the storage machine, and thus transform the state of the storage machine 902, the state of the display subsystem 903 may likewise be transformed to visually represent changes in the underlying data. The display subsystem 903 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with the logic machine 901 and/or the storage machine 902 in a shared enclosure, or such display devices may be peripheral display devices.
Further, when the computing system 900 includes the input subsystem 904, the input subsystem 904 may include or interface with one or more user input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem 904 may include or interface with selected Natural User Input (NUI) components. Such components may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on-board or off-board. Example NUI components may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; an electric field sensing component for assessing brain activity and/or body movement; and/or any other suitable sensor.
When the computing system 900 includes the communication subsystem 905, the communication subsystem 905 may be configured to communicatively couple the computing system 900 with one or more other computing devices. The communication subsystem 905 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As a non-limiting example, the communication subsystem may be configured for communication via a wireless telephone network or a wired or wireless local or wide area network. In some embodiments, the communication subsystem 905 may allow the computing system 900 to send and/or receive messages to/from other devices via a network, such as the internet.
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Also, the order of the above-described processes may be changed.
Illustrative Computer Program Product
In addition to the above-described methods and apparatus, embodiments of the present invention may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the methods according to various embodiments of the present invention described in the "exemplary methods" section above of this specification.
The program code for carrying out operations of embodiments of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computing device; partly on the user's device, as a stand-alone software package; partly on the user's computing device and partly on a remote computing device; or entirely on the remote computing device or server.
Furthermore, an embodiment of the present invention may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps of the above-described method of the present specification.
The computer readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present invention have been described above with reference to specific embodiments. It should be noted, however, that the advantages and effects mentioned herein are merely examples and are not limiting; they should not be taken as required by every embodiment of the present invention. Furthermore, the specific details disclosed above are provided for the purpose of illustration and ease of understanding only, and are not limiting: the invention is not restricted to being practiced with these specific details.
The block diagrams of the devices, apparatuses, and systems involved in the present invention are given only as illustrative examples, and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As those skilled in the art will appreciate, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and may be used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, the phrase "such as, but not limited to."
It should also be noted that in the apparatus, devices and methods of the present invention, the components or steps may be broken down and/or re-combined. These decompositions and/or recombinations are to be regarded as equivalents of the present invention.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the examples, and any variations or modifications of the embodiments of the present invention may be made without departing from the principles.

Claims (11)

1. A matching method of a different-field-angle camera-visualization system, for matching a camera system and a visualization system with different field angles, characterized in that the method comprises the following steps:
acquiring first camera data acquired by the camera system, wherein the first camera data comprise image data with depth information corresponding to each pixel point on a camera surface of the camera system;
calibrating the first camera data based on the depth information by an image calibration model to obtain second camera data matched with the visualization system; and
mapping the second camera data to a display surface of the visualization system according to a predetermined mapping ratio to obtain display data, so that an image displayed by the visualization system based on the display data is not distorted in the depth direction;
wherein the image calibration model is
Figure 58423DEST_PATH_IMAGE001
Wherein h ″ is an imaging height on an imaging surface of the imaging system based on the second imaging data; h is the imaging height of the first camera shooting data on the camera shooting surface of the camera shooting system; Δ X is a depth difference between a plane to be calibrated of the camera system and a reference datum plane of the camera system; theta' is the half field angle of the visualization system; z1 is the depth value of the reference datum of the camera system; θ is the half field angle of the imaging system.
2. The matching method of the different field angle imaging-visualization system as claimed in claim 1, wherein the step of calibrating the first imaging data based on the depth information by an image calibration model to obtain the second imaging data matched with the visualization system comprises the steps of:
selecting a reference datum plane according to the focusing depth of the camera system;
judging whether the depth value of the plane to be calibrated of the camera system is larger than the depth value of the reference datum plane of the camera system;
when the depth value of the plane to be calibrated of the camera system is larger than the depth value of the reference datum plane of the camera system, capturing, according to the image calibration model, image data corresponding to the calibration plane of the visualization system from the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data; and
when the depth value of the plane to be calibrated of the camera system is not larger than the depth value of the reference datum plane of the camera system, amplifying, according to the image calibration model, the image data corresponding to the calibration plane of the visualization system from the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data.
3. The matching method of the different field angle camera-visualization system as claimed in claim 2, wherein the depth value of the reference datum of the camera system is an average depth value within a focus depth range of the camera system.
4. The matching method of the different field angle camera-visualization system according to claim 1, wherein the predetermined mapping ratio is

(f′ × (Z1 + ΔX)) / (f × Z1)

wherein f′ is the focal length of the visualization system; f is the focal length of the camera system; ΔX is the depth difference between the plane to be calibrated of the camera system and the reference datum plane of the camera system; θ′ is the half field angle of the visualization system; Z1 is the depth value of the reference datum plane of the camera system; and θ is the half field angle of the camera system.
5. A matching method of a different-field-angle camera-visualization system, for matching a camera system and a visualization system with different field angles, characterized in that the method comprises the following steps:
acquiring first camera data acquired by the camera system, wherein the first camera data comprise image data with depth information corresponding to each pixel point on a camera surface of the camera system;
capturing image data matched with the visualization system from the first camera data by an image interception model, to serve as second camera data, wherein the field angle of the visualization system is smaller than that of the camera system; and
mapping the second camera data to a display surface of the visualization system according to a predetermined mapping ratio to obtain display data, so that an image displayed by the visualization system based on the display data is not distorted in the depth direction;
wherein the image interception model is:

h″ = h × (tan θ′ / tan θ), θ > θ′

wherein: h″ is the imaging height, based on the second camera data, on the image pickup surface of the camera system; θ′ is the half field angle of the visualization system; h is the imaging height of the first camera data on the image pickup surface of the camera system; and θ is the half field angle of the camera system.
6. The matching method of the different field angle camera-visualization system according to claim 5, wherein the predetermined mapping ratio is:

f′ / f

wherein: f′ is the focal length of the visualization system; and f is the focal length of the camera system.
7. A matching system of a different-field-angle camera-visualization system, for matching a camera system and a visualization system with different field angles, characterized by comprising:
an acquisition module, configured to acquire first camera data collected by the camera system, wherein the first camera data includes image data with depth information corresponding to each pixel point on an image pickup surface of the camera system;
an image calibration module, configured to calibrate the first camera data based on the depth information by an image calibration model to obtain second camera data matched with the visualization system; and
a mapping module, configured to map the second camera data to a display surface of the visualization system according to a predetermined mapping ratio to obtain display data, so that an image displayed by the visualization system based on the display data is not distorted in the depth direction;
wherein the image calibration model is

h″ = h × (Z1 × tan θ′) / ((Z1 + ΔX) × tan θ)

wherein h″ is the imaging height, based on the second camera data, on the image pickup surface of the camera system; h is the imaging height of the first camera data on the image pickup surface of the camera system; ΔX is the depth difference between a plane to be calibrated of the camera system and a reference datum plane of the camera system; θ′ is the half field angle of the visualization system; Z1 is the depth value of the reference datum plane of the camera system; and θ is the half field angle of the camera system.
8. The matching system of the different field angle camera-visualization system as claimed in claim 7, wherein the image calibration module comprises a selecting module, a judging module, a capturing module, and an amplifying module, wherein the selecting module is configured to select the reference datum plane according to the focusing depth of the camera system; the judging module is configured to judge whether the depth value of the plane to be calibrated of the camera system is larger than the depth value of the reference datum plane of the camera system; the capturing module is configured to capture, according to the image calibration model, image data corresponding to the calibration plane of the visualization system from the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data, when the depth value of the plane to be calibrated of the camera system is larger than the depth value of the reference datum plane of the camera system; and the amplifying module is configured to amplify, according to the image calibration model, the image data corresponding to the calibration plane of the visualization system from the image data corresponding to the plane to be calibrated in the first camera data, to serve as the second camera data, when the depth value of the plane to be calibrated of the camera system is not larger than the depth value of the reference datum plane of the camera system.
9. A matching system of a different-field-angle camera-visualization system, for matching a camera system and a visualization system with different field angles, characterized by comprising:
an acquisition module, configured to acquire first camera data collected by the camera system, wherein the first camera data includes image data with depth information corresponding to each pixel point on an image pickup surface of the camera system;
an image capture module, configured to capture image data matched with the visualization system from the first camera data by an image interception model, to serve as second camera data, wherein the field angle of the visualization system is smaller than that of the camera system; and
a mapping module, configured to map the second camera data to a display surface of the visualization system according to a predetermined mapping ratio to obtain display data, so that an image displayed by the visualization system based on the display data is not distorted in the depth direction;
wherein the image interception model is:

h″ = h × (tan θ′ / tan θ), θ > θ′

wherein: h″ is the imaging height, based on the second camera data, on the image pickup surface of the camera system; θ′ is the half field angle of the visualization system; h is the imaging height of the first camera data on the image pickup surface of the camera system; and θ is the half field angle of the camera system.
10. A computing system, comprising:
a logic machine for executing instructions; and
a storage machine, wherein the storage machine is configured to hold machine readable instructions executable by the logic machine to implement the matching method of the different field angle camera-visualization system of any of claims 1 to 6.
11. A computer-readable storage medium having stored thereon computer program instructions which, when executed by a computing apparatus, perform the matching method of the different-field-angle camera-visualization system according to any one of claims 1 to 6.
CN201910220302.2A 2019-03-22 2019-03-22 Matching method of different-field-angle camera-imaging system, system and computing system thereof Active CN111726602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910220302.2A CN111726602B (en) 2019-03-22 2019-03-22 Matching method of different-field-angle camera-imaging system, system and computing system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910220302.2A CN111726602B (en) 2019-03-22 2019-03-22 Matching method of different-field-angle camera-imaging system, system and computing system thereof

Publications (2)

Publication Number Publication Date
CN111726602A CN111726602A (en) 2020-09-29
CN111726602B true CN111726602B (en) 2022-04-22

Family

ID=72563092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910220302.2A Active CN111726602B (en) 2019-03-22 2019-03-22 Matching method of different-field-angle camera-imaging system, system and computing system thereof

Country Status (1)

Country Link
CN (1) CN111726602B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587542A (en) * 2009-06-26 2009-11-25 上海大学 Field depth blending strengthening display method and system based on eye movement tracking
CN104010178A (en) * 2014-06-06 2014-08-27 深圳市墨克瑞光电子研究院 Binocular image parallax adjusting method and device and binocular camera
CN108886612A (en) * 2016-02-11 2018-11-23 奇跃公司 Reduce the more depth plane display systems switched between depth plane

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102114248B1 (en) * 2013-09-13 2020-05-22 삼성전자 주식회사 Depth information based optical distortion correction device and method
CN103561257B (en) * 2013-11-01 2015-05-13 北京航空航天大学 Interference-free light-encoded depth extraction method based on depth reference planes
EP3092787B1 (en) * 2014-01-08 2020-07-29 Sony Corporation Perspective change using depth information
CN109070804B (en) * 2016-04-14 2021-09-21 金泰克斯公司 Vision-corrected vehicle display
CN107392853B (en) * 2017-07-13 2020-05-26 河北中科恒运软件科技股份有限公司 Method and system for video fusion distortion correction and viewpoint fine adjustment of double cameras
US10701334B2 (en) * 2017-10-11 2020-06-30 Adobe Inc. Virtual reality parallax correction
KR101997991B1 (en) * 2017-11-29 2019-07-08 재단법인 다차원 스마트 아이티 융합시스템 연구단 Image merging method and system using viewpoint transformation

Also Published As

Publication number Publication date
CN111726602A (en) 2020-09-29

Similar Documents

Publication Publication Date Title
JP5887267B2 (en) 3D image interpolation apparatus, 3D imaging apparatus, and 3D image interpolation method
US20120242795A1 (en) Digital 3d camera using periodic illumination
US11343425B2 (en) Control apparatus, control method, and storage medium
EP3435655A1 (en) Electronic device for acquiring image using plurality of cameras and method for processing image using the same
CN107439002B (en) Depth imaging
US9813693B1 (en) Accounting for perspective effects in images
US9087402B2 (en) Augmenting images with higher resolution data
US20180014003A1 (en) Measuring Accuracy of Image Based Depth Sensing Systems
US20150177062A1 (en) Information processing apparatus, information processing method, and storage medium
KR102423295B1 (en) An apparatus for composing objects using depth map and a method thereof
KR102452575B1 (en) Apparatus and method for compensating variation of images caused by optical image stabilization motion
US20140204083A1 (en) Systems and methods for real-time distortion processing
US11062422B2 (en) Image processing apparatus, image communication system, image processing method, and recording medium
TWI608737B (en) Image projection
US20200202567A1 (en) Calibrating a machine vision camera
JP5805013B2 (en) Captured image display device, captured image display method, and program
US10019837B2 (en) Visualization alignment for three-dimensional scanning
CN111726602B (en) Matching method of different-field-angle camera-imaging system, system and computing system thereof
US10902554B2 (en) Method and system for providing at least a portion of content having six degrees of freedom motion
JP5926626B2 (en) Image processing apparatus, control method therefor, and program
CN111754558B (en) Matching method for RGB-D camera system and binocular imaging system and related system thereof
KR102151250B1 (en) Device and method for deriving object coordinate
US20220150655A1 (en) Generating audio output signals
US20240078743A1 (en) Stereo Depth Markers
FR3013492A1 (en) METHOD USING 3D GEOMETRY DATA FOR PRESENTATION AND CONTROL OF VIRTUAL REALITY IMAGE IN 3D SPACE

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant