CN110553585A - 3D information acquisition device based on optical array - Google Patents


Info

Publication number
CN110553585A
CN110553585A
Authority
CN
China
Prior art keywords
array
optical
target object
information
optical array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910835437.XA
Other languages
Chinese (zh)
Inventor
左忠斌
左达宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmu Love Vision (Beijing) Technology Co Ltd
Tianmu Aishi Beijing Technology Co Ltd
Original Assignee
Tianmu Love Vision (Beijing) Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmu Love Vision (Beijing) Technology Co Ltd
Priority to CN201910835437.XA priority Critical patent/CN110553585A/en
Publication of CN110553585A publication Critical patent/CN110553585A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Abstract

The invention provides a 3D acquisition device based on an optical array, comprising: an image acquisition device for acquiring images; an optical array for receiving light rays from the target object in different directions, so as to transmit images of different regions of the target object to different regions of the image acquisition device; and an image processing device for processing the plurality of images acquired by the image acquisition device to obtain the 3D information of the target object. The invention is the first to notice and pose the technical problem that camera volume limits the acquisition resolution of a multi-camera matrix; by using an optical array to receive light rays from the target object in different directions, images of different regions of the target object are transmitted to different regions of a single image acquisition device, which reduces the device volume and improves the 3D measurement/synthesis resolution.

Description

3D information acquisition device based on optical array
Technical Field
The invention relates to the technical field of 3D measurement of objects, and in particular to 3D acquisition of a target object and the measurement of geometric dimensions, such as length, from pictures.
Background
At present, 3D acquisition equipment is mainly aimed at a specific object: once the object is determined, a plurality of pictures of it are acquired simultaneously by a plurality of cameras, a 3D image of the object is synthesized from them, and the length, contour and other dimensions of the object are measured using the 3D point cloud data.
However, the use of multiple cameras makes the overall device bulky. Moreover, because the lens and body of a camera have fixed sizes, there is a lower limit (determined by the camera's geometric dimensions) on the spacing between adjacent cameras. The acquisition positions of the cameras are therefore widely spaced, so the synthesized 3D point cloud or image is poor and the measurement accuracy suffers. At present this problem is addressed by placing the cameras far from the target. But if the target is small, it then occupies only a small fraction of each image, so its resolution in the image is low, which again degrades 3D synthesis and measurement. A long-focus lens can be used in that situation, so that the camera capture positions are effectively spaced more densely relative to the target; this, however, raises the requirements on (and cost of) the lens, and telephoto shooting places high demands on the camera shutter and on ambient light. In summary, a camera matrix formed from multiple cameras is large in volume, low in resolution, and demanding on the cameras.
In particular, there is no effective means of 3D imaging a minute object by synthesizing 3D from a plurality of images.
Disclosure of Invention
In view of the above, the present invention has been made to provide a 3D information acquisition apparatus that overcomes or at least partially solves the above-mentioned problems.
The invention provides a 3D information acquisition device based on an optical array, comprising:
The image acquisition device is used for acquiring a plurality of images;
The optical array is used for receiving light rays of the target object in different directions so as to transmit images of different areas of the target object to different areas of the image acquisition device;
And the image processing device is used for processing the plurality of images acquired by the image acquisition device to acquire the 3D information of the target object.
Optionally: the included angle between the optical axes of two adjacent optical lenses of the optical array satisfies:
H*(1-cosb)=L*sin2b;
a=m*b;
0<m<0.8;
wherein L is the distance from the image acquisition device to the target object, H is the actual size of the target object in the acquired image, and a is the optical axis included angle of two adjacent optical lenses of the optical array.
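As a sketch, the angle condition can be solved in closed form if "sin2b" is read as sin²(b), which is an assumption about the notation; under that reading, since sin²b = (1-cosb)(1+cosb), the condition reduces to cosb = H/L - 1. The function name and example values below are illustrative, not from the patent:

```python
import math

def adjacent_axis_angle(L, H, m=0.5):
    """Solve H*(1 - cos b) = L*sin^2 b for b, then return a = m*b.

    Assumption: "sin2b" means sin(b)**2.  The equation then factors as
    H*(1 - cos b) = L*(1 - cos b)*(1 + cos b), giving cos b = H/L - 1.
    Requires 0 < H < 2L for a solution; the text bounds m in (0, 1.5),
    preferably under 0.8.
    """
    if not (0 < H < 2 * L):
        raise ValueError("no solution: need 0 < H < 2L")
    if not (0 < m < 1.5):
        raise ValueError("empirical coefficient m must lie in (0, 1.5)")
    b = math.acos(H / L - 1)   # radians
    return m * b               # included angle a between adjacent optical axes

# Illustrative numbers: H = L gives cos b = 0, so b = 90 deg, a = 45 deg
a = adjacent_axis_angle(L=1.0, H=1.0, m=0.5)
print(math.degrees(a))  # 45.0
```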
Optionally: at least one part of the same area of the target object exists in the three images acquired by the adjacent three optical lenses of the optical array.
Optionally: the image acquisition device comprises one or more image acquisition units.
Optionally: the transmission mirror array comprises at least 3 perspective mirror units, and the reflector array comprises at least 3 reflector units
Optionally: the optical array is a micro-lens array or a liquid crystal array.
The invention also provides a 3D information comparison device, which comprises any one of the above 3D information acquisition devices.
The invention also provides a device for generating a mate for a target object, comprising any one of the above 3D information acquisition devices; a matching object fitted to the corresponding region of the target object is generated using the 3D information of at least one region acquired by the acquisition device.
The invention also provides a 3D information acquisition method based on the optical array, which is characterized in that 3D information of the target object is acquired by using any one device.
Inventive Points and Technical Effects
1. The invention is the first to notice and pose the technical problem that camera volume limits the acquisition resolution of a multi-camera matrix; by using an optical array to receive light rays from the target object in different directions, images of different regions of the target object are transmitted to different regions of a single image acquisition device, which reduces the device volume and improves the 3D measurement/synthesis resolution.
2. Because target objects and their shapes differ, it is difficult to express normatively how the optical array structure should be optimized to achieve a good synthesis effect, and no technique for optimizing the optical array structure has existed until now. Through repeated experiments, the structure of the array has been optimized empirically, and the empirical conditions that the optical units of the optical array must satisfy with respect to one another are given.
3. An optical modulator is used to collect images of different regions, which reduces the number of off-the-shelf cameras needed and thus the volume; the optical modulator transmits images of different regions to different regions of the image sensor, improving sensor utilization (existing ultra-high-resolution image sensors otherwise carry a large amount of data redundancy). Optical modulators are also generally regular in shape, which makes them more convenient to use.
4. The prior art improves the synthesis effect mainly through hardware upgrades and strict calibration; it contains no hint of ensuring the effect and stability of 3D synthesis by changing the angular position of the camera during photographing, let alone concrete optimization conditions. The invention is the first to propose optimizing the camera's angular positions during shooting to ensure the effect and stability of 3D synthesis, and through repeated experiments gives the optimal empirical conditions those positions must satisfy, greatly improving the 3D synthesis effect and the stability of the synthesized image.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram of an optical array-based 3D information acquisition apparatus according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an embodiment of an optical array based 3D information acquisition device according to the present invention;
FIG. 3 is a schematic diagram of another embodiment of an optical array based 3D information acquisition apparatus according to an embodiment of the present invention;
FIG. 4 is an enlarged schematic side view of an optical array of the present invention;
FIG. 5 is an enlarged front view of an optical array according to the present invention;
FIG. 6 is a schematic diagram of an embodiment of a 3D information acquisition apparatus based on an optical array according to another embodiment of the present invention;
FIG. 7 is a schematic diagram of another embodiment of a 3D information acquisition device based on an optical array according to another embodiment of the present invention;
description of reference numerals:
101 optical array, 201 image acquisition device, 100 image processing device, 1011 mirror, 1012 prism, 1013 liquid crystal modulator, 2011 image acquisition unit.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example 1 (microlens array)
To solve the above technical problem, an embodiment of the present invention provides a 3D information acquisition apparatus. Referring to FIGS. 1 to 5, it specifically includes: a microlens array (optical array) 101, an image acquisition device 201, and an image processing apparatus 100. Each small mirror on the microlens array 101 reflects an image of a different region of the object onto a different region of the image capturing unit 2011 of the image acquisition device 201.
The image capturing unit 2011 is a separate CCD or CMOS chip, and the image acquisition device 201 may include one or more image capturing units 2011.
When the target object image is acquired, the matrix points of the virtual matrix are determined by the position of the image acquisition device 201, and the optical axes of two adjacent reflecting sheets of the microlens array 101 at least satisfy the following conditions:
H*(1-cosb)=L*sin2b;
a=m*b;
0<m<1.5;
Where L is the distance from the image acquisition device 201 to the target object, typically the distance to the directly facing region of the target when the image acquisition device 201 is in the first position.
H is the actual size of the target object in the captured image, typically measured from a picture taken by the image acquisition device 201 at the first position. It is the true geometric size of the target object (not its size in the picture), measured along the direction from the first position to the second position. For example, if the two positions are horizontally displaced, the size is measured along the horizontal transverse direction of the target. If the leftmost point of the target visible in the picture is A and the rightmost point is B, then H is the straight-line distance from A to B on the target. It can be calculated from the A-to-B distance in the picture combined with the focal length of the camera lens, or A and B can be marked on the target and the straight-line distance AB measured directly by other means.
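The first measurement method mentioned, computing the actual A-to-B distance from its in-picture size together with the lens focal length, amounts under a simple pinhole-camera model to similar triangles. A hedged sketch; the function name, parameters, and example values are illustrative, not from the patent:

```python
def estimate_target_size(pixel_dist, pixel_size_mm, focal_mm, L_mm):
    """Estimate the real-world size H from an in-picture measurement.

    Pinhole-camera similar triangles: the target's image on the sensor
    has extent h = pixel_dist * pixel_size_mm, and h / f = H / L.
    All names here are illustrative assumptions, not patent terminology.
    """
    h_sensor = pixel_dist * pixel_size_mm   # extent on the sensor, mm
    return h_sensor * L_mm / focal_mm       # real size H, mm

# Example: A-to-B spans 1000 px on a 0.005 mm/px sensor behind a 50 mm
# lens, with the target 2 m away
H = estimate_target_size(1000, 0.005, 50.0, 2000.0)
print(H)  # 200.0 (mm)
```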
a is the optical axis included angle of two adjacent reflection sheets.
m is a coefficient.
Because objects differ in size and in surface relief, the value of a cannot be fixed by a strict formula and must be bounded empirically. According to numerous experiments, m may be within 1.5, but preferably within 0.8. Specific experimental data are seen in the following table:
After the target object and the image acquisition device 201 are determined, the value of a can be calculated from the above empirical formula, and the rotation parameters of each lens of the microlens array 101 can be determined from it.
In some cases the microlens array parameter (the value of a) cannot easily be determined even from the empirical formula, and it must then be adjusted experimentally, as follows. Calculate a predicted value of a from the above formula and rotate the lenses of the microlens array accordingly: take a picture P1 with one lens (or one lens region composed of several lenses) at angle W1, and a picture P2 with another lens (or another lens region) at angle W2. Compare P1 and P2 to check whether they contain parts representing the same region of the target object, i.e. whether P1∩P2 is non-empty (for example, both contain the left part of the iris, seen from different shooting angles). If not, readjust the value of a, re-rotate the lenses, and repeat the comparison. If P1∩P2 is non-empty, continue with a third lens (or a third lens region) rotated according to the (adjusted or unadjusted) value of a, take picture P3 at angle W3, and likewise check whether P1, P2 and P3 share a part representing the same region of the object, i.e. whether P1∩P2∩P3 is non-empty. Then synthesize 3D from the plurality of pictures and test whether the synthesis effect meets the requirements of 3D information acquisition and measurement. In other words, the structural parameters of the microlens array are determined by the rotation angles of the optical reflectors on it, and any three adjacent optical sheets (or sheet groups) must satisfy the condition that the three images collected at their respective positions all contain at least part of the same region of the target object.
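The trial-and-error procedure above can be sketched as a loop: photograph with successive lens regions, test whether consecutive pictures overlap, and shrink a when they do not. The `capture` and `overlaps` callables below are stand-ins for the real hardware control and image matching, and the shrink factor is an illustrative assumption:

```python
def tune_angle(a0, capture, overlaps, n_lenses, shrink=0.9, max_iter=50):
    """Adjust the inter-lens angle a until every pair of consecutive
    pictures shares part of the same target region (P_i ∩ P_{i+1} != ∅).

    `capture(i, a)` stands in for photographing with lens region i at
    angle step a; `overlaps(p, q)` stands in for image matching that
    tests whether two pictures show a common region of the target.
    """
    a = a0
    for _ in range(max_iter):
        pics = [capture(i, a) for i in range(n_lenses)]
        if all(overlaps(pics[i], pics[i + 1]) for i in range(n_lenses - 1)):
            return a, pics        # angle accepted: consecutive views overlap
        a *= shrink               # views too sparse: reduce the angle, retry
    raise RuntimeError("could not find an angle with overlapping views")

# Toy model: lens i "sees" the interval [i*a, i*a + 30] degrees of the target
capture = lambda i, a: (i * a, i * a + 30.0)
overlaps = lambda p, q: p[0] < q[1] and q[0] < p[1]
a, _ = tune_angle(40.0, capture, overlaps, n_lenses=3)
print(a <= 30.0)  # True once consecutive 30-degree views overlap
```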
After a plurality of images of the target object have been obtained, the image processing apparatus 100 processes them to synthesize 3D. The 3D point cloud or image can be synthesized from the multiple multi-angle images by stitching according to the feature points of adjacent images; other methods can also be used.
The image splicing method comprises the following steps:
(1) Process the plurality of images and extract the feature points of each; the features of the feature points in the plurality of images may be described using Scale-Invariant Feature Transform (SIFT) descriptors. A SIFT descriptor is a 128-dimensional feature vector that characterizes a feature point in direction and scale, which markedly improves the accuracy of feature description; the descriptor is also spatially independent.
(2) And respectively generating feature point cloud data of the human face features and feature point cloud data of the iris features on the basis of the feature points of the extracted multiple images. The method specifically comprises the following steps:
(2-1) matching the feature points of the multiple pictures according to the features of the feature points of each image in the multiple extracted images to establish a matched facial feature point data set; matching the feature points of the multiple pictures according to the features of the feature points of each image in the multiple extracted images, and establishing a matched iris feature point data set;
(2-2) Based on the optical information of the camera and its different positions when the plurality of images were acquired, calculate the relative spatial position of each feature point with respect to the camera at each position, and from these relative positions calculate the spatial depth information of the feature points in the plurality of images. The calculation may employ bundle adjustment.
The spatial depth information calculated for a feature point covers its spatial position information and its colour information: the X-axis, Y-axis and Z-axis coordinates of the feature point's spatial position, and the values of the R, G, B and Alpha channels of the feature point's colour. The generated feature point cloud data thus includes the spatial position information and colour information of the feature points, and may be formatted as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
Wherein Xn represents the X-axis coordinate of the feature point at the spatial position; yn represents the Y-axis coordinate of the feature point at the spatial position; zn represents the Z-axis coordinate of the characteristic point at the space position; rn represents a value of an R channel of color information of the feature point; gn represents a value of a G channel of color information of the feature point; bn represents the value of the B channel of the color information of the feature point; an represents the value of the Alpha channel of the color information of the feature point.
And (2-3) generating feature point cloud data of the features of the target object according to the feature point data set matched with the plurality of images and the spatial depth information of the feature points.
And (2-4) constructing a 3D model of the target object according to the characteristic point cloud data so as to realize acquisition of the point cloud data of the target object.
And (2-5) attaching the acquired color and texture of the target object to the point cloud data to form a 3D image of the target object.
A 3D image can be synthesized using all of the images in a group, or the higher-quality images can be selected from the group for synthesis.
The above-mentioned stitching method is only a limited example, and is not limited thereto, and all methods for generating a three-dimensional image from a plurality of multi-angle two-dimensional images may be used.
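The X Y Z R G B A listing above suggests a straightforward text serialization for feature point cloud data. A minimal sketch; the function names and the use of plain floats are illustrative choices, not mandated by the patent:

```python
def dump_points(points):
    """Serialize (x, y, z, r, g, b, alpha) tuples, one point per line."""
    return "\n".join(" ".join(str(v) for v in p) for p in points)

def load_points(text):
    """Parse the same whitespace-separated format back into float tuples."""
    return [tuple(float(v) for v in line.split())
            for line in text.strip().splitlines()]

# Round-trip a tiny two-point cloud through the text format
cloud = [(0.0, 1.0, 2.0, 255, 128, 0, 255),
         (3.5, -1.0, 0.25, 10, 20, 30, 200)]
text = dump_points(cloud)
assert load_points(text) == [tuple(float(v) for v in p) for p in cloud]
```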
It is understood that if a single image capturing unit 2011 does not provide enough pixels, the image acquisition device 201 may contain a plurality of image capturing units 2011, and each small lens on the microlens array 101 then reflects the image of a different region of the object onto one of them. The light may, of course, equally be reflected onto different regions of the plurality of image capturing units 2011.
Taking the DLP4500NIR as an example: its micromirror array is 912x1140 with a pitch of 7.6 μm and a switching frequency of 4225 Hz. However, an existing DMD can deflect only about ±12°, i.e. it offers only three reflection angles and can therefore transmit only 3 images of the target object; it can still be used to synthesize a 3D image of the iris, but the accuracy is insufficient. For this purpose a microlens array whose mirror plates can be deflected continuously should be chosen, such as a Thin-Film Mirror Array.
Since the microlens array is chip scale, its size is very small and can be used to photograph small objects such as the iris.
Of course, an existing product need not be used for the microlens array; for example, a fixed, non-rotating microlens array can be designed for a fixed shooting scene.
The microlens array is not limited to a reflective type, and can deflect light by means of transmission, refraction, and diffraction.
Example 2 (Multi-mirror, Multi-transmission mirror)
As shown in FIGS. 4 and 5, the optical array 101 may be a mirror group consisting of multiple mirrors 1011, whose number can be chosen according to the required image quality but should generally be greater than 3: if there are too few mirrors 1011, the 3D information obtained is not comprehensive. As shown in FIG. 6, the optical array 101 may be a prism group composed of multiple prisms 1012, whose number again depends on the required image quality but should generally be greater than 3, for the same reason. As shown in FIG. 7, the optical array 101 may be a liquid crystal modulator 1013: applying a voltage changes the refractive index of different regions of the liquid crystal, thereby deflecting the light. It is understood that the light modulation unit of the optical array is not limited to a mirror or a prism; any other optical element capable of deflecting light rays by transmission, refraction or diffraction can be used.
Although the above embodiments do not elaborate on the additional optical lenses needed for clear imaging, it is understood that optical lenses should be placed before and/or after the optical array in the light path so that the image of the object is presented clearly on the image acquisition device 201.
The target object in the invention can be a solid object or a composition of a plurality of objects.
The 3D information of the object includes a 3D image, a 3D point cloud, a 3D mesh, local 3D features, 3D dimensions, and all parameters of the object that carry a 3D feature.
The 3D and three-dimensional information in the present invention means having XYZ three-dimensional information, particularly depth information, and is essentially different from only two-dimensional plane information. It is also fundamentally different from some definitions, called 3D, panoramic, holographic, three-dimensional, but actually only comprising two-dimensional information, in particular not depth information.
The capture area in the present invention refers to a range in which an image capture device (e.g., a camera) can capture an image.
The image acquisition device can be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a mobile phone, a tablet, a notebook computer, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device with an image acquisition function.
The 3D information of multiple regions of the target obtained in the above embodiments can be used for comparison, for example for identity recognition. First, the scheme of the invention is used to acquire 3D information of the human face and iris, and this is stored on a server as standard data. When identity authentication is needed, for example for operations such as payment or opening a door, the 3D acquisition device is used to acquire the 3D information of the face and iris again; the acquired information is compared with the standard data, and if the comparison succeeds the next action is permitted. It can be understood that the comparison can also be used to authenticate fixed assets such as antiques and artworks: 3D information of several of their regions is first acquired as standard data, and when authentication is needed, the 3D information of those regions is acquired again and compared with the standard data to judge authenticity.
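As a hedged sketch of the comparison step, one simple criterion (illustrative only, the patent does not specify a metric or threshold) is the mean nearest-neighbour distance between the freshly acquired point cloud and the enrolled standard data:

```python
import math

def cloud_distance(probe, gallery):
    """Mean distance from each probe point to its nearest gallery point."""
    def nearest(p):
        return min(math.dist(p, g) for g in gallery)
    return sum(nearest(p) for p in probe) / len(probe)

def verify(probe, gallery, threshold=0.05):
    """Accept the identity claim if the two clouds are close enough.

    The threshold value is an illustrative assumption; in practice it
    would be tuned on real enrolment data.
    """
    return cloud_distance(probe, gallery) <= threshold

# Toy clouds: a fresh capture that deviates slightly from enrolment
enrolled = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
fresh    = [(0.01, 0.0, 0.0), (1.0, 0.01, 0.0), (0.0, 0.99, 0.0)]
print(verify(fresh, enrolled))  # True
```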
The 3D information of multiple regions of the target object obtained in the above embodiments can be used to design, produce and manufacture fittings for the target object. For example, with 3D data of a human head, a better-fitting hat can be designed and manufactured; with head data and 3D eye data, well-fitting glasses can be designed and manufactured.
The 3D information of the object obtained in the above embodiment can be used to measure the geometric size and contour of the object.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of a visible light camera based biometric four-dimensional data acquisition apparatus according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
Thus, it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been illustrated and described in detail herein, many other variations or modifications consistent with the principles of the invention may be directly determined or derived from the disclosure of the present invention without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and interpreted to cover all such other variations or modifications.

Claims (9)

1. An optical array-based 3D information acquisition device, characterized in that it comprises:
The image acquisition device is used for acquiring a plurality of images;
The optical array is used for receiving light rays from the target object in different directions, so as to transmit images of different regions of the target object to different regions of the image acquisition device; the optical array is a transmission mirror array or a reflecting mirror array; each mirror of the array is continuously deflectable;
And the image processing device is used for processing the plurality of images acquired by the image acquisition device to acquire the 3D information of the target object.
2. The optical array-based 3D information acquisition apparatus according to claim 1, wherein: the included angle between the optical axes of two adjacent optical lenses of the optical array satisfies
H*(1-cosb)=L*sin2b;
a=m*b;
0<m<0.8;
Wherein L is the distance from the image acquisition device to the target object, H is the actual size of the target object in the acquired image, a is the included angle between the optical axes of two adjacent optical lenses of the optical array, and b is the intermediate angle determined by the first equation.
3. The optical array-based 3D information acquisition apparatus according to claim 1, wherein: at least part of the same region of the target object appears in each of the three images acquired by three adjacent optical lenses of the optical array.
4. The optical array-based 3D information acquisition apparatus according to claim 1, wherein: the image acquisition device comprises one or more image acquisition units.
5. The optical array-based 3D information acquisition apparatus according to claim 1, wherein: the transmission mirror array comprises at least 3 transmission mirror units, and the reflecting mirror array comprises at least 3 reflecting mirror units.
6. The optical array-based 3D information acquisition apparatus according to claim 1, wherein: the optical array is a micro-lens array or a liquid crystal array.
7. A 3D information comparison device, characterized in that: it comprises the 3D information acquisition apparatus according to any one of claims 1-6.
8. A mating object generation device, characterized in that: it generates a mating object matched with a corresponding region of the target object by using the 3D information of at least one region obtained by the 3D information acquisition apparatus according to any one of claims 1-6.
9. A 3D information acquisition method based on an optical array, characterized in that 3D information of an object is acquired by using the apparatus of any one of claims 1 to 8.
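The angular condition in claim 2 can be checked numerically. A minimal Python sketch follows, under two assumptions not spelled out in the claim: that "sin2b" denotes sin(2·b), and that b is the intermediate angle fixed by the first equation (the function name `axis_angle` and the sample values are illustrative, not from the patent):

```python
import math

def axis_angle(H, L, m=0.5, tol=1e-9):
    """Solve H*(1 - cos b) = L*sin(2b) for b on (0, pi/2) by bisection,
    then return (a, b) with a = m*b (all angles in radians).

    Assumptions (not stated in the claim): 'sin2b' means sin(2*b),
    and b is the intermediate angle fixed by the first equation."""
    if not (0 < m < 0.8):  # the claim's constraint on m
        raise ValueError("claim 2 requires 0 < m < 0.8")
    f = lambda b: H * (1.0 - math.cos(b)) - L * math.sin(2.0 * b)
    lo, hi = 1e-9, math.pi / 2  # f(lo) < 0 and f(hi) > 0 for positive H, L
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    b = (lo + hi) / 2
    return m * b, b
```

For example, with H = L (object size comparable to the working distance) the intermediate angle b comes out near 1.2 rad, so with m = 0.5 adjacent optical axes would be inclined by roughly 0.6 rad; the numbers are purely illustrative.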
CN201910835437.XA 2018-09-05 2018-09-05 3D information acquisition device based on optical array Pending CN110553585A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910835437.XA CN110553585A (en) 2018-09-05 2018-09-05 3D information acquisition device based on optical array

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811032824.1A CN109084679B (en) 2018-09-05 2018-09-05 A kind of 3D measurement and acquisition device based on spatial light modulator
CN201910835437.XA CN110553585A (en) 2018-09-05 2018-09-05 3D information acquisition device based on optical array

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811032824.1A Division CN109084679B (en) 2018-09-05 2018-09-05 A kind of 3D measurement and acquisition device based on spatial light modulator

Publications (1)

Publication Number Publication Date
CN110553585A true CN110553585A (en) 2019-12-10

Family

ID=64840694

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910835437.XA Pending CN110553585A (en) 2018-09-05 2018-09-05 3D information acquisition device based on optical array
CN201811032824.1A Active CN109084679B (en) 2018-09-05 2018-09-05 A kind of 3D measurement and acquisition device based on spatial light modulator

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811032824.1A Active CN109084679B (en) 2018-09-05 2018-09-05 A kind of 3D measurement and acquisition device based on spatial light modulator

Country Status (1)

Country Link
CN (2) CN110553585A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112254678A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Indoor 3D information acquisition equipment and method
CN112629412A (en) * 2019-12-12 2021-04-09 天目爱视(北京)科技有限公司 Rotary type 3D intelligent vision equipment

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111076674B (en) * 2019-12-12 2020-11-17 天目爱视(北京)科技有限公司 Closely target object 3D collection equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1740842A (en) * 2004-08-27 2006-03-01 株式会社林创研 Lighting optical device
US20130016190A1 (en) * 2011-07-14 2013-01-17 Faro Technologies, Inc. Grating-based scanner with phase and pitch adjustment
US20130016362A1 (en) * 2011-07-13 2013-01-17 Faro Technologies, Inc. Device and method using a spatial light modulator to find 3d coordinates of an object
CN104567718A (en) * 2015-01-08 2015-04-29 四川大学 Integration imaging micro-image array generating method based on multi-angle projection PMP
CN107065426A (en) * 2017-03-09 2017-08-18 龙岩学院 Solid figure harvester and method
CN107202549A (en) * 2017-05-27 2017-09-26 南开大学 A kind of high precision three-dimensional measurement method and measuring instrument
CN107894690A (en) * 2017-10-27 2018-04-10 上海理鑫光学科技有限公司 A kind of optical projection system in structural light three-dimensional measurement

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103021014B (en) * 2012-11-29 2015-01-21 长春理工大学 Method for increasing reconstruction resolution ratio of computer integrated image
CN104864849B (en) * 2014-02-24 2017-12-26 电信科学技术研究院 Vision navigation method and device and robot
WO2016045100A1 (en) * 2014-09-26 2016-03-31 深圳市泛彩溢实业有限公司 Holographic three-dimensional information collecting and restoring device and method
KR102297488B1 (en) * 2015-02-17 2021-09-02 삼성전자주식회사 Light field camera
CN106067162B (en) * 2016-06-03 2019-03-01 西安电子科技大学 The acquisition of integration imaging super-resolution micro unit pattern matrix and reconstructing method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Olav Solgaard: "Photonic Microsystems", 30 June 2011, Southeast University Press *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112629412A (en) * 2019-12-12 2021-04-09 天目爱视(北京)科技有限公司 Rotary type 3D intelligent vision equipment
WO2021115302A1 (en) * 2019-12-12 2021-06-17 左忠斌 3d intelligent visual device
CN112254678A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Indoor 3D information acquisition equipment and method
CN112254678B (en) * 2020-10-15 2022-08-12 天目爱视(北京)科技有限公司 Indoor 3D information acquisition equipment and method

Also Published As

Publication number Publication date
CN109084679B (en) 2019-08-06
CN109084679A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN110580732B (en) 3D information acquisition device
CN110543871B (en) Point cloud-based 3D comparison measurement method
TWI692967B (en) Image device
CN110567370B (en) Variable-focus self-adaptive 3D information acquisition method
CN108432230B (en) Imaging device and method for displaying an image of a scene
CN109146961B (en) 3D measures and acquisition device based on virtual matrix
US9057942B2 (en) Single camera for stereoscopic 3-D capture
JP7462890B2 (en) Method and system for calibrating a plenoptic camera system - Patents.com
CN110553585A (en) 3D information acquisition device based on optical array
CN107991838A (en) Self-adaptation three-dimensional stereo imaging system
CN110827196A (en) Device capable of simultaneously acquiring 3D information of multiple regions of target object
CN109146949B (en) A kind of 3D measurement and information acquisition device based on video data
CN209279885U (en) Image capture device, 3D information comparison and mating object generating means
Kagawa et al. A three‐dimensional multifunctional compound‐eye endoscopic system with extended depth of field
CN211178345U (en) Three-dimensional acquisition equipment
CN208653473U (en) Image capture device, 3D information comparison device, mating object generating means
JP2000184398A (en) Virtual image stereoscopic synthesis device, virtual image stereoscopic synthesis method, game machine and recording medium
CN109394170B (en) A kind of iris information measuring system of no-reflection
JP6367803B2 (en) Method for the description of object points in object space and combinations for its implementation
CN102466961B (en) Method for synthesizing stereoscopic image with long focal length and stereoscopic imaging system
CN110986771B (en) Concave 3D information acquisition and measurement equipment based on optical fiber bundle
CN211085115U (en) Standardized biological three-dimensional information acquisition device
CN209279884U (en) Image capture device, 3D information comparison device and mating object generating means
WO2020244273A1 (en) Dual camera three-dimensional stereoscopic imaging system and processing method
CN201965319U (en) Stereo lens and digital camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination