CN110567370B - Variable-focus self-adaptive 3D information acquisition method - Google Patents

Variable-focus self-adaptive 3D information acquisition method

Info

Publication number
CN110567370B
CN110567370B (application CN201910862046.7A)
Authority
CN
China
Prior art keywords
image acquisition
target object
distance
acquisition device
adaptive
Prior art date
Legal status
Active
Application number
CN201910862046.7A
Other languages
Chinese (zh)
Other versions
CN110567370A
Inventor
左忠斌 (Zuo Zhongbin)
左达宇 (Zuo Dayu)
Current Assignee
Tianmu Aishi Beijing Technology Co Ltd
Original Assignee
Tianmu Aishi Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianmu Aishi Beijing Technology Co Ltd filed Critical Tianmu Aishi Beijing Technology Co Ltd
Priority to CN201910862046.7A
Publication of CN110567370A
Application granted
Publication of CN110567370B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B 11/02 — Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B 11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 — Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 — Image signal generators
    • H04N 13/204 — Image signal generators using stereoscopic image cameras
    • H04N 13/207 — Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N 13/218 — Image signal generators using stereoscopic image cameras using a single 2D image sensor using spatial multiplexing
    • H04N 13/296 — Synchronisation thereof; Control thereof

Abstract

The invention provides a variable-focus self-adaptive 3D information acquisition method. An image acquisition device acquires a group of images of a target object through relative motion with the target object. During the relative movement, an adaptive unit adjusts the image acquisition device according to its distance from the target object, so that when the image acquisition device is at a position A1, its distance from the directly facing region of the target object is H1 and the lens focal length is F1, while after the relative movement, its distance from another directly facing region is H2 and the lens focal length is F2. A processing unit then obtains 3D information of the object from a plurality of images in the group. The method is the first in the field of 3D measurement to propose moving the camera while re-zooming and autofocusing, and it solves the poor 3D synthesis effect caused by the one-time focusing of the prior art.

Description

Variable-focus self-adaptive 3D information acquisition method
Technical Field
The invention relates to the technical field of 3D measurement of objects, in particular to 3D acquisition of a target object and image-based measurement of geometric dimensions such as length.
Background
At present, 3D acquisition/measurement equipment is mainly aimed at a specific object: once the object is determined, a camera rotates around it to acquire a plurality of pictures, a 3D image of the object is synthesized from them, and the object's length, contour, and so on are measured from the 3D point cloud data.
However, such devices do not account for the camera focusing inaccuracy caused by the concave-convex contour of the object, so 3D images may fail to synthesize and 3D measurements become inaccurate. Existing devices require the camera to focus on the object before measurement/acquisition and then take every picture at that focal length during the entire rotation. This is acceptable if the object is a standard cylinder and the camera rotates around the cylinder's center. In general, however, if the region of the object facing the camera is at distance H at the initial focusing position, then during rotation that region is at distance H(x), where x is the camera position. Because the contour of the object is not circular, or because the rotation center of the camera is difficult to make coincide exactly with the center of the object, H(x) rarely equals H. Focusing therefore becomes inaccurate during rotation, so the 3D images cannot be synthesized or are synthesized with large errors, and the 3D measurement is inaccurate. The main reason this technical problem has not been raised in the prior art is that 3D models intended only for presentation can be synthesized in the usual way; their accuracy is simply too low for measurement. That limitation in use hid the problem: the effect of an irregular object contour on 3D synthesis and measurement accuracy has not been mentioned, and no attempt has been made to solve it.
Disclosure of Invention
In view of the above, the present invention provides an adaptive 3D measurement and information acquisition scheme that overcomes, or at least partially solves, the above-mentioned problems.
The invention provides a variable-focus self-adaptive 3D information acquisition method, comprising the following steps: an image acquisition device moves relative to a target object to acquire a group of images of the target object;
during the relative movement, an adaptive unit adjusts the image acquisition device according to the distance between the image acquisition device and the target object, so that: when the image acquisition device is at a position A1, its distance from the directly facing region of the target object is H1 and the lens focal length is F1; after the relative movement, its distance from another directly facing region is H2 and the lens focal length is F2;
the processing unit obtains 3D information of the object according to a plurality of images in the group of images.
Optionally, the adaptive unit is a driving device for driving the image acquisition device to move.
Optionally, the driving device drives the image acquisition device so that the distance between the image acquisition device and the target object remains constant during the relative movement.
Optionally, the driving device is one or more of a displacement device and a rotation device.
Optionally, the adaptive unit is an optical adjustment device capable of adjusting the optical path in real time during the relative movement.
Optionally, the optical adjustment device is an automatic zoom device capable of zooming in real time during the relative movement.
Optionally, an automatic focusing device capable of focusing in real time is further included.
Optionally, the device further comprises a distance measuring device.
Optionally, during the relative movement, any two adjacent positions of the image acquisition device at which images are acquired satisfy at least the following conditions:
H*(1 - cos b) = L*sin²b;
a = m*b;
0 < m < 0.8;
where L is the distance from the image acquisition device to the target object, H is the actual size of the target object in the acquired image, b is the angle solved from the first equation, a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
Optionally, during the relative movement, any three adjacent positions at which the image acquisition device acquires images satisfy the condition that the three images acquired at those positions all contain at least part of the same region of the target object.
Invention and technical effects
1. It is noticed and proposed for the first time that, for an object with an irregular contour, photographing at a single focal length throughout the relative motion of the camera degrades the 3D synthesis effect and the accuracy of measurement and comparison.
2. To solve the inaccurate focusing caused by the irregularly varying camera-to-object distance of an irregular contour, a solution of moving the camera while re-zooming and autofocusing is provided, proposed for the first time in the field of 3D acquisition and measurement.
3. Real-time focusing during camera movement is proposed for the first time, solving the poor 3D synthesis effect caused by the one-time focusing of the prior art. To cooperate with real-time focusing, the rotation mode of the camera is also optimized: the camera stops at each angle suitable for photographing, waits for focusing, and resumes rotating after the photograph is taken.
4. An optimized focusing strategy is adopted that guarantees focusing speed, avoiding the reduced acquisition speed and prolonged measurement time that real-time focusing would otherwise cause. This differs from existing focusing strategies, which face no demanding real-time requirement.
5. The prior art improves the synthesis effect mainly through hardware upgrades and strict calibration; nothing in it suggests ensuring the effect and stability of 3D synthesis by changing the angular position of the camera when shooting, let alone gives specific optimized conditions. The invention is the first to propose optimizing the camera's shooting positions to ensure the effect and stability of 3D synthesis, and, through repeated tests, it proposes the optimal empirical condition the camera positions should satisfy, greatly improving the 3D synthesis effect and the stability of the synthesized image.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a schematic view of a 3D measuring device/3D information acquisition device according to an embodiment of the invention;
description of reference numerals:
the device comprises a track 101, an image acquisition device 201, a processing unit 100, a rotating device 102 and a displacement device 202.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example 1
To solve the above technical problem, an embodiment of the present invention provides an adaptive 3D information acquisition/measurement apparatus. As shown in fig. 1, it specifically includes: a track 101, an image acquisition device 201, a processing unit 100, a rotating device 102, and a displacement device 202.
The image acquisition device 201 is mounted on the displacement device 202, the displacement device 202 is mounted on the rotating device 102, and the rotating device 102 can move along the track 101, so as to drive the image acquisition device 201 to rotate around the target object.
The image acquisition device 201 may be a camera, a video camera, or a CCD or CMOS sensor module, and may be fitted with various lenses as required, such as an infrared lens, a visible-light lens, a telephoto lens, a wide-angle lens, or a macro lens.
The following explanation takes a camera as the image acquisition device:
The camera must take clear pictures, which requires accurate focusing on the object, but in the conventional technique focusing is performed only at the start of rotation. If the region of the object facing the camera is at distance H at the initial focusing position, then during rotation that region is at distance H(x), where x is the camera position. Because the contour of the object is not circular, or because the rotation center of the camera is difficult to make coincide exactly with the center of the object, H(x) rarely equals H; focusing becomes inaccurate during rotation, the 3D images cannot be synthesized or are synthesized with large errors, and the 3D measurement is inaccurate.
Therefore, the displacement device 202 moves the image acquisition device 201 along its radial direction, toward or away from the target object, to keep the image acquisition device 201 accurately focused throughout the rotation: the displacement device 202 drives the image acquisition device 201 so that its distance from the target object remains unchanged during the relative movement. In this way, focusing accuracy is ensured over the whole rotation even when the lens of the image acquisition device 201 is a fixed-focus lens.
A distance measuring device 203 is also included to measure the real-time distance from the image acquisition device 201 to the object. After the first focusing is completed, the distance measuring device 203 measures the distance H from the image acquisition device 201 to the target object; once rotation starts, it measures the real-time distance H(x) and transmits both H and H(x) to the processing unit 100. When the processing unit 100 determines that H(x) > H, it controls the displacement device 202 to move radially toward the object by the distance H(x) − H; when H(x) < H, it controls the displacement device 202 to move radially away from the object by the distance H − H(x); and when H(x) = H, it does not operate.
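A minimal sketch of this distance-keeping rule (the function name and sign convention are assumptions, not from the patent):

```python
def radial_correction(H, H_x):
    """Return the signed radial move for the displacement device
    (positive = toward the object), per the rule above: keep the
    camera-object distance equal to the initial focused distance H."""
    if H_x > H:
        return H_x - H      # drifted too far: move toward the object
    elif H_x < H:
        return -(H - H_x)   # drifted too close: move away from the object
    return 0.0              # on target: no motion
```

In use, the processing unit would call this with each real-time measurement H(x) and forward the result to the displacement device's servo.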
The distance measuring device 203 may be a laser distance meter, an image distance meter, or the like. Which may be a single module or may be part of the image capturing device 201.
Preferably: although moving the image acquisition device 201 can keep its distance from the object constant, the stepping motor that moves it has a minimum step size, which limits the resolution of the movement, so the distance cannot be kept strictly constant; movement inaccuracy caused by device aging has the same effect. Therefore, to avoid the distance-keeping error due to the mechanical structure, the rotation may be stopped at each position where the image acquisition device 201 is to photograph, and automatic focusing performed again before the shot.
However, stopping so frequently, with a long re-focusing time at each stop, harms the real-time performance of 3D information acquisition; in extreme cases the long shooting time lets the object move or deform, and the 3D synthesis fails. The autofocus speed therefore needs optimizing. During rotation, the distance measuring device 203 measures the distance (object distance) h(x) from the camera 201 to the object in real time and sends the measurement to the processing unit 100; the processing unit 100 looks up the corresponding focal length in an object distance-focal length table, sends a focusing signal to the camera 201, and controls the camera's ultrasonic motor to drive the lens for rapid focusing. This is also one of the points of the invention. Besides this distance measurement method, focusing can also use image contrast comparison, as described in Example 2.
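The table lookup can be sketched as linear interpolation over a pre-measured calibration table (the table values below are purely illustrative, not from the patent):

```python
import bisect

# Hypothetical object-distance -> lens-focal-length calibration table,
# measured in advance as described above; values are illustrative only.
DIST_MM = [300, 400, 500, 700, 1000]
FOCAL_MM = [60.0, 55.0, 52.0, 50.0, 48.5]

def focal_for_distance(h_x):
    """Linearly interpolate the focal length for a measured object
    distance h(x); clamp outside the calibrated range."""
    if h_x <= DIST_MM[0]:
        return FOCAL_MM[0]
    if h_x >= DIST_MM[-1]:
        return FOCAL_MM[-1]
    i = bisect.bisect_right(DIST_MM, h_x)
    t = (h_x - DIST_MM[i - 1]) / (DIST_MM[i] - DIST_MM[i - 1])
    return FOCAL_MM[i - 1] + t * (FOCAL_MM[i] - FOCAL_MM[i - 1])
```

The interpolated value is what the processing unit would send as the focusing signal, avoiding a slow full-range focus search at every stop.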
The displacement device 202 may be a one-, two-, three-, four-, five-, or six-axis motion platform. The processing unit 100 sends a driving signal to its servo motor according to the distance signal of the distance measuring device 203, so that the displacement device 202 drives the image acquisition device 201 to move.
Example 2
To solve the above technical problem, an embodiment of the present invention provides an adaptive 3D information acquisition apparatus. It specifically includes: the track 101, the image acquisition device 201, the processing unit 100, and the rotating device 102. Since the object distance from the image acquisition device 201 to the facing region of the target object changes during rotation, accurate focusing during rotation is difficult. Besides adaptively changing the object distance as in Embodiment 1, the focal length of the image acquisition device 201 can be adjusted so that focusing is accurate at the new object distance.
At a position A1, the distance measuring device 203 measures the distance H1 between the image acquisition device 201 and the directly facing region of the target object, with lens focal length F1. When the image acquisition device 201 is at another position A2, or the target object has rotated by some angle relative to it, the distance measuring device 203 measures the distance H2 to another directly facing region; the processing unit 100 receives this measurement, determines from a pre-measured object distance-lens focal length table the focal length F2 the lens should have, and controls the lens motor to adjust the focal length to F2.
Besides the distance measurement mode, a contrast mode can be used: at position A2, the image acquisition device 201 captures an image with contrast Q1; the processing unit 100 then controls the lens motor to increase or decrease the focal length until a contrast Q2 is reached at which any further increase or decrease reduces the contrast, i.e. Q2 is the maximum. The corresponding focal length F2 is the best focal length for photographing that region clearly.
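The contrast-maximization search can be sketched as a simple hill climb over focal length (a hypothetical routine; in a real camera, `contrast` would re-capture and score an image at each trial focal length):

```python
def autofocus_by_contrast(contrast, f0, step=1.0, max_iter=100):
    """Hill-climb the lens focal length until contrast peaks: step in the
    improving direction, and reverse and halve the step on a decrease.
    `contrast` is a callable returning image contrast at focal length f."""
    f, q = f0, contrast(f0)
    direction = 1.0
    for _ in range(max_iter):
        f_new = f + direction * step
        q_new = contrast(f_new)
        if q_new > q:
            f, q = f_new, q_new          # contrast improved: keep going
        else:
            direction = -direction       # overshot the peak: turn around
            step *= 0.5                  # and refine the search
        if step < 1e-4:
            break
    return f
```

With a single-peaked contrast curve this converges to the focal length at which moving either way reduces contrast, matching the Q2 criterion above.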
Of course, the focal length may also be adjusted according to the change in the size of the target object as captured, so that the proportion of the target object in the field of view of the image acquisition device 201 remains unchanged.
In a special case, the displacement device 202 in embodiment 1 may also be included, in which case, the image acquisition device 201 may obtain a sharp image of the object through the cooperation of the displacement device 202 and the zoom lens.
In addition, an automatic focusing step may be included: automatic focusing is performed after, or during, the above operations, as described in Embodiment 1.
Example 3
Some target objects have concave-convex regions of only a small degree. In that case, the adaptive adjustment of Embodiment 1 or 2 would take a long time, which hinders rapid acquisition and measurement. Instead, during rotation the distance measuring device 203 measures the distance (object distance) h(x) from the camera 201 to the object in real time and sends it to the processing unit 100, which looks up the corresponding focal length in the object distance-focal length table, sends a focusing signal to the camera 201, and controls the camera's ultrasonic motor to drive the lens for rapid focusing. Rapid focusing is thus achieved, and clear pictures from the image acquisition device 201 are ensured, without adjusting the position of the image acquisition device 201 and without large adjustments of the lens focal length. This is also one of the points of the invention. Besides distance measurement, focusing can also use image contrast comparison, as described in Example 2.
The acquisition positions during the relative movement — the positions of the image acquisition device 201 when images of the target object are acquired — satisfy, for any two adjacent positions, at least the following conditions:
H*(1 - cos b) = L*sin²b;
a = m*b;
0 < m < 1.5;
where L is the distance from the image acquisition device 201 to the target object, typically the distance from the image acquisition device 201 in the first position to the directly facing region of the target object.
H is the actual size of the target object in the captured image: the true geometric size of the portion of the object appearing in a picture taken at the first position (not its size in the picture), measured along the direction from the first position to the second position. For example, if the two positions are related by a horizontal displacement, the size is measured along the horizontal transverse direction of the target: if the leftmost point of the target visible in the picture is A and the rightmost point is B, H is the straight-line distance from A to B on the target. It can be calculated from the A-to-B distance in the picture combined with the camera lens focal length, or A and B can be marked on the target and the AB distance measured directly by other means.
a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, b is the angle solved from the first equation, and m is a coefficient.
Because objects differ in size and in surface relief, the value of a cannot be fixed by a strict formula and must be bounded empirically. According to many experiments, m may be up to 1.5, but is preferably within 0.8. Specific experimental data are given in the following table:
target object Value of m Synthetic effect Rate of synthesis
Human head 0.1、0.2、0.3、0.4 Is very good >90%
Human head 0.5、0.6 Good taste >85%
Human head 0.7、0.8 Is better >80%
Human head 0.9、1.0 In general >70%
Human head 1.0、1.1、1.2 In general >60%
Human head 1.2、1.3、1.4、1.5 Are synthesized relevantly >50%
Human head 1.6、1.7 Is difficult to synthesize <40%
After the target object and the image acquisition device 201 are determined, the value of a can be calculated from the above empirical formula, and from a the parameters of the virtual matrix — the positional relationship between matrix points — can be determined.
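As a minimal sketch of this empirical condition (reading the patent's "sin2b" as sin²b; both function names here are hypothetical, not from the patent), the angle step can be computed as:

```python
import math

def max_axis_angle(H, L):
    """Solve H*(1 - cos b) = L*sin^2(b) for the nonzero root b (radians).

    Since sin^2(b) = (1 - cos b)(1 + cos b), dividing out (1 - cos b)
    leaves H = L*(1 + cos b), i.e. cos b = H/L - 1, which has a solution
    only for 0 < H/L < 2."""
    r = H / L
    if not 0 < r < 2:
        raise ValueError("no solution: H/L must lie in (0, 2)")
    return math.acos(r - 1)

def camera_angle(H, L, m):
    """Included angle a = m*b between the optical axes at two adjacent
    acquisition positions (0 < m < 1.5, preferably within 0.8)."""
    return m * max_axis_angle(H, L)
```

For example, H = 1.5 and L = 1.0 give cos b = 0.5, i.e. b = 60°; with m = 0.5 the optical axes of adjacent positions would then be set 30° apart.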
In the general case the virtual matrix is one-dimensional, e.g. a row of matrix points (acquisition positions) arranged horizontally. For larger target objects, however, a two-dimensional matrix is required, and two vertically adjacent positions must also satisfy the above condition on a.
In some cases the value of a is still not easy to determine from the empirical formula, and the matrix parameters must be adjusted experimentally, as follows. Calculate a predicted matrix parameter a from the formula and move the camera to the corresponding matrix points: the camera takes picture P1 at position W1, moves to position W2, and takes picture P2. Compare whether P1 and P2 contain portions representing the same region of the target object, i.e. whether P1 ∩ P2 is non-empty (for example, both contain the corner of the same human eye, though shot from different angles). If not, readjust the value of a, move to a new position W2', and repeat the comparison. If P1 ∩ P2 is non-empty, move the camera on to position W3 according to the (adjusted or unadjusted) value of a, take picture P3, and again compare whether P1, P2, and P3 all contain a portion representing the same region of the target, i.e. whether P1 ∩ P2 ∩ P3 is non-empty. A 3D model is then synthesized from the pictures and the synthesis effect is tested against the requirements of 3D information acquisition and measurement. That is, the structure of the matrix is determined by the positions of the image acquisition device 201 when capturing the images, and any three adjacent positions satisfy the condition that the three images captured there all contain at least part of the same region of the target object.
3D image synthesis method
Rotation of the image acquisition device 201 and shutter release can proceed simultaneously, i.e. the shutter is triggered without interrupting the rotation.
Alternatively, the shutter can be triggered after the image acquisition device 201 rotates to each position, with rotation resuming after the photograph is taken — i.e. the rotation is repeatedly interrupted for photographing.
The processing unit 100 receives a group of images sent by the image capturing device 201, and screens out a plurality of images from the group of images.
A 3D image of the target object (e.g., a face) is synthesized using the plurality of images. The synthesis may use stitching based on the feature points of adjacent images, or other methods.
The image splicing method comprises the following steps:
(1) Process the plurality of images and extract the feature points of each. The features of the feature points may be described using Scale-Invariant Feature Transform (SIFT) descriptors. A SIFT descriptor is a 128-dimensional vector characterizing a feature point's neighborhood in direction and scale, which markedly improves the accuracy of feature description, and the descriptor is spatially independent.
(2) Generate feature point cloud data of the target object's features on the basis of the feature points extracted from the plurality of images. Specifically:
(2-1) Match the feature points across the plurality of images according to the extracted features of each image's feature points, and establish a matched feature point data set for the target object;
(2-2) Calculate the spatial position of the camera relative to the feature points at each acquisition position, according to the camera's optical information and its different positions when the images were acquired, and from these relative positions calculate the spatial depth information of the feature points in the plurality of images. The calculation may employ bundle adjustment.
The spatial depth information calculated for a feature point may include spatial position information and color information: the X-, Y-, and Z-axis coordinates of the feature point in space and the values of the R, G, B, and Alpha channels of its color. The generated feature point cloud data thus contains both the spatial position and the color of each feature point, in a format such as:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn, Yn, and Zn are the X-, Y-, and Z-axis coordinates of the n-th feature point in space, and Rn, Gn, Bn, and An are the values of the R, G, B, and Alpha channels of its color information.
And (2-3) generating feature point cloud data of the features of the target object according to the feature point data set matched across the plurality of images and the spatial depth information of the feature points.
And (2-4) constructing a 3D model of the target object from the feature point cloud data, thereby realizing acquisition of the target object's point cloud data.
And (2-5) attaching the acquired color and texture of the target object to the point cloud data to form a 3D image of the target object.
The 3D image may be synthesized using all of the images in a group, or only the higher-quality images selected from the group.
The synthesis method described above is only one example and is not limiting; any method that generates a three-dimensional image from multiple multi-angle two-dimensional images may be used.
Example 4
Some objects have regions with large depth differences — for example, a girl with a braid, whose head protrudes noticeably relative to it. Shooting such an object directly places high demands on the camera's depth of field (a problem first noticed by the applicant). In this case, the processing unit 100 controls the displacement device 202: when a region of the target object protrudes toward the camera, the displacement device 202 drives the camera away from the target object; when a region of the target object is recessed relative to the camera, the displacement device 202 drives the camera closer to the target object, so that the distance between the camera and the different regions of the target remains essentially constant.
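The displacement control just described can be sketched as a simple correction rule. This is an illustrative sketch, not the patent's implementation; the function name, units, and tolerance are assumptions:

```python
def distance_correction(measured, target, tolerance=1.0):
    """Displacement the displacement device 202 should apply to the camera.

    Positive values mean 'move away from the target object'.
    When a region protrudes (measured < target) the camera backs away;
    when a region is recessed (measured > target) the camera moves closer,
    keeping the camera-to-surface distance essentially constant.
    """
    error = target - measured
    if abs(error) <= tolerance:
        return 0.0  # already close enough, no correction needed
    return error
```

In a real system this correction would be issued continuously as the camera rotates around the object, using the distance measuring device's real-time readings.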
In existing systems, the camera can only be focused at the initial stage and then takes the whole series of photographs at a fixed focal length throughout the rotation. As a result, when the target object is very uneven, the acquired images may be unclear. Existing systems neither recognize nor attempt to solve the problem of 3D acquisition of objects with large concave and convex regions. The main reason is that existing cameras capable of automatic optical focusing all focus before shooting: autofocus is triggered by pressing the shutter key, and it is difficult for such a camera to rotate and focus at the same time, which is determined by the camera's inherent control method. Existing cameras are designed for shooting two-dimensional images; they do not need to focus frequently, autofocus is bound to the shutter key, and no protocol and/or interface allows external software to control focusing. In addition, because the target object is uncertain, existing focusing requires a complete focusing strategy, which is very slow, degrades the user experience, and is therefore unsuitable for 3D acquisition. In the present invention, the distance measuring device measures the distance h(x) from the camera to the object in real time and sends the measurement to the processing unit 100; the processing unit 100 looks up the corresponding focal length value in an object-distance-to-focal-length table, sends a focusing signal to the camera, and controls the camera's ultrasonic motor to drive the lens for rapid focusing. In this way, rapid focusing can be achieved without adjusting the position of the image acquisition device 201 and without large adjustments of the lens focal length, ensuring that the pictures taken by the image acquisition device 201 are clear. This is also one of the inventive points of the present invention.
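The table lookup described above can be sketched as follows. The table entries are illustrative numbers, not values from the patent, and linear interpolation between entries is an assumption about how the processing unit 100 might handle distances between table rows:

```python
import bisect

# Hypothetical object-distance -> focus-setpoint table (illustrative values).
DISTANCE_TABLE = [200.0, 400.0, 800.0, 1600.0]  # measured distance h(x), mm
FOCUS_TABLE    = [12.0,  10.5,   9.8,    9.4]   # lens drive setpoints

def focus_for_distance(h):
    """Look up the focus setpoint for a measured distance h, clamping at the
    table ends and interpolating linearly between entries."""
    if h <= DISTANCE_TABLE[0]:
        return FOCUS_TABLE[0]
    if h >= DISTANCE_TABLE[-1]:
        return FOCUS_TABLE[-1]
    i = bisect.bisect_right(DISTANCE_TABLE, h)
    d0, d1 = DISTANCE_TABLE[i - 1], DISTANCE_TABLE[i]
    f0, f1 = FOCUS_TABLE[i - 1], FOCUS_TABLE[i]
    return f0 + (f1 - f0) * (h - d0) / (d1 - d0)
```

Because the lookup avoids any search over focus positions, it can run at the full rate of the distance measurements while the camera rotates.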
Of course, besides the distance measurement method, focusing may also be performed by image contrast comparison. External software directly sends a focus-start signal to the processing unit 100, which starts its internal focusing program and thereby focuses the camera lens, so that the camera can automatically focus many times during rotation and the captured images remain clear. This is also one of the inventive points of the present invention. At the same time, because the target object is relatively well determined, the focusing strategy can be optimized, making focusing faster and able to meet the requirements of 3D acquisition.
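One common form of contrast-comparison focusing scores each candidate focus position by image sharpness and keeps the best. The sketch below uses a Laplacian-variance contrast metric as an assumed example; `capture_at` is a hypothetical callback that moves the lens and returns a grayscale frame:

```python
import numpy as np

def sharpness(image):
    """Contrast metric: variance of a simple Laplacian response over the
    image interior. Higher values indicate sharper focus."""
    lap = (-4 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    return float(lap.var())

def contrast_autofocus(capture_at, positions):
    """Return the focus position whose captured image maximizes contrast.

    capture_at(p) -- hypothetical callback: drive the lens to position p
    and return the resulting frame as a 2D numpy array.
    """
    return max(positions, key=lambda p: sharpness(capture_at(p)))
```

Restricting `positions` to a narrow band around the last known focus, as the relatively determined target object permits, is what makes this strategy fast enough for repeated focusing during rotation.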
The terms target, target object, and object all refer to the object for which 3D information is to be acquired.
The target object in the present invention may be a single solid object or a combination of multiple objects.
The 3D information of the target object includes a 3D image, a 3D point cloud, a 3D mesh, local 3D features, 3D dimensions, and any other parameter carrying the object's 3D features.
In the present invention, 3D and three-dimensional information mean information with XYZ coordinates, in particular depth information, which is essentially different from mere two-dimensional plane information. They are also fundamentally different from definitions described as 3D, panoramic, holographic, or three-dimensional that in fact contain only two-dimensional information and, in particular, no depth information.
The capture area in the present invention refers to a range in which an image capture device (e.g., a camera) can capture an image.
The image acquisition device may be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device with an image acquisition function.
The 3D information of multiple regions of the target obtained in the above embodiments can be used for comparison, for example for identity recognition. First, the scheme of the invention is used to acquire 3D information of a person's face and iris, which is stored on a server as standard data. When identity authentication is needed, for example for payment or door opening, the 3D acquisition device can acquire the 3D information of the face and iris again and compare it with the standard data; if the comparison succeeds, the next action is allowed. It is understood that such comparison can also be used to authenticate fixed assets such as antiques and artworks: 3D information of multiple regions of the item is first acquired as standard data, and when authentication is needed, the 3D information of those regions is acquired again and compared with the standard data to determine authenticity.
The 3D information of multiple regions of the target object obtained in the above embodiments can be used to design, produce, and manufacture accessories matched to the target object. For example, with 3D data of a person's head, a better-fitting hat can be designed and manufactured; with head data and 3D eye data, suitable glasses can be designed and manufactured.
The 3D information of the object obtained in the above embodiment can be used to measure the geometric size and contour of the object.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an apparatus in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, et cetera does not indicate any ordering; these words may be interpreted as names.
Thus, it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been illustrated and described in detail herein, many other variations or modifications consistent with the principles of the invention may be directly determined or derived from the disclosure of the present invention without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and interpreted to cover all such other variations or modifications.

Claims (9)

1. A variable-focus self-adaptive 3D information acquisition method, characterized by comprising:
The image acquisition device moves relative to the target object to acquire a group of images of the target object;
the object distance between the image acquisition device and different regions of the target object changes during the movement, such that in the course of the relative movement the self-adaptive unit adjusts the image acquisition device according to its distance from the target object: when the image acquisition device is at a position A1, its distance from the facing region of the target object is H1 and the lens focal length is F1; after the relative movement, its distance from another facing region of the target object is H2 and the lens focal length is F2;
the processing unit obtains 3D information of the target object according to a plurality of images in the group of images;
during the relative movement, any two adjacent positions of the image acquisition device at which images are acquired at least satisfy the following conditions:
H*(1-cosb)=L*sin2b;
a=m*b;
0<m<0.8;
wherein L is the distance between the image acquisition device and the target object, H is the actual size of the target object in the acquired image, a is the included angle between the optical axes of the image acquisition device at the two adjacent positions, and m is a coefficient.
2. The variable focus adaptive 3D information acquisition method of claim 1, wherein: the self-adaptive unit is a driving device for driving the image acquisition device to move.
3. The variable focus adaptive 3D information acquisition method of claim 2, wherein: the driving device drives the image acquisition device to enable the distance between the image acquisition device and the target object to be constant in the relative movement process.
4. The variable focus adaptive 3D information acquisition method of claim 2, wherein: the driving device is one or a combination of a displacement device and a rotating device.
5. The variable focus adaptive 3D information acquisition method of claim 1, wherein: the self-adaptive unit is an optical adjusting device capable of adjusting the optical path in real time in the relative movement process.
6. The variable focus adaptive 3D information acquisition method of claim 5, wherein: the optical adjusting device is an automatic zooming device capable of zooming in real time in the relative movement process.
7. The variable focus adaptive 3D information acquisition method according to any one of claims 1 to 6, characterized in that: the method further comprises an automatic focusing device capable of focusing in real time.
8. The variable focus adaptive 3D information acquisition method of claim 7, wherein: the method further comprises a distance measuring device.
9. The variable focus adaptive 3D information acquisition method of claim 1, wherein: during the relative movement, any three adjacent positions at which the image acquisition device acquires images satisfy the condition that the three images acquired at those positions each contain at least part of the same region of the target object.
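To illustrate the adjacent-position condition of claim 1 numerically, the sketch below solves it for the optical-axis angle a. It reads the transliterated `sin2b` as sin²(b) — an assumption, since the formula as rendered is ambiguous — under which H*(1 - cos b) = L*sin²(b) reduces to cos b = H/L - 1:

```python
import math

def optical_axis_angle(H, L, m=0.5):
    """Optical-axis angle a between adjacent acquisition positions from
    claim 1's condition, interpreting `sin2b` as sin^2(b) (an assumption):

        H*(1 - cos b) = L*sin^2(b)  =>  cos b = H/L - 1,   a = m*b

    H -- actual size of the target object in the acquired image
    L -- distance between the image acquisition device and the target
    m -- coefficient, required by claim 1 to satisfy 0 < m < 0.8
    Returns a in radians.
    """
    assert 0 < m < 0.8, "claim 1 requires 0 < m < 0.8"
    c = H / L - 1.0
    assert -1.0 <= c <= 1.0, "no solution: H must satisfy 0 <= H <= 2*L"
    b = math.acos(c)
    return m * b
```

This is only an illustrative reading of the claim's formula, not an authoritative statement of its intended algebra.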
CN201910862046.7A 2018-09-05 2018-09-05 Variable-focus self-adaptive 3D information acquisition method Active CN110567370B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910862046.7A CN110567370B (en) 2018-09-05 2018-09-05 Variable-focus self-adaptive 3D information acquisition method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811031412.6A CN109141240B (en) 2018-09-05 2018-09-05 A kind of measurement of adaptive 3 D and information acquisition device
CN201910862046.7A CN110567370B (en) 2018-09-05 2018-09-05 Variable-focus self-adaptive 3D information acquisition method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811031412.6A Division CN109141240B (en) 2018-09-05 2018-09-05 A kind of measurement of adaptive 3 D and information acquisition device

Publications (2)

Publication Number Publication Date
CN110567370A CN110567370A (en) 2019-12-13
CN110567370B true CN110567370B (en) 2021-11-16

Family

ID=64827069

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811031412.6A Active CN109141240B (en) 2018-09-05 2018-09-05 A kind of measurement of adaptive 3 D and information acquisition device
CN201910862046.7A Active CN110567370B (en) 2018-09-05 2018-09-05 Variable-focus self-adaptive 3D information acquisition method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201811031412.6A Active CN109141240B (en) 2018-09-05 2018-09-05 A kind of measurement of adaptive 3 D and information acquisition device

Country Status (1)

Country Link
CN (2) CN109141240B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109819169A (en) * 2019-02-13 2019-05-28 上海闻泰信息技术有限公司 Panorama shooting method, device, equipment and medium
CN111649690A (en) * 2019-12-12 2020-09-11 天目爱视(北京)科技有限公司 Handheld 3D information acquisition equipment and method
WO2021115297A1 (en) * 2019-12-12 2021-06-17 左忠斌 3d information collection apparatus and method
CN110986770B (en) * 2019-12-12 2020-11-17 天目爱视(北京)科技有限公司 Camera used in 3D acquisition system and camera selection method
CN111207690B (en) * 2020-02-17 2021-03-12 天目爱视(北京)科技有限公司 Adjustable iris 3D information acquisition measuring equipment
CN111445570B (en) * 2020-03-09 2021-04-27 天目爱视(北京)科技有限公司 Customized garment design production equipment and method
CN112465960B (en) * 2020-12-18 2022-05-20 天目爱视(北京)科技有限公司 Size calibration device and method for three-dimensional model
CN114689014B (en) * 2022-05-31 2022-09-02 江西省医学科学院 Monocular camera focusing and ranging device, monocular camera focusing and ranging method, storage medium and computer

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5133601A (en) * 1991-06-12 1992-07-28 Wyko Corporation Rough surface profiler and method
EP0640829A2 (en) * 1987-08-12 1995-03-01 Olympus Optical Co., Ltd. Scanning probe microscope
CN1188904A (en) * 1996-12-23 1998-07-29 三星航空产业株式会社 Zooming method and apparatus for camera
CN1278610A (en) * 1999-06-18 2001-01-03 奥林巴斯光学工业株式会社 Camera with automatic focus-regulating device
CN101183206A (en) * 2006-11-13 2008-05-21 华晶科技股份有限公司 Method for calculating distance and actuate size of shot object
CN101672620A (en) * 2008-09-08 2010-03-17 鸿富锦精密工业(深圳)有限公司 Electronic device and method for measuring size of object
CN101865662A (en) * 2010-02-05 2010-10-20 陆金桂 New method for measuring screw pitch of propeller blades
CN104330882A (en) * 2014-11-27 2015-02-04 中国航空工业集团公司洛阳电光设备研究所 Self-adaption zooming system and self-adaption zooming method
CN107277359A (en) * 2017-07-13 2017-10-20 深圳市魔眼科技有限公司 Method, device, mobile terminal and the storage medium of adaptive zoom in 3D scannings

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03217806A (en) * 1990-01-23 1991-09-25 Matsushita Electric Ind Co Ltd Lens position detector
JPWO2003034713A1 (en) * 2001-10-15 2005-02-10 浜松ホトニクス株式会社 Swimmer imaging device
JP4979928B2 (en) * 2005-11-28 2012-07-18 株式会社トプコン Three-dimensional shape calculation device and three-dimensional shape calculation method
JP4702072B2 (en) * 2006-01-20 2011-06-15 カシオ計算機株式会社 Projection device, distance measurement elevation angle control method and program for projection device
SG177157A1 (en) * 2009-06-16 2012-01-30 Intel Corp Camera applications in a handheld device
CN101839700A (en) * 2010-03-29 2010-09-22 重庆建设工业(集团)有限责任公司 Non-contact image measuring system
CN103973957B (en) * 2013-01-29 2018-07-06 上海八运水科技发展有限公司 Binocular 3D automatic focusing system for camera and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a zoom optical system for automatic focal-length measurement on an optical bench; Cai Jinhao et al.; Applied Optics (《应用光学》); 2016-05-31; Vol. 37, No. 3; pp. 359-364 *

Also Published As

Publication number Publication date
CN109141240A (en) 2019-01-04
CN110567370A (en) 2019-12-13
CN109141240B (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110567370B (en) Variable-focus self-adaptive 3D information acquisition method
CN109218702B (en) Camera rotation type 3D measurement and information acquisition device
CN109146961B (en) 3D measures and acquisition device based on virtual matrix
CN110543871B (en) Point cloud-based 3D comparison measurement method
CN110567371B (en) Illumination control system for 3D information acquisition
CN208653401U (en) Adapting to image acquires equipment, 3D information comparison device, mating object generating means
CN110827196A (en) Device capable of simultaneously acquiring 3D information of multiple regions of target object
CN109394168B (en) A kind of iris information measuring system based on light control
CN110580732A (en) Foot 3D information acquisition device
CN111060023A (en) High-precision 3D information acquisition equipment and method
CN209279885U (en) Image capture device, 3D information comparison and mating object generating means
CN109146949B (en) A kind of 3D measurement and information acquisition device based on video data
CN110986770B (en) Camera used in 3D acquisition system and camera selection method
WO2021115300A1 (en) Intelligent control method for 3d information acquisition
CN208653473U (en) Image capture device, 3D information comparison device, mating object generating means
JP2003179800A (en) Device for generating multi-viewpoint image, image processor, method and computer program
CN109084679B (en) A kind of 3D measurement and acquisition device based on spatial light modulator
CN109394170B (en) A kind of iris information measuring system of no-reflection
WO2021077078A1 (en) System and method for lightfield capture
CN209103318U (en) A kind of iris shape measurement system based on illumination
CN213072921U (en) Multi-region image acquisition equipment, 3D information comparison and matching object generation device
JP5925109B2 (en) Image processing apparatus, control method thereof, and control program
CN208795167U (en) Illumination system for 3D information acquisition system
JP7438706B2 (en) Imaging control device, imaging device, and imaging control method
CN209203221U (en) A kind of iris dimensions measuring system and information acquisition system based on light control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant