CN211178345U - Three-dimensional acquisition equipment - Google Patents


Info

Publication number
CN211178345U
CN211178345U (application CN201922224510.8U)
Authority
CN
China
Prior art keywords
image acquisition
acquisition device
camera
image
synthesis
Prior art date
Legal status
Active
Application number
CN201922224510.8U
Other languages
Chinese (zh)
Inventor
左忠斌
左达宇
Current Assignee
Tianmu Aishi Beijing Technology Co Ltd
Original Assignee
Tianmu Aishi Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianmu Aishi Beijing Technology Co Ltd filed Critical Tianmu Aishi Beijing Technology Co Ltd
Priority to CN201922224510.8U priority Critical patent/CN211178345U/en
Application granted granted Critical
Publication of CN211178345U publication Critical patent/CN211178345U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Studio Devices (AREA)

Abstract

The utility model provides a three-dimensional acquisition apparatus comprising an image acquisition device that captures images of a target object from at least three directions, where the acquisition positions of the image acquisition device satisfy a preset condition. It is proposed for the first time to improve both synthesis speed and synthesis accuracy by having the background plate rotate together with the camera. By optimizing the size of the background plate, the rotation burden is reduced while synthesis speed and synthesis precision are improved.

Description

Three-dimensional acquisition equipment
Technical Field
The utility model relates to the technical field of topography measurement, and in particular to the technical field of 3D topography measurement.
Background
When performing 3D measurements, it is first necessary to acquire 3D information. A currently common method uses machine vision to collect pictures of an object from different angles, then matches and splices these pictures into a 3D model. Pictures at different angles can be collected either by arranging multiple cameras at different angles around the object to be measured, or by rotating a single camera or multiple cameras to shoot from different angles. However, both methods face problems of synthesis speed and synthesis accuracy. Synthesis speed and synthesis precision are, to some extent, contradictory: raising the synthesis speed ultimately lowers the 3D synthesis precision, while raising the 3D synthesis precision requires lowering the synthesis speed and synthesizing more pictures.
In the prior art, improving synthesis speed and synthesis precision simultaneously is generally attempted by optimizing the algorithm. The art has always considered that the solution to the above problem lies in the selection and updating of algorithms, and no method has been proposed so far for improving synthesis speed and synthesis precision simultaneously from any other angle. However, algorithm optimization has currently reached a bottleneck, and until a better theory appears, it cannot deliver improvements in both synthesis speed and synthesis precision.
The prior art has also proposed using empirical formulas involving rotation angle, object size, and object distance to define the camera position, thereby balancing the speed and effect of synthesis. However, practical application shows that: unless a precise angle-measuring device is provided, the user is insensitive to angles and can hardly determine them accurately; the size of the target is difficult to determine accurately, particularly in applications where the target must be replaced frequently, where each measurement brings a large amount of extra work and professional equipment is needed to measure irregular targets accurately. Measurement errors cause camera-position errors, which in turn affect acquisition and synthesis speed and effect; accuracy and speed therefore need further improvement.
Therefore, a device is urgently needed that ① can greatly improve synthesis speed and synthesis precision at the same time, and ② is convenient to operate, requiring no professional equipment and no excessive measurement, so that the camera position can be obtained quickly.
SUMMARY OF THE UTILITY MODEL
In view of the above, the present invention has been made to provide a three-dimensional collecting apparatus that overcomes or at least partially solves the above problems.
The utility model provides a three-dimensional acquisition device,
the image acquisition device is used for acquiring images of the target object in at least three directions;
the acquisition position of the image acquisition device meets the following conditions:
δ = (L × f) / (T × d) < 0.603

wherein L is the straight-line distance between the optical centers at two adjacent image acquisition positions, f is the focal length of the image acquisition device, d is the length or width of the rectangular photosensitive element (CCD) of the image acquisition device, T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis, and δ is the adjustment coefficient.
Optionally, there are a plurality of image capturing devices, each located around the target object.
Optionally, the image acquisition device is disposed on the rotation device.
Optionally, the target is disposed on a rotating device.
Optionally, δ < 0.410.
Optionally, δ < 0.356.
Optionally, the image capturing device is a visible light camera and/or an infrared camera.
Optionally, the image capturing device is a fixed focus camera and/or a zoom camera.
Optionally, the device is connected with an upper computer, a server or a cloud platform through a communication interface.
Optionally, the rotating device is a turntable, a rotating drum, a rotating disc, a swivel arm and/or a guide rail.
Invention and technical effects
1. By optimizing the position of the camera for collecting the picture, the synthesis speed and the synthesis precision can be ensured to be improved simultaneously.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic view of one implementation of the acquisition area moving device as a rotating structure in embodiment 1 of the present invention;
fig. 2 is a schematic view of another implementation of the acquisition area moving device as a rotating structure in embodiment 1 of the present invention;
fig. 3 is a schematic view of the acquisition area moving device as a translation structure in embodiment 2 of the present invention;
fig. 4 is a schematic view of the acquisition area moving device as a random motion structure in embodiment 3 of the present invention;
fig. 5 is a schematic diagram of a multi-camera mode in embodiment 4 of the present invention;
the correspondence of reference numerals to the respective components is as follows:
1 target object, 2 object stage, 3 rotating device, 4 image acquisition device, 5 linear track.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In order to solve the above technical problem, an embodiment of the present invention provides a three-dimensional collecting apparatus, including an image collecting device and a rotating device. The image acquisition device is used for acquiring a group of images of the target object through the relative movement of an acquisition area of the image acquisition device and the target object; and the acquisition area moving device is used for driving the acquisition area of the image acquisition device to generate relative motion with the target object. The collection area is the effective field range of the image collection device.
Example 1: the collecting area moving device is a rotary structure
Referring to fig. 1, an object 1 is fixed on a stage 2, and a rotation device 3 drives an image capturing device 4 to rotate around the object 1. The rotating device 3 can drive the image acquisition device 4 to rotate around the target 1 through a rotating arm. Of course, the rotation is not necessarily a complete circular motion, and can be only rotated by a certain angle according to the acquisition requirement. The rotation does not necessarily need to be circular motion, and the motion track of the image acquisition device 4 can be other curved tracks as long as the camera can shoot the object from different angles.
The rotating device 3 can also drive the image acquisition device to rotate, so that the image acquisition device 4 can acquire target object images from different angles through rotation.
The rotating device 3 may be in various forms such as a cantilever, a turntable, a track, etc., so that the image capturing device 4 can move.
In addition to the above, in some cases the camera may be fixed, as shown in fig. 2, while the stage 2 carrying the object 1 rotates, so that the side of the object 1 facing the image acquisition device 4 changes continuously and the image acquisition device 4 can capture images of the object 1 from different angles. In this case the calculation can still be converted into an equivalent movement of the image acquisition device 4, so that the movement conforms to the corresponding empirical formula (described in detail below). For example, in a scenario where the stage 2 rotates, the stage 2 may be assumed stationary while the image acquisition device 4 is assumed to rotate. The spacing of the shooting positions for this virtual rotation is set using the empirical formula, the corresponding rotation speed of the image acquisition device is derived from it, and the rotation speed of the stage is then deduced in reverse, making the rotation speed easy to control and 3D acquisition practicable. Of course, such scenarios are less common; rotating the image acquisition device is more usual.
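The conversion above can be sketched numerically. Treating the camera as virtually rotating on a circle of radius R around the object, the bound on the spacing L between adjacent positions caps the angular step per shot, and hence the stage's rotation speed. This is an illustrative sketch under an assumed circular-orbit geometry; the function names are not from the patent.

```python
import math

def max_angular_step(L_max, R):
    """Largest rotation angle (radians) between two shots such that the
    chord between adjacent virtual camera positions does not exceed L_max."""
    # Chord of angle theta at radius R is 2*R*sin(theta/2); invert it,
    # clamping in case L_max exceeds the diameter.
    return 2.0 * math.asin(min(1.0, L_max / (2.0 * R)))

def max_stage_speed(L_max, R, shot_interval_s):
    """Upper bound on the stage's angular speed (rad/s) for a given
    interval between consecutive shots."""
    return max_angular_step(L_max, R) / shot_interval_s
```

For example, with L_max equal to the orbit radius, the angular step works out to 60 degrees per shot, and halving the shot interval doubles the admissible stage speed.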
In addition, to enable the image acquisition device 4 to capture images of the object 1 from different directions, both the image acquisition device 4 and the object 1 can be kept still while the optical axis of the image acquisition device 4 is rotated. For example: the acquisition area moving device is an optical scanning device, so that the acquisition area of the image acquisition device 4 moves relative to the object 1 while the image acquisition device 4 itself neither moves nor rotates. The acquisition area moving device comprises a light deflection unit, which is driven mechanically to rotate, or driven electrically to deflect the light path, or arranged as multiple groups distributed in space, so that images of the target object can be acquired from different angles. The light deflection unit may typically be a mirror, rotated so as to collect images of the target object from different directions; alternatively, mirrors surrounding the target object can be arranged directly in space, with their light entering the image acquisition device 4 in turn. As before, the rotation of the optical axis in this case can be regarded as rotation of a virtual position of the image acquisition device 4; with this conversion, the image acquisition device 4 is assumed to rotate, and the calculation is performed using the empirical formula below.
The image capturing device 4 is used for capturing an image of the object 1, and may be a fixed focus camera or a zoom camera. In particular, the camera may be a visible light camera or an infrared camera. Of course, it is understood that any device with image capturing function can be used, and does not constitute a limitation of the present invention, for example, CCD, CMOS, camera, video camera, industrial camera, monitor, camera, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, and all devices with image capturing function.
When the structure rotates, a background plate may also be incorporated into the device. The background plate is located opposite the image acquisition device 4; it rotates synchronously when the image acquisition device 4 rotates and remains stationary when the image acquisition device 4 is stationary, so that every image of the object captured by the image acquisition device 4 has the background plate as its background. The background plate is entirely, or at least mostly, of a solid color; in particular it can be a white plate or a black plate, the specific color being chosen according to the color of the object. The background plate is usually flat, but can also be curved, such as a concave plate, a convex plate, or a spherical plate; in some application scenarios it may even have a wavy surface. It can also be assembled into various shapes, for example three planar sections spliced into an overall concave form, or a plane spliced with a curved surface.
Example 2: the acquisition area moving device is a translation structure
In addition to the rotating structures described above, the image acquisition device 4 can move relative to the object 1 along a linear trajectory. For example, the image acquisition device 4 is located on a linear track 5 and takes pictures as it passes the object 1 along the track, without rotating during the process. The linear track 5 can also be replaced by a linear cantilever. It is preferable, however, to rotate the image acquisition device 4 as it moves along the linear trajectory so that its optical axis stays directed toward the object 1, as shown in fig. 3.
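The aiming described above (keeping the optical axis pointed at the object while translating) can be sketched as a small geometry helper. The camera is assumed to slide along the x-axis with the object at (obj_x, obj_y); the coordinate frame and function name are this sketch's own, not the patent's.

```python
import math

def yaw_toward_object(cam_x, obj_x, obj_y):
    """Yaw angle (radians, measured from the perpendicular to the track)
    that points the optical axis from the camera at (cam_x, 0) toward the
    object at (obj_x, obj_y)."""
    return math.atan2(obj_x - cam_x, obj_y)

# At the point of closest approach (cam_x == obj_x) the yaw is zero:
# the camera looks straight across the track at the object.
```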
Example 3: the mobile device of the acquisition area is a random motion structure
As shown in fig. 4, the movement of the acquisition area may be irregular: for example, the image acquisition device 4 can be held by hand and moved around the target 1 to take pictures. It is then difficult to follow a strict trajectory, and the motion path of the image acquisition device 4 is hard to predict accurately. How to ensure that the captured images can be accurately and stably synthesized into a 3D model in this case is a difficult problem that has not yet been addressed. A common approach is to take many redundant photographs, but the synthesis results are then not stable. Although there are ways to improve the synthesis effect by limiting the camera's rotation angle, in practice the user is insensitive to angles and, even given a preferred angle, can hardly follow it in hand-held shooting. The utility model therefore proposes improving the synthesis effect and shortening the synthesis time by limiting the distance the camera moves between two photographs.
For example, in the process of face recognition, a user can hold the mobile terminal to shoot around the face of the user in a moving mode. As long as the experience requirements (specifically described below) of the photographing position are met, the 3D model of the face can be accurately synthesized, and at this time, the face recognition can be realized by comparing with the standard model stored in advance. For example, the handset may be unlocked, or payment verification may be performed.
In the case of irregular movement, a sensor may be arranged in the mobile terminal or the image acquisition device 4 to measure the straight-line distance the image acquisition device 4 travels between two shots; when the travelled distance does not satisfy the empirical condition on L (the specific condition is given below), an alarm can be issued to the user.
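A minimal sketch of that check, assuming the reconstruction δ = (L × f) / (T × d) of the empirical condition; the sensor integration and all names here are illustrative, not from the patent.

```python
def check_travel(L_travelled, f, T, d, limit=0.603):
    """Return (ok, delta) for the distance travelled between two shots.

    L_travelled: sensor-measured straight-line travel between the two shots
    f: lens focal length, d: sensor length/width, T: camera-to-object distance
    Callers raise an alarm to the user when ok is False.
    """
    delta = (L_travelled * f) / (T * d)
    return delta < limit, delta

# Example: 60 mm of travel with a 35 mm lens, 24 mm sensor width,
# object 500 mm away.
ok, delta = check_travel(L_travelled=60.0, f=35.0, T=500.0, d=24.0)
```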
Example 4: multiple camera mode
It can be understood that, besides the camera and the object move relatively, so that the camera can shoot the images of the object 1 at different angles, as shown in fig. 5, a plurality of cameras can be arranged at different positions around the object 1, so that the images of the object 1 at different angles can be shot simultaneously.
Light source
In general, the light sources are distributed around the lens of the image acquisition device 4, for example as ring-shaped LED lamps around the lens. In some applications the captured object is a human body, so the intensity of the light sources needs to be controlled to avoid discomfort. In particular, a light-softening device, such as a diffuser housing, can be placed in the light path of the light sources, or an LED area light source can be used directly, which provides not only softer but also more uniform light. Even better, an OLED light source can be employed: it is smaller, gives softer light, and has flexible properties that allow it to be attached to a curved surface.
Image acquisition device arrangement
The acquisition area moving device is of a rotating structure, and the image acquisition device 4 rotates around the target object 1. During 3D acquisition, the direction of the optical axis of the image acquisition device 4 relative to the target object 1 changes between acquisition positions, and the positions of two adjacent image acquisition devices 4, or two adjacent acquisition positions of a single image acquisition device 4, satisfy the following condition:
δ = (L × f) / (T × d) < 0.603

wherein L is the straight-line distance between the optical centers of the image acquisition device 4 at two adjacent acquisition positions, f is the focal length of the image acquisition device 4, d is the length or width of the rectangular photosensitive element (CCD) of the image acquisition device 4, T is the distance from the photosensitive element of the image acquisition device 4 to the surface of the target object 1 along the optical axis, and δ is the adjustment coefficient.
When the two positions are along the length direction of the photosensitive element of the image acquisition device 4, d is a rectangular length; when the two positions are along the width direction of the photosensitive element of the image pickup device 4, d takes a rectangular width.
In another case, the distance from the photosensitive element to the surface of the object along the optical axis with the image acquisition device 4 at either of the two positions is taken as T. Besides this method, L can be the straight-line distance between the optical centers of two image acquisition devices An and An+1; with An-1 and An+2 being the two image acquisition devices 4 adjacent to An and An+1, the distances from the photosensitive elements of these four image acquisition devices 4 to the surface of the target 1 along their optical axes are Tn-1, Tn, Tn+1, Tn+2 respectively, and T = (Tn-1 + Tn + Tn+1 + Tn+2)/4. Of course, the average may also be calculated over more positions than these four adjacent ones.
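The averaged-T variant above can be sketched in a few lines; the function name and the four-position example values are illustrative only.

```python
def averaged_T(distances):
    """Mean optical-axis distance over the sampled acquisition positions,
    e.g. [T_{n-1}, T_n, T_{n+1}, T_{n+2}] for the two adjacent devices
    and their immediate neighbours."""
    return sum(distances) / len(distances)

# Example: four neighbouring positions at slightly different distances (mm).
T = averaged_T([480.0, 500.0, 510.0, 490.0])
```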
As mentioned above, L should be the straight-line distance between the optical centers of the two image acquisition devices 4. However, since the optical-center position is not always easy to determine, the center of the photosensitive element of the image acquisition device 4, the geometric center of the image acquisition device 4, the axis center of the shaft connecting the image acquisition device 4 to the pan-tilt head (or platform or support), or the center of the near-end or far-end lens surface can be used instead in some cases; experiments show that the resulting error is within an acceptable range, so these alternatives are also within the protection scope of the present invention.
In general, the prior art uses parameters such as object size and field angle to estimate the camera position, and expresses the positional relationship between two cameras in terms of angle. Since angles are hard to measure in actual use, this is inconvenient in practice. Moreover, the object size changes with the measurement object; for example, after collecting 3D information of an adult's head, a child's head must be measured and calculated again before collection. Inconvenient and repeated measurement introduces measurement errors, which in turn cause errors in the camera position estimate. This scheme instead gives, based on a large amount of experimental data, the empirical condition that the camera position needs to satisfy, which avoids hard-to-measure angles and removes the need to measure the object size directly. In the empirical condition, d and f are fixed camera parameters supplied by the manufacturer when the camera and lens are purchased, and need not be measured; T is merely a straight-line distance, conveniently measured with traditional tools such as a ruler or a laser rangefinder. The empirical formula of the utility model therefore makes the preparation process convenient and fast, and also improves the accuracy of camera placement, so that the camera can be set in an optimized position and 3D synthesis precision and speed can both be achieved; specific experimental data are given below.
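As a minimal sketch of checking the empirical condition from just these three readily available quantities (assuming the reconstruction δ = (L × f) / (T × d); all names are illustrative, not from the patent):

```python
def adjustment_coefficient(L_mm, f_mm, T_mm, d_mm):
    """Return delta for two adjacent acquisition positions.

    L_mm: straight-line distance between the optical centers (mm)
    f_mm: lens focal length (from the manufacturer's data sheet)
    T_mm: distance from the photosensitive element to the target surface
          along the optical axis (measurable with a ruler or rangefinder)
    d_mm: sensor length or width, matching the direction of movement
    """
    return (L_mm * f_mm) / (T_mm * d_mm)

def spacing_ok(L_mm, f_mm, T_mm, d_mm, limit=0.603):
    return adjustment_coefficient(L_mm, f_mm, T_mm, d_mm) < limit

# Example: full-frame sensor (d = 36 mm), 50 mm lens, object 800 mm away,
# adjacent positions 100 mm apart.
```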
Experiments were carried out using the device of the utility model, and the following experimental results were obtained.
[Table of experimental results (image not reproduced in the text record).]
The camera lens is replaced, and the experiment is carried out again, so that the following experiment results are obtained.
[Table of experimental results (image not reproduced in the text record).]
The camera lens is replaced, and the experiment is carried out again, so that the following experiment results are obtained.
[Table of experimental results (image not reproduced in the text record).]
From the above experimental results and extensive experimental experience, it can be concluded that δ should satisfy δ < 0.603, at which point part of the 3D model can be synthesized; although some parts cannot be synthesized automatically, this is acceptable where requirements are low, and the unsynthesized parts can be compensated manually or by changing the algorithm. When δ < 0.410, the balance between synthesis effect and synthesis time is optimal; to obtain a better synthesis effect, δ < 0.356 can be chosen, in which case the synthesis time increases but the synthesis quality is better. Of course, δ < 0.311 may be selected to further improve the synthesis effect. When δ reaches 0.681, synthesis is no longer possible. It should be noted that the above ranges are only preferred embodiments and should not be construed as limiting the scope of protection.
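The reported thresholds can be summarized as a simple tier lookup; the tier labels below are this sketch's own paraphrase of the described behaviour, not wording from the patent.

```python
def synthesis_tier(delta):
    """Map an adjustment coefficient delta to the expected synthesis behaviour,
    following the experimentally reported thresholds."""
    if delta >= 0.603:
        return "cannot synthesize reliably"
    if delta >= 0.410:
        return "partial model; acceptable for low requirements"
    if delta >= 0.356:
        return "optimal balance of speed and quality"
    if delta >= 0.311:
        return "better quality, longer synthesis time"
    return "best quality, longest synthesis time"
```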
Moreover, as the above experiments show, determining the camera's shooting positions requires only the camera parameters (focal length f, CCD size d) and the distance T from the camera CCD to the object surface, which makes the device easy to design and debug. Since the camera parameters are fixed when the camera is purchased and are indicated in the product description, they are readily available. The camera position can therefore be calculated easily from the formula, without complicated field-angle or object-size measurements. In particular, when the camera lens must be replaced, the method of the utility model allows the new lens's parameter f to be substituted directly to calculate the camera position; similarly, when different objects are collected, no object-size measurement is needed, and the camera position can be determined more conveniently. The camera position determined according to the utility model balances synthesis time and synthesis effect. The above empirical condition is therefore one of the inventive points of the utility model.
The above data were obtained from experiments verifying the conditions of the formula and do not limit the invention; even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust equipment parameters and step details as required to carry out experiments and obtain other data that also satisfy the formula's conditions. After the equipment acquires images of the target object from multiple directions through the image acquisition device 4, the images are transmitted to a processor via data transmission. The processor may be local, or the images may be uploaded to a cloud platform and processed remotely. The 3D model is synthesized in the processor using a known method, such as bundle adjustment, for example the synthesis algorithm disclosed in CN107655459A.
In the rotary motion of the utility model, the acquisition plane at a preceding position and the acquisition plane at a following position intersect rather than being parallel during acquisition; equivalently, the optical axis of the image acquisition device at the preceding position intersects, rather than parallels, the optical axis at the following position. That is, the acquisition area of the image acquisition device moves around or partially around the target, and both cases can be regarded as relative rotation. Although the embodiments exemplify mostly orbital rotation, any non-parallel motion between the acquisition area of the image acquisition device and the target object counts as rotation within the limitations of the utility model; the protection scope is not limited to the orbital-rotation embodiments. Adjacent acquisition positions in the utility model are two successive positions on the motion trajectory at which acquisition occurs as the image acquisition device moves relative to the target object. This is easy to understand when the image acquisition device itself moves. When, instead, the target object moves to produce the relative motion, the motion should be converted, by the relativity of motion, into an equivalent in which the target object is stationary and the image acquisition device moves, and the two adjacent positions are then measured on the converted trajectory.
Accessory matching and making
After 3D information of the target object is collected and the 3D model is synthesized, accessories matched with the target object can be manufactured for the target object according to the 3D data.
For example, glasses fitting the face can be made for a user. Provided the empirical condition above is satisfied, multiple photos of the user's head are acquired from different directions; the photos are synthesized into a 3D model using 3D synthesis software, for which a common 3D image matching algorithm can be used. After the 3D mesh model is obtained, texture information is added to form the 3D head model. A suitable spectacle frame is then selected for the user according to the relevant dimensions of the 3D head model, such as cheek width, nose bridge height, and auricle size.
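As a purely hypothetical illustration of reading fitting dimensions off a head model: given labelled 3D landmark points in millimetres, derive the measurements used to pick a frame. The landmark names and the size chart below are invented for this sketch, not taken from the patent.

```python
def frame_measurements(landmarks):
    """Extract glasses-fitting dimensions from labelled 3D landmarks (mm).
    'landmarks' maps names to (x, y, z) points; the names are hypothetical."""
    lx, _, _ = landmarks["left_cheek"]
    rx, _, _ = landmarks["right_cheek"]
    _, _, nose_z = landmarks["nose_bridge"]
    return {"cheek_width": abs(rx - lx), "bridge_height": nose_z}

def pick_frame(cheek_width_mm):
    """Illustrative size chart mapping cheek width to a frame size."""
    if cheek_width_mm < 130:
        return "narrow"
    if cheek_width_mm < 145:
        return "medium"
    return "wide"
```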
Besides glasses, a variety of accessories such as hats, gloves, and artificial limbs can be designed for users. Closely fitting accessories can likewise be designed for objects, for example a tightly wrapped package for a shaped part.
Target identification and comparison
The 3D information of multiple regions of the target obtained in the above embodiments can be used for comparison, for example for identity recognition. First, the scheme of the utility model is used to acquire 3D information of the human face and iris, which is stored in a server as standard data. When identity authentication is needed, for example for payment or door opening, the 3D acquisition device is used to acquire the 3D information of the face and iris again; the acquired information is compared with the standard data, and if the comparison succeeds, the next action is allowed. It can be understood that such comparison can also be used to authenticate fixed assets such as antiques and artworks: 3D information of multiple regions is first acquired as standard data, and when authentication is needed, 3D information of those regions is acquired again and compared with the standard data to identify authenticity.
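A minimal sketch of the comparison step, assuming the captured 3D points are already aligned and in correspondence with the stored standard data: a root-mean-square distance below a chosen threshold counts as a match. The threshold and function names are illustrative; real systems would also need registration and robust matching.

```python
import math

def rms_distance(pts_a, pts_b):
    """RMS distance between two corresponding 3D point lists."""
    assert len(pts_a) == len(pts_b)
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(pts_a, pts_b))
    return math.sqrt(total / len(pts_a))

def matches(pts_a, pts_b, threshold_mm=2.0):
    """True when the captured data is close enough to the standard data."""
    return rms_distance(pts_a, pts_b) < threshold_mm
```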
Although the above embodiments describe the image acquisition device as capturing images, this should not be understood as meaning that the method applies only to groups of individual still pictures; that description is merely for ease of understanding. The image acquisition device can also acquire video data, and 3D synthesis can be performed directly on the video data or on images extracted from it. In that case, the shooting positions of the video frames or extracted images used in the synthesis must still satisfy the above empirical formula.
The terms "target", "target object", and "object" all denote an object whose three-dimensional information is to be acquired. The object may be a single solid object or consist of multiple parts, for example a head, hands, and so on. The three-dimensional information of the target object comprises a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and all parameters carrying the target object's three-dimensional features. In the utility model, "three-dimensional" means having information in the three directions XYZ, in particular depth information, which differs essentially from having only two-dimensional plane information. It also differs essentially from definitions that are called "three-dimensional", "panoramic", "holographic", or "stereoscopic" but actually comprise only two-dimensional information and, in particular, no depth information.
The acquisition area of the present utility model is the range that the image acquisition device (e.g., a camera) can capture. The image acquisition device of the present utility model may be a CCD, a CMOS sensor, a camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device with an image acquisition function.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality according to embodiments of the invention based on some or all of the components in the apparatus of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
Thus, it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described in detail herein, many other variations and modifications can be made, consistent with the principles of the invention, which are directly determined or derived from the disclosure herein, without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and interpreted to cover all such other variations or modifications.

Claims (10)

1. A three-dimensional acquisition device, characterized by comprising:
an image acquisition device for acquiring images of a target object in at least three directions;
the acquisition position of the image acquisition device meets the following conditions:
δ = (L × f) / (d × T) < 0.603
wherein L is the straight-line distance between the optical centers of two adjacent image acquisition positions, f is the focal length of the image acquisition device, d is the length or width of the rectangular photosensitive element of the image acquisition device, T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis, and δ is the adjustment coefficient.
2. The apparatus of claim 1, wherein: the system comprises a plurality of image acquisition devices which are respectively positioned around a target object.
3. The apparatus of claim 1, wherein: the image acquisition device is arranged on the rotating device.
4. The apparatus of claim 1, wherein: the target is arranged on the rotating device.
5. The apparatus of any of claims 1-4, wherein: δ < 0.410.
6. The apparatus of any of claims 1-4, wherein: δ < 0.356.
7. The apparatus of any of claims 1-4, wherein: the image acquisition device is a visible light camera and/or an infrared camera.
8. The apparatus of any of claims 1-4, wherein: the image acquisition device is a fixed-focus camera and/or a zoom camera.
9. The apparatus of any of claims 1-4, wherein: the equipment is connected with an upper computer, a server or a cloud platform through a communication interface.
10. The apparatus of any of claims 3-4, wherein: the rotating device is a turntable, a rotary drum, a rotary table, a rotary arm and/or a guide rail.
CN201922224510.8U 2019-12-12 2019-12-12 Three-dimensional acquisition equipment Active CN211178345U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201922224510.8U CN211178345U (en) 2019-12-12 2019-12-12 Three-dimensional acquisition equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201922224510.8U CN211178345U (en) 2019-12-12 2019-12-12 Three-dimensional acquisition equipment

Publications (1)

Publication Number Publication Date
CN211178345U true CN211178345U (en) 2020-08-04

Family

ID=71803543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201922224510.8U Active CN211178345U (en) 2019-12-12 2019-12-12 Three-dimensional acquisition equipment

Country Status (1)

Country Link
CN (1) CN211178345U (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112254676A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Portable intelligent 3D information acquisition equipment
CN112254671A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Multi-time combined 3D acquisition system and method
CN113505629A (en) * 2021-04-02 2021-10-15 上海师范大学 Intelligent storage article recognition device based on light weight network
CN113516150A (en) * 2021-04-02 2021-10-19 上海师范大学 Intelligent visual image acquisition system based on multi-acquisition visual angle information

Similar Documents

Publication Publication Date Title
CN111060023B (en) High-precision 3D information acquisition equipment and method
CN211178345U (en) Three-dimensional acquisition equipment
CN111292239B (en) Three-dimensional model splicing equipment and method
CN111076674B (en) Closely target object 3D collection equipment
CN111006586B (en) Intelligent control method for 3D information acquisition
CN109146961B (en) 3D measures and acquisition device based on virtual matrix
CN110986770B (en) Camera used in 3D acquisition system and camera selection method
CN111060008B (en) 3D intelligent vision equipment
CN111429523A (en) Remote calibration method in 3D modeling
CN113100754A (en) 3D information acquisition measuring equipment
CN110827196A (en) Device capable of simultaneously acquiring 3D information of multiple regions of target object
CN110986768A (en) High-speed acquisition and measurement equipment for 3D information of target object
CN211178344U (en) Intelligent three-dimensional vision acquisition equipment
CN211373522U (en) Short-distance 3D information acquisition equipment and 3D synthesis, microscopy and attachment manufacturing equipment
CN111340959B (en) Three-dimensional model seamless texture mapping method based on histogram matching
CN111445528B (en) Multi-camera common calibration method in 3D modeling
CN110973763A (en) Foot intelligence 3D information acquisition measuring equipment
CN211085114U (en) Take 3D information acquisition equipment of background board
CN211375621U (en) Iris 3D information acquisition equipment and iris identification equipment
CN211932790U (en) Human hand three-dimensional information acquisition device
CN111207690B (en) Adjustable iris 3D information acquisition measuring equipment
CN211085115U (en) Standardized biological three-dimensional information acquisition device
WO2021115297A1 (en) 3d information collection apparatus and method
CN111310661B (en) Intelligent 3D information acquisition and measurement equipment for iris
CN211085152U (en) 3D acquisition equipment

Legal Events

Date Code Title Description
GR01 Patent grant