WO2021115296A1 - Ultra-thin three-dimensional capturing module for mobile terminal - Google Patents


Publication number
WO2021115296A1
Authority
WO
WIPO (PCT)
Prior art keywords
image acquisition
acquisition device
mobile terminal
motion
image
Prior art date
Application number
PCT/CN2020/134753
Other languages
French (fr)
Chinese (zh)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date
Filing date
Publication date
Application filed by 左忠斌
Publication of WO2021115296A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Definitions

  • the present invention relates to the technical field of object collection, in particular to the technical field of using a camera to perform three-dimensional collection of a target in a mobile terminal.
  • 3D acquisition methods include structured light method and laser scanning method, but these methods all require a light source and a beam shaping system, which are costly, consume a lot of power, and take up a lot of space.
  • mobile phones usually have 1-3 cameras to achieve some special shooting effects, such as background blur.
  • there is currently no camera system that can be used for 3D acquisition on mobile phones. If only the current camera system is used, it is difficult to perform 3D stitching due to the limited shooting angle, and 3D images cannot be obtained. Increasing the shooting angle and the redundancy of the captured images would require setting up multiple cameras.
  • the Digital Emily project of the University of Southern California uses a spherical bracket to fix hundreds of cameras at different positions and angles on the bracket. This conventional system that uses image acquisition equipment for 3D acquisition is difficult to use on small-sized mobile terminal devices such as mobile phones.
  • the current use of mobile phones for 3D acquisition is usually limited to human faces.
  • the use of mobile phones for 3D acquisition usually requires the mobile phone to rotate around the target (with or without track), or the camera to rotate around the target. But obviously this does not apply to distant objects.
  • the rotating parts arranged in the housing also increase the volume of the module, which is not conducive to the miniaturization of the equipment.
  • a 360° three-dimensional model of the target is not needed, but only a three-dimensional model within the line of sight, for example, only a front and partial side three-dimensional model of the park landscape is required. There is no suitable solution to this problem.
  • mobile terminals usually have cameras, but these cameras do not move during the shooting process.
  • in such phones the camera moves only to pop out of or retract into the housing before and after shooting. Since these cameras do not move relative to the target during the shooting process, they can only shoot 2D images, and the captured images cannot be synthesized into 3D. Therefore, there is an urgent need in this field for high-quality, low-cost, small-volume 3D acquisition devices that can be applied to mobile terminals.
  • the present invention is proposed to provide an ultra-thin three-dimensional acquisition module for mobile terminals that overcomes the above-mentioned problems or at least partially solves the above-mentioned problems.
  • the invention provides an ultra-thin three-dimensional acquisition module for a mobile terminal, which includes a data interface, a motion drive device, a motion device and an image acquisition device;
  • the image acquisition device is arranged on the movement device, and the image acquisition device moves relative to the mobile terminal during the image acquisition process;
  • the motion drive device is connected to the motion device
  • the motion drive device is electrically connected to the mobile terminal through the data interface
  • the image acquisition device is electrically connected to the mobile terminal through a data interface
  • the optical axis of the image acquisition device and the motion plane of the image acquisition device have an included angle γ.
  • γ = 90°, or 0° < γ < 90°, or 90° < γ < 180°.
  • the optical axis converges or diverges with respect to the vertical line of the motion plane of the image acquisition device.
  • the module and the mobile terminal are independent of each other or embedded in the mobile terminal.
  • the image acquisition device includes a visible light image acquisition device and/or an infrared image acquisition device.
  • the image acquisition device extends out of the module housing.
  • the area where the image acquisition device moves further includes a light-transmitting shell part.
  • the motion device is a turntable, a curved guide rail, or a linear guide rail.
  • two adjacent acquisition positions of the image acquisition device meet the following conditions:
  • the present invention also provides a three-dimensional acquisition method for a mobile terminal, and the image acquisition device is arranged in the mobile terminal,
  • the image acquisition device moves relative to the mobile terminal during the acquisition process, thereby capturing images of the target object at different positions;
  • the optical axis of the image acquisition device and the motion plane of the image acquisition device have an included angle γ, where 0° < γ < 180°.
  • the whole device can be moved, which is convenient for outdoor use.
  • the external connection mode is adopted, and there is no need to modify the existing mobile phones, which is more versatile and lower in cost.
  • FIG. 1 is a schematic structural diagram of an implementation manner of a three-dimensional acquisition module in an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of another implementation manner of a three-dimensional acquisition module in an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a third implementation manner of a three-dimensional acquisition module in an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a fourth implementation manner of a three-dimensional acquisition module in an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a fifth implementation manner of a three-dimensional acquisition module in an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a sixth implementation manner of a three-dimensional acquisition module in an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a seventh implementation manner of a three-dimensional acquisition module in an embodiment of the present invention.
  • 1: data interface; 2: motion drive device; 3: motion device; 4: image acquisition device; 5: angle adjustment device.
  • Mobile terminal module structure-translation and rotation type
  • an embodiment of the present invention provides a three-dimensional acquisition module for a mobile terminal. As shown in Figs. 1-7, it specifically includes: a data interface 1, a motion driving device 2, a motion device 3, and an image acquisition device 4.
  • the image acquisition device 4 is arranged on the movement device 3.
  • the motion device 3 can be a sliding table or a circular track: the image acquisition device 4 is installed on the sliding table, or the housing of the image acquisition device 4 is directly mounted on the guide rail to act as the sliding table, or the housing of the image acquisition device 4 and the module housing form a sliding fit with each other, realizing the translation of the image acquisition device 4 along the guide rail.
  • the motion drive device 2 is connected to the motion device 3 and can drive the sliding table, or directly drive the housing of the image acquisition device 4, to move. For a lead screw or a toothed guide rail, the corresponding structure can likewise be driven to make the image acquisition device 4 translate.
  • the image acquisition device 4 does not rely on manual movement; it is driven according to the purpose of the acquisition, with definite requirements on the acquisition positions, which need to satisfy the empirical condition (detailed below) so that the accuracy of the collected 3D information can be ensured. Relying only on the customer's manual movement would cause uneven and incomplete image collection, and it would even be difficult to match and stitch the images into 3D. Likewise, the module does not rely on moving the entire mobile phone to realize image acquisition, because such movement either requires the mobile phone to be installed on an additional track, which limits the usage scenarios, or is free movement without a track, which degrades the quality of the collection.
  • the guide rail is curved, such as a circular arc, as shown in Figs. 4 and 5, so that when the image acquisition device 4 moves on it, the movement track is arc-shaped.
  • the direction of the optical axis of the image acquisition device 4 remains unchanged, that is, the guide rail is arranged parallel to the light-emitting surface, so that the surface swept by the image acquisition device is approximately parallel to the target surface.
  • the rotating surface of the image acquisition device 4 is parallel to the height direction of the building, rather than parallel to the cross section of the building.
  • the guide rail is linear, as shown in FIG. 6, so that when the image acquisition device 4 moves on it, the movement track is a straight line, so as to realize the scanning of the target.
  • the direction of the optical axis of the image acquisition device 4 is likewise unchanged; only a translation of the optical axis occurs.
  • each image acquisition device 4 moves along a single guide rail, and the motion track is similar to the above.
  • two image acquisition devices 4 can be set to move along the upper and lower guide rails respectively, so that the acquisition range can be expanded, and at the same time, more pictures can be acquired per unit time, which is more efficient.
  • the two image acquisition devices 4 may be cameras of different wavelength bands, such as infrared wavelength band and visible light wavelength band.
  • the image acquisition device 4 is exposed outside the housing of the acquisition module, that is, the housing of the acquisition module has a corresponding groove, and the image acquisition device 4 protrudes from the groove, as shown in FIGS. 2 and 3.
  • the image capture device 4 can be extended out of the groove when needed, and retracted into the housing when it is not working.
  • the groove has a cover, which can close the groove when the image capture device 4 is retracted to avoid dust.
  • the housing of the acquisition module opposite to the image acquisition device is made of a transparent material.
  • the image acquisition device 4 can then perform motion acquisition directly without extending out of the housing, which is beneficial for waterproofing and dustproofing.
  • since the motion drive device 2 is connected to the motion device 3, it drives the image acquisition device 4 to translate according to the predetermined requirements of 3D acquisition. The motion drive device 2 therefore needs a data interface 1 to receive the corresponding motion instructions, that is, the motion drive device 2 is electrically connected to the mobile terminal through the data interface 1.
  • the module also includes a processor, also called a processing unit, which synthesizes a 3D model of the target object from the plurality of images collected by the image acquisition device according to a 3D synthesis algorithm, thereby obtaining the 3D information of the target object.
  • the module includes a housing, a plurality of image acquisition devices 4 are distributed in the housing, and the separation distance of the image acquisition devices 4 is limited by the following empirical conditions.
  • the lens of the image capture device is exposed outside the module housing or is located in the module housing.
  • a corresponding protection mechanism is provided to protect the lens.
  • a transparent cover is provided.
  • the housing of the acquisition module opposite to the image acquisition device 4 is made of transparent material.
  • since the motion plane is usually parallel to the terminal housing, the included angle between the optical axis and the motion plane equals the included angle between the optical axis of the image acquisition device 4 and the housing of the mobile terminal, which is usually 90°.
  • γ < 90°: that is, when the image acquisition device 4 is at different positions, the optical axes converge toward the perpendicular of the motion plane (or of the housing).
  • γ > 90°: that is, when the image acquisition device 4 is at different positions, the optical axes diverge away from the perpendicular of the motion plane (or of the housing).
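The converging case above has a simple geometric consequence that can be made concrete: two acquisition positions a distance L apart on the motion plane, each with its optical axis tilted at γ < 90° toward the other, have axes that intersect at a fixed distance in front of the plane. The sketch below (function name and the symmetric two-position setup are illustrative assumptions, not from the patent text) computes that distance.

```python
import math

def convergence_distance(spacing, gamma_deg):
    """Distance from the motion plane at which two converging optical axes meet.

    spacing: distance L between two acquisition positions on the motion plane.
    gamma_deg: included angle between each optical axis and the motion plane,
    with both axes tilted symmetrically toward each other. For gamma >= 90
    (parallel or diverging axes) the axes never meet, so inf is returned.
    """
    if gamma_deg >= 90:
        return math.inf
    tilt = math.radians(90 - gamma_deg)  # deviation from the plane's normal
    # each axis closes spacing/2 of horizontal gap per tan(tilt) of depth
    return (spacing / 2) / math.tan(tilt)
```

For example, with γ = 45° the axes meet at a depth equal to half the spacing, which illustrates why small γ suits close targets and γ near 90° suits distant ones.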
  • the above-mentioned case of γ not equal to 90° can be realized in the following ways, as shown in Figure 7: (1) the image acquisition device 4 is fixed on the motion device 3 at the angle γ; (2) the image acquisition device 4 is mounted on an angle adjustment device 5, whose setting changes as the movement position changes, thereby changing γ; (3) the image acquisition device 4 is fixed on the module or the housing.
  • the above-mentioned angle adjusting device 5 may be a turntable.
  • the entire module is an external type.
  • the data interface can be an interface that cooperates with a Type-C interface, a Micro-USB interface, a USB interface, a Lightning interface, a Wi-Fi interface, a Bluetooth interface, or a cellular network interface, connected to the mobile terminal by wire or wirelessly.
  • the entire module is built-in.
  • the data interface 1 can be directly connected to the processor of the mobile terminal internally.
  • the structure of the module is part of the mobile phone; that is, although the present invention is described in terms of a module, these structures in fact already belong to the mobile phone and are already in place when the mobile phone is produced.
  • the image acquisition device 4 is electrically connected to the mobile terminal through the data interface 1 so as to transmit the collected images to the mobile terminal for storage and subsequent 3D processing.
  • whether the module is external or built-in, there is a mechanical connection between the module and the mobile terminal.
  • the module is inserted into the earphone jack of the mobile terminal through the earphone plug. Since the module and the mobile terminal must transmit control signals and image data to each other, in addition to the mechanical connection, there is also an electrical connection, especially a signal connection.
  • the mechanical connection and the electrical connection are realized by the same structure.
  • the mobile phone module is connected to the mobile phone through a mechanical connector/electrical connector, and the mobile phone module and the mobile phone are relatively rigidly connected, so that the two are integrated.
  • the earphone plug described above is inserted into the earphone jack of the mobile terminal, and the mechanical connection and the electrical connection are realized at the same time.
  • Both the module and the mobile phone can be rigidly fixed to each other, and signals can be transmitted between each other.
  • the mechanical connection can also utilize additional mechanical connections. For example, additional plugs and jacks, protrusions and card slots are provided between the module and the mobile phone to achieve a rigid and fixed connection between the module and the mobile phone.
  • the module has an earphone plug, a Micro-USB plug, a Type-C plug, or a Lightning plug, which is inserted into the corresponding socket of the mobile phone; however, this kind of insertion is used only for the mechanical connection, not for signal transmission, and the signal is connected by other means.
  • the module and the mobile phone are integrated.
  • the module can be fixed relative to the target object, and pictures from different angles can be taken by the movement of the image acquisition device 4.
  • the movement device 3 may include a magnetic levitation device, so that the movement process is smoother and the user experience is improved.
  • the image capture device 4 moves in the housing of the module, and the housing part involved in the movement area is made of a transparent material, such as a transparent resin material.
  • the image acquisition device 4 may be a visible light camera/camera module or an infrared camera/camera module.
  • when the illumination is insufficient, the visible light camera alone will not be able to collect the image completely.
  • the infrared camera can be used for collection, and in the subsequent processing, the images collected by the visible light camera and the infrared camera can be matched and fused with each other to realize 3D information collection.
  • the infrared camera and the visible light camera can be arranged side by side on the same track, or two tracks can be provided to carry an infrared camera and a visible light camera respectively. It is also possible to use a single camera with a wider spectral sensing range that covers both the visible light band and the infrared band.
  • the shell of the module carries a light source, for example an LED lamp bead; a smart light source can also be provided, for example one whose brightness and other parameters can be selected according to need.
  • the light source is used to illuminate the target to prevent the target from being too dark to affect the collection effect and accuracy. But at the same time, it is necessary to prevent the light source from being too bright, causing the loss of target texture information.
  • the light source can also use the mobile terminal's own light source to illuminate the part to be scanned.
  • the images collected by the module can be transmitted to the display module of the mobile terminal for display, so as to facilitate the user to observe their own collection process.
  • when the acquisition module is too far from or too close to the target, this can be shown through the display module and signalled through the voice module.
  • the image collected by the module may not be displayed in the display module of the mobile terminal, but the information that it is too far or too close to the target object can be voiced through the mobile terminal to prompt the user to move.
  • the connection between the module and the voice or display module of the mobile terminal is also realized through the data interface 1 of the module.
  • the optical axis direction of the image acquisition device does not change relative to the target at different acquisition positions, and is usually roughly perpendicular to the surface of the target (it can also be at a certain angle).
  • the positions of two adjacent image acquisition devices, or two adjacent acquisition positions of the image acquisition device, meet the following conditions:
  • when the above two positions are along the length direction of the photosensitive element of the image acquisition device, d takes the length of the rectangle; when the above two positions are along the width direction of the photosensitive element, d takes the width of the rectangle.
  • the distance from the photosensitive element to the surface of the target along the optical axis is taken as M.
  • L should be the linear distance between the optical centers of the two image acquisition devices; but because the position of the optical center is not easy to determine in some cases, the center of the photosensitive element of the image acquisition device, the geometric center of the image acquisition device, the center of the axis connecting the image acquisition device and the pan/tilt (or platform, or bracket), or the center of the proximal or distal lens surface may be used instead in some cases. The error so introduced is within an acceptable range, and therefore the above-mentioned range is also within the protection scope of the present invention.
  • the adjacent acquisition positions in the present invention refer to two adjacent positions on the moving track where the acquisition action occurs when the image acquisition device moves relative to the target. This is usually easy to understand for the movement of the image capture device. However, when the target object moves to cause the two to move relative to each other, at this time, the movement of the target object should be converted into the target object's immobility according to the relativity of the movement, and the image acquisition device moves. At this time, measure the two adjacent positions of the image acquisition device where the acquisition action occurs in the transformed movement track.
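The text above names the quantities d, M, and L, but the empirical condition itself does not appear in this extract, so it is left as stated. Purely as a stand-in illustration of how such a spacing bound behaves, the sketch below uses a generic photogrammetric overlap rule (thin-lens footprint G = d·M/f and a required fractional overlap between consecutive images); the function name, the footprint formula, the focal length f, and the default 70% overlap are all assumptions, not the patent's condition.

```python
def max_adjacent_spacing(d, f, M, overlap=0.7):
    """Stand-in bound on the distance between two adjacent acquisition positions.

    d: photosensitive-element length or width (same units as f).
    f: lens focal length (an assumed extra parameter, not named in the text).
    M: distance from the photosensitive element to the target along the optical axis.
    overlap: required fractional overlap between consecutive images.
    """
    footprint = d * M / f          # size of the area imaged on the target
    return (1 - overlap) * footprint
```

The qualitative behavior matches the text's setup: the farther the target (larger M), the larger the allowed spacing between adjacent acquisition positions.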
  • for 3D synthesis, an existing algorithm can be used, or the optimized algorithm proposed by the present invention, which mainly includes the following steps:
  • Step 1 Perform image enhancement processing on all input photos.
  • the following Wallis filter is used to enhance the contrast of the original photo and suppress noise at the same time:

    f(x, y) = [g(x, y) - m_g] · c·s_f / (c·s_g + (1 - c)·s_f) + b·m_f + (1 - b)·m_g

    where g(x, y) is the gray value of the original image at (x, y); f(x, y) is the gray value of the original image after enhancement by the Wallis filter; m_g is the local gray mean of the original image; s_g is the local gray standard deviation of the original image; m_f is the local gray target mean of the transformed image; s_f is the local gray target standard deviation of the transformed image; c ∈ (0, 1) is the expansion constant of the image variance; and b ∈ (0, 1) is the image brightness coefficient constant.
  • the filter can greatly enhance the image texture patterns of different scales in the image, so the number and accuracy of feature points can be improved when extracting the point features of the image, and the reliability and accuracy of the matching result can be improved in the photo feature matching.
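As a concrete illustration, the Wallis transform can be sketched in NumPy, with the local mean and standard deviation computed over a box window via summed-area tables. The window size and the c·s_f / (c·s_g + (1 - c)·s_f) gain form used here are the common variant of the Wallis filter, assumed rather than quoted from the patent, which only lists the symbols.

```python
import numpy as np

def _box_sum(a, win):
    # sum of a over a win x win window centered at each pixel (win odd),
    # via a summed-area table; edges are padded by replication
    pad = win // 2
    p = np.pad(a, pad, mode="edge")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = a.shape
    return (c[win:win + h, win:win + w] - c[:h, win:win + w]
            - c[win:win + h, :w] + c[:h, :w])

def wallis_filter(img, win=9, m_f=127.0, s_f=40.0, c=0.8, b=0.6):
    """Wallis local contrast enhancement (sketch).

    m_f, s_f: target local mean / standard deviation of the output.
    c, b in (0, 1): variance expansion constant and brightness coefficient.
    """
    g = img.astype(np.float64)
    n = win * win
    m_g = _box_sum(g, win) / n                        # local mean
    var = np.maximum(_box_sum(g * g, win) / n - m_g ** 2, 0.0)
    s_g = np.sqrt(var)                                # local standard deviation
    gain = c * s_f / (c * s_g + (1.0 - c) * s_f)      # contrast stretch factor
    return (g - m_g) * gain + b * m_f + (1.0 - b) * m_g
```

On a flat region (s_g = 0) the output reduces to b·m_f + (1 - b)·g, a pure brightness shift toward the target mean, while low-contrast texture is amplified toward the target deviation s_f, which is exactly the property that helps feature extraction.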
  • Step 2 Perform feature point extraction on all input photos, and perform feature point matching to obtain sparse feature points.
  • the SURF operator is used to extract and match the feature points of the photos.
  • the SURF feature matching method mainly includes three processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters in place of second-order Gaussian filtering, and uses integral images to accelerate the convolutions, which increases the calculation speed and reduces the dimensionality of the local image feature descriptor, thereby speeding up matching.
  • the main steps include: (1) Construct the Hessian matrix and generate all points of interest for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image. (2) Construct the scale space and locate the feature points: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in two-dimensional image space and scale space to initially locate the key points; key points with weaker energy and incorrectly positioned key points are then filtered out, leaving the final stable feature points. (3) Determine the main direction of each feature point using the Haar wavelet responses in its circular neighborhood: the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are counted, the sector is then rotated in steps of 0.2 radians and the responses in the area are counted again, and the direction of the sector with the largest value is taken as the main direction of the feature point. (4) Generate a 64-dimensional feature point description vector: a 4×4 block of rectangular sub-regions is taken around the feature point, oriented along the main direction of the feature point; each sub-region accumulates the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, where horizontal and vertical are relative to the main direction.
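The box-filter Hessian response of step (1) can be sketched with integral images. The 9×9 layout below is a simplified version of SURF's smallest filter; the exact box geometry and the 0.9 weight on D_xy follow the published SURF design, but treat the layout details here as an approximation rather than the patent's implementation.

```python
import numpy as np

def integral_image(img):
    # summed-area table with a leading zero row/column, so that
    # box_sum(ii, r0, c0, r1, c1) == img[r0:r1, c0:c1].sum() in O(1)
    s = img.astype(np.float64).cumsum(0).cumsum(1)
    return np.pad(s, ((1, 0), (1, 0)))

def box_sum(ii, r0, c0, r1, c1):
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def hessian_det(img, y, x):
    """Approximate det(Hessian) at (y, x) with 9x9 SURF-style box filters."""
    ii = integral_image(img)
    # D_yy: three stacked 3x5 boxes with weights +1, -2, +1
    dyy = (box_sum(ii, y - 4, x - 2, y - 1, x + 3)
           - 2 * box_sum(ii, y - 1, x - 2, y + 2, x + 3)
           + box_sum(ii, y + 2, x - 2, y + 5, x + 3))
    # D_xx: the transposed layout
    dxx = (box_sum(ii, y - 2, x - 4, y + 3, x - 1)
           - 2 * box_sum(ii, y - 2, x - 1, y + 3, x + 2)
           + box_sum(ii, y - 2, x + 2, y + 3, x + 5))
    # D_xy: four 3x3 corner boxes with weights +1, -1, -1, +1
    dxy = (box_sum(ii, y - 4, x - 4, y - 1, x - 1)
           - box_sum(ii, y - 4, x + 2, y - 1, x + 5)
           - box_sum(ii, y + 2, x - 4, y + 5, x - 1)
           + box_sum(ii, y + 2, x + 2, y + 5, x + 5))
    return dxx * dyy - (0.9 * dxy) ** 2
```

A bright blob yields large negative D_xx and D_yy and hence a large positive determinant; thresholding this response gives the interest points that steps (2) to (4) then refine and describe.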
  • Step 3 Input the coordinates of the matched feature points and use bundle adjustment (the beam method adjustment) to solve for the sparse face 3D point cloud and the position and attitude data of the camera, that is, obtain the sparse face model 3D point cloud and the camera position model coordinate values;
  • taking the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data.
  • the process has four main steps: stereo pair selection, depth map calculation, depth map optimization, and depth map fusion. For each image in the input data set, a reference image is selected to form a stereo pair, which is used to calculate a depth map, so rough depth maps of all images are obtained. These depth maps may contain noise and errors, so the neighborhood depth maps are used for consistency checking to optimize the depth map of each image. Finally, depth map fusion is performed to obtain the three-dimensional point cloud of the entire scene.
  • Step 4 Use the dense point cloud to reconstruct the face surface, including the processes of defining the octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface.
  • the integral relationship between the sampling points and the indicator function is obtained from the gradient relationship, the vector field of the point cloud is obtained according to this integral relationship, and the approximation of the gradient field of the indicator function is calculated to form the Poisson equation.
  • the approximate solution is obtained by matrix iteration, the marching cubes algorithm is used to extract the isosurface, and the model of the measured object is reconstructed from the measured point cloud.
  • Step 5 Fully automatic texture mapping of the face model. After the surface model is built, texture mapping is performed.
  • the main process includes: (1) Texture data acquisition: the texture data is obtained through the images of the reconstructed target's surface triangle mesh. (2) Visibility analysis of the reconstructed model's triangles: the calibration information of each image is used to calculate the visible image set of each triangle and the optimal reference image. (3) Triangle clustering to generate texture patches: according to the visible image set of each triangle, its optimal reference image, and the neighborhood topological relations of the triangles, the triangles are clustered into a number of reference-image texture patches. (4) Automatic sorting of the texture patches to generate the texture image: the generated texture patches are sorted according to their size relationship, the texture image with the smallest enclosing area is generated, and the texture mapping coordinates of each triangle are obtained.
  • the above-mentioned algorithm is an optimized algorithm of the present invention; it cooperates with the image acquisition conditions, and its use takes into account both the time and the quality of synthesis, which is one of the inventive points of the present invention.
  • the conventional 3D synthesis algorithm in the prior art can also be used, but the synthesis effect and speed will be affected to a certain extent.
  • the terms target object and object above all denote objects whose three-dimensional information is to be acquired; this can be a single physical object or a combination of multiple objects, for example a vehicle, a large sculpture, and so on.
  • the three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional grid, a local three-dimensional feature, a three-dimensional size, and all parameters with a three-dimensional feature of the target.
  • the so-called three-dimensional in the present invention refers to three-dimensional XYZ information, especially depth information, which is essentially different from two-dimensional plane information alone. It is also essentially different from definitions that are called three-dimensional, panoramic, holographic, or stereoscopic but actually include only two-dimensional information and in particular no depth information.
  • the collection area mentioned in the present invention refers to the range that an image collection device (such as a camera) can shoot.
  • the image acquisition device in the present invention can be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any device with an image capture function.
  • the 3D information of multiple regions of the target obtained in the above embodiment can be used for comparison, for example, for identity recognition.
  • the 3D acquisition device can be used to collect and obtain the 3D information of the human face and iris again, and compare it with the standard data. If the comparison is successful, the next step is allowed.
  • This kind of comparison can also be used for the identification of fixed assets such as antiques and artworks, that is, first obtain 3D information of multiple areas of antiques and artworks as standard data, and obtain 3D information of multiple areas again when authentication is required.
  • the three-dimensional information of multiple regions of the target obtained in the above embodiments can be used to design, produce, and manufacture accessory items for the target. For example, by obtaining three-dimensional data of the oral cavity and teeth of the human body, more suitable dentures can be designed and manufactured for the human body.
  • the three-dimensional information of the target obtained in the above embodiments can also be used to measure the geometric size and contour of the target.
  • modules or units or components in the embodiments can be combined into one module or unit or component, and in addition can be divided into multiple sub-modules or sub-units or sub-components. Unless at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • in practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components in the device according to the embodiments of the present invention.
  • the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Abstract

An ultra-thin three-dimensional capturing module for a mobile terminal comprises a data interface (1), a motion drive device (2), a motion device (3), and an image capturing device (4). The image capturing device (4) is disposed on the motion device (3) and moves relative to the mobile terminal during image capture. The motion drive device (2) is connected to the motion device (3) and is electrically connected to the mobile terminal via the data interface (1). The image capturing device (4) is also electrically connected to the mobile terminal via the data interface (1). The optical axis of the image capturing device (4) and the motion plane of the image capturing device (4) form an included angle γ. Moving the image capturing device (4) reduces the number of cameras required. The module can be externally connected to the mobile terminal, making it easy to add a new 3D capturing function to existing mobile phones; because the external connection requires no modification of existing phones, the module has general applicability and low cost. The entire apparatus is portable and easy to use outdoors.

Description

Ultra-thin three-dimensional acquisition module for a mobile terminal

Technical Field

The present invention relates to the technical field of object acquisition, and in particular to the field of using a camera in a mobile terminal to perform three-dimensional acquisition of a target.

Background

Currently, common 3D acquisition methods include the structured light method and the laser scanning method, but these methods all require a light source and a beam shaping system, which are costly, consume considerable power, and occupy significant space.

At present, mobile phones usually have 1-3 cameras in order to achieve special shooting effects such as background blur. However, there is no camera system that can be used for 3D acquisition on mobile phones. If only current camera systems were used, the limited shooting angles would make 3D stitching difficult, and no 3D image could be obtained. Increasing the shooting angles and the redundancy of the captured images requires multiple cameras. For example, the Digital Emily project at the University of Southern California uses a spherical frame on which hundreds of cameras are fixed at different positions and angles. Such conventional image-based 3D acquisition systems are difficult to apply to small mobile terminal devices such as mobile phones.

Some approaches move the mobile phone itself so that its camera captures images of the target from multiple angles, which are then stitched together. However, such movement either requires mounting the phone on an additional track, or is trackless free movement. The former limits the usage scenarios, while the latter degrades acquisition quality.

Some mobile phones also carry rotatable cameras, usually driven manually or electrically, but their purpose is to take pictures at a given angle, not to scan, let alone to synthesize a 3D model.

Moreover, current 3D acquisition with mobile phones is usually limited to human faces. With the spread of mobile phone applications, there is still no corresponding solution for the 3D acquisition of other objects, especially distant ones. Current approaches usually require the phone, or its camera, to rotate around the target (with or without a track), which obviously does not work for distant objects. Rotating parts arranged inside the housing also increase the volume of the module, which works against miniaturization. Furthermore, in some applications a 360° three-dimensional model of the target is not needed, only a model of the region within the line of sight, for example a model of the front and part of the sides of a park landscape. There is no suitable solution to this problem.

In the prior art, it has also been proposed to constrain the camera positions using an empirical formula involving the rotation angle, the target size and the object distance, so as to balance synthesis speed and quality. In practice, however, it was found that unless a precise angle-measuring device is available, users are not sensitive to angles and cannot determine them accurately. The target size is also hard to determine accurately, especially in applications where targets are replaced frequently, so that every measurement brings a large amount of extra work, and professional equipment is needed to measure irregular targets accurately. For a building, for example, it is often not easy to know its length without professional equipment. Measurement errors lead to errors in the camera position settings, which in turn affect acquisition and synthesis speed and quality.

In the prior art, mobile terminals usually have cameras, but these cameras do not move during shooting; typically they move only to hide before shooting starts and after it ends. Since they do not move relative to the target during shooting, they can only capture 2D images, and the captured images cannot be synthesized into 3D. Therefore, there is an urgent need in this field for a high-quality, low-cost, small-volume 3D acquisition device applicable to mobile terminals.
Summary of the Invention

In view of the above problems, the present invention is proposed to provide an ultra-thin three-dimensional acquisition module for mobile terminals that overcomes the above problems or at least partially solves them.

The present invention provides an ultra-thin three-dimensional acquisition module for a mobile terminal, comprising a data interface, a motion drive device, a motion device and an image acquisition device;

wherein the image acquisition device is arranged on the motion device and moves relative to the mobile terminal during image acquisition;

the motion drive device is connected to the motion device;

the motion drive device is electrically connected to the mobile terminal through the data interface;

the image acquisition device is electrically connected to the mobile terminal through the data interface;

the optical axis of the image acquisition device and the motion plane of the image acquisition device form an included angle γ.
Optionally, γ=90°, or 0<γ<90°, or 90°<γ<180°.

Optionally, when the image acquisition device is at different positions, its optical axis converges toward, or diverges from, the perpendicular to the motion plane of the image acquisition device.

Optionally, the module and the mobile terminal are independent of each other, or the module is embedded in the mobile terminal.

Optionally, there are multiple image acquisition devices.

Optionally, the image acquisition device includes a visible light image acquisition device and/or an infrared image acquisition device.

Optionally, the image acquisition device extends out of the module housing.

Optionally, the region in which the image acquisition device moves includes a light-transmitting housing portion.

Optionally, the motion device is a turntable, a rotary table, a curved guide rail, or a linear guide rail.
Optionally, the acquisition positions of the image acquisition device are such that two adjacent acquisition positions satisfy the following conditions:

μ = (L·f)/(M·d)

μ < 0.482

where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element (CCD) of the image acquisition device; M is the distance from the photosensitive element of the image acquisition device to the target surface along the optical axis; and μ is an empirical coefficient.
The present invention also provides a three-dimensional acquisition method for a mobile terminal, in which the image acquisition device is arranged in the mobile terminal;

the image acquisition device moves relative to the mobile terminal during acquisition, thereby capturing images of the target at different positions;

the optical axis of the image acquisition device and the motion plane of the image acquisition device form an included angle γ, with 0<γ<180°.

Invention points and technical effects

1. For the first time, a device structure is proposed that applies the principle of image stitching for 3D acquisition in a mobile terminal.

2. Moving the image acquisition device reduces the number of cameras required.

3. The module can be externally connected to a mobile terminal, making it convenient to add a new 3D acquisition function to existing mobile phones.

4. The whole device is portable, which is convenient for outdoor use.

5. With an external connection, existing mobile phones need not be modified, which makes the solution more versatile and lowers its cost.

6. Acquisition is performed with the optical axis at a fixed angle to the plane of motion, which makes the module smaller and better suited to distant targets.

7. The camera positions are optimized, improving both detection accuracy and speed. When optimizing the positions, neither the angle nor the target size needs to be measured, so the applicability is stronger.
Description of the Drawings

By reading the detailed description of the preferred embodiments below, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference symbols denote the same components. In the drawings:

Fig. 1 is a schematic structural diagram of a first embodiment of the three-dimensional acquisition module according to an embodiment of the present invention;

Fig. 2 is a schematic structural diagram of a second embodiment of the three-dimensional acquisition module;

Fig. 3 is a schematic structural diagram of a third embodiment of the three-dimensional acquisition module;

Fig. 4 is a schematic structural diagram of a fourth embodiment of the three-dimensional acquisition module;

Fig. 5 is a schematic structural diagram of a fifth embodiment of the three-dimensional acquisition module;

Fig. 6 is a schematic structural diagram of a sixth embodiment of the three-dimensional acquisition module;

Fig. 7 is a schematic structural diagram of a seventh embodiment of the three-dimensional acquisition module;

1 data interface, 2 motion drive device, 3 motion device, 4 image acquisition device, 5 angle adjustment device.
Detailed Description

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth herein. On the contrary, these embodiments are provided to enable a more thorough understanding of the present disclosure and to convey its scope fully to those skilled in the art.

Mobile terminal module structure — translation/rotation type

To solve the above technical problems, an embodiment of the present invention provides a three-dimensional acquisition module for a mobile terminal. As shown in Figs. 1-7, it specifically includes: a data interface 1, a motion drive device 2, a motion device 3, and an image acquisition device 4.

The image acquisition device 4 is arranged on the motion device 3. The motion device 3 can be a sliding table on a circular track: the image acquisition device 4 is installed on the sliding table, or the housing of the image acquisition device 4 itself serves as the sliding table installed directly on the guide rail, or the housing of the image acquisition device 4 and the module housing form a sliding fit with each other, so that the image acquisition device 4 translates along the guide rail. The motion drive device 2 is connected to the motion device 3 and can drive the sliding table, or directly drive the housing of the image acquisition device 4. For a lead screw or a toothed (rack-engaged) guide rail, the corresponding structure can likewise be driven so that the image acquisition device 4 translates. In other words, the image acquisition device 4 is not moved manually; it is driven according to the purpose of the acquisition, and its acquisition positions must meet the requirements of the empirical formula (detailed below), which ensures the accuracy of the collected 3D information. Relying only on manual movement by the user would make the collected image information uneven and incomplete, and could even make it impossible to match and stitch the images into a 3D model. Nor does the module rely on moving the entire phone to acquire images, because such movement either requires mounting the phone on an additional track or is trackless free movement. The former limits the usage scenarios, while the latter degrades acquisition quality.

The guide rail may be curved, for example an arc, as shown in Figs. 4 and 5, so that the trajectory of the image acquisition device 4 is arc-shaped. The direction of the optical axis of the image acquisition device 4 nevertheless remains unchanged, i.e. the guide rail is arranged parallel to the light-emitting surface, so that the plane of the acquisition motion is approximately parallel to the target surface. For example, when using a mobile phone to capture a 3D model of a building across the street, the plane of motion of the image acquisition device 4 is parallel to the height direction of the building, not parallel to its cross-section.

The guide rail may instead be linear, as shown in Fig. 6, so that the trajectory of the image acquisition device 4 is a straight line, thereby scanning the target. The direction of the optical axis of the image acquisition device 4 again remains unchanged; only its position translates. In particular, there may be two such guide rails, each provided with an image acquisition device 4 that translates along its own rail.

There may be multiple image acquisition devices 4, each moving along a single guide rail with a trajectory similar to the above. For example, two image acquisition devices 4 can move along upper and lower guide rails respectively; this expands the acquisition range and also captures more pictures per unit time, improving efficiency. For special needs, the two image acquisition devices 4 can be cameras of different wavebands, for example an infrared band and a visible light band. It is also possible to run multiple image acquisition devices 4 on one rail, for example two devices side by side on a single rail, which likewise improves efficiency.

In one case, the image acquisition device 4 is exposed outside the housing of the acquisition module, i.e. the module housing has a corresponding groove from which the image acquisition device 4 protrudes, as shown in Figs. 2 and 3. The design can go further: the image acquisition device 4 can extend out of the groove when needed and retract into the housing when not working. The groove can have a cover that closes it when the image acquisition device 4 is retracted, keeping out dust.

In another case, as shown in Fig. 1, along the trajectory of the image acquisition device 4, the part of the module housing facing the image acquisition device is made of a transparent material. In this way, the image acquisition device 4 can perform motion acquisition directly without extending out of the housing, which helps with waterproofing and dustproofing.

Since the motion drive device 2 is connected to the motion device 3 and drives the image acquisition device 4 to translate according to the predetermined requirements of 3D acquisition, the motion drive device 2 needs the data interface 1 to receive the corresponding motion instructions, i.e. the motion drive device 2 is electrically connected to the mobile terminal through the data interface 1.

The module also includes a processor, also called a processing unit, which synthesizes a 3D model of the target from the multiple images collected by the image acquisition device according to a 3D synthesis algorithm, thereby obtaining the 3D information of the target.
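The patent leaves the 3D synthesis algorithm itself unspecified. As a minimal sketch of the underlying principle only, the following assumes the simplest case — two acquisition positions with identical orientation, matching the constant optical-axis direction described above — and recovers 3D points from matched pixel pairs by linear (DLT) triangulation. All numeric values (intrinsics, baseline, points) are hypothetical.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N,3) through a pinhole camera [R|t] with intrinsics K."""
    Xc = R @ X.T + t.reshape(3, 1)      # world -> camera coordinates
    x = K @ Xc                          # pinhole projection
    return (x[:2] / x[2]).T             # (N,2) pixel coordinates

def triangulate(K, R1, t1, R2, t2, x1, x2):
    """Linear (DLT) triangulation of matched pixel pairs x1, x2."""
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])
    out = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        A = np.vstack([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                       u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                      # null vector of A, homogeneous point
        out.append(X[:3] / X[3])
    return np.array(out)

# Two acquisition positions: identical orientation (constant optical axis),
# separated by a pure translation along the motion plane, as in the patent.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.eye(3)
t1 = np.zeros(3)
t2 = np.array([-0.05, 0.0, 0.0])        # baseline L = 5 cm (hypothetical)

X_true = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 2.5], [0.0, -0.2, 3.0]])
x1 = project(K, R, t1, X_true)
x2 = project(K, R, t2, X_true)
X_rec = triangulate(K, R, t1, R, t2, x1, x2)
print(np.allclose(X_rec, X_true, atol=1e-6))  # True
```

A full pipeline would additionally find the pixel correspondences automatically (feature matching) and fuse many views into a dense model; this sketch only shows why images taken from different positions, unlike images from a static camera, carry recoverable depth.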
Mobile terminal module structure — fixed type

The module includes a housing in which multiple image acquisition devices 4 are distributed; the spacing between the image acquisition devices 4 is limited by the empirical conditions below. The lens of each image acquisition device is either exposed outside the module housing or located inside it.

When the lens of the image acquisition device 4 is exposed outside the module housing, a corresponding protection mechanism, for example a transparent cover, protects the lens. When the lens of the image acquisition device 4 is inside the module housing, the part of the housing facing the image acquisition device 4 is made of transparent material.

Direction of the optical axis of the image acquisition device

The optical axis of the image acquisition device 4 and the housing of the mobile terminal form an included angle γ. Normally, γ=90°. In the moving-device solution above, the angle between the optical axis and the plane of motion equals the angle with the terminal housing, and is usually also 90°. In the fixed solution, the angle between the optical axis of the image acquisition device 4 and the mobile terminal housing is likewise usually 90°.

In some cases, γ<90°, i.e. when the image acquisition device 4 is at different positions, the optical axes converge relative to the perpendicular to the plane of motion or the housing.

In some cases, γ>90°, i.e. when the image acquisition device 4 is at different positions, the optical axes diverge relative to the perpendicular to the plane of motion or the housing.

Either way, 0<γ<180°.
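The converge/diverge distinction above can be checked numerically. The sketch below assumes a turntable-style motion device with motion plane z=0 and tilts the optical axis by γ in the radial plane — an illustrative parameterization, not taken from the patent: for γ<90° the axes at two positions approach each other along the viewing direction, for γ=90° they are parallel, and for γ>90° they separate.

```python
import numpy as np

def axis_direction(theta, gamma_deg):
    """Optical-axis direction for a camera at angle theta on a turntable
    whose motion plane is z=0; gamma is measured from that plane."""
    g = np.radians(gamma_deg)
    u = np.array([np.cos(theta), np.sin(theta), 0.0])  # radial outward unit vector
    n = np.array([0.0, 0.0, 1.0])                      # motion-plane normal
    return np.sin(g) * n - np.cos(g) * u               # gamma=90 deg -> exactly n

def behaviour(gamma_deg, r=0.03):
    """'converge', 'parallel' or 'diverge' for two positions on the rail."""
    c1, c2 = r * np.array([1.0, 0, 0]), r * np.array([0, 1.0, 0])
    d1 = axis_direction(0.0, gamma_deg)
    d2 = axis_direction(np.pi / 2, gamma_deg)
    # Sign of d/ds |(c1 + s*d1) - (c2 + s*d2)|^2 at s=0 (up to a factor of 2):
    # negative means the rays get closer as they leave the cameras.
    slope = np.dot(c1 - c2, d1 - d2)
    if abs(slope) < 1e-12:
        return "parallel"
    return "converge" if slope < 0 else "diverge"

print(behaviour(60))   # gamma < 90 deg
print(behaviour(90))   # gamma = 90 deg
print(behaviour(120))  # gamma > 90 deg
```

The same test applies to the curved-rail embodiments: only the sign of the ray-separation slope matters, not the specific radius or positions chosen here.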
The case γ≠90° can be realized in the following ways, as shown in Fig. 7: (1) the image acquisition device 4 is fixed on the motion device 3 at the angle γ; (2) the image acquisition device 4 is mounted on an angle adjustment device 5, which changes γ according to the motion position; (3) the image acquisition device 4 is fixed on the module or on the housing. The angle adjustment device 5 may be a turntable.

Connection between the module and the mobile terminal

In one embodiment the entire module is external. In this case the data interface can be an interface matching a Type-C interface, a Micro-USB interface, a USB interface, a Lightning interface, a WiFi interface, a Bluetooth interface, or a cellular network interface, so that the module connects to the mobile terminal by wire or wirelessly.

In another embodiment the entire module is built in. In this case the data interface 1 can be directly connected internally to the processor of the mobile terminal.

In another embodiment, the structure of the module is part of the mobile phone itself; that is, although the present invention is described in terms of a module, these structures in fact belong to the phone and are completed when the phone is produced and manufactured.

To reduce the volume and power consumption of the entire module, the image acquisition device 4 is electrically connected to the mobile terminal through the data interface 1, so that the collected images are transmitted to the mobile terminal for storage and subsequent 3D processing.

Whether external or built in, there is a mechanical connection between the module and the mobile terminal. In the external type, for example, the module is inserted into the earphone jack of the mobile terminal through an earphone plug. Since the module and the mobile terminal must transmit control signals and image data to each other, there is also an electrical connection, in particular a signal connection, in addition to the mechanical one.

In the external type, the mechanical connection and the electrical connection can be realized by the same structure. The mobile phone module is connected to the phone through a mechanical/electrical connector, and the module is relatively rigidly connected to the phone so that the two become one body. For example, the earphone plug described above, inserted into the earphone jack of the mobile terminal, realizes the mechanical and electrical connections at the same time: it rigidly fixes the module and the phone to each other and transmits signals between them. Additional mechanical means can also be used, for example extra plugs and sockets, or protrusions and slots, arranged between the module and the phone to achieve a rigid fixed connection. The phone's existing sockets can likewise be used: the module can carry an earphone plug, a Micro-USB plug, a Type-C plug or a Lightning plug inserted into the corresponding socket of the phone, with this insertion used only for the mechanical connection while the signals are connected by other means. Through such a mechanical connection the module and the phone become one body; when the user holds the phone still, the module stays fixed relative to the target, and pictures from different angles are taken by moving the image acquisition device 4.

To facilitate the translation or rotation of the image acquisition device 4, the motion device 3 can include a magnetic levitation device, which makes the movement smoother and improves the user experience.

The image acquisition device 4 moves inside the housing of the module, and the part of the housing covering its range of motion is made of a transparent material, for example a transparent resin.

The image acquisition device 4 can be a visible light camera/camera module or an infrared camera/camera module. When acquiring at night, the limited light prevents a visible light camera from capturing complete images. An infrared camera can then be used, and in subsequent processing the images collected by the visible light camera and the infrared camera are matched and fused with each other to realize 3D information acquisition. Of course, it is also possible to rely on only one of the two cameras, and there can be more than one image acquisition device 4.

In a solution with an infrared camera, the infrared camera and the visible light camera can sit side by side on the track, or two tracks can be provided, one for the infrared camera and one for the visible light camera. A single camera with a wider spectral sensing range, covering both visible light and infrared, can also be used.

The housing of the module has a light source, such as an LED lamp bead; a smart light source can also be provided, for example one whose brightness and on/off state can be selected as needed. The light source illuminates the target so that it is not too dark, which would degrade the acquisition quality and accuracy. At the same time the light source must not be too bright, which would cause loss of the target's texture information. The mobile terminal's own light source can also be used, as long as it illuminates the part to be scanned.

To improve the user experience, the images collected by the module can be transmitted to the display module of the mobile terminal for display, so that users can watch their own acquisition process. In particular, if the acquisition module is too far from or too close to the target, this can be shown on the display module and announced through the voice module. It is understood that the collected images need not be displayed on the mobile terminal; the too-far or too-close information can instead be announced by voice to prompt the user to move. The connection between the module and the voice or display module of the mobile terminal is also realized through the data interface 1 of the module.
Acquisition positions of the image acquisition device
During 3D acquisition, the direction of the optical axis of the image acquisition device does not change relative to the target across acquisition positions; it is usually roughly perpendicular to the target surface (though it may also be at an angle). The positions of two adjacent image acquisition devices, or two adjacent acquisition positions of one image acquisition device, then satisfy the following condition:
Figure PCTCN2020134753-appb-000002
μ < 0.482
where L is the straight-line distance between the optical centers of the image acquisition device at the two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element (CCD) of the image acquisition device; M is the distance from the photosensitive element to the target surface along the optical axis; and μ is an empirical coefficient.
When the two positions lie along the length direction of the photosensitive element of the image acquisition device, d is the length of the rectangle; when they lie along the width direction, d is the width of the rectangle.
M is taken as the distance from the photosensitive element to the target surface along the optical axis with the image acquisition device at either of the two positions.
As stated above, L should be the straight-line distance between the optical centers of the two image acquisition devices. Because the optical-center position is not always easy to determine, it may in some cases be replaced by the center of the photosensitive element, the geometric center of the image acquisition device, the center of the axis connecting the device to the pan-tilt head (or platform or bracket), or the center of the proximal or distal lens surface. Experiments show that the resulting error stays within an acceptable range, so these substitutions also fall within the protection scope of the present invention.
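As a worked illustration of the condition above: the formula itself appears only as an image in the source, so the sketch below assumes it takes the dimensionless form μ = L·f / (d·M), i.e., the step L between adjacent optical centers divided by the field-of-view extent d·M/f covered by the sensor at object distance M, and checks the result against the thresholds reported in this section. All numerical parameter values are illustrative, not from the source.

```python
def overlap_coefficient(L, f, d, M):
    """Empirical coefficient mu for two adjacent acquisition positions.

    ASSUMED form (the source shows the formula only as an image):
    mu = L * f / (d * M), the ratio of the acquisition step L to the
    field-of-view extent d*M/f at object distance M. All quantities
    must share one length unit (e.g. millimetres).
    """
    return (L * f) / (d * M)


def position_quality(mu):
    """Classify mu against the thresholds reported in the experiments."""
    if mu >= 0.482:
        return "synthesis fails"
    if mu >= 0.403:
        return "partial synthesis"
    if mu >= 0.326:
        return "balanced effect/time"
    return "best quality (slower)"


# Illustrative numbers: phone-camera focal length f = 4.25 mm, sensor
# length d = 6.4 mm, target at M = 300 mm, step L = 100 mm between shots.
mu = overlap_coefficient(L=100.0, f=4.25, d=6.4, M=300.0)
```

With these illustrative values μ ≈ 0.22, comfortably below even the strictest threshold μ < 0.326, so the positions would support the highest-quality synthesis at the cost of more images and longer synthesis time.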
Experiments were carried out with a commercially available mobile-phone camera module mounted in the device of the present invention, with the following results.
Figure PCTCN2020134753-appb-000003
From the above experimental results and extensive experimental experience it can be concluded that μ should satisfy μ < 0.482. At that value a partial 3D model can already be synthesized; although some parts cannot be synthesized automatically, this is acceptable where requirements are modest, and the missing parts can be filled in manually or with a different algorithm. When μ < 0.403, the best balance between synthesis quality and synthesis time is obtained; for a better synthesis result, μ < 0.326 may be chosen, at which point synthesis time increases but quality improves. When μ > 0.485, synthesis is no longer possible. Note, however, that these ranges are merely preferred embodiments and do not limit the protection scope.
The above data were obtained solely from experiments verifying the conditions of the formula and do not limit the invention; even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust equipment parameters and procedural details as needed and will obtain other data that likewise satisfy the formula.
Adjacent acquisition positions, as used in the present invention, are two neighboring positions on the motion track at which acquisition occurs while the image acquisition device moves relative to the target. This is straightforward when the image acquisition device itself moves. When movement of the target causes the relative motion, the relativity of motion should be applied: treat the target as stationary and the image acquisition device as moving, and then identify the two adjacent positions at which acquisition occurs on the converted motion track.
3D synthesis method
3D synthesis from the collected images may use an existing algorithm or the optimized algorithm proposed by the present invention, which comprises the following main steps:
Step 1: apply image enhancement to all input photographs. The following filter is used to increase the contrast of the original photographs while suppressing noise.
Figure PCTCN2020134753-appb-000004
where g(x, y) is the gray value of the original image at (x, y); f(x, y) is the gray value at that point after Wallis filtering; m_g is the local gray-level mean of the original image; s_g is the local gray-level standard deviation of the original image; m_f is the target local gray-level mean of the transformed image; and s_f is the target local gray-level standard deviation of the transformed image. c ∈ (0, 1) is the expansion constant for the image variance, and b ∈ (0, 1) is the image brightness coefficient.
This filter greatly enhances image texture patterns at different scales, increasing the number and accuracy of feature points extracted from the images and improving the reliability and accuracy of photo feature matching.
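A minimal sketch of this enhancement step, assuming the Wallis filter takes its common textbook form f(x, y) = (g(x, y) − m_g)·c·s_f / (c·s_g + (1 − c)·s_f) + b·m_f + (1 − b)·m_g, consistent with the symbols defined above (the equation itself appears only as an image in the source). The target values m_f and s_f, the constants c and b, and the window size are illustrative assumptions.

```python
import numpy as np


def wallis_filter(g, m_f=127.0, s_f=60.0, c=0.8, b=0.6, win=3):
    """Wallis filtering with local statistics over a (win x win) window.

    ASSUMED form (source shows the formula only as an image):
    f = (g - m_g) * c*s_f / (c*s_g + (1-c)*s_f) + b*m_f + (1-b)*m_g
    where m_g, s_g are the local mean and standard deviation of g.
    Suitable for small images; parameter values are illustrative.
    """
    g = np.asarray(g, dtype=float)
    pad = win // 2
    gp = np.pad(g, pad, mode="edge")  # replicate edges so output matches input size
    # All (win x win) neighbourhoods, one per output pixel.
    windows = np.lib.stride_tricks.sliding_window_view(gp, (win, win))
    m_g = windows.mean(axis=(-1, -2))
    s_g = windows.std(axis=(-1, -2))
    # Gain term: stretches local contrast toward the target deviation s_f.
    r1 = c * s_f / (c * s_g + (1.0 - c) * s_f)
    return (g - m_g) * r1 + b * m_f + (1.0 - b) * m_g
```

On a perfectly flat patch (s_g = 0, g = m_g) the output collapses to the brightness blend b·m_f + (1 − b)·m_g, which is a quick sanity check on the formula.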
Step 2: extract feature points from all input photographs and match them to obtain sparse feature points. The SURF operator is used for feature extraction and matching. SURF feature matching comprises three stages: feature-point detection, feature-point description, and feature-point matching. The method detects feature points with the Hessian matrix, replaces second-order Gaussian filtering with box filters, accelerates convolution with integral images to increase computation speed, and reduces the dimensionality of the local image feature descriptor to speed up matching. The main steps are: (1) construct the Hessian matrix and generate all interest points for feature extraction; the purpose of the Hessian matrix is to produce stable image edge points (abrupt-change points); (2) locate feature points in scale space: each pixel processed by the Hessian matrix is compared with its 26 neighbors in the 2D image space and scale space to preliminarily locate key points, after which weak-energy and mislocated key points are filtered out, leaving the final stable feature points; (3) determine the dominant orientation of each feature point from the Haar wavelet responses in its circular neighborhood: the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are summed, the sector is rotated in 0.2-radian increments and the responses re-summed, and the direction of the sector with the largest sum is taken as the dominant orientation; (4) generate a 64-dimensional descriptor: a 4×4 grid of rectangular sub-regions is taken around the feature point, oriented along the dominant orientation; in each sub-region the horizontal and vertical Haar wavelet responses of 25 pixels (horizontal and vertical being relative to the dominant orientation) are accumulated into four values, the sums of the horizontal responses, the vertical responses, and their respective absolute values, and these four values per sub-region give 4×4×4 = 64 dimensions as the SURF descriptor; (5) match feature points by computing the Euclidean distance between two descriptors: the shorter the Euclidean distance, the better the match.
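The final matching step (5) can be sketched as a nearest-neighbour search over Euclidean distances between 64-dimensional descriptors. The ratio test used below to reject ambiguous matches is a commonly used safeguard added for illustration; the source text itself only specifies Euclidean-distance matching.

```python
import numpy as np


def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its Euclidean nearest
    neighbour in desc_b (arrays of shape (n, 64) for SURF).

    A match is kept only if the best distance is clearly smaller than
    the second-best (ratio test, an added safeguard not in the source).
    Requires desc_b to contain at least two descriptors.
    """
    desc_a = np.asarray(desc_a, dtype=float)
    desc_b = np.asarray(desc_b, dtype=float)
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        if dists[order[0]] <= ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The brute-force loop is O(n·m) and fine for a sketch; production systems typically use a k-d tree or approximate nearest-neighbour index for the same step.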
Step 3: input the matched feature-point coordinates and, using bundle adjustment, solve for the sparse 3D point cloud of the face and the position and attitude of the camera, obtaining the model coordinates of the sparse face point cloud and the camera positions; then, with the sparse feature points as initial values, perform dense multi-view matching to obtain dense point-cloud data. This process has four main steps: stereo-pair selection, depth-map computation, depth-map refinement, and depth-map fusion. For each image in the input set, a reference image is selected to form a stereo pair used to compute a depth map. This yields rough depth maps for all images, which may contain noise and errors; each image's depth map is refined by consistency checks against the depth maps of its neighboring views. Finally, the depth maps are fused into a 3D point cloud of the entire scene.
Step 4: reconstruct the face surface from the dense point cloud. This comprises defining an octree, setting up the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface. The integral relation between the sampling points and the indicator function is derived from the gradient relation; the vector field of the point cloud is obtained from this integral relation; and an approximation of the indicator function's gradient field is computed, forming the Poisson equation. An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with the marching cubes algorithm, and a model of the measured object is reconstructed from the measured point cloud.
Step 5: fully automatic texture mapping of the face model. After the surface model is built, texture mapping is performed. The main stages are: (1) acquire texture data by reconstructing the target's surface triangle mesh from the images; (2) analyze the visibility of the reconstructed triangles, using the image calibration information to compute each triangle's set of visible images and its optimal reference image; (3) cluster triangles into texture patches according to their visible-image sets, optimal reference images, and neighborhood topology, generating a number of reference-image texture patches; (4) automatically sort the texture patches by size and pack them into the texture image with the smallest bounding area, obtaining the texture-mapping coordinates of each triangle.
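A simplified stand-in for the optimal-reference-image choice in stage (2): pick, for each triangle, the image whose viewing direction is most head-on to the triangle (largest |cos| between the triangle normal and the view direction). A real implementation also uses the calibration information to test occlusion; that is omitted here, and the function name is illustrative.

```python
import numpy as np


def best_reference_image(tri_normal, cam_dirs):
    """Return the index of the camera whose viewing direction is most
    nearly perpendicular to the triangle's surface (i.e. most parallel
    to its normal). Occlusion testing is deliberately omitted; this is
    only the geometric part of the visibility analysis.
    """
    n = np.asarray(tri_normal, dtype=float)
    n = n / np.linalg.norm(n)
    scores = []
    for d in cam_dirs:
        d = np.asarray(d, dtype=float)
        # |cos(angle)| between normal and view direction: 1 = head-on.
        scores.append(abs(np.dot(n, d / np.linalg.norm(d))))
    return int(np.argmax(scores))
```
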
It should be noted that the above algorithm is an optimized algorithm of the present invention; it works in concert with the image acquisition conditions, and its use balances synthesis time and quality, which is one of the inventive points of the present invention. A conventional prior-art 3D synthesis algorithm may of course also be used, though synthesis quality and speed will be affected to some extent.
The terms target object, target, and object above all denote an object whose 3D information is to be acquired. It may be a single physical object or an assembly of several objects, for example a vehicle or a large sculpture. The 3D information of the target includes a 3D image, a 3D point cloud, a 3D mesh, local 3D features, 3D dimensions, and any other parameter carrying the target's 3D characteristics. Three-dimensional, in the present invention, means having information in the three directions X, Y, and Z, and in particular depth information; this is essentially different from having only 2D planar information, and also from usages labeled three-dimensional, panoramic, or holographic that in fact contain only 2D information and, in particular, no depth information.
The acquisition area in the present invention is the range that an image acquisition device (for example a camera) can capture. The image acquisition device may be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any device with an image acquisition function.
The 3D information of multiple regions of the target obtained in the above embodiments can be used for comparison, for example for identity recognition. First, the solution of the present invention is used to acquire 3D information of the face and iris, which is stored on a server as standard data. In use, for example when identity authentication is required for payment or door opening, the 3D acquisition device captures the 3D information of the face and iris again and compares it with the standard data; if the comparison succeeds, the next action is permitted. Such comparison can also be used to authenticate fixed assets such as antiques and artworks: 3D information of multiple regions is first acquired as standard data, and when authentication is needed the same regions are acquired again and compared with the standard data to determine authenticity. The 3D information of multiple regions of the target can also be used to design, produce, and manufacture accessories for the target; for example, 3D data of the human oral cavity and teeth allow better-fitting dentures to be designed and made. The 3D information of the target can likewise be used to measure its geometric dimensions and outline.
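The comparison against stored standard data can be sketched as a point-to-point RMS distance check. The source does not specify a matching metric, so the metric, the tolerance, and the assumption that both point sets are pre-aligned and in point-wise correspondence are all illustrative simplifications (real systems would first register the clouds, e.g. with ICP).

```python
import numpy as np


def models_match(captured, standard, tol=1.0):
    """Compare a freshly captured 3D point set against stored standard
    data by RMS point-to-point distance.

    ASSUMPTIONS (not from the source): both arrays have shape (n, 3),
    are expressed in the same units, and are already aligned with
    corresponding rows. `tol` is an illustrative threshold.
    Returns (match_ok, rms_distance).
    """
    captured = np.asarray(captured, dtype=float)
    standard = np.asarray(standard, dtype=float)
    rms = np.sqrt(np.mean(np.sum((captured - standard) ** 2, axis=1)))
    return bool(rms <= tol), float(rms)
```
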
Numerous specific details are set forth in the description provided here. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that in the above description of exemplary embodiments, to streamline the disclosure and aid understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore hereby expressly incorporated into that description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. Modules, units, or components of the embodiments may be combined into one module, unit, or component, and may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that although some embodiments herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the device of the present invention. The invention may also be implemented as a device or apparatus program (for example, a computer program or computer program product) for carrying out part or all of the methods described here. Such a program implementing the invention may be stored on a computer-readable medium or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, and the like does not indicate any order; these words may be interpreted as names.
Those skilled in the art will by now appreciate that although several exemplary embodiments of the invention have been shown and described in detail here, many other variations or modifications conforming to the principles of the invention can be determined or derived directly from this disclosure without departing from its spirit and scope. The scope of the invention should therefore be understood and deemed to cover all such other variations or modifications.

Claims (11)

  1. An ultra-thin three-dimensional acquisition module for a mobile terminal, characterized by comprising a data interface, a motion drive device, a motion device, and an image acquisition device;
    wherein the image acquisition device is arranged on the motion device and moves relative to the mobile terminal during image acquisition;
    the motion drive device is connected to the motion device;
    the motion drive device is electrically connected to the mobile terminal through the data interface;
    the image acquisition device is electrically connected to the mobile terminal through the data interface;
    and the optical axis of the image acquisition device forms an included angle γ with the motion plane of the image acquisition device.
  2. The module of claim 1, wherein γ = 90°, or 0 < γ < 90°, or 90° < γ < 180°.
  3. The module of claim 1, wherein, as the image acquisition device occupies different positions, its optical axes converge toward or diverge from the perpendicular to the motion plane of the image acquisition device.
  4. The module of claim 1, wherein the module and the mobile terminal are independent of each other, or the module is embedded in the mobile terminal.
  5. The module of claim 1, wherein there are a plurality of image acquisition devices.
  6. The module of claim 1, wherein the image acquisition device comprises a visible-light image acquisition device and/or an infrared image acquisition device.
  7. The module of claim 1, wherein the image acquisition device protrudes from the module housing.
  8. The module of claim 1, wherein the region in which the image acquisition device moves further comprises a light-transmitting housing portion.
  9. The module of claim 1, wherein the motion device is a rotating disc, a rotating table, a curved guide rail, or a linear guide rail.
  10. The module of claim 1, wherein two adjacent acquisition positions of the image acquisition device satisfy the following condition:
    Figure PCTCN2020134753-appb-100001
    μ < 0.482 or μ < 0.326
    where L is the straight-line distance between the optical centers of the image acquisition device at the two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element of the image acquisition device; M is the distance from the photosensitive element to the target surface along the optical axis; and μ is an empirical coefficient.
  11. A three-dimensional acquisition method for a mobile terminal, characterized in that an image acquisition device is arranged in the mobile terminal;
    the image acquisition device moves relative to the mobile terminal during acquisition, thereby capturing images of the target at different positions;
    and the optical axis of the image acquisition device forms an included angle γ with the motion plane of the image acquisition device, 0 < γ < 180°.
PCT/CN2020/134753 2019-12-12 2020-12-09 Ultra-thin three-dimensional capturing module for mobile terminal WO2021115296A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911277069.8 2019-12-12
CN201911277069.8A CN111060009A (en) 2019-12-12 2019-12-12 Ultra-thin three-dimensional acquisition module for mobile terminal

Publications (1)

Publication Number Publication Date
WO2021115296A1 true WO2021115296A1 (en) 2021-06-17

Family

ID=70300929

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134753 WO2021115296A1 (en) 2019-12-12 2020-12-09 Ultra-thin three-dimensional capturing module for mobile terminal

Country Status (2)

Country Link
CN (2) CN111060009A (en)
WO (1) WO2021115296A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111060009A (en) * 2019-12-12 2020-04-24 天目爱视(北京)科技有限公司 Ultra-thin three-dimensional acquisition module for mobile terminal

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050014527A1 (en) * 2003-07-18 2005-01-20 Agere Systems Incorporated Retractable rotatable camera module for mobile communication device and method of operation thereof
CN101840146A (en) * 2010-04-20 2010-09-22 夏佳梁 Method and device for shooting stereo images by automatically correcting parallax error
CN106998411A (en) * 2016-01-26 2017-08-01 西安中兴新软件有限责任公司 A kind of terminal and control method
CN108492254A (en) * 2018-03-27 2018-09-04 西安优艾智合机器人科技有限公司 Image capturing system and method
CN207968575U (en) * 2018-02-09 2018-10-12 广东欧珀移动通信有限公司 Mobile terminal
CN108696734A (en) * 2017-03-01 2018-10-23 中兴通讯股份有限公司 A kind of filming apparatus and its working method
CN111060009A (en) * 2019-12-12 2020-04-24 天目爱视(北京)科技有限公司 Ultra-thin three-dimensional acquisition module for mobile terminal
CN211178346U (en) * 2019-12-12 2020-08-04 天目爱视(北京)科技有限公司 Module for three-dimensional acquisition of mobile terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205921673U (en) * 2016-08-29 2017-02-01 赵罗强 Mobile terminal with stereo -photography function
CN209279885U (en) * 2018-09-05 2019-08-20 天目爱视(北京)科技有限公司 Image capture device, 3D information comparison and mating object generating means

Also Published As

Publication number Publication date
CN115112017A (en) 2022-09-27
CN111060009A (en) 2020-04-24

Similar Documents

Publication Publication Date Title
CN111060023B (en) High-precision 3D information acquisition equipment and method
CN111292364B (en) Method for rapidly matching images in three-dimensional model construction process
CN111292239B (en) Three-dimensional model splicing equipment and method
WO2021115301A1 (en) Close-range target 3d acquisition apparatus
CN112304222B (en) 3D information acquisition device with synchronously revolving background board
US10540784B2 (en) Calibrating texture cameras using features extracted from depth images
WO2021185214A1 (en) Method for long-distance calibration in 3d modeling
CN111160136B (en) Standardized 3D information acquisition and measurement method and system
WO2021115302A1 (en) 3d intelligent visual device
CN111028341B (en) Three-dimensional model generation method
CN111050154B (en) Mobile terminal with lifting type rotary 3D acquisition device
CN111340959B (en) Three-dimensional model seamless texture mapping method based on histogram matching
CN111064949B (en) Intelligent 3D acquisition module for mobile terminal
WO2021115296A1 (en) Ultra-thin three-dimensional capturing module for mobile terminal
WO2021115295A1 (en) Smart 3d acquisition module and mobile terminal having 3d acquisition apparatus
CN111208138A (en) Intelligent wood recognition device
WO2021115297A1 (en) 3d information collection apparatus and method
WO2021115298A1 (en) Glasses matching design device
CN111325780B (en) 3D model rapid construction method based on image screening
CN211178346U (en) Module for three-dimensional acquisition of mobile terminal
CN111207690B (en) Adjustable iris 3D information acquisition measuring equipment
CN211085115U (en) Standardized biological three-dimensional information acquisition device
CN210609483U (en) Module for three-dimensional acquisition of mobile terminal
CN210629707U (en) Mobile terminal with rotation type 3D acquisition module

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20898206

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 20898206

Country of ref document: EP

Kind code of ref document: A1