WO2021115297A1 - 3D information collection apparatus and method


Info

Publication number
WO2021115297A1
Authority
WO
WIPO (PCT)
Prior art keywords
image acquisition
target
image
acquisition device
synthesis
Prior art date
Application number
PCT/CN2020/134757
Other languages
French (fr)
Chinese (zh)
Inventor
左忠斌
左达宇
Original Assignee
左忠斌
Priority date
Filing date
Publication date
Priority claimed from CN201911276052.0A external-priority patent/CN110986768B/en
Priority claimed from CN201911276062.4A external-priority patent/CN111060023B/en
Application filed by 左忠斌 filed Critical 左忠斌
Publication of WO2021115297A1 publication Critical patent/WO2021115297A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures

Definitions

  • The invention relates to the technical field of shape measurement, in particular to 3D shape measurement.
  • Before a 3D model of an object can be built, its 3D information needs to be collected.
  • Commonly used methods include using machine vision to collect pictures of the object from different angles and matching these pictures to form a 3D model.
  • To collect pictures from different angles, multiple cameras can be set at different angles around the object to be measured, or pictures can be collected from different angles by rotating one or more cameras.
  • In either case, the problems of synthesis speed and synthesis accuracy are involved, and the two are, to a certain extent, in contradiction: increasing the synthesis speed lowers the final 3D synthesis accuracy, while improving the 3D synthesis accuracy requires reducing the synthesis speed and using more pictures for synthesis.
  • The present invention is therefore proposed to provide a collection device that overcomes, or at least partially solves, the above-mentioned problems.
  • In one aspect, the present invention provides a device for collecting 3D information, in which:
  • the acquisition area moving device is used to drive the acquisition area of the image acquisition device to move relative to the target;
  • the image acquisition device is used to acquire a set of images of the target object through the above-mentioned relative motion;
  • and the collection positions of the image acquisition device meet the conditions described below.
  • In another aspect, the present invention provides a 3D information collection method in which the collection positions of the image acquisition device meet the same conditions.
  • The present invention also provides a device or method for 3D information collection that has a plurality of image acquisition devices located around the target;
  • the multiple image acquisition devices acquire sets of images of the target object from different angles.
  • The image acquisition device rotates or translates relative to the target.
  • A background board is provided on the side opposite the image acquisition device.
  • The device may also include a processor configured to perform 3D synthesis based on a plurality of images in a set of images to generate a 3D model of the target object.
  • The processor is included in the device, located in an upper (host) computer, or located in a remote server.
  • The image acquisition device operates in the visible light band, the infrared band, and/or the full band.
  • The present invention also provides a 3D synthesis device or method using the above device or method.
  • The present invention also provides a 3D recognition/comparison device or method using the above device or method.
  • The present invention also provides a method or device for making an accessory using the above device or method.
  • The present invention also provides 3D information acquisition and measurement equipment and a corresponding method, including an image acquisition device, a rotating device, and a background board, wherein:
  • the rotating device is used to drive the image acquisition device to rotate and to drive the background board to rotate;
  • the background board and the image acquisition device keep their relative arrangement during the rotation, so that the background board forms the background pattern of the images captured by the image acquisition device;
  • the background board satisfies the following: when projected in the direction perpendicular to the surface being photographed, the horizontal length W1 and the vertical length W2 of the projected shape are determined by the conditions below, in which:
  • d1 is the length of the imaging element in the horizontal direction;
  • d2 is the length of the imaging element in the vertical direction;
  • T is the vertical distance from the sensor element of the image acquisition device to the background board along the optical axis;
  • f is the focal length of the image acquisition device;
  • A1 and A2 are empirical coefficients.
  • The present invention also provides a standard 3D information collection and/or measurement method and equipment.
  • When the image acquisition device collects images of a target object, two adjacent collection positions meet the conditions described below.
  • The background board and the image acquisition device are arranged at the two ends of a rotating beam, and the rotating device drives the rotating beam to rotate.
  • The rotating device is located on a fixed beam and drives the rotating beam to rotate.
  • The background board is a flat board or a curved board.
  • The main body of the background board is a solid color or carries a mark.
  • The background board is an integral board or a spliced board.
  • The present invention also provides a 3D recognition device that uses the 3D information provided by the above-mentioned device or method.
  • The present invention also provides 3D manufacturing equipment that uses the 3D information provided by the above-mentioned equipment or method.
  • FIG. 1 is a schematic diagram of a rotating structure of the collection area moving device in Embodiment 1 of the present invention;
  • FIG. 2 is a schematic diagram of another implementation in which the collection area moving device in Embodiment 1 of the present invention is a rotating structure;
  • FIG. 3 is a schematic diagram of a translational structure of the collection area moving device in Embodiment 2 of the present invention;
  • FIG. 4 is a schematic diagram of a random movement structure of the collection area moving device in Embodiment 3 of the present invention;
  • FIG. 5 is a schematic diagram of the multi-camera method in Embodiment 4 of the present invention;
  • FIG. 6 is a front view of a 3D information collection device provided by Embodiment 5 of the present invention;
  • FIG. 7 is a perspective view of the 3D information collection device provided by Embodiment 5 of the present invention;
  • FIG. 8 is another perspective view of the 3D information collection device provided by Embodiment 5 of the present invention.
  • An embodiment of the present invention provides a device for 3D information acquisition, including an image acquisition device and an acquisition area moving device.
  • The image acquisition device is used to acquire a set of images of the target through the relative movement of its acquisition area and the target; the acquisition area moving device is used to drive the acquisition area of the image acquisition device to move relative to the target.
  • The acquisition area is the effective field-of-view range of the image acquisition device.
  • Embodiment 1: the collection area moving device is a rotating structure
  • The target 1 is fixed on the stage 2, and the rotating device 3 drives the image acquisition device 4 to rotate around the target 1.
  • The rotating device 3 can drive the image acquisition device 4 to rotate around the target 1 through a rotating arm.
  • This rotation is not necessarily a complete circular motion; it may cover only a certain angle, according to the collection needs.
  • Nor does the rotation have to be circular: the motion trajectory of the image acquisition device 4 can be another curved trajectory, as long as the camera is ensured to shoot the object from different angles.
  • The rotating device 3 can also rotate the image acquisition device itself, so that the image acquisition device 4 collects images of the target object from different angles through its rotation.
  • The rotating device 3 can take various forms, such as a cantilever, a turntable, or a track, so that the image acquisition device 4 can move.
  • In some cases, the camera can instead be fixed.
  • The stage 2 carrying the target 1 then rotates, so that the direction of the target 1 facing the image acquisition device 4 changes continuously, allowing the image acquisition device 4 to collect images of the target 1 from different angles.
  • The calculation can still be performed by converting this situation into an equivalent movement of the image acquisition device 4, so that the movement conforms to the corresponding empirical formula (described in detail below).
  • That is, when the stage 2 rotates, it can be assumed that the stage 2 is stationary and the image acquisition device 4 rotates instead.
  • The equivalent rotation speed is derived, and from it the rotation speed of the stage is deduced, so as to facilitate rotation speed control and realize 3D acquisition.
  • This kind of scene is not common, however; it is more common to rotate the image acquisition device.
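The conversion just described (treating a rotating stage as a stationary stage with a moving camera) can be sketched numerically. A minimal illustration, assuming the camera sits at a fixed radius from the rotation axis; the function names are illustrative, not from the patent:

```python
import math

def equivalent_camera_displacement(radius, delta_theta):
    # Chord between two successive virtual camera positions when the
    # stage turns by delta_theta radians and the camera stands at
    # `radius` from the rotation axis.
    return 2.0 * radius * math.sin(delta_theta / 2.0)

def stage_step_for_displacement(radius, L):
    # Inverse relation: the stage rotation step (radians) that keeps
    # the equivalent camera displacement equal to L.
    return 2.0 * math.asin(L / (2.0 * radius))
```

Deriving the stage's rotation step this way lets the empirical spacing condition, stated below for the camera, be applied unchanged to the stage-rotation scenario.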
  • Alternatively, the acquisition area moving device is an optical scanning device, so that the acquisition area of the image acquisition device 4 and the target 1 move relative to each other while the image acquisition device 4 itself neither moves nor rotates.
  • The acquisition area moving device then includes a light deflection unit, which is driven mechanically to rotate, or driven electrically to deflect the light path, or arranged in multiple groups in space, so as to obtain images of the target object from different angles.
  • The light deflection unit can typically be a mirror, which rotates so that images of the target object in different directions are collected.
  • The rotation of the optical axis in this case can be regarded as rotation of the virtual position of the image acquisition device 4.
  • The image acquisition device 4 is used to acquire an image of the target object 1, and it may be a fixed-focus camera or a zoom camera. In particular, it can be a visible light camera or an infrared camera. It is understood that any device with an image acquisition function can be used and does not constitute a limitation of the present invention: for example, a CCD or CMOS sensor, camera, video camera, industrial camera, monitor, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any other device with an image capture function.
  • The background board is located opposite the image acquisition device 4; it rotates synchronously when the image acquisition device 4 rotates and remains stationary when the image acquisition device 4 is stationary.
  • The background board is entirely a solid color, or mostly (in its main body) a solid color. In particular, it can be a white board or a black board, and the specific color can be selected according to the main color of the target object.
  • The background board is usually a flat panel, but preferably a curved panel, such as a concave panel, a convex panel, or a spherical panel; in some application scenarios it can even have a wavy surface. It can also be spliced from panels of various shapes: for example, three planes can be spliced into an overall concave shape, or flat and curved surfaces can be spliced together.
  • The device also includes a processor, also called a processing unit, for synthesizing a 3D model of the target object with a 3D synthesis algorithm from the plurality of images collected by the image acquisition device, so as to obtain the 3D information of the target object.
  • Embodiment 2: the collection area moving device is a translational structure
  • In this embodiment, the image acquisition device 4 moves relative to the target 1 in a straight line.
  • The image acquisition device 4 is located on a linear track 5 and takes pictures as it passes the target 1 along the track; during this process the image acquisition device 4 does not rotate.
  • The linear track 5 can also be replaced by a linear cantilever. More preferably, as shown in FIG. 3, the image acquisition device 4 performs a certain rotation while moving along the linear trajectory, so that its optical axis always faces the target 1.
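The "translate plus rotate to face the target" variant amounts to re-aiming the optical axis at each track position. A small sketch under assumed 2D coordinates (the function name is illustrative, not from the patent):

```python
import math

def yaw_to_face_target(cam_xy, target_xy):
    # Heading (radians, measured from the +x axis) that points the
    # camera's optical axis from its position on the linear track
    # toward the target.
    dx = target_xy[0] - cam_xy[0]
    dy = target_xy[1] - cam_xy[1]
    return math.atan2(dy, dx)
```

Evaluating this at each acquisition position along the track gives the rotation the mount must apply so the target stays centered in the field of view.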
  • Embodiment 3: the collection area moving device has an irregular movement structure
  • In this embodiment, the movement of the collection area is irregular.
  • For example, the image acquisition device 4 can be hand-held and carried around the target 1 for shooting.
  • In this case, a common method is to take more photos and rely on the redundancy in the number of photos to solve the problem, but the resulting synthesis is not stable.
  • The present invention therefore proposes a method for improving the synthesis effect and shortening the synthesis time by limiting the distance the camera moves between two shots.
  • For example, in face recognition, a user can hold a mobile terminal and move it around his or her face while shooting. As long as the empirical requirements on the photographing positions are met (described below), the 3D model of the face can be accurately synthesized. Face recognition can then be realized by comparison with a pre-stored standard model, for example to unlock a mobile phone or to perform payment verification.
  • A sensor can be installed in the mobile terminal or the image acquisition device 4, and the linear distance that the image acquisition device 4 moves between two shots can be measured by the sensor.
  • If the measured distance exceeds the limit L, which satisfies the conditions described below, an alarm is issued to the user.
  • The alarm includes a sound or light alarm to the user.
  • The distance moved and the maximum movable distance L can also be displayed on the phone screen, or announced by a voice prompt, in real time while the user moves the image acquisition device 4.
  • Sensors that implement this function include rangefinders, gyroscopes, accelerometers, positioning sensors, and/or combinations thereof.
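The alarm logic above can be sketched as follows. The bound used here, L_max = mu * d * T / f, is only a similar-triangles stand-in with an assumed coefficient mu; the patent's actual condition, with its own empirical coefficients, is not reproduced in this text.

```python
def max_move_distance(d, f, T, mu=0.5):
    # Illustrative bound on the camera displacement between two shots:
    # the object-space footprint of the sensor dimension d at distance T,
    # scaled by an ASSUMED empirical coefficient mu.
    return mu * d * T / f

def check_move(displacement, limit):
    # Compare the sensor-measured displacement against the limit and
    # return (alarm_flag, message) for display or a voice prompt.
    if displacement > limit:
        return True, "alarm: moved {:.3f}, limit {:.3f}".format(displacement, limit)
    return False, "ok: moved {:.3f} of {:.3f}".format(displacement, limit)
```

A mobile app would call `check_move` on each accelerometer-integrated displacement sample and raise the sound or light alarm whenever the flag is set.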
  • Embodiment 4: multi-camera mode
  • Besides rotating a single camera so that it takes images of the target 1 from different angles, multiple cameras can be set at different positions around the target 1, as shown in FIG. 5, so that images of the target 1 from different angles can be captured at the same time.
  • The light source is distributed around the lens of the image acquisition device 4 in a dispersed manner; for example, the light source is a ring of LED lamps around the lens.
  • When the collected object is a human body, the intensity of the light source must be controlled to avoid causing discomfort.
  • A soft-light device, such as a soft-light housing, can be arranged in the light path of the light source.
  • Directly using an LED surface light source not only makes the light softer but also more uniform.
  • More preferably, an OLED light source can be used, which is smaller, gives softer light, and is flexible enough to be attached to curved surfaces.
  • The light source can also be set in other positions that provide uniform illumination of the target.
  • The light source can also be a smart light source, that is, one whose parameters are automatically adjusted according to the target 1 and the ambient light conditions.
  • Preferably, the collection area moving device is a rotating structure, and the image acquisition device 4 rotates around the target object 1.
  • The image acquisition device 4 thus changes the direction of its optical axis relative to the target object 1 at different collection positions. Two adjacent collection positions of the image acquisition device 4 meet the following conditions:
  • when the two positions are along the length direction of the photosensitive element of the image acquisition device, d takes the length of the rectangle; when the two positions are along the width direction of the photosensitive element, d takes the width of the rectangle;
  • the distance from the photosensitive element to the surface of the target along the optical axis is taken as T;
  • L is the linear distance between the optical centers of the image acquisition device at two adjacent positions A_n and A_(n+1);
  • the calculation is not limited to two adjacent positions; more positions can be used for an average calculation.
  • L should be the linear distance between the optical centers at the two positions, but because the position of the optical center is not easy to determine in some cases, the center of the photosensitive element of the image acquisition device, the geometric center of the image acquisition device, the axis center of the connection between the image acquisition device and the pan/tilt head (or platform or bracket), or the center of the near or far surface of the lens can be substituted within an acceptable error range; the resulting range is therefore also within the protection scope of the present invention.
  • In the prior art, parameters such as object size and field-of-view angle are used to estimate the camera position, and the positional relationship between two cameras is expressed by an angle. Because the angle is hard to measure accurately, this is inconvenient in practice. Moreover, the object size changes with the object being measured: for example, after collecting the 3D information of an adult's head, the head size must be re-measured and recalculated before collecting a child's head. These inconvenient measurements and repeated re-measurements introduce measurement errors, and hence errors in the estimated camera position.
  • This solution instead gives empirical conditions that the camera positions need to meet, which avoids measuring angles that are difficult to measure accurately and removes the need to directly measure the size of the object.
  • In these conditions, d and f are fixed parameters of the camera.
  • T is only a straight-line distance, which can be conveniently measured by traditional methods such as rulers and laser rangefinders. The empirical formula of the present invention therefore makes the preparation process convenient and quick, and at the same time improves the accuracy of the camera-position arrangement, so that the camera can be set at an optimized position, taking into account both 3D synthesis accuracy and speed. Specific experimental data are given below.
  • With the method of the present invention, when the lens is replaced, the camera position can be recalculated directly from the conventional parameter f. Similarly, when different objects are collected, measuring the object size would otherwise be cumbersome because the sizes differ;
  • with the method of the present invention, there is no need to measure the size of the object, and the camera position can be determined more conveniently.
  • The camera position determined by the present invention takes into account both the synthesis time and the synthesis effect; the above empirical condition is therefore one of the inventive points of the present invention.
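Once a permissible adjacent-position distance L has been fixed from d, f, and T via the empirical condition, the number of acquisition positions on a circular track follows directly. A sketch, assuming a full circular trajectory and approximating each chord by its arc (the function name is illustrative, not from the patent):

```python
import math

def shots_for_full_circle(radius, L):
    # Minimum number of acquisition positions on a full circle of the
    # given radius if adjacent positions may be at most L apart
    # (arc-length approximation of the chord between positions).
    return math.ceil(2.0 * math.pi * radius / L)
```

This is how the condition trades off synthesis speed against accuracy in practice: a tighter L means more positions (and images) per revolution, a looser L means fewer.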
  • The rotational movement of the present invention means that, during acquisition, the acquisition plane at the previous position and the acquisition plane at the next position intersect rather than being parallel; equivalently, the optical axis of the image acquisition device at the previous position intersects, rather than being parallel to, its optical axis at the next position. In other words, movement of the acquisition area of the image acquisition device around, or partly around, the target object can be regarded as relative rotation of the two.
  • Although the examples of the present invention mostly describe rotational motion on tracks, it can be understood that as long as non-parallel motion occurs between the acquisition area of the image acquisition device and the target, the motion falls within the category of rotation and the limitations of the present invention apply.
  • The protection scope of the present invention is therefore not limited to the orbital rotation of the embodiments.
  • The adjacent acquisition positions in the present invention are two adjacent positions, on the movement track of the image acquisition device relative to the target, at which an acquisition action occurs. This is easy to understand when the image acquisition device itself moves. When it is the target object that moves, however, its movement should, by the relativity of motion, be converted into an equivalent situation in which the target object is stationary and the image acquisition device moves; the two adjacent positions at which acquisition occurs are then measured on the converted movement track.
  • An embodiment of the present invention provides a 3D information collection device, which includes an image acquisition device 4, a rotating device 3, and a background board 13.
  • The image acquisition device 4 is used to acquire an image of the target object, and it may be a fixed-focus camera or a zoom camera. In particular, it can be a visible light camera or an infrared camera. It is understood that any device with an image acquisition function can be used and does not constitute a limitation of the present invention: for example, a CCD or CMOS sensor, camera, video camera, industrial camera, monitor, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any other device with an image capture function.
  • The background board 13 is entirely a solid color, or mostly (in its main body) a solid color. In particular, it can be a white board or a black board, and the specific color can be selected according to the main color of the target object.
  • The background board 13 is usually a flat panel, but preferably a curved panel, such as a concave panel, a convex panel, or a spherical panel; in some application scenarios it can even have a wavy surface. It can also be spliced from panels of various shapes: for example, three planes can be spliced into an overall concave shape, or flat and curved surfaces can be spliced together.
  • The shape of its edge can also be selected as needed. Normally the edges are straight, forming a rectangular board, but in some applications the edges can be curved.
  • Preferably, the background board 13 is a curved board, which minimizes the projected size of the background board 13 while covering the maximum background range. The background board then needs less space when rotating, which helps reduce the size and weight of the device and reduces the rotational inertia, making the rotation easier to control.
  • When the background board is projected in the direction perpendicular to the surface being photographed, the horizontal length W1 and the vertical length W2 of the projected shape are determined by the following conditions, in which:
  • d1 is the length of the imaging element in the horizontal direction;
  • d2 is the length of the imaging element in the vertical direction;
  • T is the vertical distance from the sensor element of the image acquisition device to the background board along the optical axis;
  • f is the focal length of the image acquisition device;
  • A1 and A2 are empirical coefficients.
  • When the edge of the background board is non-linear, the edge of the projected figure is also non-linear after projection.
  • The values of W1 and W2 measured at different positions then differ, so W1 and W2 are not easy to determine in actual calculations. In that case, 3-5 points can be taken on each of the two opposite sides of the background board 13, the linear distance between each pair of opposing points measured, and the measured average taken as W1 and W2 in the above conditions.
  • If the background board 13 is too large and the cantilever too long, the volume of the device increases and an extra burden is placed on the rotation, making the device more likely to be damaged; if the background board is too small, the background will not be pure, which adds a computational burden.
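The averaging procedure for a board with non-linear edges is straightforward; a sketch (the helper name is illustrative, not from the patent):

```python
import math

def average_span(point_pairs):
    # point_pairs: 3-5 pairs of (x, y) points taken on opposite edges of
    # the background board; returns the mean straight-line distance,
    # used as W1 (horizontal pairs) or W2 (vertical pairs) in the
    # background-board conditions.
    distances = [math.dist(p, q) for p, q in point_pairs]
    return sum(distances) / len(distances)
```

The averaged span smooths out local edge curvature, so the board-sizing condition can be applied as if the edge were straight.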
  • The background board 13 is installed on the first mounting column 14 through a frame.
  • The first mounting column 14 is arranged at one end of the rotating beam 15 in the vertical direction; the image acquisition device 4 is installed on a horizontal bracket 16, and the horizontal bracket 16 is connected to the second mounting column 17.
  • The second mounting column 17 is arranged at the other end of the rotating beam 15 in the vertical direction.
  • The first mounting column 14 can move horizontally along the rotating beam 15 to adjust the horizontal position of the background board 13.
  • The second mounting column 17 can move horizontally along the rotating beam 15 to adjust the horizontal position of the image acquisition device 4.
  • The background board 13 can also be installed directly on the mounting column, or the background board 13 and the mounting column can be integrally formed.
  • The frame of the background board 13 can move up and down along the first mounting column 14 to adjust the vertical position of the background board 13; the horizontal bracket 16 can move up and down along the second mounting column 17 to adjust the vertical position of the image acquisition device 4.
  • The image acquisition device 4 can also move horizontally along the horizontal bracket 16 to adjust its horizontal position.
  • The above movements can be realized in a variety of ways, such as guide rails, lead screws, and sliding tables.
  • The rotating beam 15 is connected to the fixed beam through the rotating device 3.
  • The rotating device 3 drives the rotating beam 15 to rotate, thereby driving the background board 13 and the image acquisition device 4 at the two ends of the beam to rotate; however the beam rotates, the image acquisition device 4 and the background board 13 remain arranged opposite each other, and in particular the optical axis of the image acquisition device 4 passes through the center of the background board 13.
  • The rotating device 3 can be a motor with a matching rotary transmission system, such as a gear system or a transmission belt.
  • The light source can be arranged on the rotating beam 15, the first mounting column 14, the second mounting column 17, the horizontal bracket 16, and/or the image acquisition device 4.
  • The light source can be an LED light source or a smart light source whose parameters are automatically adjusted according to the target object and the ambient light conditions.
  • The light source is distributed around the lens of the image acquisition device 4 in a dispersed manner; for example, the light source is a ring of LED lamps around the lens.
  • When the collected object is a human body, the intensity of the light source must be controlled to avoid causing discomfort.
  • A soft-light device, such as a soft-light housing, can be arranged in the light path of the light source.
  • Directly using an LED surface light source not only makes the light softer but also more uniform. More preferably, an OLED light source can be used, which is smaller, gives softer light, and is flexible enough to be attached to curved surfaces.
  • The object to be collected is usually located between the image acquisition device 4 and the background board 13.
  • When the object is a person, a seat can be set in the center of the equipment base; because people differ in height, the seat can be connected to a liftable structure.
  • The lifting mechanism is driven by a drive motor and controlled by a remote controller.
  • The lifting mechanism can also be controlled centrally by a control terminal: the control panel of the drive motor communicates with the control terminal in a wired or wireless manner and receives its commands.
  • The control terminal can be a computer, cloud platform, mobile phone, tablet, special control equipment, etc.
  • When the target is an object, a stage can be set in the center of the device base. Similarly, the stage can be driven by a lifting structure for height adjustment, to facilitate the collection of the target object's information.
  • The specific control method and connection relationship are the same as above and are not repeated. Unlike a person, however, an object does not suffer discomfort from rotation, so the stage can be rotated by the rotating device; in that case there is no need for the rotating beam 15 to drive the image acquisition device 4 and the background board 13 to rotate during collection. Of course, the stage and the rotating beam 15 can also rotate at the same time.
  • Marking points with known coordinates can be set on the seat or the stage. By collecting the marker points and combining their coordinates, the absolute size of the synthesized 3D model is obtained.
  • The marking points can be located, for example, on the headrest of the seat.
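Recovering absolute size from markers with known coordinates reduces to estimating one scale factor between the unscaled reconstruction and the real world. A sketch using average pairwise marker distances (names are illustrative; a full pipeline would also solve for rotation and translation):

```python
import math

def scale_from_markers(model_pts, real_pts):
    # model_pts: marker coordinates as reconstructed in the (unscaled)
    # 3D model; real_pts: the same markers' known physical coordinates.
    # The ratio of average pairwise distances gives the metric scale.
    def avg_pairwise(pts):
        total, count = 0.0, 0
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                total += math.dist(pts[i], pts[j])
                count += 1
        return total / count

    return avg_pairwise(real_pts) / avg_pairwise(model_pts)
```

Multiplying every model vertex by this factor yields a model in physical units, which is what makes the collected 3D information usable for measurement.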
  • The device also includes a processor, also called a processing unit, for synthesizing a 3D model of the target object with a 3D synthesis algorithm from the plurality of images collected by the image acquisition device, so as to obtain the 3D information of the target object.
  • During use, a seat can be placed between the image acquisition device 4 and the background board 13.
  • When a person sits down, the head is located near the rotation axis, between the image acquisition device 4 and the background board 13. Since each person's height differs, the height of the area to be collected (for example, the head) differs; the position of the head in the field of view of the image acquisition device 4 can be adjusted by adjusting the height of the seat.
  • When collecting objects, the seat can be replaced with a storage table.
  • The background board 13 can move up and down along the first mounting column 14, and the horizontal bracket 16 carrying the image acquisition device 4 can move up and down along the second mounting column 17.
  • The movements of the background board 13 and the image acquisition device 4 are synchronized, ensuring that the optical axis of the image acquisition device 4 passes through the center of the background board 13.
  • The image acquisition device 4 can also be driven back and forth along the horizontal bracket 16, to ensure that the target object occupies a proper proportion of the pictures collected by the image acquisition device 4.
  • The rotating device 3 drives the image acquisition device 4 and the background board 13 to rotate around the target by rotating the rotating beam 15, and ensures that the two face each other during the rotation.
  • the collected separation distance preferably satisfies the following empirical formula:
  • the two adjacent acquisition positions of the image acquisition device meet the condition δ = (L × f) / (T × d), where δ is the adjustment coefficient;
  • when the two positions are along the length direction of the photosensitive element of the image capture device, d takes the length of the rectangle; when the two positions are along the width direction of the photosensitive element, d is the width of the rectangle.
  • the distance from the photosensitive element to the surface of the target along the optical axis is taken as T.
  • L is the linear distance between the optical centers of the image pickup apparatus at the two adjacent acquisition positions A_n and A_n+1; since the object distance may differ slightly at A_n and A_n+1, T can be taken as the average of the distances measured at the adjacent positions.
  • it is not limited to 4 adjacent positions, and more positions can be used for average calculation.
  • strictly, L should be the linear distance between the optical centers of the image acquisition device at the two positions, but because the position of the optical center is not easy to determine in some cases, the center of the photosensitive element, the geometric center of the image acquisition device, the axis center of the connection between the image acquisition device and the pan/tilt (or platform, bracket), or the center of the proximal or distal surface of the lens can be substituted; experiments show that the resulting error is within an acceptable range.
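The empirical camera-position condition above involves only L, f, d and T. As a rough sketch, assuming the condition takes the dimensionless form δ = L·f/(T·d) (one reading of the variable definitions given here; the published formula is only available as an image) and using hypothetical helper names, the spacing bound and the number of stations on a circular track could be computed as:

```python
import math

def max_camera_spacing(f_mm, d_mm, t_mm, delta=0.603):
    """Upper bound on the optical-center spacing L between two adjacent
    acquisition positions, assuming the empirical condition has the
    dimensionless form delta = L * f / (T * d).  All lengths in mm.
    f_mm: focal length; d_mm: sensor length or width along the direction
    of movement; t_mm: distance from the photosensitive element to the
    target surface along the optical axis; delta: adjustment coefficient."""
    return delta * t_mm * d_mm / f_mm

def positions_on_circle(radius_mm, f_mm, d_mm, delta=0.603):
    """Number of evenly spaced camera positions on a circle of the given
    radius around the target so that adjacent spacings stay within the
    bound (the chord is approximated by the arc for small steps)."""
    l_max = max_camera_spacing(f_mm, d_mm, radius_mm, delta)
    return math.ceil(2 * math.pi * radius_mm / l_max)
```

For example, a full-frame sensor (d = 36 mm) with a 50 mm lens at T = 1 m and δ = 0.356 allows L ≈ 256 mm, i.e. about 25 stations on a full circle; note T appears only as a straight-line distance, so no angle or object size is needed.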
  • when parameters such as object size and field-of-view angle are used to estimate the camera position, and the positional relationship between two cameras is expressed by an angle, the angle is hard to measure and therefore inconvenient in practice. Moreover, the object size changes with the measured object: for example, after collecting the 3D information of an adult's head, collecting a child's head requires the head size to be re-measured and the positions recalculated. Such inconvenient measurement and repeated re-measurement introduce errors, resulting in errors in the estimated camera position.
  • this solution gives the empirical conditions that the camera position needs to meet, which not only avoids measuring angles that are difficult to accurately measure, but also does not need to directly measure the size of the object.
  • d and f are the fixed parameters of the camera.
  • T is only a straight-line distance, which can easily be measured by traditional methods such as rulers and laser rangefinders. The empirical formula of the present invention therefore makes the preparation process convenient and quick, and at the same time improves the accuracy of the camera-position arrangement, so that the camera can be set in an optimized position, taking into account both 3D synthesis accuracy and speed; specific experimental data are given below.
  • with the method of the present invention, when the lens is replaced, the camera position can be recalculated by directly substituting the new conventional parameter f; by contrast, when collecting different objects with size-based methods, the differing object sizes make the size measurement cumbersome each time.
  • with the method of the present invention there is no need to measure the size of the object, and the camera position can be determined more conveniently.
  • the camera position determined by the present invention can take into account the synthesis time and the synthesis effect. Therefore, the above empirical condition is one of the invention points of the present invention.
  • the multiple images are transmitted to the processor in a data transmission manner.
  • the processor can be set locally, or the image can be uploaded to the cloud platform to use a remote processor. Use the following method in the processor to synthesize the 3D model.
  • the existing algorithm can be used to realize it, or the optimized algorithm proposed by the present invention can be used, which mainly includes the following steps:
  • Step 1 Perform image enhancement processing on all input photos.
  • the following filter is used to enhance the contrast of the original photos and suppress noise at the same time:
  • f(x, y) = [g(x, y) − m_g] · c·s_f / (c·s_g + (1 − c)·s_f) + b·m_f + (1 − b)·m_g
  • where g(x, y) is the gray value of the original image at (x, y);
  • f(x, y) is the gray value at (x, y) after enhancement by the Wallis filter;
  • m_g is the local gray-scale mean of the original image;
  • s_g is the local gray-scale standard deviation of the original image;
  • m_f is the local gray-scale target value of the transformed image;
  • s_f is the local gray-scale standard deviation target value of the transformed image;
  • c ∈ (0, 1) is the expansion constant of the image variance;
  • b ∈ (0, 1) is the image brightness coefficient constant.
  • the filter can greatly enhance the image texture patterns of different scales in the image, so the number and accuracy of feature points can be improved when extracting the point features of the image, and the reliability and accuracy of the matching result can be improved in the photo feature matching.
  • Step 2 Perform feature point extraction on all input photos, and perform feature point matching to obtain sparse feature points.
  • the SURF operator is used to extract and match the feature points of the photos.
  • the SURF feature matching method mainly includes three processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters in place of second-order Gaussian filtering, and uses integral images to accelerate the convolution, increasing the calculation speed; it also reduces the dimensionality of the local image feature descriptor to speed up matching.
  • the main steps are: ① construct the Hessian matrix and generate all points of interest for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (mutation points) of the image;
  • ② construct the scale space and locate the feature points: each pixel processed by the Hessian matrix is compared with its 26 neighbors in the two-dimensional image space and scale space to initially locate the key points; key points with weaker energy and incorrectly positioned key points are then filtered out, leaving the final stable feature points;
  • ③ determine the main direction of each feature point using the Haar wavelet responses in its circular neighborhood: the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree fan are computed, the fan is rotated in steps of 0.2 radians and the responses in the area are summed again, and the direction of the fan with the largest value is taken as the main direction of the feature point;
  • ④ generate a 64-dimensional feature point description vector: a 4×4 rectangular area block is taken around the feature point, with the direction of the block aligned to the main direction of the feature point; each sub-region counts the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, where horizontal and vertical are relative to the main direction.
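The main-direction step (a 60-degree fan slid in 0.2-radian steps over Haar wavelet responses) can be sketched as below. The `(angle, hx, hy)` response-list format is a simplification introduced here; real SURF computes Gaussian-weighted Haar responses from the integral image:

```python
import math

def surf_main_direction(responses, fan=math.pi / 3, step=0.2):
    """Given per-point Haar responses (angle, hx, hy) sampled in the
    circular neighborhood of a feature point, slide a fan of the given
    angular width around the circle in fixed steps, sum the responses
    inside the fan, and return the orientation of the fan whose summed
    response vector is longest."""
    best, best_dir = -1.0, 0.0
    a = 0.0
    while a < 2 * math.pi:
        sx = sy = 0.0
        for ang, hx, hy in responses:
            # angular distance from the fan centre, wrapped to [-pi, pi]
            diff = (ang - a + math.pi) % (2 * math.pi) - math.pi
            if abs(diff) <= fan / 2:
                sx += hx
                sy += hy
        norm = math.hypot(sx, sy)
        if norm > best:
            best, best_dir = norm, a
        a += step
    return best_dir
```

With responses clustered around one angle, the returned direction lands within half a fan width of that cluster, which is the behavior the descriptor orientation relies on.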
  • Step 3 Input the coordinates of the matched feature points; use bundle adjustment to solve for the sparse 3D point cloud of the face and the position and posture data of the camera, that is, obtain the sparse 3D point cloud of the face model and the camera position model coordinate values;
  • taking the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data.
  • the process has four main steps: stereo pair selection, depth map calculation, depth map optimization, and depth map fusion. For each image in the input data set, we select a reference image to form a stereo pair for calculating the depth map. Therefore, we can get rough depth maps of all images. These depth maps may contain noise and errors. We use its neighborhood depth map to check consistency to optimize the depth map of each image. Finally, depth map fusion is performed to obtain a three-dimensional point cloud of the entire scene.
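The neighborhood consistency check used to optimize each depth map can be illustrated with a deliberately simplified version. It assumes all depth maps are already aligned on one pixel grid, whereas a real pipeline reprojects depths through the camera poses; the function name and thresholds are hypothetical:

```python
def filter_depth_map(depth, neighbor_depths, tol=0.01, min_support=2):
    """Keep a per-pixel depth only if at least `min_support` neighboring
    views report a depth within a relative tolerance at the same pixel.
    depth: list of rows of depth values (<= 0 means no estimate);
    neighbor_depths: list of depth maps with the same shape."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = depth[y][x]
            if d <= 0:
                continue  # no estimate at this pixel
            support = sum(
                1 for nd in neighbor_depths
                if nd[y][x] > 0 and abs(nd[y][x] - d) <= tol * d
            )
            if support >= min_support:
                out[y][x] = d
    return out
```

Pixels whose depth is confirmed by several neighboring views survive; noisy, unsupported estimates are zeroed out before the fused point cloud is built.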
  • Step 4 Use the dense point cloud to reconstruct the face surface, including defining the octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface.
  • the integral relationship between the sampling point and the indicator function is obtained from the gradient relationship, and the vector field of the point cloud is obtained according to the integral relationship, and the approximation of the gradient field of the indicator function is calculated to form the Poisson equation.
  • the approximate solution is obtained by matrix iteration, the moving cube algorithm is used to extract the isosurface, and the model of the measured object is reconstructed from the measured point cloud.
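"The approximate solution is obtained by matrix iteration" can be illustrated on a toy 2D grid: Jacobi iteration for a discrete Poisson equation. The actual method operates on an octree in 3D and then extracts an isosurface with marching cubes; this only shows the iteration idea:

```python
def jacobi_poisson(rhs, iters=200):
    """Solve  laplace(u) = rhs  on a 2D grid with zero boundary values
    by Jacobi iteration (grid spacing 1).  rhs is a list of rows; the
    boundary cells of u are held at 0 and only interior cells update."""
    h, w = len(rhs), len(rhs[0])
    u = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        nu = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                # 5-point Laplacian rearranged for the centre value
                nu[y][x] = (u[y - 1][x] + u[y + 1][x] +
                            u[y][x - 1] + u[y][x + 1] - rhs[y][x]) / 4.0
        u = nu
    return u
```

A point source of −4 in the middle of a 3×3 grid converges to u = 1 at the centre, since the discrete Laplacian there is 0 + 0 + 0 + 0 − 4·1 = −4.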
  • Step 5 Fully automatic texture mapping of the face model. After the surface model is built, texture mapping is performed.
  • the main process includes: ① obtaining texture data through the surface triangle mesh of the target reconstructed from the images; ② visibility analysis of the triangles of the reconstructed model, using the image calibration information to calculate the visible image set and the optimal reference image of each triangle; ③ clustering the triangle faces to generate texture patches, according to the visible image set of each triangle face, its optimal reference image and the neighborhood topological relationship of the faces, so that the triangle face clusters become a number of reference-image texture patches; ④ automatically sorting the texture patches to generate the texture image: the generated texture patches are sorted according to their size relationship, the texture image with the smallest enclosing area is generated, and the texture mapping coordinates of each triangle are obtained.
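The final texture-mapping step (sort patches by size, then pack them into one texture image) can be sketched with a simple shelf packer. Real implementations minimise the enclosing area more aggressively; the function name and the row-by-row strategy are illustrative only:

```python
def pack_patches(sizes, sheet_w):
    """Sort texture patches by area (largest first) and place them row
    by row ("shelf" packing) into a texture image of fixed width.
    sizes: list of (w, h) patch sizes.  Returns ({index: (x, y)}
    placements, total height used)."""
    order = sorted(range(len(sizes)),
                   key=lambda i: sizes[i][0] * sizes[i][1], reverse=True)
    pos, x, y, shelf_h = {}, 0, 0, 0
    for i in order:
        w, h = sizes[i]
        if x + w > sheet_w:          # patch does not fit: start a new shelf
            x, y, shelf_h = 0, y + shelf_h, 0
        pos[i] = (x, y)
        x += w
        shelf_h = max(shelf_h, h)
    return pos, y + shelf_h
```

The returned placements give each patch its origin in the texture image, from which per-triangle texture mapping coordinates can then be derived.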
  • the above-mentioned algorithm is an optimized algorithm of the present invention, and this algorithm cooperates with the image acquisition conditions, and the use of this algorithm takes into account the time and quality of synthesis, which is one of the invention points of the present invention.
  • the conventional 3D synthesis algorithm in the prior art can also be used, but the synthesis effect and speed will be affected to a certain extent.
  • after collecting the 3D information of the target and synthesizing the 3D model, an accessory can be made for the target according to the 3D data.
  • for example, glasses suitable for the user's face can be made.
  • texture information is added to form a 3D head model.
  • according to the relevant positions and sizes of the head 3D model, such as the width of the cheeks, the height of the nose bridge and the size of the auricle, the user selects an appropriate glasses frame.
  • various accessories such as hats, gloves, and prostheses can also be designed for users. It is also possible to design accessory accessories for the object, such as designing packaging boxes that can be tightly wrapped for special-shaped parts.
  • the 3D information of multiple regions of the target obtained in the above embodiment can be used for comparison, for example, for identity recognition.
  • the 3D acquisition device can be used to collect and obtain the 3D information of the human face and iris again, and compare it with the standard data. If the comparison is successful, the next step is allowed.
  • This kind of comparison can also be used for the identification of fixed assets such as antiques and artworks, that is, first obtain 3D information of multiple areas of antiques and artworks as standard data, and obtain 3D information of multiple areas again when authentication is required. Information, and compare with standard data to identify authenticity.
  • although the above describes the image capture device capturing images, the image acquisition device can also collect video data and use the video data directly, or intercept images from the video data, for 3D synthesis.
  • the shooting position of the corresponding frame of the video data or the captured image used in the synthesis still satisfies the above empirical formula.
  • the above-mentioned target object, target, and object all denote the object whose three-dimensional information is to be acquired. It can be a physical object or a combination of multiple objects, for example a head, a hand, and so on.
  • the three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional grid, a local three-dimensional feature, a three-dimensional size, and all parameters with a three-dimensional feature of the target.
  • the so-called three-dimensional in the present invention refers to information in the three directions XYZ, in particular depth information, which is essentially different from information in only a two-dimensional plane. It is also essentially different from definitions that are called three-dimensional, panoramic, holographic or stereoscopic but actually contain only two-dimensional information and, in particular, no depth information.
  • the collection area mentioned in the present invention refers to the range that an image collection device (such as a camera) can shoot.
  • the image acquisition device in the present invention can be a CCD, CMOS, camera, video camera, industrial camera, monitor, webcam, mobile phone, tablet, notebook, mobile terminal, wearable device, smart glasses, smart watch, smart bracelet, or any other device with an image capture function.
  • modules or units or components in the embodiments can be combined into one module or unit or component, and in addition they can be divided into multiple sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all the features disclosed in this specification (including the accompanying claims, abstract and drawings) and all the processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • in practice, a microprocessor or a digital signal processor (DSP) may be used to implement some or all of the functions of some or all of the components in the device according to the embodiments of the present invention.
  • the present invention can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Abstract

A high-precision 3D information collection apparatus, comprising a collection region moving device (3) for driving a collection region of an image collection device (4) to move relative to a target (1), and the image collection device (4) for collecting a group of images of the target (1) by means of the relative motion, wherein during the relative motion a collection position of the image collection device (4) meets a preset condition. Adding a background board (13) that rotates along with the image collection device (4) improves both the synthesis speed and the synthesis precision, and optimizing the size of the background board (13) ensures that both are improved while also reducing the rotation burden.

Description

Equipment and Method for 3D Information Collection
Technical Field
The invention relates to the technical field of shape measurement, in particular to the technical field of 3D shape measurement.
Background Art
When performing 3D measurement, 3D information needs to be collected first. Currently common methods use machine vision to collect pictures of an object from different angles and match and stitch these pictures to form a 3D model. When collecting pictures from different angles, multiple cameras can be set at different angles around the object to be measured, or pictures can be collected from different angles by rotating one or more cameras. Either way, the problems of synthesis speed and synthesis accuracy are involved, and the two are to some extent contradictory: increasing the synthesis speed lowers the final 3D synthesis accuracy, while improving the 3D synthesis accuracy requires lowering the synthesis speed and synthesizing from more pictures.
In the prior art, improving synthesis speed and synthesis accuracy at the same time is usually attempted by optimizing the algorithm, and the field has long believed that the way to solve the above problems lies in the selection and updating of algorithms; so far no method has been proposed that improves synthesis speed and synthesis accuracy simultaneously from any other angle. However, algorithm optimization has reached a bottleneck, and until a better theory emerges it is impossible to improve both at once.
The prior art has also proposed limiting the camera position with empirical formulas involving the rotation angle, the target size and the object distance, so as to balance synthesis speed and effect. In practice, however, it was found that unless a precise angle-measuring device is available, users are insensitive to angles and it is difficult to determine them accurately; the target size is also difficult to determine accurately, especially in applications where the target is replaced frequently, each measurement bringing a large amount of extra work and requiring professional equipment to measure irregular targets accurately. Measurement errors lead to camera-position setting errors, which affect the acquisition and synthesis speed and effect; accuracy and speed still need further improvement.
Therefore, the following technical problems urgently need to be solved: ① greatly improve the synthesis speed and synthesis accuracy at the same time; ② be convenient to operate, requiring no professional equipment and no excessive measurement, so that the camera positions can be obtained quickly.
Summary of the Invention
In view of the above problems, the present invention is proposed in order to provide a collection device that overcomes the above problems or at least partially solves them.
The present invention provides a device for 3D information collection, comprising:
a collection area moving device, used to drive the collection area of the image acquisition device to move relative to the target;
an image acquisition device, used to collect a group of images of the target through the above relative movement;
wherein, during the above relative movement, the collection position of the image acquisition device meets the following condition:
δ = (L × f) / (T × d)
where L is the linear distance between the optical centers of the image acquisition device at two adjacent collection positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is the adjustment coefficient.
The present invention also provides a 3D information collection method:
driving the collection area of the image acquisition device to move relative to the target;
collecting a group of images of the target through the above relative movement;
wherein, during the above relative movement, the collection position of the image acquisition device meets the following condition:
δ = (L × f) / (T × d)
where L is the linear distance between the optical centers of the image acquisition device at two adjacent collection positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is the adjustment coefficient.
The present invention also provides a device or method for 3D information collection, having a plurality of image acquisition devices respectively located around the target;
the plurality of image acquisition devices respectively collect a group of images of the target from different angles;
the positions of two adjacent image acquisition devices meet the following condition:
δ = (L × f) / (T × d)
where L is the linear distance between the optical centers of two adjacent image acquisition devices; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is the adjustment coefficient.
Optionally, δ < 0.410.
Optionally, δ < 0.356.
Optionally, the image acquisition device rotates or translates relative to the target.
Optionally, a background board is provided opposite the image acquisition device.
Optionally, a processor is also included, configured to perform 3D synthesis from a plurality of images in the group of images to generate a 3D model of the target.
Optionally, the processor is arranged in the device, or in a host computer, or in a remote server.
Optionally, the image acquisition device operates in the visible light band, the infrared band, and/or the full band.
The present invention also provides a 3D synthesis device or method using the above device or method.
The present invention also provides a 3D recognition/comparison device or method using the above device or method.
The present invention also provides a method or device for making an accessory using the above device or method.
The present invention also provides a 3D information acquisition and measurement device and method, comprising an image acquisition device, a rotating device and a background board, wherein:
the rotating device is used to drive the image acquisition device to rotate and to drive the background board to rotate;
the background board and the image acquisition device remain facing each other during rotation, so that during collection the background board forms the background pattern of the images collected by the image acquisition device;
the background board satisfies: when projected in the direction perpendicular to its photographed surface, the length W1 of the projected shape in the horizontal direction and the length W2 of the projected shape in the vertical direction are determined by the following conditions:
W1 = A1 × (T × d1) / f
W2 = A2 × (T × d2) / f
where d1 is the length of the imaging element in the horizontal direction, d2 is the length of the imaging element in the vertical direction, T is the vertical distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1, A2 are empirical coefficients;
with A1 > 1.04 and A2 > 1.04.
The present invention also provides a standard 3D information collection and/or measurement method and device: when the image acquisition device collects the target, two adjacent collection positions satisfy the following condition:
δ = (L × f) / (T × d)
where L is the linear distance between the optical centers of the image acquisition device at the two adjacent collection positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; δ is the adjustment coefficient;
and δ < 0.603.
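Assuming the background-board conditions take the form W1 = A1·T·d1/f and W2 = A2·T·d2/f (a reconstruction from the variable definitions given here; the published formulas are available only as images, and the function name is hypothetical), the required board size can be computed as:

```python
def background_board_size(d1_mm, d2_mm, t_mm, f_mm, a1=1.1, a2=1.1):
    """Projected background-board dimensions: the board exceeds the
    camera footprint at distance T by the empirical factors A1, A2
    (> 1.04 per the text; 1.1 < A < 2 in the optional range).
    All lengths in mm; returns (W1, W2)."""
    return a1 * t_mm * d1_mm / f_mm, a2 * t_mm * d2_mm / f_mm
```

For a full-frame sensor (36 × 24 mm), a 50 mm lens and T = 1 m with A1 = A2 = 1.1, this gives a board of roughly 792 × 528 mm, only slightly larger than the field of view, which keeps the rotation burden low.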
Optionally, 2 > A1 > 1.1 and 2 > A2 > 1.1.
Optionally, the background board and the image acquisition device are respectively arranged at the two ends of a rotating beam, and the rotating device drives the rotating beam to rotate.
Optionally, δ < 0.410.
Optionally, δ < 0.356.
Optionally, the rotating device is located on a fixed beam and drives the rotating beam to rotate.
Optionally, the background board is a flat board or a curved board.
Optionally, the main body of the background board is a solid color or carries marks.
Optionally, the background board is integrally formed or a spliced board.
The present invention also provides a 3D recognition device using the 3D information provided by the above device or method.
The present invention also provides a 3D manufacturing device using the 3D information provided by the above device or method.
Invention Points and Technical Effects
1. For the first time, it is proposed to improve synthesis speed and synthesis accuracy at the same time by adding a background board that rotates together with the camera.
2. By optimizing the positions at which the camera collects pictures, it is guaranteed that synthesis speed and synthesis accuracy can be improved at the same time; and when optimizing the positions, neither the angle nor the target size needs to be measured, giving stronger applicability.
3. A 3D synthesis algorithm matched to the above collection method is proposed, which can improve the speed and effect of 3D modeling.
5. By optimizing the size of the background board, the rotation burden is reduced while it is guaranteed that synthesis speed and synthesis accuracy can be improved at the same time.
6. Through the optimized algorithm, it is guaranteed that synthesis speed and synthesis accuracy can be improved at the same time.
Description of the Drawings
By reading the detailed description of the preferred embodiments below, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not considered a limitation of the present utility model. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
Fig. 1 is a schematic diagram of the collection area moving device as a rotating structure in Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of another implementation in which the collection area moving device is a rotating structure in Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the collection area moving device as a translation structure in Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of the collection area moving device as an irregular-motion structure in Embodiment 3 of the present invention;
Fig. 5 is a schematic diagram of the multi-camera mode in Embodiment 4 of the present invention;
Fig. 6 is a front view of the 3D information collection device provided by Embodiment 5 of the present invention;
Fig. 7 is a perspective view of the 3D information collection device provided by Embodiment 5 of the present invention;
Fig. 8 is another perspective view of the 3D information collection device provided by Embodiment 5 of the present invention.
The correspondence between the reference signs and the components is as follows:
1 target, 2 stage, 3 rotating device, 4 image acquisition device, 5 linear track, 13 background board, 14 first mounting column, 15 rotating beam, 16 horizontal support, 17 second mounting column.
具体实施方式 Detailed Description of the Embodiments
下面将参照附图更详细地描述本公开的示例性实施例。虽然附图中显示了本公开的示例性实施例,然而应当理解,可以以各种形式实现本公开而不应被这里阐述的实施例所限制。相反,提供这些实施例是为了能够更透彻地理解本公开,并且能够将本公开的范围完整的传达给本领域的技术人员。Hereinafter, exemplary embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure can be implemented in various forms and should not be limited by the embodiments set forth herein. On the contrary, these embodiments are provided to enable a more thorough understanding of the present disclosure and to fully convey the scope of the present disclosure to those skilled in the art.
为解决上述技术问题,本发明的一实施例提供了一种用于3D信息采集的图像采集设备,包括图像采集装置、旋转装置。图像采集装置用于通过图像采集装置的采集区域与目标物相对运动采集目标物一组图像;采集区域移动装置,用于驱动图像采集装置的采集区域与目标物产生相对运动。采集区域为图像采集装置的有效视场范围。In order to solve the above technical problems, an embodiment of the present invention provides an image acquisition apparatus for 3D information collection, comprising an image acquisition device and a rotating device. The image acquisition device is used to collect a set of images of the target through relative movement between the acquisition area of the image acquisition device and the target; the acquisition area moving device is used to drive the acquisition area of the image acquisition device to move relative to the target. The acquisition area is the effective field-of-view range of the image acquisition device.
实施例1:采集区域移动装置为旋转结构Embodiment 1: The collection area moving device is a rotating structure
请参考图1,目标物1固定于载物台2上,旋转装置3驱动图像采集装置4围绕目标物1转动。旋转装置3可以通过旋转臂带动图像采集装置4围绕目标物1转动。当然这种转动并不一定是完整的圆周运动,可以根据采集需要只转动一定角度。并且这种转动也不一定必须为圆周运动,图像采集装置4的运动轨迹可以为其它曲线轨迹,只要保证相机从不同角度拍摄物体即可。Please refer to FIG. 1, the target 1 is fixed on the stage 2, and the rotating device 3 drives the image acquisition device 4 to rotate around the target 1. The rotating device 3 can drive the image acquisition device 4 to rotate around the target 1 through a rotating arm. Of course, this kind of rotation is not necessarily a complete circular motion, and it can only be rotated by a certain angle according to the collection needs. Moreover, this rotation does not necessarily have to be a circular motion, and the motion trajectory of the image acquisition device 4 can be other curved trajectories, as long as it is ensured that the camera shoots the object from different angles.
旋转装置3也可以驱动图像采集装置自转,通过自转使得图像采集装置4能够从不同角度采集目标物图像。The rotation device 3 can also drive the image acquisition device to rotate, so that the image acquisition device 4 can collect images of the target object from different angles through rotation.
旋转装置3可以为悬臂、转台、轨道等多种形态,使得图像采集装置4能够产生运动即可。The rotating device 3 can be in various forms such as a cantilever, a turntable, or a track, so that the image acquisition device 4 can move.
除了上述方式,在某些情况下也可以将相机固定,如图2,承载目标物1的载物台2转动,使得目标物1面向图像采集装置4的方向时刻变化,从而使得图像采集装置4能够从不同角度采集目标物1图像。但此时计算时,仍然可以按照转化为图像采集装置4运动的情况下来进行计算,从而使得运动符合相应经验公式(具体下面将详细阐述)。例如,载物台2转动的场景下,可以假设载物台2不动,而图像采集装置4旋转。通过利用经验公式设定图像采集装置4旋转时拍摄位置的距离,从而推导出其转速,从而反推出载物台转速,以方便进行转速控制,实现3D采集。当然,这种场景并不常用,更为常用的还是转动图像采集装置。In addition to the above methods, the camera can also be fixed in some cases. As shown in Figure 2, the stage 2 carrying the target 1 rotates, so that the direction in which the target 1 faces the image acquisition device 4 changes continuously, allowing the image acquisition device 4 to collect images of the target 1 from different angles. When calculating, however, the situation can still be converted into one in which the image acquisition device 4 moves, so that the movement conforms to the corresponding empirical formula (described in detail below). For example, in a scenario where the stage 2 rotates, it can be assumed that the stage 2 is stationary while the image acquisition device 4 rotates. By using the empirical formula to set the distance between shooting positions as the image acquisition device 4 rotates, its rotation speed is derived, and the stage rotation speed is then deduced in reverse, which facilitates speed control and realizes 3D acquisition. Of course, this scenario is not commonly used; rotating the image acquisition device is more common.
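The stage-speed conversion described above can be sketched as follows. This is an illustrative sketch only: the function name, the circular-path assumption, and the fixed shot interval are not from the original; the chord spacing between adjacent shooting positions would come from the empirical condition given below.

```python
import math

def stage_angular_speed(chord_L, radius_R, shot_interval_s):
    """Angular speed (rad/s) at which a stage must turn so that a stationary
    camera sees the same sequence of relative poses as a camera orbiting a
    stationary target, with adjacent shooting positions chord_L apart on a
    circle of radius radius_R."""
    # Angular step between two adjacent shooting positions (chord geometry).
    theta = 2.0 * math.asin(chord_L / (2.0 * radius_R))
    # One exposure every shot_interval_s seconds -> required angular speed.
    return theta / shot_interval_s
```

For example, with a 100 mm spacing at a 500 mm orbit radius and one shot per second, the stage should turn at about 0.2 rad/s.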
另外,为了使得图像采集装置4能够采集目标物1不同方向的图像,也可以保持图像采集装置4和目标物1均静止,通过旋转图像采集装置4的光轴来实现。例如:采集区域移动装置4为光学扫描装置,使得图像采集装置4不移动或转动的情况下,图像采集装置4的采集区域与目标物1产生相对运动。采集区域移动装置还包括光线偏转单元,光线偏转单元被机械驱动发生转动,或被电学驱动导致光路偏折,或本身为多组在空间的排布,从而实现从不同角度获得目标物的图像。光线偏转单元典型地可以为反射镜,通过转动使得目标物不同方向的图像被采集。或直接于空间布置环绕目标物的反射镜,依次使得反射镜的光进入图像采集装置4中。与前述类似,这种情况下光轴的转动可以看作是图像采集装置4虚拟位置的转动,通过这种转换的方法,假设为图像采集装置4转动,从而利用下述经验公式进行计算。In addition, in order to enable the image acquisition device 4 to collect images of the target 1 in different directions, both the image acquisition device 4 and the target 1 can be kept still, and the optical axis of the image acquisition device 4 rotated instead. For example, the acquisition area moving device 4 is an optical scanning device, so that the acquisition area of the image acquisition device 4 moves relative to the target 1 without the image acquisition device 4 itself moving or rotating. The acquisition area moving device also includes a light deflection unit, which is mechanically driven to rotate, or electrically driven to deflect the light path, or consists of multiple groups arranged in space, so as to obtain images of the target from different angles. The light deflection unit can typically be a mirror, which rotates so that images of the target in different directions are collected; or mirrors surrounding the target can be arranged directly in space, so that light from each mirror enters the image acquisition device 4 in turn. Similar to the foregoing, the rotation of the optical axis in this case can be regarded as rotation of a virtual position of the image acquisition device 4; through this conversion, the image acquisition device 4 is assumed to rotate, and the empirical formula below is used for calculation.
图像采集装置4用于采集目标物1的图像,其可以为定焦相机,或变焦相机。特别是即可以为可见光相机,也可以为红外相机。当然,可以理解的是任何具有图像采集功能的装置均可以使用,并不构成对本发明的限定,例如可以为CCD、CMOS、相机、摄像机、工业相机、监视器、摄像头、手机、平板、笔记本、移动终端、可穿戴设备、智能眼镜、智能手表、智能手环以及带有图像采集功能所有设备。The image acquisition device 4 is used to acquire an image of the target object 1, and it may be a fixed focus camera or a zoom camera. In particular, it can be a visible light camera or an infrared camera. Of course, it is understandable that any device with image acquisition function can be used and does not constitute a limitation of the present invention. For example, it can be CCD, CMOS, camera, video camera, industrial camera, monitor, camera, mobile phone, tablet, notebook, Mobile terminals, wearable devices, smart glasses, smart watches, smart bracelets, and all devices with image capture functions.
在旋转设置时,还可以在设备中加入背景板。背景板位于图像采集装置4对面,并且在图像采集装置4转动时同步转动,在图像采集装置4静止时保持静止。从而使得图像采集装置4采集的目标物图像都是以背景板为背景的。背景板全部为纯色,或大部分(主体)为纯色。特别是可以为白色板或黑色板,具体颜色可以根据目标物主体颜色来选择。背景板通常为平板,优选也可以为曲面板,例如凹面板、凸面板、球形板,甚至在某些应用场景下,可以为表面为波浪形的背景板;也可以为多种形状拼接板,例如可以用三段平面进行拼接,而整体呈现凹形,或用平面和曲面进行拼接等。In the rotation setting, you can also add a background board to the device. The background board is located opposite to the image acquisition device 4, and rotates synchronously when the image acquisition device 4 rotates, and remains stationary when the image acquisition device 4 is stationary. As a result, the target images collected by the image collecting device 4 are all based on the background board. The background board is all solid color, or most (main body) is solid color. In particular, it can be a white board or a black board, and the specific color can be selected according to the main color of the target object. The background board is usually a flat panel, preferably a curved panel, such as a concave panel, a convex panel, a spherical panel, and even in some application scenarios, it can be a background panel with a wavy surface; it can also be a spliced panel of various shapes. For example, three planes can be used for splicing, and the whole is concave, or flat and curved surfaces can be used for splicing.
设备还包括处理器,也称处理单元,用以根据图像采集装置采集的多个图像,根据3D合成算法,合成目标物3D模型,得到目标物3D信息。The device also includes a processor, also called a processing unit, for synthesizing a 3D model of the target object according to a 3D synthesis algorithm according to a plurality of images collected by the image acquisition device to obtain 3D information of the target object.
实施例2:采集区域移动装置为平动结构Embodiment 2: The moving device of the collection area is a translational structure
除了上述旋转结构外,图像采集装置4可以以直线轨迹相对目标物1运动。例如图像采集装置4位于直线轨道5上,沿直线轨道5依次经过目标物1进行拍摄,在过程中图像采集装置4保持不转动。其中直线轨道5也可以用直线悬臂代替。但更佳的是,如图3,在图像采集装置4整体沿直线轨迹运动时,其进行一定的转动,从而使得图像采集装置4的光轴朝向目标物1。In addition to the above-mentioned rotating structure, the image acquisition device 4 can move relative to the target 1 in a straight line. For example, the image acquisition device 4 is located on a linear track 5 and passes by the target 1 along the linear track 5 to take pictures. During the process, the image acquisition device 4 does not rotate. The linear track 5 can also be replaced by a linear cantilever. More preferably, as shown in FIG. 3, when the image acquisition device 4 as a whole moves along a linear trajectory, it performs a certain rotation, so that the optical axis of the image acquisition device 4 faces the target 1.
实施例3:采集区域移动装置为无规则运动结构Embodiment 3: The moving device of the collection area has an irregular movement structure
如图4,有时,采集区域移动并不规则,例如可以手持图像采集装置4环绕目标物1进行拍摄,此时难以以严格的轨道进行运动,图像采集装置4的运动轨迹难以准确预测。因此在这种情况下如何保证拍摄图像能够准确、稳定地合成3D模型是一大难题,目前还未有人提及。更常见的方法是多拍照片,用照片数量的冗余来解决该问题。但这样合成结果并不稳定。虽然目前也有一些通过限定相机转动角度的方式提高合成效果,但实际上用户对于角度并不敏感,即使给出优选角度,在手持拍摄的情况下用户也很难操作。因此本发明提出了通过限定两次拍照相机移动距离的方式来提高合成效果、缩短合成时间的方法。As shown in Figure 4, the movement of the acquisition area is sometimes irregular. For example, the image acquisition device 4 can be held by hand and moved around the target 1 for shooting. In this case it is difficult to move along a strict track, and the movement trajectory of the image acquisition device 4 is hard to predict accurately. How to ensure that the captured images can be synthesized into a 3D model accurately and stably in this case is therefore a major problem, which no one has yet addressed. A more common approach is to take more photos and rely on redundancy in the number of photos, but the synthesis result is then unstable. Although there are currently some ways to improve the synthesis effect by limiting the camera rotation angle, users are in fact not sensitive to angle: even if a preferred angle is given, it is difficult for the user to hold it in handheld shooting. Therefore, the present invention proposes a method for improving the synthesis effect and shortening the synthesis time by limiting the distance the camera moves between two shots.
例如在进行人脸识别过程中,用户可以手持移动终端围绕自己面部进行移动拍摄。只要符合拍照位置的经验要求(具体下述),就可以准确合成面部3D模型,此时与预先存储的标准模型相对比,即可实现人脸识别。例如可以解锁手机,或进行支付验证。For example, in the process of face recognition, a user can hold a mobile terminal to move and shoot around his face. As long as it meets the empirical requirements of the photographing position (specifically described below), the face 3D model can be accurately synthesized. At this time, the face recognition can be realized by comparing with the pre-stored standard model. For example, the mobile phone can be unlocked, or payment verification can be performed.
在无规则运动的情况下,可以在移动终端或图像采集装置4中设置传感器,通过传感器测量图像采集装置4两次拍摄时移动的直线距离,在移动距离不满足上述关于L的经验条件(具体下述条件)时,向用户发出报警。所述报警包括向用户发出声音或灯光报警。当然,也可以在用户移动图像采集装置4时,手机屏幕上显示,或语音实时提示用户移动的距离,以及可移动的最大距离L。实现该功能的传感器包括:测距仪、陀螺仪、加速度计、定位传感器和/或它们的组合。In the case of irregular movement, a sensor can be provided in the mobile terminal or the image acquisition device 4 to measure the straight-line distance the image acquisition device 4 moves between two shots. When the movement distance does not satisfy the empirical condition on L (specifically, the condition below), an alarm is issued to the user. The alarm includes a sound or light alarm to the user. Of course, while the user moves the image acquisition device 4, the distance moved and the maximum movable distance L can also be displayed on the phone screen or announced by voice in real time. Sensors that implement this function include rangefinders, gyroscopes, accelerometers, positioning sensors, and/or combinations thereof.
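A minimal sketch of the alarm logic described above, assuming the device sensors already report the straight-line distance moved since the last shot; the function name and message formats are illustrative, and the maximum distance L would come from the empirical condition below.

```python
def check_handheld_move(moved_mm, max_L_mm):
    """Check the straight-line distance moved between two shots against the
    allowed maximum L; returns (ok, message). On ok == False the caller
    should raise the sound/light alarm described in the text."""
    if moved_mm > max_L_mm:
        return False, "alarm: moved %.0f mm, allowed L is %.0f mm" % (moved_mm, max_L_mm)
    return True, "ok: moved %.0f mm of %.0f mm allowed" % (moved_mm, max_L_mm)
```

In a real device this check would run continuously as the sensor readings update, driving the on-screen or voice prompt as well as the alarm.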
实施例4:多相机方式Embodiment 4: Multi-camera mode
可以了解,除了通过相机与目标物相对运动从而使得相机可以拍摄目标物1不同角度的图像外,如图5,还可以在目标物1周围不同位置设置多个相机,从而可以实现同时拍摄目标物1不同角度的图像。It can be understood that, in addition to moving the camera relative to the target so that it can capture images of the target 1 from different angles, as shown in Figure 5, multiple cameras can also be arranged at different positions around the target 1, so that images of the target 1 from different angles can be captured simultaneously.
光源light source
通常情况下,光源位于图像采集装置4的镜头周边分散式分布,例如光源为在镜头周边的环形LED灯。由于在一些应用中,被采集对象为人体,因此需要控制光源强度,避免造成人体不适。特别是可以在光源的光路上设置柔光装置,例如为柔光外壳。或者直接采用LED面光源,不仅光线比较柔和,而且发光更为均匀。更佳地,可以采用OLED光源,体积更小,光线更加柔和,并且具有柔性特性,可以贴附于弯曲的表面。光源也可以设置于其他能够为目标物提供均匀照明的位置。光源也可以为智能光源,即根据目标物1及环境光的情况自动调整光源参数。Normally, the light source is distributed around the lens of the image acquisition device 4 in a dispersed manner, for example, the light source is a ring LED lamp around the lens. Since in some applications, the collected object is a human body, it is necessary to control the intensity of the light source to avoid causing discomfort to the human body. In particular, a soft light device, such as a soft light housing, can be arranged on the light path of the light source. Or directly use the LED surface light source, not only the light is softer, but also the light is more uniform. More preferably, an OLED light source can be used, which is smaller in size, has softer light, and has flexible characteristics that can be attached to curved surfaces. The light source can also be set in other positions that can provide uniform illumination for the target. The light source can also be a smart light source, that is, the light source parameters are automatically adjusted according to the target 1 and ambient light conditions.
图像采集装置设置方法Image acquisition device setting method
采集区域移动装置为旋转结构,图像采集装置4围绕目标物1转动,在进行3D采集时,图像采集装置4在不同采集位置光轴方向相对于目标物1发生变化,此时相邻两个图像采集装置4的位置,或图像采集装置4相邻两个采集位置满足如下条件:The collection area moving device is a rotating structure: the image acquisition device 4 rotates around the target 1, and during 3D acquisition the direction of its optical axis relative to the target 1 changes at different acquisition positions. At this time, the positions of two adjacent image acquisition devices 4, or two adjacent acquisition positions of a single image acquisition device 4, satisfy the following condition:
L < (δ·d·T)/f
(注:原公式在公开文本中以图像PCTCN2020134757-appb-000007形式出现,此处按小角度释读重构,即相邻两次拍摄位置间距不超过距离T处视场宽度d·T/f的δ倍。Note: the original formula appears in the published text as image PCTCN2020134757-appb-000007; it is reconstructed here under a small-angle reading, i.e., the spacing between adjacent shooting positions does not exceed δ times the field-of-view width d·T/f at distance T.)
其中L为相邻两个采集位置图像采集装置光心的直线距离;f为图像采集装置的焦距;d为图像采集装置感光元件(CCD)的矩形长度或宽度;T为图像采集装置感光元件沿着光轴到目标物表面的距离;δ为调整系数。Where L is the linear distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the target surface along the optical axis; and δ is the adjustment coefficient.
当上述两个位置是沿图像采集装置感光元件长度方向时,d取矩形长度;当上述两个位置是沿图像采集装置感光元件宽度方向时,d取矩形宽度。When the above two positions are along the length direction of the photosensitive element of the image capture device, d takes the length of the rectangle; when the above two positions are along the width direction of the photosensitive element of the image capture device, d is the width of the rectangle.
图像采集装置在两个位置中的任何一个位置时,感光元件沿着光轴到目标物表面的距离作为T。除了这种方法外,在另一种情况下,L为A_n、A_n+1两个图像采集装置光心的直线距离,与A_n、A_n+1两个图像采集装置相邻的A_n-1、A_n+2两个图像采集装置和A_n、A_n+1两个图像采集装置各自感光元件沿着光轴到目标物表面的距离分别为T_n-1、T_n、T_n+1、T_n+2,T=(T_n-1+T_n+T_n+1+T_n+2)/4。当然可以不只限于相邻4个位置,也可以用更多的位置进行平均值计算。When the image acquisition device is at either of the two positions, the distance from the photosensitive element to the target surface along the optical axis is taken as T. Alternatively, L may be the linear distance between the optical centers of two image acquisition devices A_n and A_n+1; with the distances from the photosensitive elements of the adjacent devices A_n-1 and A_n+2, and of A_n and A_n+1 themselves, to the target surface along the optical axis denoted T_n-1, T_n, T_n+1 and T_n+2 respectively, T=(T_n-1+T_n+T_n+1+T_n+2)/4. Of course, this is not limited to 4 adjacent positions; more positions can be used to calculate the average.
如上所述,L应当为两个图像采集装置光心的直线距离,但由于图像采集装置光心位置在某些情况下并不容易确定,因此在某些情况下也可以使用图像采集装置4的感光元件中心、图像采集装置的几何中心、图像采集装置与云台(或平台、支架)连接的轴中心、镜头近端或远端表面的中心替代,经过试验发现由此带来的误差是在可接受的范围内的,因此上述范围也在本发明的保护范围之内。As mentioned above, L should be the linear distance between the optical centers of the two image acquisition devices. However, since the position of the optical center is not easy to determine in some cases, the center of the photosensitive element of the image acquisition device 4, the geometric center of the image acquisition device, the center of the axis connecting the image acquisition device to the pan/tilt head (or platform, bracket), or the center of the near-end or far-end surface of the lens may be used instead. Experiments have found that the resulting error is within an acceptable range, so these substitutions also fall within the protection scope of the present invention.
通常情况下,现有技术中均采用物体尺寸、视场角等参数作为推算相机位置的方式,并且两个相机之间的位置关系也采用角度表达。由于角度在实际使用过程中并不好测量,因此在实际使用时较为不便。并且,物体尺寸会随着测量物体的变化而改变。例如,在进行一个成年人头部3D信息采集后,再进行儿童头部采集时,就需要重新测量头部尺寸,重新推算。上述不方便的测量以及多次重新测量都会带来测量的误差,从而导致相机位置推算错误。而本方案根据大量实验数据,给出了相机位置需要满足的经验条件,不仅避免测量难以准确测量的角度,而且不需要直接测量物体大小尺寸。经验条件中d、f均为相机固定参数,在购买相机、镜头时,厂家即会给出相应参数,无需测量。而T仅为一个直线距离,用传统测量方法,例如直尺、激光测距仪均可以很便捷的测量得到。因此,本发明的经验公式使得准备过程变得方便快捷,同时也提高了相机位置的排布准确度,使得相机能够设置在优化的位置中,从而在同时兼顾了3D合成精度和速度,具体实验数据参见下述。Generally, the prior art uses parameters such as object size and field-of-view angle to estimate the camera position, and the positional relationship between two cameras is also expressed as an angle. Since angles are not easy to measure in actual use, this is rather inconvenient in practice. Moreover, the object size changes with the object being measured: for example, after collecting 3D information of an adult's head, the head size must be re-measured and the positions recalculated before collecting a child's head. Such inconvenient measurements and repeated re-measurements introduce errors, leading to errors in the estimated camera positions. Based on a large amount of experimental data, this solution instead gives an empirical condition that the camera positions need to satisfy, which avoids measuring angles that are difficult to measure accurately and does not require directly measuring the size of the object. In the empirical condition, d and f are fixed camera parameters given by the manufacturer when the camera and lens are purchased, and need not be measured; T is merely a straight-line distance that can be measured conveniently with traditional methods such as a ruler or a laser rangefinder. Therefore, the empirical formula of the present invention makes the preparation process convenient and quick, and at the same time improves the accuracy of the camera position arrangement, so that the cameras can be set at optimized positions, taking into account both 3D synthesis accuracy and speed. See below for specific experimental data.
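As an illustration of how little measurement the condition requires, the sketch below computes the maximum adjacent spacing from f, d and T alone. The original formula is published only as an image, so the small-angle reading L ≤ δ·d·T/f used here is an assumption, as are the function names; δ = 0.410 is the value the experiments below report as the best balance of effect and time.

```python
def max_adjacent_spacing(f_mm, d_mm, T_mm, delta=0.410):
    """Maximum straight-line distance L (mm) between adjacent shooting
    positions, assuming the empirical condition reads L <= delta * d * T / f
    (an assumed small-angle reading of the published formula image)."""
    return delta * d_mm * T_mm / f_mm

def averaged_T(distances_mm):
    """T may be averaged over several adjacent positions, e.g.
    T = (T_{n-1} + T_n + T_{n+1} + T_{n+2}) / 4."""
    return sum(distances_mm) / len(distances_mm)
```

For a 16 mm lens, a 13 mm sensor edge and T = 500 mm, adjacent positions should be no more than about 167 mm apart under this reading.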
利用本发明装置,进行实验,得到了如下实验结果。Using the device of the present invention, experiments were carried out, and the following experimental results were obtained.
[实验结果表,原文以图像形式公布:PCTCN2020134757-appb-000008。Experimental results table, published as an image: PCTCN2020134757-appb-000008.]
更换相机镜头,再次实验,得到了如下实验结果。Change the camera lens, experiment again, and get the following experimental results.
[实验结果表,原文以图像形式公布:PCTCN2020134757-appb-000009、PCTCN2020134757-appb-000010。Experimental results tables, published as images: PCTCN2020134757-appb-000009 and PCTCN2020134757-appb-000010.]
更换相机镜头,再次实验,得到了如下实验结果。Change the camera lens, experiment again, and get the following experimental results.
[实验结果表,原文以图像形式公布:PCTCN2020134757-appb-000011。Experimental results table, published as an image: PCTCN2020134757-appb-000011.]
从上述实验结果及大量实验经验可以得出,δ的值应当满足δ<0.603,此时已经能够合成部分3D模型,虽然有一部分无法自动合成,但是在要求不高的情况下也是可以接受的,并且可以通过手动或者更换算法的方式弥补无法合成的部分。特别是δ的值满足δ<0.410时,能够最佳地兼顾合成效果和合成时间的平衡;为了获得更好的合成效果可以选择δ<0.356,此时合成时间会上升,但合成质量更好。当然为了进一步提高合成效果,可以选择δ<0.311。而当δ为0.681时,已经无法合成。但这里应当注意,以上范围仅仅是最佳实施例,并不构成对保护范围的限定。From the above experimental results and a large amount of experimental experience, it can be concluded that the value of δ should satisfy δ<0.603. At this time, it is possible to synthesize some 3D models. Although some of them cannot be synthesized automatically, it is acceptable if the requirements are not high. And you can make up for the parts that cannot be synthesized manually or by replacing the algorithm. Especially when the value of δ satisfies δ<0.410, the balance between synthesis effect and synthesis time can be optimally taken into account; in order to obtain a better synthesis effect, δ<0.356 can be selected. At this time, the synthesis time will increase, but the synthesis quality will be better. Of course, in order to further improve the synthesis effect, you can choose δ<0.311. When δ is 0.681, it can no longer be synthesized. However, it should be noted here that the above scope is only the best embodiment and does not constitute a limitation on the protection scope.
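The δ tiers reported above can be summarized in a small helper. The tier labels are paraphrases; the thresholds are exactly those stated in the text (0.311, 0.356, 0.410, 0.603, with δ = 0.681 failing in the experiments).

```python
def delta_quality_tier(delta):
    """Map the adjustment coefficient delta to the quality/time trade-off
    tiers reported in the experiments (smaller delta: better synthesis
    effect, longer synthesis time)."""
    if delta < 0.311:
        return "further improved synthesis effect"
    if delta < 0.356:
        return "better synthesis effect, longer synthesis time"
    if delta < 0.410:
        return "best balance of synthesis effect and time"
    if delta < 0.603:
        return "partial synthesis, acceptable for low-demand cases"
    return "synthesis likely to fail"
```

As the text notes, these ranges are from the preferred embodiments and do not limit the protection scope.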
并且从上述实验可以看出,对于相机拍照位置的确定,只需要获取相机参数(焦距f、CCD尺寸)、相机CCD与物体表面的距离T即可根据上述公式得到,这使得在进行设备设计和调试时变得容易。由于相机参数(焦距f、CCD尺寸)在相机购买时就已经确定,并且是产品说明中就会标示的,很容易获得。因此根据上述公式很容易就能够计算得到相机位置,而不需要再进行繁琐的视场角测量和物体尺寸测量。特别是在一些场合中,需要更换相机镜头,那么本发明的方法直接更换镜头常规参数f计算即可得到相机位置;同理,在采集不同物体时,由于物体大小不同,对于物体尺寸的测量也较为繁琐。而使用本发明的方法,无需进行物体尺寸测量,能够更为便捷地确定相机位置。并且使用本发明确定的相机位置,能够兼顾合成时间和合成效果。因此,上述经验条件是本发明的发明点之一。It can also be seen from the above experiments that, to determine the camera shooting positions, only the camera parameters (focal length f, CCD size) and the distance T between the camera CCD and the object surface are needed, and the positions follow from the above formula; this makes equipment design and debugging easy. Since the camera parameters (focal length f, CCD size) are fixed when the camera is purchased and are stated in the product specification, they are easy to obtain. The camera positions can therefore be calculated easily from the above formula, without tedious field-of-view angle measurement or object size measurement. In particular, when the camera lens needs to be replaced, the method of the present invention obtains the camera positions simply by substituting the new lens's standard parameter f into the calculation. Similarly, when collecting different objects, measuring object sizes is cumbersome because the sizes differ; with the method of the present invention, no object size measurement is needed, and the camera positions can be determined more conveniently. Furthermore, the camera positions determined by the present invention take into account both synthesis time and synthesis effect. Therefore, the above empirical condition is one of the inventive points of the present invention.
以上数据仅为验证该公式条件所做实验得到的,并不对发明构成限定。即使没有这些数据,也不影响该公式的客观性。本领域技术人员可以根据需要调整设备参数和步骤细节进行实验,得到其他数据也是符合该公式条件的。The above data is only obtained from experiments to verify the conditions of the formula, and does not limit the invention. Even without these data, it does not affect the objectivity of the formula. Those skilled in the art can adjust the equipment parameters and step details as needed to perform experiments, and obtain other data that also meets the conditions of the formula.
本发明所述的转动运动,为在采集过程中前一位置采集平面和后一位置采集平面发生交叉而不是平行,或前一位置图像采集装置光轴和后一位置图像采集位置光轴发生交叉而不是平行。也就是说,图像采集装置的采集区域环绕或部分环绕目标物运动,均可以认为是两者相对转动。虽然本发明实施例中列举更多的为有轨道的转动运动,但是可以理解,只要图像采集设备的采集区域和目标物之间发生非平行的运动,均是转动范畴,均可以使用本发明的限定条件。本发明保护范围并不限定于实施例中的有轨道转动。The rotational movement described in the present invention means that, during acquisition, the acquisition plane at the previous position and the acquisition plane at the next position intersect rather than being parallel, or that the optical axis of the image acquisition device at the previous position intersects, rather than parallels, the optical axis at the next acquisition position. In other words, whenever the acquisition area of the image acquisition device moves around, or partly around, the target, the two can be considered to rotate relative to each other. Although the embodiments of the present invention mostly list rotational movements on tracks, it can be understood that as long as non-parallel movement occurs between the acquisition area of the image acquisition device and the target, it falls within the category of rotation and the limiting conditions of the present invention can be used. The protection scope of the present invention is not limited to the on-track rotation in the embodiments.
本发明所述的相邻采集位置是指,在图像采集装置相对目标物移动时,移动轨迹上的发生采集动作的两个相邻位置。这通常对于图像采集装置运动容易理解。但对于目标物发生移动导致两者相对移动时,此时应当根据运动的相对性,将目标物的运动转化为目标物不动,而图像采集装置运动。此时再衡量图像采集装置在转化后的移动轨迹中发生采集动作的两个相邻位置。The adjacent acquisition positions in the present invention refer to two adjacent positions on the moving track where the acquisition action occurs when the image acquisition device moves relative to the target. This is usually easy to understand for the movement of the image capture device. However, when the target object moves to cause the two to move relative to each other, at this time, the movement of the target object should be converted into the target object's immobility according to the relativity of the movement, and the image acquisition device moves. At this time, measure the two adjacent positions of the image acquisition device where the acquisition action occurs in the transformed movement track.
实施例5Example 5
3D信息采集设备结构3D information collection equipment structure
为解决上述技术问题,本发明的一实施例提供了一种3D信息采集设备,包括图像采集装置4、旋转装置3和背景板13。In order to solve the above technical problems, an embodiment of the present invention provides a 3D information collection device, which includes an image collection device 4, a rotation device 3, and a background board 13.
图像采集装置4用于采集目标物的图像,其可以为定焦相机,或变焦相机。特别是即可以为可见光相机,也可以为红外相机。当然,可以理解的是任何具有图像采集功能的装置均可以使用,并不构成对本发明的限定,例如可以为CCD、CMOS、相机、摄像机、工业相机、监视器、摄像头、手机、平板、笔记本、移动终端、可穿戴设备、智能眼镜、智能手表、智能手环以及带有图像采集功能所有设备。The image acquisition device 4 is used to acquire an image of the target object, and it can be a fixed focus camera or a zoom camera. In particular, it can be a visible light camera or an infrared camera. Of course, it is understandable that any device with image acquisition function can be used and does not constitute a limitation of the present invention. For example, it can be CCD, CMOS, camera, video camera, industrial camera, monitor, camera, mobile phone, tablet, notebook, Mobile terminals, wearable devices, smart glasses, smart watches, smart bracelets, and all devices with image capture functions.
背景板13全部为纯色,或大部分(主体)为纯色。特别是可以为白色板或黑色板,具体颜色可以根据目标物主体颜色来选择。背景板13通常为平板,优选也可以为曲面板,例如凹面板、凸面板、球形板,甚至在某些应用场景下,可以为表面为波浪形的背景板;也可以为多种形状拼接板,例如可以用三段平面进行拼接,而整体呈现凹形,或用平面和曲面进行拼接等。除了背景板13表面的形状可以变化外,其边缘形状也可以根据需要选择。通常情况下为直线型,从而构成矩形板。但是在某些应用场合,其边缘可以为曲线。The background plate 13 is entirely a solid color, or mostly (the main body) a solid color. In particular, it can be a white plate or a black plate, and the specific color can be selected according to the main color of the target. The background plate 13 is usually flat, but may preferably be a curved plate, such as a concave plate, a convex plate, or a spherical plate; in some application scenarios it may even be a background plate with a wavy surface. It can also be spliced from plates of various shapes, for example three flat sections spliced into an overall concave shape, or flat and curved surfaces spliced together. In addition to the surface shape of the background plate 13, its edge shape can also be selected as needed. Usually the edges are straight, forming a rectangular plate, but in some applications the edges can be curved.
优选的,背景板13为曲面板,这样可以使得在获得最大背景范围的情况下,使得背景板13投影尺寸最小。这使得背景板在转动时需要的空间更小,有利于缩小设备体积,并且减少设备重量,避免转动惯性,从而更有利于控制转动。Preferably, the background plate 13 is a curved plate, which minimizes the projected size of the background plate 13 while obtaining the maximum background range. This means the background plate needs less space when rotating, which helps reduce the size and weight of the device and avoid rotational inertia, making the rotation easier to control.
无论背景板13的表面形状和边缘形状如何,在垂直其被拍摄表面的方向进行投影,投影形状的水平方向上长度W 1、投影形状的垂直方向上长度W 2由下述条件决定: Regardless of the surface shape and edge shape of the background plate 13, the projection is performed in the direction perpendicular to the surface to be photographed. The length W 1 in the horizontal direction of the projected shape and the length W 2 in the vertical direction of the projected shape are determined by the following conditions:
W1 ≥ A1·d1·T/f
W2 ≥ A2·d2·T/f
(注:原公式在公开文本中以图像PCTCN2020134757-appb-000012、PCTCN2020134757-appb-000013形式出现,此处按"背景板覆盖距离T处的视场并留有A倍余量"的释读重构。Note: the original formulas appear in the published text as images PCTCN2020134757-appb-000012 and PCTCN2020134757-appb-000013; they are reconstructed here on the reading that the plate covers the camera's field of view at distance T with a margin factor A.)
其中,d1为成像元件水平方向长度,d2为成像元件垂直方向长度,T为沿光轴方向图像采集装置传感元件到背景板的垂直距离,f为图像采集装置的焦距,A1、A2为经验系数。Where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the perpendicular distance from the sensing element of the image acquisition device to the background plate along the optical axis, f is the focal length of the image acquisition device, and A1 and A2 are empirical coefficients.
经过大量实验,优选,A1>1.04,A2>1.04;更优选的2>A1>1.1,2>A2>1.1。After extensive experiments, preferably A1 > 1.04 and A2 > 1.04; more preferably 2 > A1 > 1.1 and 2 > A2 > 1.1.
在一些应用场景下,背景板边缘为非直线形,导致其投影后投影图形边缘也为非直线。此时不同位置测量W1、W2均不同,因此实际计算时W1、W2不易确定。因此,可以在背景板13相对两侧边缘分别取3-5个点,测量相对两点的直线距离,再取测量的平均值作为上述条件中的W1、W2。In some application scenarios, the edge of the background plate is non-straight, so the edge of its projection is also non-straight. W1 and W2 then differ depending on where they are measured, so they are not easy to determine for the actual calculation. Therefore, 3-5 points can be taken on each of two opposite edges of the background plate 13, the straight-line distances between opposite pairs of points measured, and the average of the measurements taken as W1 and W2 in the above conditions.
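A sketch of the plate-sizing calculation follows. The original conditions are published only as images, so the reading W1 ≥ A1·d1·T/f, W2 ≥ A2·d2·T/f used here (the plate covering the field of view at distance T with margin A) is an assumption, and all names are illustrative.

```python
def min_background_plate(d1_mm, d2_mm, T_mm, f_mm, A1=1.2, A2=1.2):
    """Minimum projected horizontal/vertical extents (W1, W2) of the
    background plate, assuming W1 >= A1*d1*T/f and W2 >= A2*d2*T/f.
    A1 = A2 = 1.2 matches the best-performing row of the experiment table."""
    return A1 * d1_mm * T_mm / f_mm, A2 * d2_mm * T_mm / f_mm

def edge_average(pair_distances_mm):
    """For non-straight plate edges, average the straight-line distances
    measured between 3-5 pairs of opposite edge points to obtain W1 or W2."""
    return sum(pair_distances_mm) / len(pair_distances_mm)
```

For instance, a 13 mm x 9 mm sensor behind a 16 mm lens at T = 800 mm would call for a plate of roughly 780 mm x 540 mm under this reading.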
如果背景板13过大,使得悬臂过长,会增加设备体积,同时给旋转带来额外的负担,使得设备更容易损坏。但如果背景板过小,则会导致背景不单纯,从而带来计算的负担。If the background plate 13 is too large and the cantilever is too long, it will increase the volume of the device, and at the same time bring extra burden to the rotation, making the device more likely to be damaged. However, if the background plate is too small, the background will not be pure, which will bring about the burden of calculation.
下表为实验对照结果:The following table is the experimental control results:
实验条件:Experimental conditions:
采集对象:石膏像人头Collection object: plaster head
相机:MER-2000-19U3M/CCamera: MER-2000-19U3M/C
镜头:OPT-C1616-10MLens: OPT-C1616-10M
经验系数 Empirical coefficients | 合成时间 Synthesis time | 合成精度 Synthesis accuracy
A1=1.2,A2=1.2 | 3.3分钟 3.3 minutes | 高 High
A1=1.4,A2=1.4 | 3.4分钟 3.4 minutes | 高 High
A1=0.9,A2=0.9 | 4.5分钟 4.5 minutes | 中高 Medium-high
无 None | 7.8分钟 7.8 minutes | 中 Medium
Background plate 13 is mounted on first mounting column 14 through a frame; first mounting column 14 is arranged vertically at one end of rotating beam 15. Image acquisition device 4 is mounted on a horizontal bracket 16, which is connected to second mounting column 17; second mounting column 17 is arranged vertically at the other end of rotating beam 15. First mounting column 14 can move horizontally along rotating beam 15 to adjust the horizontal position of background plate 13, and second mounting column 17 can move horizontally along rotating beam 15 to adjust the horizontal position of image acquisition device 4. Of course, background plate 13 can also be mounted directly on the mounting column, or formed integrally with it.
The frame of background plate 13 can move up and down along first mounting column 14 to adjust the vertical position of background plate 13; horizontal bracket 16 can move up and down along second mounting column 17 to adjust the vertical position of image acquisition device 4.
Image acquisition device 4 can also move horizontally along horizontal bracket 16 to adjust its horizontal position.
These movements can be realized in various ways, such as guide rails, lead screws, or sliding tables.
Rotating beam 15 is connected to a fixed beam through rotating device 3. Rotating device 3 drives rotating beam 15 to rotate, carrying background plate 13 and image acquisition device 4 at the two ends of the beam around with it. However the beam rotates, image acquisition device 4 and background plate 13 remain opposite each other; in particular, the optical axis of image acquisition device 4 passes through the center of background plate 13.
Rotating device 3 can be a motor together with a matching rotary transmission system, such as a gear train or a drive belt.
The light source can be arranged on rotating beam 15, first mounting column 14, second mounting column 17, horizontal bracket 16 and/or image acquisition device 4. It can be an LED light source or a smart light source, i.e. one that automatically adjusts its parameters according to the target and the ambient light. Typically the light sources are distributed around the lens of image acquisition device 4, for example as a ring of LED lamps around the lens. Since in some applications the acquired object is a human body, the light-source intensity must be controlled to avoid causing discomfort. In particular, a light-softening device, such as a diffuser housing, can be placed in the light path. Alternatively, an LED panel light can be used directly: its light is softer and more uniform. Better still, an OLED light source can be used; it is smaller, its light is softer, and its flexibility allows it to be attached to curved surfaces.
The object to be acquired is usually located between image acquisition device 4 and background plate 13. When the target is a human body, a seat can be placed at the center of the device base. Since people differ in height, the seat can be connected to a lifting structure. The lifting mechanism is driven by a motor and its raising and lowering is controlled by a remote controller. Of course, the lifting mechanism can also be controlled centrally by a control terminal: the control panel of the drive motor communicates with the control terminal by wire or wirelessly and receives its commands. The control terminal can be a computer, a cloud platform, a mobile phone, a tablet, a dedicated control device, etc.
When the target is an object, a stage can be placed at the center of the device base instead. The stage can likewise be driven by a lifting structure for height adjustment, to facilitate acquiring the target object's information. The control method and connections are the same as above and are not repeated. Notably, unlike a person, an object feels no discomfort when rotated, so the stage can be rotated by a rotating device; in that case there is no need during acquisition to rotate beam 15 to carry image acquisition device 4 and background plate 13 around. Of course, the stage and rotating beam 15 can also rotate simultaneously.
To facilitate measuring the actual size of the target, four marker points with known coordinates can be placed on the seat or the stage. By capturing the marker points and combining them with their known coordinates, the absolute scale of the synthesized 3D model is obtained. The marker points can be located on the headrest of the seat.
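One way the known marker coordinates can establish absolute scale is by comparing pairwise marker distances in the model with the corresponding real-world distances. This is an assumed approach for illustration, not the patent's exact procedure; all names are my own.

```python
from itertools import combinations

def absolute_scale(model_pts, world_pts):
    """Estimate the scale factor mapping model units to real-world units.

    model_pts: the four markers' positions measured in the synthesized 3D
    model. world_pts: the same markers' known real-world coordinates (e.g.
    on the seat headrest or stage), in the same order. The ratio of summed
    pairwise distances is invariant to the unknown rotation and translation
    between the two coordinate frames.
    """
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    pairs = list(combinations(range(len(model_pts)), 2))
    model_d = sum(dist(model_pts[i], model_pts[j]) for i, j in pairs)
    world_d = sum(dist(world_pts[i], world_pts[j]) for i, j in pairs)
    return world_d / model_d
```

Multiplying every model coordinate by the returned factor converts the scale-free photogrammetric model into real-world units.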
The device also includes a processor, also called a processing unit, which synthesizes a 3D model of the target from the multiple images collected by the image acquisition device according to a 3D synthesis algorithm, thereby obtaining the target's 3D information.
3D information acquisition method flow
Place the target between image acquisition device 4 and background plate 13, preferably on the extension of the axis of rotating device 3, i.e. at the center of the circle around which image acquisition device 4 rotates. This keeps the distance between image acquisition device 4 and the target essentially constant during rotation, preventing unclear images caused by drastic changes in object distance and avoiding excessive depth-of-field requirements on the camera (which would increase cost).
When the target is a human head, a seat can be placed between image acquisition device 4 and background plate 13 so that, when the person sits down, the head lies near the rotation axis and between image acquisition device 4 and background plate 13. Since people differ in height, the height of the region to be acquired (for example the head) differs as well; the position of the head in the field of view of image acquisition device 4 can then be adjusted by adjusting the seat height. When acquiring objects, the seat can be replaced with a stage.
Besides adjusting the seat height, the vertical positions of image acquisition device 4 and background plate 13 can also be adjusted to keep the center of the target at the center of the field of view of image acquisition device 4. For example, the background plate can move up and down along first mounting column 14, and horizontal bracket 16 carrying image acquisition device 4 can move up and down along second mounting column 17. Usually the movements of background plate 13 and image acquisition device 4 are synchronized, ensuring that the optical axis of image acquisition device 4 passes through the center of background plate 13.
The size of the target differs considerably from one acquisition to the next. If image acquisition device 4 always acquired from the same position, the proportion of the image occupied by the target would vary greatly: when target A fills the image appropriately, a smaller target B would occupy only a very small proportion, which would severely affect the speed and accuracy of the subsequent 3D synthesis. Therefore image acquisition device 4 can be driven back and forth along the horizontal bracket to keep the target at a suitable proportion of the pictures it captures.
With the target essentially stationary, rotating device 3 rotates beam 15 to carry image acquisition device 4 and background plate 13 around the target, keeping the two opposite each other throughout the rotation. Acquisition during rotation can be performed either by rotating continuously and capturing at fixed angular intervals, or by stopping at fixed angular intervals to capture, resuming rotation after each capture, and stopping again at the next position.
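Both acquisition modes visit the same set of fixed angular intervals around the target. A small helper can convert a chosen optical-center spacing L into the corresponding beam angle step and list the capture angles for one revolution; this is an illustrative sketch with assumed names, not part of the patent.

```python
import math

def step_angle_from_chord(L, R):
    """Angular step (degrees) whose chord, at camera-orbit radius R, equals
    the optical-center spacing L between adjacent acquisition positions."""
    return math.degrees(2 * math.asin(L / (2 * R)))

def capture_angles(step_deg, start_deg=0.0):
    """Beam angles (degrees) for one full revolution at a fixed interval."""
    n = int(360.0 // step_deg)
    return [(start_deg + i * step_deg) % 360.0 for i in range(n)]
```

In continuous mode the camera is triggered as the beam crosses each listed angle; in stop-and-go mode the beam halts at each listed angle before capturing.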
3D acquisition camera position optimization
According to a large number of experiments, the acquisition spacing preferably satisfies the following empirical formula.
During 3D acquisition, two adjacent acquisition positions of the image acquisition device satisfy the following condition:
Figure PCTCN2020134757-appb-000014
where L is the straight-line distance between the optical centers of the image acquisition device at the two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the target surface along the optical axis; and δ is an adjustment coefficient, δ < 0.603.
When the two positions are along the length direction of the photosensitive element of the image acquisition device, d takes the rectangle's length; when the two positions are along its width direction, d takes the rectangle's width.
With the image acquisition device at either of the two positions, T is the distance from the photosensitive element to the target surface along the optical axis. Alternatively, L is the straight-line distance between the optical centers at two acquisition positions An and An+1; with Tn-1, Tn, Tn+1, Tn+2 denoting the distances from the photosensitive element to the target surface along the optical axis at the neighboring positions An-1 and An+2 and at An and An+1 respectively, T = (Tn-1 + Tn + Tn+1 + Tn+2)/4. Of course, the average need not be limited to four adjacent positions; more positions can be used.
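The averaging of T over neighboring positions can be written directly; a trivial sketch with assumed names:

```python
def averaged_object_distance(T_values):
    """Average the sensor-to-surface distances T_i measured along the
    optical axis at several adjacent acquisition positions. The text uses
    the four positions A(n-1)..A(n+2), but more positions may be supplied."""
    return sum(T_values) / len(T_values)
```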
L should be the straight-line distance between the optical centers at the two positions. Since the position of the optical center is not always easy to determine, in some cases the center of the photosensitive element, the geometric center of the image acquisition device, the center of the axis connecting the image acquisition device to the pan-tilt head (or platform, or bracket), or the center of the near or far lens surface can be used instead; experiments show that the resulting error is within an acceptable range.
In the prior art, parameters such as object size and field-of-view angle are generally used to estimate camera positions, and the positional relationship between two cameras is also expressed as an angle. Angles are hard to measure in actual use, which makes this inconvenient; moreover, the object size changes with the object being measured. For example, after acquiring 3D information of an adult's head, the head size must be re-measured and the positions recalculated before acquiring a child's head. Such inconvenient measurements and repeated re-measurements introduce errors that lead to incorrect camera-position estimates. The present scheme, based on a large amount of experimental data, instead gives an empirical condition that the camera positions need to satisfy: it avoids measuring angles that are difficult to measure accurately, and it does not require measuring the object's size directly. In the empirical condition, d and f are fixed camera parameters supplied by the manufacturer when the camera and lens are purchased, so they need not be measured; and T is merely a straight-line distance that can easily be measured with conventional means such as a ruler or a laser rangefinder. The empirical formula of the present invention therefore makes preparation quick and convenient while improving the accuracy of the camera-position arrangement, so that the cameras can be placed at optimized positions that balance 3D synthesis accuracy and speed. Specific experimental data are given below.
Experiments were carried out with the device of the present invention, with the following results.
Figure PCTCN2020134757-appb-000015
Figure PCTCN2020134757-appb-000016
The camera lens was replaced and the experiment repeated, with the following results.
Figure PCTCN2020134757-appb-000017
The camera lens was replaced again and the experiment repeated, with the following results.
Figure PCTCN2020134757-appb-000018
From the above experimental results and extensive experimental experience, it can be concluded that δ should satisfy δ < 0.603, at which point partial 3D models can already be synthesized; although some parts cannot be synthesized automatically, this is acceptable where requirements are modest, and the unsynthesized parts can be completed manually or by switching algorithms. When δ < 0.410 in particular, the best balance between synthesis quality and synthesis time is achieved; for a better synthesis result, δ < 0.356 can be chosen, at the cost of longer synthesis time; and δ < 0.311 improves the result further. When δ is 0.681, synthesis is no longer possible. It should be noted, however, that these ranges are merely preferred embodiments and do not limit the scope of protection.
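The reported thresholds can be collected into a small helper for choosing δ according to the desired quality tier. The tier names are my own labels for the patent's stated bounds.

```python
# Upper bounds on the adjustment coefficient delta, per the experiments above.
DELTA_THRESHOLDS = {
    "acceptable":     0.603,  # partial models synthesize; gaps fixable by hand
    "balanced":       0.410,  # best quality/time trade-off
    "high_quality":   0.356,  # better quality, longer synthesis time
    "higher_quality": 0.311,  # further improved synthesis result
}

def max_delta(tier):
    """Upper bound on delta for the requested synthesis-quality tier."""
    return DELTA_THRESHOLDS[tier]

def spacing_ok(delta, tier="balanced"):
    """True if a chosen delta satisfies the empirical bound for the tier."""
    return delta < DELTA_THRESHOLDS[tier]
```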
The above experiments also show that, to determine the camera shooting positions, one only needs the camera parameters (focal length f, CCD size) and the distance T from the camera CCD to the object surface, from which the positions follow by the above formula; this makes device design and commissioning easy. Since the camera parameters (focal length f, CCD size) are fixed when the camera is purchased and are stated in the product documentation, they are easy to obtain, so the camera positions can be computed directly without tedious field-of-view-angle or object-size measurements. In particular, when the camera lens must be replaced, the method of the present invention obtains the camera positions simply by substituting the new lens's standard parameter f into the calculation; likewise, when acquiring different objects, measuring each object's size would be cumbersome, whereas with the method of the present invention no object-size measurement is needed, and the camera positions can be determined more conveniently. Moreover, camera positions determined by the present invention balance synthesis time against synthesis quality. The above empirical condition is therefore one of the inventive points of the present invention.
The above data were obtained only from experiments verifying the conditions of the formula and do not limit the invention; even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust the device parameters and procedural details as needed and perform experiments; other data so obtained will also satisfy the conditions of the formula.
3D synthesis method
After image acquisition device 4 has captured images of the target from multiple directions, the multiple images are transmitted to a processor. The processor can be local, or the images can be uploaded to a cloud platform and a remote processor used. In the processor, the 3D model is synthesized by the following method.
3D synthesis from the collected pictures can use an existing algorithm, or it can use the optimized algorithm proposed by the present invention, which mainly comprises the following steps:
Step 1: apply image enhancement to all input photographs. The following filter is used to enhance the contrast of the original photographs while suppressing noise.
Figure PCTCN2020134757-appb-000019
where g(x, y) is the gray value of the original image at (x, y); f(x, y) is the gray value at that point after enhancement by the Wallis filter; m_g is the local gray-level mean of the original image; s_g is the local gray-level standard deviation of the original image; m_f is the target local gray-level mean of the transformed image; and s_f is the target local gray-level standard deviation of the transformed image. c ∈ (0, 1) is an expansion constant for the image variance, and b ∈ (0, 1) is an image brightness coefficient constant.
This filter greatly enhances image texture patterns at different scales, so it increases the number and accuracy of feature points when extracting point features from an image, and improves the reliability and accuracy of the matching results in photo feature matching.
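A Wallis filter over the symbols defined above can be sketched as follows. The formula image is not reproduced in this text, so this implementation assumes the form of the Wallis filter commonly used in photogrammetry; treat it as an illustration rather than the patent's exact equation.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def wallis_filter(g, m_f=127.0, s_f=60.0, c=0.8, b=0.9, win=15):
    """Wallis contrast enhancement in the commonly used form (assumed):

        f = (g - m_g) * c*s_f / (c*s_g + (1 - c)*s_f) + b*m_f + (1 - b)*m_g

    g: 2-D grayscale image. m_g, s_g are the local mean and standard
    deviation over a win x win window; m_f, s_f are the target local mean
    and standard deviation; c, b in (0, 1) are the variance-expansion and
    brightness constants.
    """
    g = np.asarray(g, dtype=float)
    pad = win // 2
    gp = np.pad(g, pad, mode="reflect")
    # Box-window local statistics via sliding windows (one window per pixel).
    w = sliding_window_view(gp, (win, win))
    m_g = w.mean(axis=(-2, -1))
    s_g = w.std(axis=(-2, -1))
    r1 = c * s_f / (c * s_g + (1.0 - c) * s_f)
    r0 = b * m_f + (1.0 - b) * m_g
    return (g - m_g) * r1 + r0
```

On a flat region (s_g = 0) the output reduces to the weighted brightness term b*m_f + (1 - b)*m_g, while textured regions are stretched toward the target statistics.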
Step 2: extract feature points from all input photographs and match them to obtain sparse feature points. The SURF operator is used for feature-point extraction and matching. The SURF feature matching method comprises three stages: feature-point detection, feature-point description, and feature-point matching. It detects feature points with the Hessian matrix, replaces second-order Gaussian filtering with box filters, accelerates convolution with integral images to raise computation speed, and reduces the dimensionality of the local image feature descriptor to speed up matching. The main steps are: (1) Construct the Hessian matrix and generate all interest points for feature extraction; the purpose of constructing the Hessian matrix is to generate stable edge points (abrupt-change points) of the image. (2) Build scale-space feature-point localization: each pixel processed by the Hessian matrix is compared with its 26 neighbors in 2D image space and scale space to locate key points preliminarily; key points with weak energy and wrongly located key points are then filtered out, leaving the final stable feature points. (3) Determine the dominant orientation of each feature point using the Haar-wavelet responses in its circular neighborhood: within that neighborhood, the sums of the horizontal and vertical Haar-wavelet responses of all points inside a 60-degree sector are computed; the sector is rotated in steps of 0.2 radians and the responses recomputed, and the direction of the sector with the largest value is taken as the feature point's dominant orientation. (4) Generate the 64-dimensional descriptor: a 4×4 block of rectangular sub-regions is taken around the feature point, oriented along its dominant orientation. In each sub-region, the horizontal and vertical Haar-wavelet responses of 25 pixels are accumulated, horizontal and vertical being relative to the dominant orientation. Each sub-region yields four values: the sums of the horizontal responses, of the vertical responses, of the absolute horizontal responses, and of the absolute vertical responses. These four values form each sub-region's feature vector, giving 4×4×4 = 64 dimensions in total as the SURF descriptor. (5) Match feature points: the degree of matching is determined by computing the Euclidean distance between two feature points' descriptors; the shorter the Euclidean distance, the better the match.
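Step (5), matching descriptors by Euclidean distance, can be sketched directly in NumPy. This is an illustrative nearest-neighbor matcher with assumed names, not the patent's implementation.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=None):
    """Nearest-neighbour matching of two descriptor arrays (e.g. 64-D SURF
    descriptors, shapes (n, 64) and (m, 64)) by Euclidean distance.

    Returns a list of (i, j, dist): for each descriptor i in desc_a, the
    index j of its closest descriptor in desc_b and their distance.
    Optionally discards matches whose distance exceeds max_dist.
    """
    a = np.asarray(desc_a, dtype=float)
    b = np.asarray(desc_b, dtype=float)
    # All pairwise squared Euclidean distances.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    j = d2.argmin(axis=1)
    dist = np.sqrt(d2[np.arange(len(a)), j])
    matches = [(i, int(j[i]), float(dist[i])) for i in range(len(a))]
    if max_dist is not None:
        matches = [m for m in matches if m[2] <= max_dist]
    return matches
```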
Step 3: input the matched feature-point coordinates and, using bundle adjustment, solve for the sparse 3D point cloud of the face and the positions and orientations of the cameras, obtaining the sparse face-model 3D point cloud and the model coordinate values of the positions; then, using the sparse feature points as initial values, perform dense multi-view photo matching to obtain dense point-cloud data. This process has four main steps: stereo-pair selection, depth-map computation, depth-map refinement, and depth-map fusion. For each image in the input data set, a reference image is selected to form a stereo pair used to compute a depth map. Rough depth maps of all images are thus obtained; since they may contain noise and errors, the depth map of each image is refined by consistency checks against its neighboring depth maps. Finally, depth-map fusion yields the 3D point cloud of the entire scene.
Step 4: reconstruct the face surface from the dense point cloud. This involves defining an octree, setting up the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface. The integral relationship between the sample points and the indicator function is obtained from the gradient relationship; the vector field of the point cloud is obtained from this integral relationship; an approximation to the gradient field of the indicator function is computed, forming the Poisson equation. An approximate solution of the Poisson equation is found by matrix iteration, the isosurface is extracted with the marching-cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud.
Step 5: fully automatic texture mapping of the face model. After the surface model is built, texture mapping is performed. The main stages are: (1) acquire texture data via the triangular mesh of the target surface reconstructed from the images; (2) analyze the visibility of the reconstructed model's triangles, using the images' calibration information to compute each triangle's set of visible images and its optimal reference image; (3) cluster triangles into texture patches, grouping them into a number of reference-image texture patches according to their visible-image sets, optimal reference images, and neighborhood topology; (4) automatically sort the texture patches to generate the texture image: the generated patches are sorted by size, the texture image with the smallest bounding area is generated, and the texture-mapping coordinates of each triangle are obtained.
It should be noted that the above algorithm is an optimized algorithm of the present invention; it cooperates with the image acquisition conditions, and its use balances synthesis time and quality, which is one of the inventive points of the present invention. Of course, conventional prior-art 3D synthesis algorithms can also be used, although the synthesis result and speed will be affected to some extent.
Matching and production of accessories
After the target's 3D information has been acquired and its 3D model synthesized, matching accessories can be made for the target from the 3D data.
For example, glasses fitted to a user's face shape can be made. Subject to the above empirical conditions, multiple pictures of the user's head are collected from different directions, and a 3D model is synthesized from them with 3D synthesis software; the method used can be a common 3D image-matching algorithm. Once the 3D mesh model is obtained, texture information is added to form the 3D head model. A suitable spectacle frame is then selected for the user according to the relevant positions and dimensions of the head model, such as cheek width, nose-bridge height, and auricle size.
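As an illustration of how a dimension taken from the head model might drive frame selection, the sketch below picks a frame by cheek width. The sizing catalog and selection rule are hypothetical, not from the patent.

```python
def select_frame(cheek_width_mm, catalog):
    """Pick from catalog (a dict: frame name -> inner frame width in mm)
    the narrowest frame that is still at least as wide as the measured
    cheek width; fall back to the widest frame if none qualifies."""
    candidates = {name: w for name, w in catalog.items()
                  if w >= cheek_width_mm}
    if not candidates:
        return max(catalog, key=catalog.get)
    return min(candidates, key=candidates.get)
```

Analogous rules could use nose-bridge height for the bridge fit and auricle position for the temple length.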
Besides glasses, accessories such as hats, gloves, and prostheses can be designed for users. Matching accessories can likewise be designed for objects, for example tightly fitting packaging boxes for irregularly shaped parts.
Target recognition and comparison
The 3D information of multiple regions of the target obtained in the above embodiments can be used for comparison, for example for identity recognition. First, the 3D information of the human face and iris is acquired with the scheme of the present invention and stored on a server as standard data. In use, for example when identity authentication is required for payment or door opening, the 3D acquisition device captures the 3D information of the face and iris again and compares it with the standard data; if the comparison succeeds, the next action is allowed. It will be understood that such comparison can also be used to authenticate fixed property such as antiques and artworks: 3D information of multiple regions of the antique or artwork is first acquired as standard data, and when authentication is needed, the 3D information of the same regions is acquired again and compared with the standard data to determine authenticity.
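A minimal sketch of comparing a freshly captured point cloud against stored standard data is given below. The patent does not specify a comparison metric, so this assumes a symmetric mean nearest-neighbor distance with a threshold, and assumes both clouds are already aligned in pose and scale; all names are my own.

```python
import numpy as np

def cloud_distance(p, q):
    """Symmetric mean nearest-neighbour distance between two 3-D point
    clouds p (n, 3) and q (m, 3), in model units."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def identity_match(captured, standard, tol=1.0):
    """Accept the comparison if the captured cloud agrees with the stored
    standard data to within tol; otherwise reject."""
    return cloud_distance(captured, standard) <= tol
```

In practice the captured cloud would first be registered to the standard cloud (and scaled using the marker points described earlier) before the distance is evaluated.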
Although the above embodiments describe the image acquisition device as capturing images, this should not be understood as applying only to groups of individual pictures; that wording is adopted merely for ease of understanding. The image acquisition device may also capture video data, and the video data may be used directly, or images may be extracted from it, for 3D synthesis. However, the shooting positions of the video frames or extracted images used in the synthesis must still satisfy the above empirical condition.
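Where video is used, one way to honour the spacing requirement is to subsample frames. The sketch below assumes the camera orbits the target on a circle and that the empirical condition reduces to some maximum optical-centre spacing `l_max_m` between consecutive frames used for synthesis; all numeric values are illustrative assumptions only:

```python
import math

def frame_stride(radius_m: float, omega_deg_per_s: float, fps: float, l_max_m: float) -> int:
    """Largest frame step whose chord between optical centres stays within l_max_m.

    The camera is assumed to orbit the target on a circle of radius radius_m
    at angular speed omega_deg_per_s, filmed at fps frames per second.
    """
    theta = math.radians(omega_deg_per_s) / fps  # rotation per video frame
    stride = 1
    # Chord between positions k frames apart: 2 * R * sin(k * theta / 2),
    # which grows monotonically while k * theta < pi (half a revolution).
    while (stride + 1) * theta < math.pi and \
            2 * radius_m * math.sin((stride + 1) * theta / 2) <= l_max_m:
        stride += 1
    return stride

# Camera 0.5 m from the target, turntable at 12 deg/s, 30 fps video,
# hypothetical maximum spacing of 0.02 m: use every 5th frame.
print(frame_stride(0.5, 12.0, 30.0, 0.02))  # -> 5
```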
The terms target object, target, and object above all denote an object whose three-dimensional information is to be acquired. It may be a single physical entity or a composition of multiple objects, for example a head or a hand. The three-dimensional information of the target includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and any other parameter carrying three-dimensional features of the target. In the present invention, "three-dimensional" means having information in the three directions X, Y, and Z, in particular depth information, which is essentially different from having only two-dimensional planar information. It is also essentially different from definitions that are called three-dimensional, panoramic, or holographic but in fact include only two-dimensional information and, in particular, no depth information.
The collection area mentioned in the present invention refers to the range that an image acquisition device (for example, a camera) can capture. The image acquisition device in the present invention may be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook computer, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device having an image-capture function.
Numerous specific details are set forth in the description provided here. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the above description of exemplary embodiments of the invention. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of the embodiments may be combined into one module, unit, or component, and may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the apparatus according to embodiments of the present invention. The present invention may also be implemented as an apparatus or device program (for example, a computer program and a computer program product) for carrying out part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and the like does not indicate any ordering; these words may be interpreted as names.
Thus far, those skilled in the art will recognize that, although multiple exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications conforming to the principles of the invention can still be determined or derived directly from the content disclosed herein without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (56)

  1. A device for high-precision acquisition of 3D information, characterized in that it comprises:
    an acquisition-area moving apparatus, configured to drive the acquisition area of an image acquisition device to move relative to the target; and
    an image acquisition device, configured to acquire a set of images of the target through the above relative motion;
    wherein, during the above relative motion, the acquisition positions of the image acquisition device meet the following condition:
    Figure PCTCN2020134757-appb-100001
    where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is an adjustment coefficient.
  2. The device according to claim 1, characterized in that δ < 0.603, preferably δ < 0.410.
  3. The device according to claim 1, characterized in that δ < 0.356; or δ < 0.311; or δ < 0.284; or δ < 0.261; or δ < 0.241; or δ < 0.107.
  4. The device according to claim 1, characterized in that the image acquisition device rotates or translates relative to the target.
  5. The device according to claim 1, characterized in that a background board is arranged opposite the image acquisition device.
  6. The device according to claim 1, characterized in that it further comprises a processor configured to perform 3D synthesis from a plurality of images in the set to generate a 3D model of the target.
  7. The device according to claim 1, characterized in that it comprises a processor arranged in the device, in a host computer, or in a remote server.
  8. The device according to claim 1, characterized in that the image acquisition device operates in the visible light band, the infrared band, or the full band.
  9. A 3D synthesis apparatus, characterized in that it uses the device according to any one of claims 1-8.
  10. A 3D recognition apparatus, characterized in that it uses the device according to any one of claims 1-8.
  11. An accessory production apparatus, characterized in that it uses the device according to any one of claims 1-8 to produce, from the 3D data, an accessory matching the target.
  12. A device for 3D information acquisition, characterized in that:
    it has a plurality of image acquisition devices respectively located around the target;
    the plurality of image acquisition devices respectively acquire a set of images of the target from different angles; and
    the positions of two adjacent image acquisition devices meet the following condition:
    Figure PCTCN2020134757-appb-100002
    where L is the straight-line distance between the optical centers of the image acquisition devices at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is an adjustment coefficient.
  13. The device according to claim 12, characterized in that δ < 0.603, preferably δ < 0.410.
  14. The device according to claim 12, characterized in that δ < 0.356; or δ < 0.311; or δ < 0.284; or δ < 0.261; or δ < 0.241; or δ < 0.107.
  15. The device according to claim 12, characterized in that a background board is arranged opposite the image acquisition device.
  16. The device according to claim 12, characterized in that it further comprises a processor configured to perform 3D synthesis from a plurality of images in the set to generate a 3D model of the target.
  17. The device according to claim 12, characterized in that it comprises a processor arranged in the device, in a host computer, or in a remote server.
  18. The device according to claim 12, characterized in that the image acquisition device operates in the visible light band, the infrared band, or the full band.
  19. A 3D synthesis apparatus, characterized in that it uses the device according to any one of claims 12-18.
  20. A 3D recognition apparatus, characterized in that it uses the device according to any one of claims 12-18.
  21. An accessory production apparatus, characterized in that it uses the device according to any one of claims 12-18 to produce, from the 3D data, an accessory matching the target.
  22. A method for high-precision acquisition of 3D information, characterized in that:
    an acquisition-area moving apparatus drives the acquisition area of an image acquisition device to move relative to the target;
    the image acquisition device acquires a set of images of the target through the above relative motion; and
    during the above relative motion, the acquisition positions of the image acquisition device meet the following condition:
    Figure PCTCN2020134757-appb-100003
    where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is an adjustment coefficient.
  23. The method according to claim 22, characterized in that δ < 0.603, preferably δ < 0.410.
  24. The method according to claim 22, characterized in that δ < 0.356; or δ < 0.311; or δ < 0.284; or δ < 0.261; or δ < 0.241; or δ < 0.107.
  25. The method according to claim 22, characterized in that the image acquisition device rotates or translates relative to the target.
  26. The method according to claim 22, characterized in that a background board is arranged opposite the image acquisition device.
  27. The method according to claim 22, characterized in that it further comprises a processor performing 3D synthesis from a plurality of images in the set to generate a 3D model of the target.
  28. The method according to claim 22, characterized in that the image acquisition device operates in the visible light band, the infrared band, or the full band.
  29. A 3D synthesis method, characterized in that it uses the method according to any one of claims 22-28.
  30. A 3D recognition method, characterized in that it uses the method according to any one of claims 22-28.
  31. A method for producing an accessory, characterized in that the method according to any one of claims 22-28 is used to produce, from the 3D data, an accessory matching the target.
  32. A method for 3D information acquisition, characterized in that:
    a plurality of image acquisition devices are respectively located around the target;
    the plurality of image acquisition devices respectively acquire a set of images of the target from different angles; and
    the positions of two adjacent image acquisition devices meet the following condition:
    Figure PCTCN2020134757-appb-100004
    where L is the straight-line distance between the optical centers of the image acquisition devices at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is an adjustment coefficient.
  33. The method according to claim 32, characterized in that δ < 0.603, preferably δ < 0.410.
  34. The method according to claim 32, characterized in that δ < 0.356; or δ < 0.311; or δ < 0.284; or δ < 0.261; or δ < 0.241; or δ < 0.107.
  35. The method according to claim 32, characterized in that a background board is arranged opposite the image acquisition device.
  36. The method according to claim 32, characterized in that it further comprises a processor performing 3D synthesis from a plurality of images in the set to generate a 3D model of the target.
  37. The method according to claim 32, characterized in that the image acquisition device operates in the visible light band, the infrared band, or the full band.
  38. A 3D synthesis method, characterized in that it uses the method according to any one of claims 33-37.
  39. A 3D recognition method, characterized in that it uses the method according to any one of claims 33-37.
  40. A method for producing an accessory, characterized in that the method according to any one of claims 33-37 is used to produce, from the 3D data, an accessory matching the target.
  41. A device for high-speed acquisition and measurement of 3D information of a target, characterized in that it comprises an image acquisition device, a rotation device, and a background board, wherein
    the rotation device is configured to drive the image acquisition device to rotate and to drive the background board to rotate;
    the background board and the image acquisition device remain arranged opposite each other during rotation, so that during acquisition the background board forms the background pattern of the images captured by the image acquisition device; and
    when the image acquisition device captures the target, two adjacent acquisition positions meet the following condition:
    Figure PCTCN2020134757-appb-100005
    where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is an adjustment coefficient.
  42. The device according to claim 41, characterized in that δ < 0.603, preferably δ < 0.410.
  43. The device according to claim 41, characterized in that the rotation device is located on a fixed beam and drives a rotating beam to rotate.
  44. The device according to claim 41, characterized in that the background board is a flat board or a curved board.
  45. The device according to claim 41, characterized in that the main body of the background board is of a solid color or carries markings.
  46. The device according to claim 41, characterized in that the background board is integrally formed or is a spliced board.
  47. The device according to claim 41, characterized in that the background board satisfies the following: when it is projected in the direction perpendicular to its photographed surface, the horizontal length W1 of the projected shape and the vertical length W2 of the projected shape are determined by the following conditions:
    Figure PCTCN2020134757-appb-100006
    Figure PCTCN2020134757-appb-100007
    where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the perpendicular distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1 and A2 are empirical coefficients;
    wherein A1 > 1.04 and A2 > 1.04.
  48. The device according to claim 47, characterized in that 2 > A1 > 1.1 and 2 > A2 > 1.1.
  49. A 3D recognition device, characterized in that it uses 3D information provided by the device according to any one of claims 41-48.
  50. A 3D manufacturing device, characterized in that it uses 3D information provided by the device according to any one of claims 41-48.
  51. A device for high-speed acquisition and measurement of 3D information of a target, characterized in that:
    when the image acquisition device captures the target, two adjacent acquisition positions meet the following condition:
    Figure PCTCN2020134757-appb-100008
    where L is the straight-line distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is an adjustment coefficient.
  52. The device according to claim 51, characterized in that the device further comprises a background board, and the background board satisfies the following: when it is projected in the direction perpendicular to its photographed surface, the horizontal length W1 of the projected shape and the vertical length W2 of the projected shape are determined by the following conditions:
    Figure PCTCN2020134757-appb-100009
    Figure PCTCN2020134757-appb-100010
    where d1 is the horizontal length of the imaging element, d2 is the vertical length of the imaging element, T is the perpendicular distance from the sensing element of the image acquisition device to the background board along the optical axis, f is the focal length of the image acquisition device, and A1 and A2 are empirical coefficients;
    wherein A1 > 1.04 and A2 > 1.04.
  53. The device according to claim 52, characterized in that 2 > A1 > 1.1 and 2 > A2 > 1.1.
  54. The device according to claim 51, characterized in that δ < 0.603, preferably δ < 0.410.
  55. A 3D recognition device, characterized in that it uses 3D information provided by the device according to any one of claims 51-54.
  56. A 3D manufacturing device, characterized in that it uses 3D information provided by the device according to any one of claims 51-54.
PCT/CN2020/134757 2019-12-12 2020-12-09 3d information collection apparatus and method WO2021115297A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201911276052.0A CN110986768B (en) 2019-12-12 2019-12-12 High-speed acquisition and measurement equipment for 3D information of target object
CN201911276052.0 2019-12-12
CN201911276062.4A CN111060023B (en) 2019-12-12 2019-12-12 High-precision 3D information acquisition equipment and method
CN201911276062.4 2019-12-12

Publications (1)

Publication Number Publication Date
WO2021115297A1

Family

ID=76329532

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134757 WO2021115297A1 (en) 2019-12-12 2020-12-09 3d information collection apparatus and method

Country Status (1)

Country Link
WO (1) WO2021115297A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989364A (en) * 2021-10-21 2022-01-28 北京航天创智科技有限公司 Full-automatic multispectral cultural relic information acquisition modeling system and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160182903A1 (en) * 2014-12-19 2016-06-23 Disney Enterprises, Inc. Camera calibration
CN109141240A (en) * 2018-09-05 2019-01-04 天目爱视(北京)科技有限公司 A kind of measurement of adaptive 3 D and information acquisition device
CN109443235A (en) * 2018-11-02 2019-03-08 滁州市云米工业设计有限公司 A kind of industrial design product collecting device for outline
CN110986768A (en) * 2019-12-12 2020-04-10 天目爱视(北京)科技有限公司 High-speed acquisition and measurement equipment for 3D information of target object
CN111060023A (en) * 2019-12-12 2020-04-24 天目爱视(北京)科技有限公司 High-precision 3D information acquisition equipment and method
CN211085114U (en) * 2019-12-12 2020-07-24 天目爱视(北京)科技有限公司 Take 3D information acquisition equipment of background board

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20900318; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20900318; Country of ref document: EP; Kind code of ref document: A1)