CN213179901U - Combined acquisition system of multiple handheld 3D acquisition devices and information utilization device - Google Patents
- Publication number
- CN213179901U (publication number) · CN202022298644.7U (application number)
- Authority
- CN
- China
- Prior art keywords
- acquisition
- handheld
- target object
- acquisition device
- equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
An embodiment of the utility model provides a combined acquisition system of multiple handheld 3D acquisition devices and an information utilization device. The combined acquisition system comprises a plurality of handheld 3D acquisition devices, wherein the acquisition range of each handheld 3D acquisition device has an overlapping area with those of the other handheld 3D acquisition devices; the overlapping area is at least partially located on the target object; and the acquisition direction of the image acquisition device of each handheld 3D acquisition device points away from the rotation center. By arranging the handheld self-rotating 3D acquisition devices at a plurality of positions, a complete multi-position combined 3D acquisition system is formed, enabling acquisition of target objects with complex surfaces, of interior spaces, and of large-range scenes.
Description
Technical Field
The utility model relates to the technical field of topography measurement, and in particular to the technical field of 3D topography measurement.
Background
When performing 3D measurements, it is necessary to first acquire 3D information. Currently common methods include the use of machine vision and structured light, laser ranging, lidar.
Structured light, laser ranging and lidar all require an active light source to be emitted toward the target object, which can affect the target under certain conditions, and the light source is costly. Moreover, the light-source assembly is a precision structure that is easily damaged.
The machine vision mode is to collect the pictures of the object at different angles and match and splice the pictures to form a 3D model, so that the cost is low and the use is easy. When the device collects pictures at different angles, a plurality of cameras can be arranged at different angles of an object to be detected, and the pictures can be collected from different angles through rotation of a single camera or a plurality of cameras. However, in either of these two methods, the capturing position of the camera needs to be set around the target (referred to as a wraparound method), but this method needs a large space for setting the capturing position for the image capturing device.
Moreover, besides the 3D construction of a single object, there are also requirements for 3D model construction of the internal space of the object and 3D model construction of the peripheral large field of view, which are difficult to achieve by the conventional surrounding type 3D acquisition device. Particularly, the surface of the target object is complex (the surface is uneven and the unevenness is deep) in the internal space or the large field range, and at the time, each part of the surface pit or the surface bump is difficult to cover by collecting at a single position, so that a complete 3D model is difficult to obtain during final synthesis, even synthesis fails, or synthesis time is prolonged. And in some situations, 3D acquisition needs to be performed quickly on site, which takes a lot of time if installation configuration is performed, and many environments do not allow for fixed installation. For example, when temporary part inspection is performed, there is no fixed inspection equipment on the production line, and a handheld device is needed to be flexibly used.
In the prior art, it has also been proposed to use empirical formulas involving rotation angle, object size and object distance to define the camera position, thereby balancing the speed and quality of synthesis. In practice, however, this has been found feasible only in wraparound 3D acquisition, where the target size can be measured in advance. In an open space it is difficult to measure the target object beforehand, yet 3D information of streets, traffic intersections, building groups, tunnels, traffic flows and the like (without limitation) still needs to be acquired, which makes this approach difficult to apply. Even for fixed, small objects whose dimensions can be measured beforehand, such as furniture or human body parts, the method remains severely limited: the target size is hard to determine accurately, and in applications where the target must be replaced frequently, every measurement brings a large amount of extra work, with professional equipment needed to measure irregular targets accurately. Measurement error leads to error in the camera position setting, which in turn degrades acquisition and synthesis speed and quality; accuracy and speed need further improvement.
Although there are methods for optimizing the surround-type acquisition device in the prior art, there is no better optimization method in the prior art when the acquisition direction of the camera of the 3D acquisition and synthesis device and the direction of its rotation axis deviate from each other.
Therefore, there is an urgent need for a device capable of accurately, efficiently and conveniently collecting 3D information with complicated peripheral or internal space.
SUMMARY OF THE UTILITY MODEL
In view of the above, the present invention has been made to provide a combined acquisition system comprising a plurality of handheld 3D acquisition devices that overcomes or at least partially solves the above mentioned problems.
The embodiment of the utility model provides a combined acquisition system of a plurality of handheld 3D acquisition devices, which comprises a plurality of handheld 3D acquisition devices,
in the plurality of handheld 3D acquisition devices, each handheld 3D acquisition device has an overlapping area with the acquisition range of the other handheld 3D acquisition devices;
the overlapping region is at least partially located on the target object;
the acquisition direction of the image acquisition device of the handheld 3D acquisition equipment is the direction deviating from the rotation center.
In alternative embodiments: the plurality of handheld 3D acquisition devices include a first type of handheld 3D acquisition device and a second type of handheld 3D acquisition device.
In alternative embodiments: the sum of the acquisition ranges of the first type of handheld 3D acquisition equipment can cover the target object, and the sum of the acquisition ranges of the second type of handheld 3D acquisition equipment can cover a specific area of the target object.
In alternative embodiments: the plurality of handheld 3D acquisition devices comprise a first type of handheld 3D acquisition device and a second type of handheld 3D acquisition device, and the sum of the acquisition ranges of the first type of handheld 3D acquisition device is larger than that of the second type of handheld 3D acquisition device.
In alternative embodiments: for a specific area of the target object, a first-type handheld 3D acquisition device and a second-type handheld 3D acquisition device are used together for scanning and acquisition.
In alternative embodiments: the specific area is a user designated area, or a previous synthesis failure area, or an outline concave-convex change area.
In alternative embodiments: the included angle alpha of the optical axes of the image acquisition devices at two adjacent acquisition positions meets the following condition:
wherein, R is the distance from the rotation center to the surface of the target object, T is the sum of the object distance and the image distance during acquisition, d is the length or the width of a photosensitive element of the image acquisition device, F is the focal length of a lens of the image acquisition device, and u is an empirical coefficient.
In alternative embodiments: u < 0.498.
In alternative embodiments: u < 0.281.
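As an illustration only: the patent's formula image did not survive extraction, so the form used below, α < u·d·T/(R·F), is an assumption reconstructed from the variable definitions and simple footprint-overlap geometry. Under that assumption, the empirical condition can be turned into a bound on the angular step and a stop count for a full revolution:

```python
import math

def max_step_angle(R, T, d, F, u=0.281):
    """Upper bound (radians) on the optical-axis angle between two
    adjacent acquisition positions, per the assumed empirical condition.
    R: rotation centre to target surface; T: object + image distance;
    d: sensor length/width along the step direction; F: focal length;
    u: empirical coefficient (u < 0.498, preferably u < 0.281)."""
    return u * d * T / (R * F)

def positions_per_revolution(alpha_max):
    # Stops needed for a full 360-degree scan at the largest allowed step.
    return math.ceil(2 * math.pi / alpha_max)
```

For example, with R = 0.5 m, T = 0.48 m, an APS-C-sized sensor width d = 23.6 mm and F = 24 mm, the bound is about 0.27 rad (≈15°), i.e. roughly 24 stops per revolution.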
An embodiment of the utility model further provides an information utilization device, comprising the system of any of the embodiments above.
Invention and technical effects
1. The 3D information of the interior space of the target object is collected by rotary scanning with the 3D acquisition device, which is applicable to both wide and narrow spaces.
2. The acquisition position of the camera is optimized by measuring the distance from the rotation center to the target object and the distance from the image sensing element to the target object, so that both the speed and the quality of 3D construction are taken into account.
3. The handheld self-rotating 3D acquisition equipment is arranged at a plurality of positions, so that a complete multi-position combined 3D acquisition system is formed. The acquisition of the inner space of a complex surface or a large-range target object is realized, and the implementation is more flexible.
4. For the first time, specific areas are scanned and acquired in a targeted manner through the arrangement of two types of acquisition devices, realizing accurate and efficient acquisition of complex objects.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a schematic structural diagram of a 3D information acquisition device provided by an embodiment of the present invention;
fig. 2 is a schematic structural diagram illustrating another implementation manner of a 3D information acquisition device provided by an embodiment of the present invention;
fig. 3 shows a schematic diagram of a multi-position combined 3D acquisition system provided by an embodiment of the present invention.
The correspondence of reference numerals to the various components in the drawings is as follows:
1, an image acquisition device;
2, a rotating device;
and 3, carrying the device.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Hand-held type 3D information acquisition equipment structure
To solve the above technical problem, an embodiment of the utility model provides a combined acquisition system comprising a plurality of handheld 3D information acquisition devices (3D acquisition devices for short). Referring to fig. 1, each device comprises an image acquisition device 1, a rotating device 2 and a carrying device 3.
The image acquisition device 1 is connected to the rotating device 2, so that it can be driven by the rotating device 2 to rotate and scan stably, realizing 3D acquisition of surrounding objects (the specific acquisition process is described in detail below). The rotating device 2 is mounted on a carrying device 3, which carries the whole apparatus. The carrying device 3 may be a handle, so that the whole apparatus can be used for handheld acquisition; it may also be a base-type carrying device for mounting on other equipment, so that the whole intelligent 3D acquisition device is installed on that equipment for combined use. For example, the device can be mounted on a vehicle and perform 3D acquisition as the vehicle travels.
The carrying device 3 bears the weight of the whole apparatus, and the rotating device 2 is connected to the carrying device 3. The carrying device may be a handle, a tripod, a base with a support, and the like. Typically, the rotating device is located at the central part of the carrying device to ensure balance, but in special cases it may be located anywhere on the carrier; the carrier is not essential, and the rotating device may instead be installed directly in the application equipment, for example on the roof of a vehicle. The interior of the carrying device accommodates a battery that powers the 3D rotary acquisition stabilizing device. For ease of use, buttons are provided on the housing of the carrying device to control the 3D rotary acquisition stabilizing device, including switching stabilization on/off and switching 3D rotary acquisition on/off.
As shown in fig. 2, the image capturing device 1 is connected to the rotating shaft of the rotating device 2, and is driven by the rotating device to rotate. Of course, the rotation shaft of the rotation device may also be connected to the image capture device via a transmission device, such as a gear set. The rotating device 2 can be arranged in the handle, and part or all of the transmission device is also arranged in the handle, so that the volume of the equipment can be further reduced.
When the image capturing device makes a 360 ° rotation in the horizontal plane, it captures an image of the corresponding object at a specific position (the specific capturing position will be described later in detail). The shooting can be performed synchronously with the rotation action, or shooting can be performed after the rotation of the shooting position is stopped, and the rotation is continued after the shooting is finished, and the like. The rotating device can be a motor, a stepping motor, a servo motor, a micro motor and the like. The rotating device (e.g., various motors) can rotate at a prescribed speed under the control of the controller and can rotate at a prescribed angle, thereby achieving optimization of the acquisition position, which will be described in detail below. Of course, the image acquisition device can be mounted on the rotating device in the existing equipment.
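The stop-shoot-continue mode described above can be sketched as follows; the `Motor` and `Camera` interfaces are hypothetical placeholders for illustration, not part of the patent's device:

```python
class ScanController:
    """Drive a rotate-stop-shoot scan over a full revolution.

    motor: hypothetical object with rotate_to(angle_deg), e.g. a stepper
        driver that stops at the commanded angle.
    camera: hypothetical object with capture() returning one image.
    step_deg: angular step between adjacent acquisition positions.
    """

    def __init__(self, motor, camera, step_deg):
        self.motor, self.camera, self.step_deg = motor, camera, step_deg

    def scan(self):
        images = []
        steps = int(360 / self.step_deg)
        for i in range(steps):
            self.motor.rotate_to(i * self.step_deg)  # stop at the position
            images.append(self.camera.capture())     # shoot while stationary
        return images                                # one image per stop
```

Shooting synchronously with rotation, the other mode mentioned in the text, would simply drop the stop between rotate and capture.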
The above device may further include a distance measuring device, the distance measuring device is fixedly connected to the image collecting device, and a direction of the distance measuring device is the same as an optical axis direction of the image collecting device. Of course, the distance measuring device can also be fixedly connected to the rotating device, as long as the distance measuring device can synchronously rotate along with the image acquisition device. Preferably, an installation platform can be arranged, the image acquisition device and the distance measurement device are both positioned on the platform, and the platform is installed on a rotating shaft of the rotating device and driven to rotate by the rotating device. The distance measuring device can use various modes such as a laser distance measuring instrument, an ultrasonic distance measuring instrument, an electromagnetic wave distance measuring instrument and the like, and can also use a traditional mechanical measuring tool distance measuring device. Of course, in some applications, the 3D acquisition device is located at a specific location, and its distance from the target object is calibrated, without additional measurements.
The device further comprises a light source, which can be arranged on the periphery of the image acquisition device, on the rotating device, or on the mounting platform. Of course, the light source may also be provided separately, for example an independent light source illuminating the target object; when lighting conditions are good, no light source need be used at all. The light source may be an LED light source or an intelligent light source, i.e. one that automatically adjusts its parameters according to the target object and the ambient light. Usually the light sources are distributed around the lens of the image acquisition device, for example as ring-shaped LED lamps around the lens. Since in some applications the light intensity needs to be controlled, a light-softening means, for example a light-softening envelope, may be arranged in the light path of the light source; alternatively, an LED surface light source may be used directly, giving softer and more uniform light. Preferably, an OLED light source can be adopted: it is smaller, gives softer light, and has the flexible characteristic of being attachable to a curved surface.
To facilitate measurement of the actual size of the target object, a plurality of marking points with known coordinates can be arranged at the position of the target object. The absolute size of the 3D synthesized model is then obtained by capturing these marking points and combining their coordinates. The marking points may be pre-set points or laser light spots. The coordinates of the points may be determined as follows. (1) Laser ranging: the calibration device emits laser beams toward the target object, so that the beams from its laser ranging units fall on the target and form calibration-point light spots. Since the beams are parallel to each other and the positional relationship between the units is known, the two-dimensional coordinates of the spots in the emission plane are obtained directly. Each laser ranging unit then measures the distance to its corresponding spot, which is the depth information of the spots on the target, i.e. the depth coordinate perpendicular to the emission plane. The three-dimensional coordinates of each spot are thereby obtained. (2) Combined distance and angle measurement: the distances to the plurality of marking points and the included angles between them are measured respectively, and the coordinates are calculated from these.
(3) Other coordinate measuring tools: for example RTK, global coordinate positioning systems, satellite-based positioning systems, and position-and-attitude sensors.
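The laser-ranging calibration in method (1) can be sketched as below, assuming parallel beams so that each unit's known in-plane position plus its measured distance yields a 3D spot coordinate; the function names are illustrative only:

```python
import math

def spot_coordinates(unit_positions, measured_depths):
    """3D coordinates of the laser spots on the target.

    unit_positions: known (x, y) of each laser ranging unit in the
        common emission plane (beams are parallel to the plane normal).
    measured_depths: measured distance from each unit to its spot.
    Each spot inherits the unit's in-plane (x, y); the measured
    distance supplies the depth coordinate z."""
    return [(x, y, z) for (x, y), z in zip(unit_positions, measured_depths)]

def model_scale(p, q, true_distance):
    """Scale factor mapping the unitless 3D model onto absolute size,
    given the true distance between two calibration spots p and q."""
    return true_distance / math.dist(p, q)
```

Applying `model_scale` to any pair of captured spots fixes the absolute size of the synthesized model.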
Multi-position combined 3D acquisition system
As shown in fig. 3, the acquisition system includes a plurality of the above-described handheld 3D information acquisition devices a, b, c ..., located at different spatial positions. The acquisition range of device a covers area A, that of device b covers area B, that of device c covers area C, and so on. Their acquisition areas at least satisfy that the intersection between two acquisition areas is not empty, and in particular that this non-empty intersection is located on the target object. That is, the acquisition range of each acquisition device overlaps with those of at least two other acquisition devices, and in particular the portion of each device's acquisition range on the target object overlaps with the corresponding portions of at least two other devices.
Whether the target is an interior space or a large field of view, it may contain areas with more complex surfaces, referred to here as specific areas. These areas have inwardly recessed deep holes or pits, outwardly projecting high protrusions, or both, and thus a large degree of surface relief. This poses a challenge for devices that acquire from one direction: because of the concavity and convexity, wherever a single device is arranged, rotary scanning can capture the specific area only from that single direction, so a large amount of information about the specific area is lost.
Therefore, the acquisition devices at multiple positions can be set to scan and acquire the specific area, so that the information of the area is obtained from different angles. For example, the intersection of the a region and the B region includes the specific region; the common intersection of the area A, the area B and the area C comprises the specific area; the intersection of the a region and the B region and the intersection of the C region and the D region each include the specific region, and so on. That is, the specific region is repeatedly scanned, which may also be referred to as a repeated scanning area, i.e., the specific region is scanned and acquired by a plurality of acquisition devices. The above conditions include that the intersection of the acquisition regions of two or more acquisition devices includes the specific region; the intersection of the acquisition regions of two or more acquisition devices and the intersection of the acquisition regions of other two or more acquisition devices both include the specific region.
The specific area can be obtained by analyzing the previous 3D synthesis condition, such as an area with a previous 3D synthesis failure or a higher failure rate; the area where the unevenness is largely changed or the area where the degree of unevenness is large may be defined in advance based on the experience of the operator.
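The repeat-scan condition above — a specific area contained in the intersection of the acquisition regions of at least two devices — can be checked with a simple set model, in which regions are sets of surface-patch identifiers; this is a simplification for illustration, not the patent's representation:

```python
def repeat_scan_check(acquisition_regions, specific_region):
    """Return the devices whose acquisition region fully contains the
    specific region, and whether the repeat-scan condition is met.

    acquisition_regions: dict mapping device name -> set of patch ids.
    specific_region: set of patch ids making up the specific area.
    The condition requires at least two covering devices, so the
    specific area is seen from at least two different angles."""
    covering = [name for name, region in acquisition_regions.items()
                if specific_region <= region]          # subset test
    return covering, len(covering) >= 2
```

A region flagged `False` here would be a candidate for inserting an extra second-type acquisition position.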
In another embodiment, multiple handheld 3D information acquisition devices are not required; only one is used. The user holds the device at the different positions in turn and performs a rotary scan at each position to obtain pictures of the target object. It should be ensured that the scanning ranges at the successive positions overlap and together cover the whole target area.
3D information acquisition process
1. And selecting the number of the first type of 3D information acquisition equipment according to the size and the position of the target object, and arranging the position for each 3D information acquisition equipment.
(1) According to the acquisition requirement of the target object, the position where the 3D information acquisition equipment can be placed is set, and the distance between the 3D information acquisition equipment and the target object is determined.
(2) The number of 3D information acquisition devices is selected according to the size of the target object, the above distance, and the acquisition ranges A, B, C ... of the devices a, b, c ..., so that the sum of the acquisition ranges covers the target object. In general, the sum of the acquisition ranges must still cover the target object when the acquisition ranges of adjacent devices overlap; for example, the overlap accounts for more than 10% of each acquisition range.
(3) The selected plurality of 3D information collection devices a, b, c … are relatively uniformly arranged at the above-mentioned distance from the target object, thereby ensuring that the collection areas of the plurality of 3D information collection devices a, b, c … can cover the target object.
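Steps (1)–(3) above can be sketched with a one-dimensional chain model in which adjacent acquisition ranges overlap by a given fraction (the 10% figure mentioned above); the model is an illustration under that assumption, not the patent's method:

```python
import math

def devices_needed(target_size, range_per_device, overlap=0.10):
    """Minimum device count so that ranges chained with the given
    pairwise overlap fraction still cover the target.

    Chain model: n*W - (n-1)*overlap*W >= target_size, where W is the
    acquisition range of one device."""
    W = range_per_device
    if target_size <= W:
        return 1
    return math.ceil((target_size - overlap * W) / (W * (1 - overlap)))
```

For a 10 m target and 3 m per-device ranges with 10% overlap this gives 4 devices (4·3 − 3·0.3 = 11.1 m ≥ 10 m, while 3 devices cover only 8.4 m).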
2. And setting the number of the second type of 3D information acquisition equipment according to the size, the number and the position of the specific area of the target object, and arranging the position for each 3D information acquisition equipment.
(1) And determining the number and the position of the specific areas of the target object. The determination may be based on prior knowledge, or on visual results, or on the distribution of regions not synthesized in the previous acquisition.
(2) One or more second type 3D information acquisition devices are arranged for each specific area according to the size of the specific area of the object so that their acquisition range can cover the specific area.
(3) And determining the number of the second type of 3D information acquisition equipment according to the number and the position of the specific areas of the target object and the number of the second type of 3D information acquisition equipment required by each specific area, and arranging the position for each 3D information acquisition equipment. In general, one or more second-type 3D information acquisition devices are inserted between the first-type 3D information acquisition devices, so as to repeatedly acquire a region with a weak acquisition range of the first-type 3D information acquisition devices, that is, repeatedly acquire a specific region, and form a repeated scanning region. The second type of 3D information acquisition device may also be located at other positions (e.g., closer or further away from the target object) to ensure that the rescanning area can obtain sufficiently different angle pictures. The position of the first type of acquisition device is called a first type of acquisition position, and the position of the second type of acquisition device is called a second type of acquisition position.
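The arrangement of first-type and second-type acquisition positions can be assembled into an ordered plan, which is also the sequence a single handheld device would visit in the single-device embodiment; positions are simplified to scalars and the helper is hypothetical:

```python
def plan_positions(first_type_positions, specific_regions,
                   per_region=1, offset=0.0):
    """Build the ordered list of (kind, position) acquisition stops.

    first_type_positions: evenly arranged positions covering the target.
    specific_regions: centre position of each specific region; for each,
        per_region second-type positions are placed at (or offset from) it.
    offset: optional shift per extra second-type device, so the
        repeat-scan region is seen from sufficiently different angles."""
    plan = [("first", p) for p in first_type_positions]
    for c in specific_regions:
        for k in range(per_region):
            plan.append(("second", c + offset * (k + 1)))
    return plan
```

Each planned stop then triggers one rotary scan satisfying the optimization condition described below.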
3. After the first type and the second type of 3D information acquisition equipment are arranged, each 3D information acquisition equipment is controlled to rotate to scan the target object, and the rotation meets the optimization condition of the image acquisition device of the 3D information acquisition equipment. That is, the image pickup device of each 3D information pickup apparatus may be controlled to rotate by the controller in accordance with the above conditions.
4. The pictures acquired by the scans of the plurality of 3D information acquisition devices are sent to a processor, which synthesizes a 3D model of the target object from them. Likewise, the pictures can be sent through a communication device to a remote platform, a cloud platform, a server, an upper computer and/or a mobile terminal, where 3D synthesis of the target object is performed using a 3D model synthesis method.
In another embodiment, a plurality of acquisition devices are not set up; instead, one acquisition device takes the place of the plurality in the above flow, and each acquisition position is a position at which the user holds the device to acquire in sequence. That is, the user holds the device and stands in turn at the acquisition positions of the first-type and second-type 3D information acquisition devices; at each acquisition position (first-type or second-type), the handheld acquisition device is controlled to rotate and acquire. Finally, each acquired picture is transmitted to a processor for 3D modeling.
In another embodiment, in addition to the above-described combined acquisition using a plurality of 3D acquisition devices, it is understood that one 3D acquisition device or a limited number of 3D acquisition devices may be used to respectively perform the acquisition at the set positions in time division sequentially. That is, the images are not acquired simultaneously, but acquired at different positions in time division, and the images acquired at different times are collected to perform 3D synthesis. The different positions described here are the same as the positions described above for the different acquisition devices.
Optimization of camera position
In order to ensure that the device can give consideration to the effect and efficiency of 3D synthesis, the method can be used for optimizing the acquisition position of the camera besides the conventional method for optimizing the synthesis algorithm. Especially in the case of 3D acquisition synthesis devices in which the acquisition direction of the camera and the direction of its axis of rotation deviate from each other, the prior art does not mention how to perform a better optimization of the camera position for such devices. Even if some optimization methods exist, they are different empirical conditions obtained under different experiments. In particular, some existing position optimization methods require obtaining the size of the target, which is feasible in the wrap-around 3D acquisition, and can be measured in advance. However, it is difficult to measure in advance in an open space. It is therefore desirable to propose a method that can be adapted to camera position optimization when the acquisition direction of the camera of the 3D acquisition composition device and its rotation axis direction deviate from each other. This is the problem to be solved by the present invention, and the technical contribution made.
Therefore, the present utility model has carried out a large number of experiments and concluded that the spacing of the camera's acquisition positions preferably satisfies the following empirical condition.
When 3D acquisition is carried out, the included angle α between the optical axes of the image acquisition device at two adjacent positions satisfies the following condition:
wherein,
R is the distance from the rotation center to the surface of the target object;
T is the sum of the object distance and the image distance during acquisition, i.e., the distance between the photosensitive unit of the image acquisition device and the target object;
d is the length or width of the photosensitive element (e.g., CCD) of the image acquisition device; when the two positions lie along the length direction of the photosensitive element, d is the length of the rectangle, and when they lie along the width direction, d is its width;
F is the focal length of the lens of the image acquisition device;
u is an empirical coefficient.
Usually, a distance measuring device, for example a laser rangefinder, is arranged on the acquisition device, with its optical axis parallel to that of the image acquisition device. It measures the distance from the acquisition device to the surface of the target object, and R and T can then be obtained from the measured distance using the known positional relationship between the distance measuring device and the other parts of the acquisition device.
When the image acquisition device is at either of the two positions, the distance from the photosensitive element to the surface of the target object along the optical axis is taken as T. Besides this method, averaging over multiple measurements or other methods can be used; the principle is that the value of T should not deviate from the sum of the object distance and the image distance at the time of acquisition.
Similarly, when the image acquisition device is at either of the two positions, the distance from the rotation center to the surface of the target object along the optical axis is taken as R. Besides this method, averaging over multiple measurements or other methods can be used; the principle is that the value of R should not deviate from the rotation radius at the time of acquisition.
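The two paragraphs above can be sketched as follows. The offsets between the rangefinder, the photosensitive element, and the rotation center are hypothetical calibration constants, and the simple additive geometry assumes these points lie along a line parallel to the optical axis, as the text's parallel-axis arrangement suggests:

```python
def estimate_T_and_R(laser_readings_mm, sensor_offset_mm, center_offset_mm):
    """Estimate T and R from laser rangefinder readings.

    laser_readings_mm: distances from the rangefinder to the target surface,
        measured along an axis parallel to the camera's optical axis
        (several readings are averaged, per the text).
    sensor_offset_mm: known distance from the photosensitive element to the
        rangefinder reference point (hypothetical calibration constant).
    center_offset_mm: known distance from the rotation center to the same
        reference point (hypothetical calibration constant).
    """
    mean_distance = sum(laser_readings_mm) / len(laser_readings_mm)
    T = mean_distance + sensor_offset_mm   # sensor-to-target distance
    R = mean_distance + center_offset_mm   # rotation-center-to-target distance
    return T, R
```

With three readings of 1000 mm, 1010 mm, and 990 mm and offsets of 50 mm and 120 mm, this yields T = 1050 mm and R = 1120 mm.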
In the prior art, the size of the object is generally used to estimate the camera position. Since the object size varies with the measured object, the size must be re-measured and the calculation redone whenever a new object is acquired, for example when a small object is acquired after a large one. Such inconvenient and repeated measurement introduces measurement errors, which in turn cause errors in the estimated camera position. The present scheme instead gives, from a large amount of experimental data, the empirical condition that the camera position needs to satisfy, without directly measuring the object size. In the empirical condition, d and F are fixed parameters of the camera; the manufacturer provides them when the camera and lens are purchased, so no measurement is needed. R and T are merely straight-line distances that can be measured conveniently by conventional means such as a ruler or a laser rangefinder. Moreover, in the device of the present utility model, the acquisition direction of the image acquisition device (e.g., the camera) deviates from the direction of its rotation axis; that is, the lens faces roughly away from the rotation center. In this case it is easier to control the included angle α of the optical axes at the two positions, since only the rotation angle of the rotary drive motor needs to be controlled, so it is more reasonable to define the optimal positions by α. The empirical formula of the present utility model therefore makes preparation convenient and fast, and also improves the accuracy of camera placement, so that the camera can be set at an optimized position, balancing 3D synthesis accuracy and speed.
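Because the lens faces away from the rotation center, rotating the drive motor by α rotates the optical axis by α directly. Converting a desired α into a motor command is then trivial; the sketch below assumes a hypothetical stepper motor whose step angle and microstepping factor are not from the patent:

```python
def motor_steps_for_angle(alpha_deg, step_angle_deg=1.8, microstepping=16):
    """Convert a desired included angle alpha between two adjacent
    acquisition positions into a stepper-motor step count.

    Since the camera's optical axis rotates with the motor shaft,
    a motor rotation of alpha degrees changes the optical-axis
    direction by the same alpha degrees.
    step_angle_deg and microstepping are hypothetical motor parameters.
    """
    steps_per_degree = microstepping / step_angle_deg
    return round(alpha_deg * steps_per_degree)
```

For example, with a 1.8° step angle and 16× microstepping, an included angle of 9° corresponds to 80 microsteps.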
According to numerous experiments, u should be less than 0.498 to ensure the speed and effect of the synthesis; for a better synthesis effect, u is preferably less than 0.411, particularly preferably less than 0.359; and in some applications u < 0.281, or u < 0.169, or u < 0.041, or u < 0.028.
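The stated thresholds form a ladder of increasingly strict preferences for u. As a minimal illustrative sketch (not part of the patent), one can count how many of the listed bounds a candidate value of u satisfies:

```python
# Preference tiers for the empirical coefficient u, as listed in the text
# (strict upper bounds, from the baseline to the most stringent).
U_BOUNDS = [0.498, 0.411, 0.359, 0.281, 0.169, 0.041, 0.028]


def tiers_satisfied(u):
    """Return how many of the listed bounds the given u falls under;
    0 means the baseline condition u < 0.498 is not met."""
    return sum(1 for bound in U_BOUNDS if u < bound)
```

A value of u = 0.3, for instance, satisfies the first three bounds (0.498, 0.411, 0.359) but not the stricter ones.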
The device of the present utility model was used for testing, and some experimental data are as follows, in mm. (The following data are given by way of example only.)
The above data were obtained only through experiments verifying the conditions of the formula and do not limit the structure of the utility model. Even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust the device parameters and step details as required, perform experiments, and obtain other data that also satisfy the formula conditions.
It should be noted that a conventional 3D synthesis algorithm in the prior art can be used with the scheme of the present utility model to synthesize a 3D model, for example the three-dimensional model generation method disclosed in Chinese patent application CN 2019112760643.
Examples of the applications
To construct a 3D model of the interior of a rectangular exhibition hall, a user can hold the 3D acquisition device and walk through the room, stopping at different positions and rotating the device to capture multiple images of the building; the device is then moved to several other indoor positions, where the rotational acquisition is repeated. The 3D model is synthesized from these images according to the synthesis algorithm, constructing an indoor 3D model that facilitates subsequent decoration and exhibition.
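For illustration only (the patent does not prescribe how the indoor stopping positions are chosen), the stopping positions in such a rectangular hall could be planned as a uniform grid:

```python
def plan_stations(room_length_m, room_width_m, spacing_m):
    """Generate a grid of indoor stopping positions for handheld acquisition.

    A simple uniform grid over a rectangular floor plan; the patent only
    says to stop at several indoor positions, so the grid layout and the
    spacing are illustrative choices, not requirements.
    """
    xs = [x * spacing_m for x in range(int(room_length_m // spacing_m) + 1)]
    ys = [y * spacing_m for y in range(int(room_width_m // spacing_m) + 1)]
    return [(x, y) for x in xs for y in ys]
```

For a 6 m × 4 m hall with 2 m spacing this yields 12 stations, including the four corners; at each station the user performs one rotational acquisition pass.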
The target object and the object both refer to objects whose three-dimensional information is to be acquired. The object may be a single solid object or be composed of a plurality of parts. The three-dimensional information of the target object includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and all parameters carrying the three-dimensional features of the target object. Three-dimensional in the present utility model means having information in the three directions XYZ, in particular depth information, which is essentially different from having only two-dimensional planar information. It is also fundamentally different from definitions that are called three-dimensional, panoramic, holographic, or stereoscopic but actually comprise only two-dimensional information, in particular no depth information.
The acquisition area of the present utility model is the range that the image acquisition device (e.g., a camera) can capture. The image acquisition device may be a CCD, a CMOS sensor, a camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device with an image acquisition function.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Moreover, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. It will be appreciated by those skilled in the art that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality according to embodiments of the invention based on some or all of the components in the apparatus of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such a program implementing the invention may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, et cetera does not indicate any ordering. These words may be interpreted as names.
Thus, it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described in detail herein, many other variations and modifications can be made, consistent with the principles of the invention, which are directly determined or derived from the disclosure herein, without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and interpreted to cover all such other variations or modifications.
Claims (10)
1. A combined acquisition system for a plurality of handheld 3D acquisition devices, characterized by: comprising a plurality of handheld 3D acquisition devices,
in the plurality of handheld 3D acquisition devices, each handheld 3D acquisition device has an overlapping area with the acquisition range of the other handheld 3D acquisition devices;
the overlapping region is at least partially located on the target object;
the acquisition direction of the image acquisition device of the handheld 3D acquisition equipment points away from the rotation center.
2. The system of claim 1, wherein: the plurality of handheld 3D acquisition devices include a first type of handheld 3D acquisition device and a second type of handheld 3D acquisition device.
3. The system of claim 2, wherein: the sum of the acquisition ranges of the first type of handheld 3D acquisition equipment can cover the target object, and the sum of the acquisition ranges of the second type of handheld 3D acquisition equipment can cover a specific area of the target object.
4. The system of claim 2, wherein: the plurality of handheld 3D acquisition devices comprise a first type of handheld 3D acquisition device and a second type of handheld 3D acquisition device, and the sum of the acquisition ranges of the first type of handheld 3D acquisition device is larger than that of the second type of handheld 3D acquisition device.
5. The system of claim 3, wherein: a specific area of the target object is scanned and acquired jointly by the first type and the second type of handheld 3D acquisition devices.
6. The system of claim 3 or 5, wherein: the specific area is a user-designated area, an area where a previous synthesis failed, or an area where the contour concavity and convexity change.
7. The system of claim 1, wherein: the included angle alpha of the optical axes of the image acquisition devices at two adjacent acquisition positions meets the following condition:
wherein R is the distance from the rotation center to the surface of the target object, T is the sum of the object distance and the image distance during acquisition, d is the length or width of the photosensitive element of the image acquisition device, F is the focal length of the lens of the image acquisition device, and u is an empirical coefficient.
8. The system of claim 7, wherein: u < 0.498.
9. The system of claim 7, wherein: u < 0.281.
10. An information utilization apparatus, characterized by comprising the system of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202022298644.7U CN213179901U (en) | 2020-10-15 | 2020-10-15 | Combined acquisition system of multiple handheld 3D acquisition devices and information utilization device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN213179901U true CN213179901U (en) | 2021-05-11 |
Family
ID=75778894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202022298644.7U Active CN213179901U (en) | 2020-10-15 | 2020-10-15 | Combined acquisition system of multiple handheld 3D acquisition devices and information utilization device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN213179901U (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112254670B (en) | 3D information acquisition equipment based on optical scanning and intelligent vision integration | |
CN112492292B (en) | Intelligent visual 3D information acquisition equipment of free gesture | |
CN112303423B (en) | Intelligent three-dimensional information acquisition equipment stable in rotation | |
CN112361962B (en) | Intelligent visual 3D information acquisition equipment of many every single move angles | |
CN112257537B (en) | Intelligent multi-point three-dimensional information acquisition equipment | |
CN112254680B (en) | Multi freedom's intelligent vision 3D information acquisition equipment | |
CN112254676B (en) | Portable intelligent 3D information acquisition equipment | |
CN112082486B (en) | Handheld intelligent 3D information acquisition equipment | |
CN112254638B (en) | Intelligent visual 3D information acquisition equipment that every single move was adjusted | |
CN112484663B (en) | Intelligent visual 3D information acquisition equipment of many angles of rolling | |
CN112253913B (en) | Intelligent visual 3D information acquisition equipment deviating from rotation center | |
CN213179863U (en) | 3D information acquisition, synthesis and utilization equipment with translation distance | |
CN214041102U (en) | Three-dimensional information acquisition, synthesis and utilization equipment with pitching angle | |
CN112254677B (en) | Multi-position combined 3D acquisition system and method based on handheld device | |
CN213179901U (en) | Combined acquisition system of multiple handheld 3D acquisition devices and information utilization device | |
CN112254671B (en) | Multi-time combined 3D acquisition system and method | |
CN213455317U (en) | Multi-acquisition-equipment combined three-dimensional acquisition system and information utilization equipment | |
CN112254673B (en) | Self-rotation type intelligent vision 3D information acquisition equipment | |
CN112254669B (en) | Intelligent visual 3D information acquisition equipment of many bias angles | |
CN213179862U (en) | Handheld device for collecting three-dimensional information, three-dimensional synthesis and information utilization device | |
CN213179861U (en) | Three-dimensional information acquisition, synthesis and information utilization equipment | |
CN213179858U (en) | Ultra-close range three-dimensional information acquisition, synthesis and information utilization equipment | |
CN112254653A (en) | Program control method for 3D information acquisition | |
CN112254679A (en) | Multi-position combined 3D acquisition system and method | |
CN213179859U (en) | Telescopic intelligent vision 3D information acquisition, synthesis and utilization equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GR01 | Patent grant | ||