CN113048878A - Optical positioning system and method and multi-view three-dimensional reconstruction system and method - Google Patents


Info

Publication number
CN113048878A
CN113048878A
Authority
CN
China
Prior art keywords
optical
positioning
scene
detector
view
Prior art date
Legal status
Granted
Application number
CN201911373494.7A
Other languages
Chinese (zh)
Other versions
CN113048878B (en)
Inventor
王瑞
Current Assignee
Suzhou Yinquepi Electronic Technology Co ltd
Original Assignee
Suzhou Yinquepi Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Yinquepi Electronic Technology Co ltd
Priority to CN201911373494.7A
Publication of CN113048878A
Application granted
Publication of CN113048878B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 — Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 — Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00 — Measuring distances in line of sight; Optical rangefinders
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 — Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 — Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 — Systems determining position data of a target
    • G01S17/42 — Simultaneous measurement of distance and other co-ordinates
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An embodiment of the invention discloses an optical positioning system and method and a multi-view three-dimensional reconstruction system and method. In the optical positioning system, the detection range of the optical positioning system covers all spatial detectors included in the multi-view reconstruction system, and each spatial detector is provided with an optical marker. The optical positioning system optically positions each spatial detector according to the detection result of the optical markers within the detection range, obtaining positioning information for each spatial detector. The positioning information of each spatial detector is then combined with the scene views of a set scene acquired by the spatial detectors to reconstruct the three-dimensional scene. Embodiments of the invention can improve the efficiency of computing the external parameters of the spatial detectors, reduce the computational load of three-dimensional reconstruction, and broaden the application scenarios of three-dimensional reconstruction.

Description

Optical positioning system and method and multi-view three-dimensional reconstruction system and method
Technical Field
The embodiment of the invention relates to an image processing technology, in particular to an optical positioning system and method and a multi-view three-dimensional reconstruction system and method.
Background
At present, three-dimensional reconstruction technology is widely used across many application fields. Three-dimensional reconstruction recovers the three-dimensional information of a target scene from images of that scene acquired at multiple different viewing angles.
In the three-dimensional reconstruction process, the scene views of a set scene acquired by different spatial detectors are each expressed in the respective detector's own camera coordinate system. To reconstruct in three dimensions from the acquired scene views, the coordinate systems of the scene views must be converted into a unified coordinate system, so that the spatial information contained in every scene view is expressed in one coordinate system in which the scene can then be modelled. This conversion of the scene views into a unified coordinate system is achieved through the external parameters of the spatial detectors (their spatial position and attitude parameters).
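To make the role of the external parameters concrete, here is a minimal illustrative sketch (not part of the claimed invention) of how a rotation R and translation t map a point from a detector's own camera coordinate system into a shared world coordinate system; the numerical extrinsics below are hypothetical.

```python
import numpy as np

def camera_to_world(points_cam, R, t):
    """Map Nx3 points from a detector's camera frame into the shared
    world frame using extrinsics (R, t): p_world = R @ p_cam + t."""
    points_cam = np.asarray(points_cam, dtype=float)
    return points_cam @ R.T + t

# Hypothetical extrinsics: detector rotated 90 degrees about z,
# offset 1 m along the world x-axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])

# A point on the camera's own x-axis lands at (1, 1, 0) in the world frame.
p_world = camera_to_world([[1.0, 0.0, 0.0]], R, t)
```

Once every detector's (R, t) is known, all scene views can be fused in the single world frame.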
At present, in a conventional multi-view reconstruction method, the external parameters of a spatial detector (including its spatial position and orientation parameters) are obtained by externally calibrating the detector against the information in the acquired multi-view images (i.e., images from several different viewing angles). The computational cost of this method is enormous, so the external parameters of the spatial detector are computed slowly.
Disclosure of Invention
Embodiments of the invention provide an optical positioning system and method and a multi-view three-dimensional reconstruction system and method, which improve the efficiency of computing the external parameters of a spatial detector, reduce the computational load of three-dimensional reconstruction, and broaden the application scenarios of three-dimensional reconstruction.
In a first aspect, an embodiment of the present invention provides an optical positioning system of a multi-view reconstruction system, where a detection range of the optical positioning system covers all spatial detectors included in the multi-view reconstruction system, and each spatial detector is provided with an optical marker, where:
the optical positioning system is used for optically positioning each space detector according to the detection result of the optical marker in the detection range to obtain the positioning information of each space detector;
the positioning information of each spatial detector is used for reconstructing a three-dimensional scene by combining scene views of a set scene acquired by each spatial detector.
In a second aspect, an embodiment of the present invention provides an optical positioning method for a multi-view reconstruction system, which is applied to any one of the systems in the embodiments of the present invention, and includes:
detecting an optical marker within a detection range covering all spatial detectors of the multi-view reconstruction system;
according to the detection result, performing optical positioning on all the space detectors to obtain positioning information of each space detector;
the positioning information of each spatial detector is used for reconstructing a three-dimensional scene by combining scene views of a set scene acquired by each spatial detector.
In a third aspect, an embodiment of the present invention provides a multi-view three-dimensional reconstruction system, including: the optical positioning system of any embodiment of the present invention, and a multi-view reconstruction system;
the optical positioning system is used for acquiring positioning information of all spatial detectors in the multi-view reconstruction system;
the multi-view reconstruction system is used for acquiring a scene view of a set scene;
the positioning information of each spatial detector is used for the multi-view three-dimensional reconstruction system to perform three-dimensional scene reconstruction on the scene by combining the scene view of the set scene acquired by each spatial detector.
In a fourth aspect, an embodiment of the present invention provides a multi-view three-dimensional reconstruction method, which is applied to the multi-view three-dimensional reconstruction system according to any one of the embodiments of the present invention, and includes:
detecting optical markers in a detection range through an optical positioning system to obtain positioning information of a plurality of space detectors, wherein the detection range covers all the space detectors of the multi-view reconstruction system;
detecting a set scene in a detection range through a space detector of a multi-view reconstruction system to obtain a scene view of the scene;
according to the acquisition time of each piece of positioning information, the identification information of the spatial detector to which each piece of positioning information belongs, the acquisition time of each scene view and the identification information of the spatial detector to which each scene view is acquired, the positioning information corresponding to each scene view is inquired from each piece of positioning information;
and reconstructing the three-dimensional scene according to each scene view and the corresponding positioning information.
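The query step above can be sketched as a nearest-timestamp lookup keyed on detector identity; the record layout (detector id, timestamp, pose) and the sample values below are assumptions for illustration, not the patent's data format.

```python
def match_pose(view_detector_id, view_time, poses):
    """poses: list of (detector_id, timestamp, pose) records.
    Return the record from the same detector whose timestamp is
    closest to the scene view's acquisition time, or None."""
    candidates = [p for p in poses if p[0] == view_detector_id]
    if not candidates:
        return None
    return min(candidates, key=lambda p: abs(p[1] - view_time))

# Hypothetical positioning records from two detectors.
poses = [("cam1", 0.00, "pose_a"),
         ("cam1", 0.10, "pose_b"),
         ("cam2", 0.05, "pose_c")]

# A view from cam1 captured at t=0.09 matches the t=0.10 record.
best = match_pose("cam1", 0.09, poses)
```

Each matched (view, pose) pair then feeds the three-dimensional scene reconstruction.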
In embodiments of the invention, with the detection range of the optical positioning system covering all spatial detectors in the multi-view reconstruction system, the optical positioning system detects the optical marker on each spatial detector to obtain that detector's positioning information. The detectors are thus positioned directly, instead of indirectly by data processing of the scene views they acquire. This reduces the amount of data processing required by the indirect positioning approach and solves the prior-art problems of heavy computation and low efficiency when positioning information is derived from the detectors' scene views: the computational load of obtaining the positioning information is reduced and its computation efficiency improved, which in turn improves the efficiency of three-dimensional scene reconstruction in the multi-view reconstruction system.
Drawings
FIG. 1a is a schematic structural diagram of an optical positioning system of a multi-view reconstruction system according to a first embodiment of the present invention;
FIG. 1b is a schematic structural diagram of a spatial detector according to the first embodiment of the present invention;
FIG. 1c is a schematic structural diagram of an optical positioning system of a multi-view reconstruction system according to a first embodiment of the present invention;
FIG. 1d is a schematic diagram of a linear light beam according to one embodiment of the present invention;
FIG. 2 is a flowchart of an optical positioning method of a multi-view reconstruction system according to a second embodiment of the present invention;
fig. 3a is a schematic structural diagram of a multi-view three-dimensional reconstruction system in a third embodiment of the present invention;
FIG. 3b is a diagram of an application scenario of a moving spatial detector according to the third embodiment of the present invention;
fig. 4 is a flowchart of a multi-view three-dimensional reconstruction method according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1a is a schematic structural diagram of an optical positioning system of a multi-view reconstruction system to which an embodiment of the present invention is applied. The detection range of the optical positioning system 110 covers all spatial detectors included in the multi-view reconstruction system 120, each spatial detector is provided with an optical marker, wherein:
and an optical positioning system 110, configured to perform optical positioning on each spatial detector according to a detection result of the optical marker in the detection range, so as to obtain positioning information of each spatial detector.
In the embodiment of the present invention, the optical positioning system 110 is applied in an application scenario of three-dimensional scene reconstruction, and is used for positioning the multi-view reconstruction system 120. The multi-view reconstruction system comprises a plurality of spatial detectors, and the spatial detectors are used for detecting spatial information of the set scene 130 and forming scene views, wherein the scene views are used for reconstructing a three-dimensional scene by combining positioning information of the spatial detectors.
Specifically, three-dimensional scene reconstruction means obtaining, through the spatial detectors, data views of all objects in a set scene, analysing and processing those views to recover the three-dimensional information of the objects in the real environment, and constructing the spatial information of every object in the set scene from that three-dimensional information. The spatial information comprises object surface information and/or perspective information of the object's interior.
A set scene is the real environment being detected; it may be a static environment or a moving one, and may contain both moving and stationary objects. For example, the set scene may be a park containing stationary trees, stationary seats, and moving people, or the cabin of a travelling vehicle containing goods that are stationary relative to the cabin and goods that move relative to it. The set scene is chosen as needed, and embodiments of the present invention impose no particular limitation.
It can be understood that the scene views acquired by spatial detectors at different positions are formed in each detector's own coordinate system. The positioning information of the spatial detectors is needed to unify the views into the same coordinate system; once the information in every scene view is expressed in one coordinate system, the views can be fused, the three-dimensional information of the objects they contain can be determined accurately, and the three-dimensional reconstruction can be performed.
The detection range of the optical positioning system indicates the region the system can detect. Specifically, it may be the combined region (taking the union) or the overlapping region (taking the intersection) of the detection ranges of the individual optical positioning units in the system. The detection range may be fixed or variable: the system may include an optical positioning unit that follows a moving object, in which case the detection range of that moving unit changes, and so does the total detection range the system derives from the ranges of all its units.
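As a minimal sketch of the "union of unit ranges" reading, each unit's detection range is modelled here as an axis-aligned box; real detection ranges (e.g., camera viewing frusta) are more complex, so this is only illustrative.

```python
def covers(ranges, point):
    """True if any optical positioning unit's detection range
    (modelled as a box (xmin, ymin, zmin, xmax, ymax, zmax))
    contains the point, i.e. the system-level detection range
    is the union of the unit ranges."""
    x, y, z = point
    return any(xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax
               for xmin, ymin, zmin, xmax, ymax, zmax in ranges)

# Two hypothetical overlapping unit ranges and two detector positions.
unit_ranges = [(0, 0, 0, 5, 5, 5), (4, 0, 0, 10, 5, 5)]
detectors = [(1, 1, 1), (8, 2, 2)]

# The requirement that the system range cover all detectors:
all_covered = all(covers(unit_ranges, d) for d in detectors)
```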
A spatial detector is a sensor with a spatial position detection function and may include, for example, an optical sensor or an acoustic sensor. An optical sensor may be a camera (e.g., a monocular, binocular, or multi-view camera) or a depth sensor (e.g., a Time-of-Flight (TOF) sensor, a structured-light depth sensor, an active binocular sensor, or a lidar). An acoustic sensor may be an acoustic radar, a sonar sensor, or an ultrasonic imaging device. The spatial detector may also be a sensor operating at other electromagnetic wavelengths, such as radio radar or X-ray imaging/Computed Tomography (CT), or another imaging modality such as B-mode ultrasound or nuclear magnetic resonance; the spatial detector is selected as needed, and embodiments of the present invention impose no particular limitation. An optical sensor produces an optical image and an acoustic sensor an acoustic image; both describe the spatial information of the scene.
In addition, a spatial detector may be stationary relative to the scene or moving relative to it. Typically, a moving spatial detector is used to follow moving objects in the set scene. Relatively stationary and/or relatively moving spatial detectors may be provided as needed.
Specifically, the optical marker identifies the spatial detector; it may be chosen as needed, and embodiments of the present invention impose no particular limitation. Since a spatial detector is typically a device with physical extent, an easily detected optical marker can be placed on each detector to represent its spatial position. The detection result of an optical marker represents the marker's spatial position and attitude (such as tilt angle or depth information). Note that the detection result actually refers to the spatial position of a key point on the marker: a marker is usually not a point but a surface or a volumetric device, so a key point is determined on it to characterise the detection result.
The same spatial detector may carry one optical marker or several; for example, the markers may be three light-emitting objects (such as fluorescent patches) that do not lie on one straight line. Other arrangements and numbers of markers are possible and may be chosen as needed; embodiments of the present invention impose no particular limitation. Illustratively, as shown in FIG. 1b, four optical markers 122 are disposed on the spatial detector 121.
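To illustrate why three non-collinear markers suffice to fix a detector's pose, the following sketch builds a rigid pose from three measured marker positions by constructing an orthonormal frame. The frame convention (origin at marker 1, x toward marker 2, z normal to the marker plane) is an assumption for illustration, not the patent's method.

```python
import numpy as np

def frame_from_markers(m1, m2, m3):
    """Build a rigid pose (R, t) from three non-collinear marker
    positions measured in the positioning system's frame.
    Origin: marker 1; x-axis: toward marker 2; z-axis: normal
    to the plane of the three markers."""
    m1, m2, m3 = (np.asarray(m, dtype=float) for m in (m1, m2, m3))
    x = m2 - m1
    x = x / np.linalg.norm(x)
    n = np.cross(m2 - m1, m3 - m1)       # plane normal
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)                    # completes right-handed frame
    R = np.column_stack([x, y, z])        # columns are the detector's axes
    return R, m1

# Markers laid out along the world axes give the identity orientation.
R, t = frame_from_markers([0, 0, 0], [1, 0, 0], [0, 1, 0])
```

If the markers were collinear, the cross product would vanish and no unique attitude could be recovered, which is why non-collinearity is required.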
And the positioning information of each space detector is used for reconstructing a three-dimensional scene by combining the scene view of the set scene acquired by each space detector.
The positioning information of a spatial detector comprises its spatial position and attitude, and describes the detector's external parameters. Specifically, the external parameters cover the detector's 6 degrees of freedom in space: translational degrees of freedom along the x-, y-, and z-axes, and rotational degrees of freedom about the x-, y-, and z-axes. The process of determining the external parameters of a spatial detector is known as external calibration.
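The six degrees of freedom can be packed into a single 4×4 homogeneous transform; the sketch below assumes an Rz·Ry·Rx rotation order, which is one common convention and is not specified by the patent.

```python
import numpy as np

def pose_matrix(tx, ty, tz, rx, ry, rz):
    """4x4 homogeneous transform from 6 DOF: translations along
    x/y/z and rotations (radians) about x/y/z, composed Rz @ Ry @ Rx."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# Hypothetical pose: translated to (1, 2, 3), rotated 90 degrees about z.
T = pose_matrix(1.0, 2.0, 3.0, 0.0, 0.0, np.pi / 2)
```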
In embodiments of the invention, with the detection range of the optical positioning system covering all spatial detectors in the multi-view reconstruction system, the optical positioning system detects the optical markers on the spatial detectors to obtain each detector's positioning information, rather than deriving the positioning information of each detector from the scene views alone. The detectors are positioned directly, instead of indirectly through data processing of the scene views they acquire. This reduces the data-processing load of the indirect positioning approach and solves the prior-art problems of heavy computation and low efficiency when positioning information is derived from the detectors' scene views: the computational load of obtaining the positioning information is reduced and its computation efficiency improved, which in turn improves the efficiency of three-dimensional scene reconstruction in the multi-view reconstruction system.
Optionally, as shown in fig. 1c, the optical positioning system comprises a controller 112 and at least one optical positioning unit 111, where the combined detection range of the optical positioning units 111 covers all spatial detectors included in the multi-view reconstruction system 120. The at least one optical positioning unit 111 detects the optical markers within the detection range, and the controller 112 obtains the positioning information of each spatial detector from the detection results of the at least one optical positioning unit.
Wherein the optical locating unit 111 is used for detecting the optical markers. Generally, the different optical positioning units 111 are respectively located at different spatial positions for detecting the spatial detector at different viewing angles, so as to avoid the situation that the optical positioning units cannot detect the spatial detector due to occlusion, and increase the detection range of the optical positioning system. The controller 112 is configured to receive the detection data of the optical positioning unit 111, and perform data processing to obtain positioning information.
The optical positioning unit 111 may perform optical marker detection in response to an external device or server communicatively connected to the optical positioning system, or in response to a work control instruction sent by the controller 112.
In addition, the controller 112 is further configured to control the optical positioning unit 111 to operate, and perform three-dimensional scene reconstruction based on the positioning information in combination with the scene view sent by the multi-view reconstruction system 120; or for transmitting the positioning information to other devices, for example, to the multi-view reconstruction system 120, so that the controller 112, the multi-view reconstruction system 120, or other devices perform three-dimensional scene reconstruction in conjunction with the scene views of the set scene acquired by each of the spatial detectors. Meanwhile, the computer, the mobile terminal or other terminal devices and the like can directly obtain the reconstructed three-dimensional scene model through the external device or the server side. The processing operation of the controller 112 after obtaining the positioning information may be configured as required, and the embodiment of the present invention is not limited in particular.
The controller can be arranged in at least one optical positioning unit and connected with the optical positioning unit to form a whole; or arranged in a separate device outside each optical positioning unit and respectively connected with each optical positioning unit in communication.
By configuring the controller and the optical positioning units, the spatial detectors are detected and their positioning information is obtained through data processing.
Alternatively, the optical positioning unit may include a monocular vision system, a binocular vision system, or a multi-ocular vision system (e.g., a trinocular or tetra-ocular vision system). Other vision systems may also be selected as needed, and embodiments of the present invention are not particularly limited thereto.
Optionally, the optical positioning unit comprises a moving optical positioning unit or a fixed optical positioning unit, the moving optical positioning unit moves relative to the scene, and the fixed optical positioning unit is stationary relative to the scene.
Specifically, an optical positioning unit may move relative to the scene or remain stationary relative to it; both fixed and moving optical positioning units are used to detect spatial detectors and/or other optical positioning units. Since a spatial detector may itself move relative to the scene, a moving detector can leave the detection range of the optical positioning system, where a fixed optical positioning unit can no longer detect it accurately; a moving optical positioning unit can then follow the moving detector, ensuring in real time that the detector stays within that unit's detection range. The speed and direction of the moving optical positioning unit and of the equipment it follows may be the same or different.
In fact, the detection results an optical positioning unit obtains for a spatial detector are likewise expressed in that unit's own coordinate system. The positioning information of each optical positioning unit, fixed and moving alike, must therefore be acquired so that the coordinate systems of all detection results can be unified into the same coordinate system and every detection result expressed in it.
Optionally, the optical positioning system includes at least two optical positioning units, the optical positioning system includes a target optical positioning unit and at least one reference optical positioning unit, the reference optical positioning unit is provided with the optical marker, the at least one reference optical positioning unit is located in a detection range of the target optical positioning unit, and the target optical positioning unit is configured to obtain, by positioning each reference optical positioning unit, positioning information of the spatial detector detected by each reference optical positioning unit.
The target optical locating unit may be a moving optical locating unit or a fixed optical locating unit; the reference optical positioning unit may be a moving optical positioning unit or a fixed optical positioning unit.
The reference optical positioning unit carries an optical marker so that it can be detected by other optical positioning units (the target unit or another reference unit). Because the detection range of a single optical positioning unit is limited, a second unit lying within the first unit's detection range can detect spatial detectors that fall outside the first unit's range; the first unit can thereby indirectly obtain the positioning information of spatial detectors belonging to the second unit's detection range.
By positioning at least one reference optical positioning unit, the target optical positioning unit can transform detections made within a reference unit's range into its own frame, enlarging its effective detection range so that it indirectly detects spatial detectors located within the reference units' ranges. The detection range of the target optical positioning unit, combined with the detection ranges of the reference optical positioning units, therefore forms the detection range of the optical positioning system.
The detection range of the target optical positioning unit may cover all of the reference optical positioning units, or it may cover only some of them while the remaining reference units lie within the ranges of other reference units; that is, through cascaded coverage the target unit detects every reference unit directly or indirectly. Generally the number of cascade stages is at most 2, since detection accuracy falls as the number of stages grows.
In a specific example, the target optical positioning unit directly covers all N reference optical positioning units.

As another example, a 2-stage cascade: the target optical positioning unit covers N-M reference optical positioning units, and the accumulated detection range of those N-M reference units covers the remaining M reference units.

As another example, a 3-stage cascade: the target optical positioning unit covers N-M reference optical positioning units; the accumulated detection range of those N-M reference units covers M-K reference units; and the accumulated detection range of the M-K reference units covers the remaining K reference units.
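The cascaded coverage described above amounts to composing rigid transforms: a detection made in a reference unit's local frame is mapped into the target unit's frame through the chain of unit-to-unit poses. A minimal sketch of this composition (the (R, t) pose representation and the chain layout are assumptions for illustration, not the patent's implementation):

```python
import numpy as np

def compose(pose_parent, pose_child):
    """Compose two rigid transforms (R, t); the child pose is expressed
    in the parent frame, the result in the grandparent frame."""
    R1, t1 = pose_parent
    R2, t2 = pose_child
    return (R1 @ R2, R1 @ t2 + t1)

def to_target_frame(chain, point_local):
    """Map a point detected in the last unit's frame into the target frame.
    `chain` lists (R, t) poses, each unit expressed in its parent's frame;
    an empty chain means the detection was made by the target unit itself."""
    R, t = np.eye(3), np.zeros(3)
    for pose in chain:
        R, t = compose((R, t), pose)
    return R @ np.asarray(point_local) + t
```

Each extra element in `chain` is one cascade stage; pose errors multiply along the chain, which is why the text limits the cascade to about 2 stages.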
Other cascading arrangements are also possible; the embodiment of the present invention is not particularly limited in this respect.
It should be noted that each optical positioning unit can detect at least one spatial detector and/or at least one other optical positioning unit. When the movable range of the scene is small, a single optical positioning unit may suffice to position all the spatial detectors in the scene.
Configuring the optical positioning units as one target unit plus at least one reference unit inside its detection range makes the units' detection ranges overlap. This avoids the situation in which a single unit is occluded and fails to detect, suits different application scenarios, enlarges the detection range of the optical positioning system, and makes that range more flexible. Each spatial detector is thus guaranteed to lie within the system's detection range, so the optical positioning system can accurately detect its positioning information.
The optical markers may differ, and the detection method of the optical positioning unit differs accordingly. An optical marker may be a planar object or an object with a certain volume; its positioning information may refer to the spatial position of its geometric center or to the spatial positions of several reference points on it.
Optionally, the controller is communicatively connected to the optical markers on each of the spatial detectors, wherein: the at least one optical positioning unit is specifically used for emitting a light beam in a detection range to scan the optical marker on the space detector; the optical marker on the space detector is used for generating an electric signal and sending the electric signal to the controller after receiving the light beam irradiation; and the controller is used for obtaining the positioning information of each space detector according to the received electric signals.
The optical marker is configured to convert received light into an electrical signal and transmit it to the controller; illustratively, the optical marker is a photodetector, and correspondingly the optical positioning unit emits light. Specifically, the unit scans the markers on each spatial detector in an optical-scanning manner. The light beam may be a spot beam scanned in two dimensions, or at least two non-parallel line-shaped beams each scanned in a different direction; other beam forms are also possible, and the embodiment of the present invention is not particularly limited in this respect. Illustratively, the optical positioning unit scans the marker on each spatial detector with at least two non-parallel linear beams; as shown in fig. 1d, it scans with two mutually perpendicular first and second linear beams. When a marker on the spatial detector is scanned, the marker records the instantaneous positioning information (e.g., the sweep angle) of the linear beam. A linear beam sweeping in one direction lets the marker determine its position in one dimension relative to the optical positioning unit, such as the x-direction angle; when the same marker is swept by a second, non-parallel linear beam, it determines its position in another dimension, such as the y-direction angle.
The marker's angle in the two dimensions x and y is thereby determined. Because the marker's position relative to the spatial detector is fixed and known, the positioning information of the spatial detector can be computed from the x- and y-angle information of several markers on the detector that do not lie on a single line.
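The two sweep angles recorded by a marker define a direction ray from the optical positioning unit to that marker. A minimal sketch of that conversion (the angle convention, measured from the unit's optical axis, is an assumption for illustration):

```python
import math

def sweep_angles_to_ray(ax, ay):
    """Convert the x- and y-sweep angles recorded by a marker (radians,
    measured from the positioning unit's optical axis) into a unit
    direction vector pointing from the unit toward the marker."""
    # tan(ax), tan(ay) are the lateral offsets at unit depth along z
    d = (math.tan(ax), math.tan(ay), 1.0)
    n = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    return tuple(c / n for c in d)
```

With rays to three or more non-collinear markers of known geometry on the detector, the detector pose can then be solved; that pose-solving step is not shown here.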
Optionally, the controller is communicatively connected to each of the optical positioning units, wherein: the at least one optical positioning unit is specifically used for controlling a light source to emit a light beam to an optical marker on the space detector within a detection range, receiving the light beam reflected by the optical marker, generating an electric signal and sending the electric signal to the controller; an optical marker on the spatial detector for reflecting the light beam emitted by the light source; and the controller is used for obtaining the positioning information of each space detector according to the received electric signals.
The optical marker reflects the light emitted by the optical positioning unit, and the reflected light is collected by that unit. Specifically, the optical marker is a reflective object (e.g., an object with a high-reflectivity surface such as a mirror); illustratively, it is a reflective surface. Correspondingly, the optical positioning unit emits light; the emitted beam is reflected by the marker, collected by the unit, and converted into an electrical signal. The unit scans the markers on each spatial detector in an optical-scanning manner. The beam may be a spot beam scanned in two dimensions, or at least two non-parallel linear beams each scanned in a different direction; other beam forms are possible, and the embodiment of the present invention is not particularly limited in this respect. Illustratively, the beam is a spot beam scanned point by point in two dimensions, e.g., the x-angle and y-angle directions; when the optical positioning unit receives the beam reflected by a marker, the marker's position can be determined in at least two degrees of freedom relative to the unit, such as the x- and y-direction angles. Because the marker's position relative to the spatial detector is fixed and known, the positioning information of the spatial detector can be computed from the positions, in at least two degrees of freedom, of several markers on the detector that do not lie on a single line.
In addition, some optical scanning systems, such as lidar, may detect position information for three degrees of freedom of an optical marker, such as position information in the x, y, and z directions. In this regard, the embodiments of the present invention are not particularly limited.
Optionally, the controller is communicatively connected to each of the optical positioning units, wherein: the at least one optical positioning unit is specifically used for shooting a positioning image of an optical marker on the space detector in a detection range; an optical marker on the spatial detector for reflecting light to identify the spatial detector in the positioning image; and the controller is used for obtaining the positioning information of each space detector according to the received positioning image.
The optical marker reflects light emitted by a light source, and the reflected light is collected by the optical positioning unit. The light source emits a beam toward the spatial detector; it may be mounted on the optical positioning unit or arranged independently outside it. The optical positioning unit photographs the markers on the spatial detectors and determines the spatial position of each marker associated with each detector from the acquired marker images, from which the positioning information of each detector is computed. The unit may be a monocular, binocular, or multi-ocular vision system. For example, when the optical positioning unit is a monocular vision system, positioning images of the markers belonging to a spatial detector are captured; from these images the markers' angles in the two dimensions x and y relative to the unit can be determined, and the positioning information of the detector can be computed from the x- and y-angles of several markers on the detector that do not lie on a single line.
As another example, when the optical positioning unit is a binocular vision system, the spatial position coordinates of each marker may be determined using a least-squares or parallax (disparity) ranging method, and the positioning information of the spatial detector determined from them. The spatial position of each marker may also be determined in other manners; the embodiment of the present invention is not particularly limited in this respect.
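For the binocular case, the least-squares position of a marker can be sketched as the midpoint of the closest approach of the two viewing rays, one from each camera. This is a minimal illustration under assumed ray inputs (ray origins and unit directions already recovered from the two images), not the patent's specific algorithm:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Least-squares 3-D point for two camera rays (origin o, direction d):
    the midpoint of their segment of closest approach. Assumes the rays
    are not parallel (otherwise the linear system is singular)."""
    o1, d1, o2, d2 = map(np.asarray, (o1, d1, o2, d2))
    w = o2 - o1
    # Normal equations for min_{s,t} |o1 + s*d1 - (o2 + t*d2)|^2
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([d1 @ w, d2 @ w])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```

With perfectly intersecting rays the midpoint coincides with the intersection; with noisy rays it is the least-squares compromise between the two views.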
Optionally, the controller is communicatively connected to each of the optical positioning units, wherein: the at least one optical positioning unit is specifically used for shooting a positioning image of an optical marker on the space detector in a detection range; the optical marker on the space detector is used for emitting light and marking the space detector in the positioning image; and the controller is used for obtaining the positioning information of each space detector according to the received positioning image.
The optical marker emits a light beam, and the emitted light is collected by the optical positioning unit. Specifically, the marker is a light source; illustratively, a light bulb. The optical positioning unit may be a monocular, binocular, or multi-ocular vision system that photographs the markers on each spatial detector and determines the spatial position of each marker associated with each detector from the acquired images, from which each detector's positioning information is computed. For example, with a monocular vision system, positioning images of the markers belonging to a spatial detector are captured, the markers' x- and y-direction angles relative to the unit are determined from these images, and the detector's positioning information is computed from the angles of several markers on the detector that do not lie on a single line. As another example, with a binocular vision system, the spatial position coordinates of each marker may be determined using a least-squares or parallax ranging method, and the detector's positioning information determined from them. The spatial position of each marker may also be determined in other manners; the embodiment of the present invention is not particularly limited in this respect.
It should be noted that the optical positioning system may include a plurality of optical positioning units, which may all use the same positioning manner, may partly use different manners, or may each use a different manner, as needed; the embodiment of the present invention is not particularly limited in this respect.
By configuring different optical positioning units with corresponding optical markers, different positioning manners can be adopted as needed to detect the spatial detectors, which improves both the flexibility and the applicability of the optical positioning units.
Example two
Fig. 2 is a flowchart of an optical positioning method of a multi-view reconstruction system in a second embodiment of the present invention. This embodiment applies to determining the positioning information of the spatial detectors that acquire each scene view during three-dimensional scene reconstruction. The method is based on the optical positioning system of the multi-view reconstruction system and may be executed by the controller in the system, by an external device or server communicatively connected to the system, or specifically by the optical positioning apparatus provided in the second embodiment of the present invention; that apparatus may be implemented in software and/or hardware and is generally integrated in a computer device. As shown in fig. 2, the method of this embodiment is applied to the optical positioning system of the multi-view reconstruction system described in any of the above embodiments and specifically includes:
s210, detecting the optical markers within a detection range covering all spatial detectors of the multi-view reconstruction system.
Specifically, the optical marker, the spatial detector, the detection result, the positioning information, the set scene, the scene view, and the three-dimensional scene reconstruction may all refer to the description of the foregoing embodiments.
S220, optically positioning all the space detectors according to the detection result to obtain positioning information of each space detector; the positioning information of each spatial detector is used for reconstructing a three-dimensional scene by combining scene views of a set scene acquired by each spatial detector.
In the embodiment of the invention, with a detection range covering all spatial detectors in the multi-view reconstruction system, the optical markers on the spatial detectors are detected to obtain the detectors' positioning information directly, rather than deriving it indirectly from the scene views alone. This avoids the heavy data processing of the indirect positioning mode, reduces the computation required for the positioning information, improves its computation efficiency, and ultimately improves the efficiency of three-dimensional scene reconstruction by the multi-view reconstruction system.
EXAMPLE III
Fig. 3a is a schematic structural diagram of a multi-view three-dimensional reconstruction system to which a third embodiment of the present invention is applied. As shown in fig. 3a, the multi-view three-dimensional reconstruction system 310 includes: an optical positioning system 320, and a multi-view reconstruction system 330, as described in any of the previous embodiments;
the optical positioning system 320 is configured to obtain positioning information of all spatial detectors in the multi-view reconstruction system; a multi-view reconstruction system 330 for acquiring a scene view of a set scene; the positioning information of each spatial detector is used for the multi-view three-dimensional reconstruction system to perform three-dimensional scene reconstruction on the scene by combining the scene view of the set scene acquired by each spatial detector.
The optical positioning system in the embodiment of the present invention may refer to the description in the above embodiment.
The multi-view three-dimensional reconstruction system 310 may be an external device or server independent of the optical positioning system 320 and the multi-view reconstruction system 330, or an internal device built into either of them. Specifically, it may be a control terminal device for the optical positioning system 320 and the multi-view reconstruction system 330, or a computer device within one of them. It may be located in an optical positioning unit of the optical positioning system 320 (e.g., on a camera), in a spatial detector of the multi-view reconstruction system 330, or in a separate computer device communicatively coupled (by wired or wireless communication) to the optical positioning system 320 and the multi-view reconstruction system 330, respectively. This may be set as needed; the embodiment of the present invention is not particularly limited.
The multi-view reconstruction system 330 is used to acquire scene views of a set scene. Specifically, the multi-view reconstruction system 330 acquires a scene view of the set scene through each included spatial detector. Illustratively, when the spatial detector is a depth sensor, the scene view is a depth image; when the space detector is a color camera, the scene view is a color image; when the spatial probe is an ultrasound imaging device, the scene view is an image with only spatial information (object surface information or perspective), without color information.
In practice, the multi-view three-dimensional reconstruction system 310 is used to perform three-dimensional scene reconstruction on a set scene by combining a scene view and positioning information of a spatial detector when acquiring the scene view.
Specifically, the multi-view three-dimensional reconstruction system 310 groups the scene views by acquisition time and determines the associated positioning information of each view in each group, i.e., the positioning information of the spatial detector at the moment it acquired the view. The coordinate systems of the views in each group can then be unified according to that associated positioning information: the views' coordinates are expressed in one common coordinate system. Within each group, pairs of scene views can be compared to find their overlapping parts and stitched along those overlaps, so that the views are stitched step by step into the final three-dimensional scene model. Alternatively, objects may be detected directly in each view of a group; one object is selected as a target, multiple views of it are gathered from the scene views, and a three-dimensional model of that object is built, so that each detected object is reconstructed in turn and the three-dimensional scene model is finally formed. Other ways to implement three-dimensional reconstruction also exist; the embodiment of the present invention is not particularly limited in this respect.
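The grouping and coordinate-unification steps above can be sketched as follows. The record fields (`time`, `points`) and the timestamp-bucketing scheme are assumptions for illustration, not the patent's data model:

```python
import numpy as np
from collections import defaultdict

def group_views_by_time(views, tol=0.02):
    """Bucket scene-view records by acquisition timestamp (seconds).
    Timestamps that fall in the same tol-wide bucket form one group;
    this is a crude sketch, not robust to timestamps straddling buckets."""
    groups = defaultdict(list)
    for v in sorted(views, key=lambda v: v["time"]):
        groups[round(v["time"] / tol)].append(v)
    return list(groups.values())

def to_world(points_cam, R, t):
    """Express a view's 3-D points (N x 3, detector frame) in the shared
    world frame, given the detector pose (R, t) at acquisition time."""
    return np.asarray(points_cam) @ np.asarray(R).T + np.asarray(t)
```

Once every view in a group is mapped through `to_world` with its own associated pose, all views share one coordinate system and overlap search can proceed directly.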
Optionally, the multi-view reconstruction system includes one of: at least two fixed spatial detectors, each static relative to the scene and located at a different position; at least one motion spatial detector, which moves relative to the scene; or at least two spatial detectors including at least one motion spatial detector.
When the multi-view reconstruction system includes only fixed spatial detectors, there are at least two of them, and they may be arranged around the scene; their number and positions may be set as required, and the embodiment of the present invention is not limited in this respect. Illustratively, as shown in fig. 1a, the multi-view reconstruction system 120 includes three fixed spatial detectors arranged in a triangle around the scene 130, with the scene 130 at the center, and the detection ranges of the three detectors cover the scene 130. The fixed spatial detectors can then photograph the scene from different positions to obtain multiple different views of it.
It should be noted that a fixed spatial detector may also be externally calibrated directly, with human intervention, to determine its positioning information.
When the multi-view reconstruction system includes only motion spatial detectors, there is at least one of them, and its position relative to the scene changes; the number, movement speed, and movement direction of the detectors may be set as required, and the embodiment of the present invention is not particularly limited. Illustratively, as shown in fig. 3b, the motion spatial detector 331 moves in a circle of set radius around the center of the scene 301. A single motion spatial detector can then photograph the scene from different positions to obtain multiple different views of it.
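The circular trajectory in the example can be sketched by sampling detector positions at equal angles around the scene center (the sampling scheme and flat trajectory are assumptions for illustration):

```python
import math

def circular_detector_positions(radius, n):
    """Sample n positions of a motion spatial detector on a circle of the
    given radius around the scene center (taken as the origin, z = 0)."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n),
             0.0)
            for k in range(n)]
```

Each sampled position corresponds to one acquisition moment, at which the optical positioning system supplies the detector's actual pose; the sketch only illustrates why one moving detector yields many distinct viewpoints.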
It can be understood that the motion space detector can also be used for acquiring a scene view of a moving object in real time along with the motion of the moving object.
When the multi-view reconstruction system includes both moving space detectors and fixed space detectors, the number of moving space detectors or fixed space detectors is at least one. The distribution position of the fixed spatial detector relative to the scene, the movement speed and the movement direction of the movement spatial detector, and the like can be set according to needs, and the embodiment of the invention is not particularly limited.
Configuring the multi-view reconstruction system with motion spatial detectors and/or fixed spatial detectors broadens its applicable scenarios, making it compatible with both dynamic and static scenes; this widens the use cases of three-dimensional reconstruction and reduces its limitations.
In the embodiment of the invention, multiple scene views are acquired by the multi-view reconstruction system while the optical positioning system positions each of its spatial detectors, so the positioning information of every spatial detector is obtained quickly and accurately; combining this positioning information with the multiple scene views, the scene is reconstructed in three dimensions, improving the efficiency of three-dimensional scene reconstruction.
Example four
Fig. 4 is a flowchart of a multi-view three-dimensional reconstruction method in a fourth embodiment of the present invention. This embodiment applies to reconstructing a three-dimensional scene by combining each scene view with the positioning information of the spatial detector that acquired it. The method is based on a multi-view three-dimensional reconstruction system and may be executed by the multi-view three-dimensional reconstruction apparatus provided in the fourth embodiment of the present invention; the apparatus may be implemented in software and/or hardware and is generally integrated in a computer device, for example a terminal device or a server. As shown in fig. 4, the method of this embodiment is applied to the multi-view three-dimensional reconstruction system described in any of the above embodiments and specifically includes:
s410, detecting the optical markers in the detection range through the optical positioning system to obtain the positioning information of the plurality of spatial detectors, wherein the detection range covers all the spatial detectors of the multi-view reconstruction system.
The optical positioning system, the detection range, the optical marker, the spatial detector, the positioning information, the multi-view reconstruction system, the setting scene, the scene view and the three-dimensional scene reconstruction in the embodiments of the present invention may refer to the description of the above embodiments.
S420, detecting a set scene in a detection range through a space detector of the multi-view reconstruction system, and acquiring a scene view of the scene.
And S430, querying positioning information corresponding to each scene view from each positioning information according to the acquisition time of each positioning information, the identification information of the spatial probe to which each positioning information belongs, the acquisition time of each scene view, and the identification information of the spatial probe to which each scene view is acquired.
The identification information identifies the spatial detector, for example its number (e.g., an ID), and may be user-defined. The acquisition time of each piece of positioning information is compared with the acquisition time of each scene view, and the identification information of the detector the positioning information belongs to is compared with that of the detector that acquired each view. A correspondence is established between positioning information and a scene view that share the same acquisition time and the same detector identification, thereby determining the positioning information corresponding to each scene view.
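The matching step above is a lookup keyed on the (acquisition time, detector ID) pair. A minimal sketch (the record field names `time`, `detector_id`, and `pose` are hypothetical):

```python
def match_views_to_positioning(views, positions):
    """Pair each scene view with the positioning record that shares its
    acquisition time and detector ID; unmatched views pair with None."""
    index = {(p["time"], p["detector_id"]): p for p in positions}
    return [(v, index.get((v["time"], v["detector_id"]))) for v in views]
```

In practice the timestamps of the two streams would need to share a clock (or be matched within a tolerance); exact-key lookup is the simplest illustration of the correspondence.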
It should be noted that, for the fixed spatial detectors in the multi-view reconstruction system, since their positioning information does not change, it need be acquired only once, at the start of each multi-view three-dimensional reconstruction run, and then reused at subsequent moments; this reduces the computation of the detectors' positioning information and improves positioning efficiency.
S440, according to each scene view and the corresponding positioning information, three-dimensional scene reconstruction is carried out on the scene.
In the embodiment of the invention, multiple scene views are obtained by the multi-view reconstruction system while the optical positioning system positions each of its spatial detectors, quickly and accurately yielding the positioning information of every spatial detector; the positioning information corresponding to each scene view is then determined, and the scene is reconstructed in three dimensions from each view and its corresponding positioning information, accurately building the three-dimensional scene and improving reconstruction efficiency.
Optionally, querying the positioning information corresponding to each scene view from the positioning information of the spatial detectors includes: acquiring the scene views detected by at least one spatial detector at a target acquisition moment and correcting the positioning information of each such detector at that moment; and querying, from the corrected positioning information, the positioning information corresponding to each scene view, according to the acquisition time of each piece of corrected positioning information, the identification information of the spatial detector it belongs to, the acquisition time of each scene view, and the identification information of the spatial detector that acquired it.
In practice, external calibration of a spatial detector means acquiring its positioning information; determining the positioning information of the spatial detector is thus equivalent to its external calibration.
The scene view itself can also be used directly for external calibration of the spatial detector. A view-based external calibration method is generally an iterative algorithm over a solution space: the initial value is some point in that space, the space must be searched, and the search range narrowed step by step to reach the final result, which is computationally expensive. Once the positioning information of the spatial detector has been determined, however, it can serve as the initial value of the iterative algorithm, greatly reducing the search range and allowing the final result to be found quickly. That is, with the positioning information already determined, running the view-based iterative calibration again reduces the computation of the external calibration and accelerates it, while the iterative algorithm's high accuracy in turn improves the accuracy of the external calibration.
On the basis of the determined positioning information, the view-based iterative external calibration is run again to correct the spatial detector's positioning information, achieving fine adjustment of the positioning information while reducing the computational cost of that adjustment and accelerating it.
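The benefit of a good initial value can be illustrated with a toy iterative refinement: starting the descent from the optically measured pose (near the optimum) converges in far fewer steps than starting from an arbitrary point. This 1-D gradient-descent sketch stands in for the view-based calibration algorithm, which the patent does not specify:

```python
def refine_pose(grad, x0, lr=0.1, iters=100):
    """Generic gradient descent on a scalar pose parameter. `grad` is the
    gradient of the calibration residual; `x0` is the initial value,
    here the pose supplied by the optical positioning system."""
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)
    return x
```

With residual (x - 3)^2 (true pose 3) and the optical estimate 2.9 as `x0`, the refinement closes the remaining 0.1 gap quickly; from a distant initial value the same algorithm would need many more iterations or could stall in a poor region of the search space.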
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (18)

1. An optical positioning system of a multi-view reconstruction system, wherein a detection range of the optical positioning system covers all spatial detectors included in the multi-view reconstruction system, each of the spatial detectors having an optical marker disposed thereon, wherein:
the optical positioning system is used for optically positioning each spatial detector according to the detection result of the optical marker in the detection range to obtain the positioning information of each spatial detector;
the positioning information of each spatial detector is used for reconstructing a three-dimensional scene by combining scene views of a set scene acquired by each spatial detector.
2. The optical positioning system of claim 1, comprising: a controller, and at least one optical positioning unit;
the at least one optical positioning unit is used for detecting the optical marker in a detection range;
the controller is configured to obtain positioning information of each of the spatial detectors according to a detection result obtained by the detection of the at least one optical positioning unit.
3. The optical positioning system of claim 2, wherein the controller is communicatively coupled to an optical marker on each of the spatial detectors, wherein:
the at least one optical positioning unit is specifically used for emitting a light beam within the detection range to scan the optical marker on the spatial detector;
the optical marker on the spatial detector is used for generating an electric signal upon being illuminated by the light beam and sending the electric signal to the controller;
and the controller is used for obtaining the positioning information of each spatial detector according to the received electric signals.
4. The optical positioning system of claim 2, wherein the controller is communicatively coupled to each of the optical positioning units, wherein:
the at least one optical positioning unit is specifically used for controlling a light source to emit a light beam toward the optical marker on the spatial detector within the detection range, receiving the light beam reflected by the optical marker, generating an electric signal and sending the electric signal to the controller;
the optical marker on the spatial detector is used for reflecting the light beam emitted by the light source;
and the controller is used for obtaining the positioning information of each spatial detector according to the received electric signals.
5. The optical positioning system of claim 2, wherein the controller is communicatively coupled to each of the optical positioning units, wherein:
the at least one optical positioning unit is specifically used for capturing a positioning image of the optical marker on the spatial detector within the detection range;
the optical marker on the spatial detector is used for reflecting light so as to identify the spatial detector in the positioning image;
and the controller is used for obtaining the positioning information of each spatial detector according to the received positioning image.
6. The optical positioning system of claim 2, wherein the controller is communicatively coupled to each of the optical positioning units, wherein:
the at least one optical positioning unit is specifically used for capturing a positioning image of the optical marker on the spatial detector within the detection range;
the optical marker on the spatial detector is used for emitting light so as to mark the spatial detector in the positioning image;
and the controller is used for obtaining the positioning information of each spatial detector according to the received positioning image.
7. The optical positioning system according to claim 5 or 6, wherein the optical positioning unit comprises a monocular, binocular or multi-view vision system.
8. The optical positioning system according to any one of claims 2-6, wherein the optical positioning unit comprises a moving optical positioning unit that moves relative to the scene, and/or a fixed optical positioning unit that is stationary relative to the scene.
9. The optical positioning system according to any one of claims 2-6, wherein the optical positioning system comprises at least two optical positioning units, including a target optical positioning unit and at least one reference optical positioning unit, the reference optical positioning unit being provided with the optical marker and located in the detection range of the target optical positioning unit, and the target optical positioning unit being configured to obtain the positioning information of the spatial detector detected by each reference optical positioning unit by positioning each reference optical positioning unit.
10. The optical positioning system of claim 1, wherein the spatial detector comprises: monocular cameras, binocular cameras, multi-view cameras, depth sensors, electromagnetic wave sensors, acoustic radars, sonar sensors, or ultrasonic imaging devices.
11. The optical positioning system of claim 3, wherein the optical marker on the spatial detector comprises a photodetector.
12. The optical positioning system of claim 4 or 5, wherein the optical marker on the spatial detector comprises a reflector.
13. The optical positioning system of claim 6, wherein the optical marker on the spatial detector comprises a light source.
14. An optical positioning method of a multi-view reconstruction system, applied to the optical positioning system of any one of claims 1-13, comprising:
detecting an optical marker within a detection range covering all spatial detectors of the multi-view reconstruction system;
optically positioning all the spatial detectors according to the detection result to obtain positioning information of each spatial detector;
the positioning information of each spatial detector is used for reconstructing a three-dimensional scene by combining scene views of a set scene acquired by each spatial detector.
15. A multi-view three-dimensional reconstruction system, comprising: the optical positioning system of any of claims 1-13, and a multi-view reconstruction system;
the optical positioning system is used for acquiring positioning information of all spatial detectors in the multi-view reconstruction system;
the multi-view reconstruction system is used for acquiring a scene view of a set scene;
the positioning information of each spatial detector is used for reconstructing a three-dimensional scene in combination with the scene views of the set scene acquired by each spatial detector of the multi-view reconstruction system.
16. The multi-view three-dimensional reconstruction system of claim 15, comprising one of:
at least two fixed spatial detectors, wherein the fixed spatial detectors are stationary relative to the scene and located at different positions;
at least one motion spatial detector, the motion spatial detector moving relative to the scene; or
at least two spatial detectors comprising at least one motion spatial detector.
17. A multi-view three-dimensional reconstruction method, applied to the multi-view three-dimensional reconstruction system of claim 15 or 16, comprising:
detecting optical markers in a detection range through an optical positioning system to obtain positioning information of a plurality of spatial detectors, wherein the detection range covers all the spatial detectors of the multi-view reconstruction system;
detecting a set scene in a detection range through a spatial detector of the multi-view reconstruction system to obtain a scene view of the scene;
querying, from the pieces of positioning information, the positioning information corresponding to each scene view according to the acquisition time of each piece of positioning information, the identification information of the spatial detector to which each piece of positioning information belongs, the acquisition time of each scene view, and the identification information of the spatial detector that acquired each scene view;
and reconstructing the three-dimensional scene according to each scene view and the corresponding positioning information.
18. The multi-view three-dimensional reconstruction method of claim 17, wherein said querying the positioning information corresponding to each of the scene views from the positioning information of each of the spatial detectors comprises:
acquiring the scene view detected by at least one spatial detector at a target acquisition time, and correcting the positioning information of the at least one spatial detector corresponding to the target acquisition time, respectively;
and querying, from the pieces of corrected positioning information, the positioning information corresponding to each scene view according to the acquisition time of each piece of corrected positioning information, the identification information of the spatial detector to which each piece of corrected positioning information belongs, the acquisition time of each scene view, and the identification information of the spatial detector to which each scene view belongs.
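The querying step recited in claims 17 and 18, which matches each scene view to positioning information by acquisition time and detector identification, can be illustrated with a short sketch. This is not an implementation from the patent: the record layout, field order, and the `max_skew` tolerance are assumptions made for illustration.

```python
# Illustrative sketch (assumed data layout, not from the patent): pair each
# scene view with the positioning record of the same detector whose
# acquisition time is closest to the view's acquisition time.
def match_views_to_positions(views, positions, max_skew=0.05):
    """views: [(detector_id, t, view)]; positions: [(detector_id, t, pose)].
    Returns [(view, pose)]; views with no record within max_skew are skipped."""
    # Index positioning records by the detector they belong to.
    by_detector = {}
    for det, t, pose in positions:
        by_detector.setdefault(det, []).append((t, pose))
    for records in by_detector.values():
        records.sort()
    matched = []
    for det, t, view in views:
        records = by_detector.get(det, [])
        if not records:
            continue  # no positioning information for this detector
        nearest = min(records, key=lambda rec: abs(rec[0] - t))
        if abs(nearest[0] - t) <= max_skew:
            matched.append((view, nearest[1]))
    return matched

positions = [("cam0", 0.00, "pose_a"), ("cam0", 0.10, "pose_b"),
             ("cam1", 0.02, "pose_c")]
views = [("cam0", 0.09, "view0"), ("cam1", 0.03, "view1"),
         ("cam2", 0.05, "view2")]  # cam2 has no positioning record
print(match_views_to_positions(views, positions))
# → [('view0', 'pose_b'), ('view1', 'pose_c')]
```

The nearest-timestamp lookup stands in for the claim's query by acquisition time and identification information; a real system would substitute its own record format and timing tolerance.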
CN201911373494.7A 2019-12-27 2019-12-27 Optical positioning system and method and multi-view three-dimensional reconstruction system and method Active CN113048878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911373494.7A CN113048878B (en) 2019-12-27 2019-12-27 Optical positioning system and method and multi-view three-dimensional reconstruction system and method


Publications (2)

Publication Number Publication Date
CN113048878A true CN113048878A (en) 2021-06-29
CN113048878B CN113048878B (en) 2023-08-29

Family

ID=76506227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911373494.7A Active CN113048878B (en) 2019-12-27 2019-12-27 Optical positioning system and method and multi-view three-dimensional reconstruction system and method

Country Status (1)

Country Link
CN (1) CN113048878B (en)

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1893559A (en) * 2005-06-28 2007-01-10 佳能株式会社 Information processing method and apparatus
DE102009007315A1 (en) * 2009-02-03 2010-10-28 KIENHÖFER, Carsten Input device for use as tennis racket for inputting position and/or state of articles, has optical marker comprising optical mark that is identified by image processing device, integrated sensor detecting actuating force of handle
DE102010042540A1 (en) * 2010-10-15 2012-04-19 Scopis Gmbh Method and apparatus for calibrating an optical system, distance determining device and optical system
US20120262695A1 (en) * 2011-04-13 2012-10-18 Ivan Faul Optical digitizer with improved distance measurement capability
DE102011111542A1 (en) * 2011-08-17 2013-02-21 Schott Ag Determination of subapertures on a test specimen for surface measurements on the specimen
CN103110429A (en) * 2012-06-11 2013-05-22 大连理工大学 Optical calibration method of ultrasonic probe
US20130155419A1 (en) * 2011-12-15 2013-06-20 Darren Glen Atkinson Locating and relocating device
CN104457569A (en) * 2014-11-27 2015-03-25 大连理工大学 Geometric parameter visual measurement method for large composite board
CN105769244A (en) * 2016-03-22 2016-07-20 上海交通大学 Calibration device for calibrating ultrasonic probe
US20170003372A1 (en) * 2015-06-30 2017-01-05 Faro Technologies, Inc. Apparatus and method of measuring six degrees of freedom
CN106952347A (en) * 2017-03-28 2017-07-14 华中科技大学 A kind of supersonic operation secondary navigation system based on binocular vision
CN107714082A (en) * 2017-09-04 2018-02-23 北京航空航天大学 A kind of ultrasonic probe caliberating device and method based on optical alignment
CN107854177A (en) * 2017-11-18 2018-03-30 上海交通大学医学院附属第九人民医院 A kind of ultrasound and CT/MR image co-registrations operation guiding system and its method based on optical alignment registration
CN107883870A (en) * 2017-10-24 2018-04-06 四川雷得兴业信息科技有限公司 Overall calibration method based on binocular vision system and laser tracker measuring system
CN108072327A (en) * 2017-12-31 2018-05-25 浙江维思无线网络技术有限公司 A kind of measuring method and device using control point
CN108106604A (en) * 2017-12-31 2018-06-01 浙江维思无线网络技术有限公司 A kind of photogrammetric optical measurement mark method of work and device
CN108759669A (en) * 2018-05-31 2018-11-06 武汉中观自动化科技有限公司 A kind of self-positioning 3-D scanning method and system in interior
CN109000582A (en) * 2018-03-15 2018-12-14 杭州思看科技有限公司 Scan method and system, storage medium, the equipment of tracking mode three-dimensional scanner
CN109870279A (en) * 2017-12-01 2019-06-11 中国科学院沈阳自动化研究所 Deflection of bridge span detection system and detection method based on digital image processing techniques
CN109916333A (en) * 2019-04-04 2019-06-21 大连交通大学 A kind of large scale target with high precision three-dimensional reconstruction system and method based on AGV
CN110230983A (en) * 2019-07-16 2019-09-13 北京欧比邻科技有限公司 Antivibration formula optical 3-dimensional localization method and device
CN110279467A (en) * 2019-06-19 2019-09-27 天津大学 Ultrasound image under optical alignment and information fusion method in the art of puncture biopsy needle
CN110554095A (en) * 2019-08-16 2019-12-10 上海工程技术大学 three-dimensional ultrasonic probe calibration device and method
CN110572630A (en) * 2018-09-21 2019-12-13 苏州因确匹电子科技有限公司 Three-dimensional image shooting system, method, device, equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAN, Jiandong et al., "Three-dimensional data stitching method using optical positioning and tracking technology", Optics and Precision Engineering, No. 01, 31 January 2009 (2009-01-31), page 46 *

Also Published As

Publication number Publication date
CN113048878B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
US11629955B2 (en) Dual-resolution 3D scanner and method of using
EP3333538B1 (en) Scanner vis
CN112907676B (en) Calibration method, device and system of sensor, vehicle, equipment and storage medium
US9188430B2 (en) Compensation of a structured light scanner that is tracked in six degrees-of-freedom
US9322646B2 (en) Adaptive mechanism control and scanner positioning for improved three-dimensional laser scanning
US6061644A (en) System for determining the spatial position and orientation of a body
US11335182B2 (en) Methods and systems for detecting intrusions in a monitored volume
JP2006258486A (en) Device and method for measuring coordinate
US10697754B2 (en) Three-dimensional coordinates of two-dimensional edge lines obtained with a tracker camera
EP3471063A1 (en) Three-dimensional imaging method and system
US10254402B2 (en) Stereo range with lidar correction
EP3992662A1 (en) Three dimensional measurement device having a camera with a fisheye lens
CN113048878B (en) Optical positioning system and method and multi-view three-dimensional reconstruction system and method
US20170350968A1 (en) Single pulse lidar correction to stereo imaging
CN107835361B (en) Imaging method and device based on structured light and mobile terminal
CN113888702A (en) Indoor high-precision real-time modeling and space positioning device and method based on multi-TOF laser radar and RGB camera
US12008783B2 (en) Reality capture device
US20220414915A1 (en) Reality capture device
Orghidan et al. Catadioptric single-shot rangefinder for textured map building in robot navigation
US20240176025A1 (en) Generating a parallax free two and a half (2.5) dimensional point cloud using a high resolution image
WO2024118396A1 (en) Generating a parallax free two and a half (2.5) dimensional point cloud using a high resolution image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant