CN114004880A - Point cloud and strong-reflection target real-time positioning method of binocular camera

Point cloud and strong-reflection target real-time positioning method of binocular camera

Info

Publication number
CN114004880A
Authority
CN
China
Prior art keywords
binocular camera
depth information
camera
point
positioning method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111263387.6A
Other languages
Chinese (zh)
Other versions
CN114004880B (en)
Inventor
龚启勇
幸浩洋
黄晓琦
吕粟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
West China Hospital of Sichuan University
Original Assignee
West China Hospital of Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by West China Hospital of Sichuan University filed Critical West China Hospital of Sichuan University
Priority to CN202111263387.6A priority Critical patent/CN114004880B/en
Publication of CN114004880A publication Critical patent/CN114004880A/en
Application granted granted Critical
Publication of CN114004880B publication Critical patent/CN114004880B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/50: Depth or shape recovery
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/10048: Infrared image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30242: Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a real-time positioning method for the point cloud and strongly reflective targets of a binocular camera, belonging to the field of structured-light target positioning. After the binocular camera and an RGB camera are started, structured light and illumination light alternately illuminate the target, and the binocular camera exposes on alternating frames: one frame collects an infrared structural image carrying the structured-light texture, and the other frame is used to position highly reflective objects. The highly reflective objects are identified and counted in the RGB images collected by the RGB camera. In the method, a binocular camera and an RGB camera with the same shooting angle respectively acquire the infrared structural image and the RGB image. Point pairs are established with the edges or region centers of the highly reflective objects as feature points, yielding their depth information; this is fused with the depth information of the other objects in the infrared structural image, the three-dimensional coordinates of the highly reflective and non-highly-reflective objects in the infrared structural image are registered in turn, and synchronous real-time positioning of the target is thereby completed.

Description

Point cloud and strong-reflection target real-time positioning method of binocular camera
Technical Field
The invention relates to structured-light target positioning methods, and in particular to a real-time positioning method for the point cloud and strongly reflective targets of a binocular camera.
Background
In medical surgery and other treatments, a surgical instrument or treatment device must be navigated by combining imaging data with optical three-dimensional imaging data, measuring in real time the spatial position of the instrument or device relative to an organ or lesion so that the doctor can operate or treat precisely on that basis. At present, optical three-dimensional imaging data are mainly acquired with depth cameras, whose principal working modes include binocular non-structured-light depth cameras, binocular structured-light depth cameras, TOF cameras, and the like. A binocular structured-light depth camera (structured-light RGBD camera) obtains the depth of objects in the image from the epipolar equations, based on known parameters such as the lens focal length and the disparity along epipolar lines. In use, an infrared emitter is added to the binocular camera to project a known pattern, which solves two problems: a flat, featureless background provides no feature points for binocular matching, and passive matching is sensitive to ambient light. Existing infrared structured-light RGBD cameras nevertheless share a drawback: the structured light cannot be stably projected and imaged as a pattern on the surface of a highly reflective object, so no depth information is obtained for such objects. Further research on, and improvement of, target positioning methods for images containing highly reflective objects using a binocular structured-light depth camera is therefore necessary.
Disclosure of Invention
One objective of the present invention is to remedy the above deficiency by providing a real-time positioning method for the point cloud and strongly reflective targets of a binocular camera, in the hope of solving the technical problem that prior-art binocular structured-light depth cameras cannot stably image the projected structured-light pattern on the surface of a highly reflective object, so that such objects yield no depth information.
In order to solve the technical problems, the invention adopts the following technical scheme:
the invention provides a real-time positioning method for the point cloud and strongly reflective targets based on an active binocular camera, comprising the following steps:
step A, after the binocular camera and an RGB camera are started, structured light and illumination light alternately illuminate the target, and the binocular camera exposes on alternating frames, one frame collecting an infrared structural image with the structured-light texture and the next positioning highly reflective objects, the two alternating throughout; the highly reflective objects are positioned by identifying and counting them in the RGB image acquired by the RGB camera;
step B, establishing the center point or edge points of each highly reflective object at the corresponding positions in the infrared structural image, establishing point pairs under the epipolar constraint, and calculating the depth information of the highly reflective object with the triangulation formula;
and step C, fusing the depth information of the highly reflective objects with the depth information of the other objects in the infrared structural image, and then transmitting it, together with the two-dimensional coordinates of the objects in the infrared structural image, to the lower computer.
Preferably, in a further technical scheme: the two-dimensional coordinates of an object in the target image are the X-axis and Y-axis values determined from the pre-calibrated pixel size of the binocular camera lenses and the physical distance between the lenses.
In a further technical scheme: the depth information of the other objects in step C is obtained by collecting the structured-light texture corner points of all objects other than the highly reflective ones in the infrared structural image after the binocular camera is started in step A, establishing point pairs of these texture corners between the two lens views of the binocular camera, and calculating with the triangulation formula or by solving the epipolar equations.
In a further technical scheme, the triangulation formula is:

$$ Z = \frac{B \cdot f}{d} = \frac{B \cdot f}{x_l - x_r} $$

where Z is the depth information, d is the disparity, B is the physical distance between the two lenses of the binocular camera, f is the focal length of the lenses, and x_l and x_r are the positions of the same-name (corresponding) image points on the image sensors of the left and right lenses, respectively.
In a further technical scheme: the depth-information fusion of step C directly overlays the depth information of the highly reflective objects and the depth information of the other objects in the infrared structural image, forming the depth information of all objects in the infrared structural image currently acquired by the binocular camera.
In a further technical scheme: the highly reflective objects in step A are spherical.
In a further technical scheme: the structured light is projected by an infrared emitter that is activated alternately with the illumination light source.
In a further technical scheme: the method is implemented on an FPGA platform.
Compared with the prior art, the invention has the following beneficial effects. A binocular camera and an RGB camera with the same shooting angle collect the infrared structural image and the RGB image respectively, so the positions of highly reflective objects in the infrared structural image are easy to identify in the target image. Point pairs are established with the edges or region centers of the highly reflective objects as feature points, yielding their depth information, which is then fused with the depth information of the other objects in the infrared structural image; the three-dimensional coordinates of the highly reflective and non-highly-reflective objects in the infrared structural image are registered in turn, completing synchronous real-time positioning of the target. Frame-alternating exposure of the binocular camera prevents over-strong illumination light from disturbing the acquisition of the structured-light texture, and because the illumination light and the structured light illuminate the target alternately, a low-exposure mode can be adopted, which limits the heat generated by the illumination source and reduces environmental interference.
Drawings
FIG. 1 is a flow chart of a method for illustrating one embodiment of the present invention;
FIG. 2 is a diagram of an alternate exposure configuration of an RGBD camera and a binocular camera;
FIG. 3 is a schematic diagram illustrating triangulation in one embodiment of the invention;
FIG. 4 is a block diagram illustrating an application of an embodiment of the present invention.
Detailed Description
The invention is further elucidated below with reference to the drawings.
Referring to fig. 1, an embodiment of the present invention is a real-time positioning method for the point cloud and highly reflective targets of a binocular camera. Building on the principle of existing structured-light binocular cameras, RGB imaging is added, and the highly reflective objects (spheres or otherwise) are identified and counted by the contrast between them and the background brightness in the RGB image. For non-highly-reflective objects, depth information is solved in the usual way, by registering the structured-light texture under the epipolar constraint. For highly reflective objects, the edges or region centroids are registered as feature point pairs, the depth information is computed separately, and the result is then fused with the depth information of the original image.
The system supporting the method uses one RGB camera and two near-infrared (850 nm) cameras (the binocular camera), all at a resolution of 1920×1080, plus two sets of light sources: one set is the infrared structured light, used for active binocular texture supplementation and depth-image formation; the other is a ring light source fitted around the left and right near-infrared cameras, used for positioning the reflective spheres. The two sets of light sources work alternately: one frame of the near-infrared cameras is used to solve the depth map and the next to position the spheres. The cameras run at 60 fps, so each of the two streams reaches 30 fps. RGBD imaging and sphere positioning use frame-alternating exposure; sphere positioning uses the near-infrared ring light in a low-exposure mode, which reduces environmental interference and keeps the heat generated by the system under control.
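As a minimal sketch of the frame scheduling just described (the function and stream names are illustrative assumptions, not taken from the patent), the 60 fps interleaved stereo stream can be demultiplexed into the two 30 fps streams as follows:

```python
def demux_streams(frame_source):
    """Split a 60 fps interleaved stereo stream into two 30 fps streams:
    even frames carry the structured-light texture (for the depth map),
    odd frames are lit by the ring source (for sphere positioning)."""
    for index, (left, right) in enumerate(frame_source):
        if index % 2 == 0:
            yield "depth", (left, right)    # infrared structured light on
        else:
            yield "spheres", (left, right)  # ring illumination on
```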
On this basis, the method provided by the invention is described in this embodiment by taking small reflective beads as the example, proceeding in the following steps:
step S1, after the binocular camera and the RGB camera are started, the two have the same shooting angle, so two images are obtained simultaneously: an infrared structural image (RGBD imaging) and a conventional RGB image. Specifically, as shown in fig. 2, line A denotes RGBD imaging, line B denotes bead positioning, and the initial point t marks the first exposure. In the RGB image the small reflective beads stand out clearly, and the illumination light does not disturb the acquisition of the structured-light texture; in this step the beads are therefore identified and counted in the RGB image acquired by the RGB camera. Connectivity determines which pixels form each bead as a whole, so both the number of beads and the centroid of each bead can be found; the centroid is obtained by averaging the horizontal and vertical coordinates of the highlighted pixels of a bead;
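A minimal sketch of this identification-and-counting step, assuming OpenCV is available; the brightness threshold and minimum area are illustrative values, not specified in the patent:

```python
import cv2

def find_bead_centroids(gray, threshold=200, min_area=5):
    """Find the small reflective beads in a grayscale frame.

    Connectivity decides which highlighted pixels form one bead, and each
    centroid is the mean of the horizontal and vertical coordinates of the
    bead's pixels, as described above.
    """
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    count, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # Label 0 is the background; discard specks smaller than min_area pixels.
    beads = [tuple(centroids[i]) for i in range(1, count)
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return beads  # list of (x, y) centroids; len(beads) is the bead count
```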
in this step the binocular camera works like existing binocular cameras of this type: the structured light is projected by an infrared emitter, forming an infrared structural image with the structured-light texture, and the infrared emitter and the illumination light source are activated alternately;
step S2, establishing the center point or edge points of each reflective bead at the corresponding positions in the infrared structural image. When establishing point pairs between the two lens views of the binocular camera, the center point (centroid) or edge points of a bead are selected, and the depth information of the bead is calculated through the epipolar constraint and the triangulation formula. A point pair means the two image points of the same object (or point) in the pictures of the left and right cameras: the centroids of the two images of one reflective bead, acquired by the two lenses of the binocular camera, form a centroid point pair; if edge points are used, edge point pairs are formed instead. These steps are shown on the right-hand side of FIG. 1;
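A sketch of how the centroid point pairs could be established on rectified images, where the epipolar constraint reduces to matching along the same image row; the tolerance value and the nearest-row selection rule are assumptions, not the patent's exact matcher:

```python
def match_centroid_pairs(left_pts, right_pts, row_tol=2.0):
    """Pair bead centroids between rectified left and right views.

    After stereo rectification the epipolar lines are horizontal, so the two
    members of a point pair lie on (almost) the same row; row_tol absorbs
    calibration noise.
    """
    pairs = []
    for xl, yl in left_pts:
        # Keep only right-image candidates on (nearly) the same row with
        # positive disparity, then take the row-nearest one.
        candidates = [(xr, yr) for xr, yr in right_pts
                      if abs(yr - yl) <= row_tol and xl - xr > 0]
        if candidates:
            xr, yr = min(candidates, key=lambda p: abs(p[1] - yl))
            pairs.append(((xl, yl), (xr, yr)))
    return pairs
```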
and S3, fusing the depth information of the small reflective balls with the depth information of other objects in the infrared structural image, and then transmitting the depth information to a lower computer by combining the two-dimensional coordinates of the objects in the infrared structural image.
In this step, the two-dimensional coordinates of an object in the target image are the X-axis and Y-axis values determined from the pre-calibrated pixel size of the binocular camera lenses and the physical distance between the lenses. The depth information of the other objects in the infrared structural image is obtained by collecting the structured-light texture corner points of all objects other than the reflective beads after the binocular camera is started in step S1, establishing point pairs of these texture corners between the two lens views, and calculating with the triangulation formula or by solving the epipolar equations, i.e. the steps shown on the left-hand side of FIG. 1.
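For reference, once the depth Z of a pixel is known, its X-axis and Y-axis values follow from the standard pinhole back-projection; this is a textbook relation consistent with the calibration described here, not a formula quoted from the patent. With (u, v) the pixel coordinates, (c_x, c_y) the principal point, and f the focal length expressed in pixels:

$$ X = \frac{(u - c_x)\, Z}{f}, \qquad Y = \frac{(v - c_y)\, Z}{f} $$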
The epipolar constraint applied in this embodiment is a technique commonly used in structured-light depth cameras to reduce the computational cost of matching feature point pairs. It describes the geometric relation that, under the projection model, binds the image points and the camera optical centers when the same spatial point is projected onto images taken from two different viewpoints.
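In its standard textbook form (not spelled out in the patent), the constraint ties the homogeneous image points $\mathbf{x}_l$ and $\mathbf{x}_r$ of one spatial point through the fundamental matrix $F$:

$$ \mathbf{x}_r^{\top} F \, \mathbf{x}_l = 0 $$

so the match for a point in one image need only be searched along a single line in the other image, turning a two-dimensional search into a one-dimensional one.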
The triangulation applied in this embodiment is likewise a depth-calculation method commonly used by structured-light depth cameras; its principle is illustrated in fig. 3, and its typical expression is:

$$ Z = \frac{B \cdot f}{d} = \frac{B \cdot f}{x_l - x_r} $$

where Z is the depth information, d is the disparity, B is the physical distance between the two lenses of the binocular camera, f is the focal length of the lenses, and x_l and x_r are the positions of the corresponding image points on the left and right image sensors, respectively.
Specifically, the depth-information fusion of this embodiment directly overlays the depth information of the reflective beads and the depth information of the other objects in the infrared structural image, forming the depth information of all objects in the infrared structural image currently acquired by the binocular camera. That is, where texture exists, point pairs built on the texture yield depth information; where no structured-light texture can form, depth is solved from the centroids or edges; overlaying the two gives both the three-dimensional coordinates of the textured regions and the three-dimensional coordinates of the centroid/edge points. Because the binocular camera is calibrated in the application, the distance between the two cameras, the pixel size of the sensors, the lenses and the other imaging parameters stand in a definite relation to physical space at a given distance, so the three-dimensional coordinates are obtained directly by solving the triangulation formula above.
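A minimal sketch of the direct overlay described above, assuming the textured depth map is a NumPy array and the bead depths are keyed by integer pixel coordinates; this data layout is hypothetical:

```python
import numpy as np

def fuse_depth(texture_depth, bead_depths):
    """Overlay bead depths onto the structured-light depth map.

    texture_depth: H x W array of depths solved from texture point pairs,
    with no valid values where the beads suppress the structured light.
    bead_depths: {(u, v): z} from the centroid/edge point pairs.
    """
    fused = texture_depth.copy()
    for (u, v), z in bead_depths.items():
        fused[v, u] = z  # direct overlay of the two depth sources
    return fused

# Usage with the 1920x1080 resolution mentioned above (values illustrative):
depth_map = np.zeros((1080, 1920), dtype=np.float32)
fused = fuse_depth(depth_map, {(700, 540): 4.0})
```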
Referring to fig. 4, in a preferred application example the inventors note that mainstream depth cameras implement their depth-calculation algorithms on FPGA platforms; the carrier of the method above is therefore an FPGA SoC platform, with the block diagram shown in fig. 4, i.e. an asynchronous computation scheme that splits the work between the logic fabric and the processing cores. The computing platform may also be a PC or an ARM-based embedded computer.
It should be noted that the epipolar constraint and the triangulation formula are established tools in the art, so their principles and solution methods are not described in further detail here.
In addition to the foregoing, it should be further appreciated that reference throughout this specification to "one embodiment," "another embodiment," "an embodiment," or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described generally herein. The appearances of the same phrase in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the scope of the invention to effect such feature, structure, or characteristic in connection with other embodiments.
Although the invention has been described herein with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More specifically, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, other uses will also be apparent to those skilled in the art.

Claims (8)

1. A binocular camera point cloud and strongly reflective target real-time positioning method, characterized by comprising the following steps:
step A, after the binocular camera and an RGB camera are started, structured light and illumination light alternately illuminate the target, and the binocular camera exposes on alternating frames, one frame collecting an infrared structural image with the structured-light texture and the other positioning highly reflective objects; the highly reflective objects are positioned by identifying and counting them in the RGB image acquired by the RGB camera;
step B, establishing the center point or edge points of each highly reflective object at the corresponding positions in the infrared structural image, establishing point pairs under the epipolar constraint, and calculating the depth information of the highly reflective object with the triangulation formula;
and step C, fusing the depth information of the highly reflective objects with the depth information of the other objects in the infrared structural image, and then transmitting it, together with the two-dimensional coordinates of the objects in the infrared structural image, to the lower computer.
2. The binocular camera point cloud and highly reflective target real-time positioning method according to claim 1, wherein: the two-dimensional coordinates of an object in the target image in step C are the X-axis and Y-axis values determined from the pre-calibrated pixel size of the binocular camera lenses and the physical distance between the lenses.
3. The binocular camera point cloud and highly reflective target real-time positioning method according to claim 1, wherein: the depth information of the other objects in step C is obtained by collecting the structured-light texture corner points of all objects other than the highly reflective ones in the infrared structural image after the binocular camera is started in step A, establishing point pairs of these texture corners between the two lens views of the binocular camera, and calculating with the triangulation formula or by solving the epipolar equations.
4. The binocular camera point cloud and highly reflective target real-time positioning method according to claim 1 or 3, wherein the triangulation formula is:
$$ Z = \frac{B \cdot f}{d} = \frac{B \cdot f}{x_l - x_r} $$

wherein Z is the depth information, d is the disparity, B is the physical distance between the two lenses of the binocular camera, f is the focal length of the lenses, and x_l and x_r are the positions of the same-name (corresponding) image points on the image sensors of the left and right lenses, respectively.
5. The binocular camera point cloud and strongly reflective target real-time positioning method according to any one of claims 1 to 3, wherein: the depth-information fusion of step C directly overlays the depth information of the highly reflective objects and the depth information of the other objects in the infrared structural image, forming the depth information of all objects in the infrared structural image currently acquired by the binocular camera.
6. The binocular camera point cloud and strongly reflective target real-time positioning method according to any one of claims 1 to 3, wherein: the highly reflective objects in step A are spherical.
7. The binocular camera point cloud and strongly reflective target real-time positioning method according to any one of claims 1 to 3, wherein: the structured light is projected by an infrared emitter that is activated alternately with an illumination light source.
8. The binocular camera point cloud and strongly reflective target real-time positioning method according to any one of claims 1 to 3, wherein: the method is realized on an FPGA platform.
CN202111263387.6A 2021-04-08 2021-04-08 Point cloud and strong reflection target real-time positioning method of binocular camera Active CN114004880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111263387.6A CN114004880B (en) 2021-04-08 2021-04-08 Point cloud and strong reflection target real-time positioning method of binocular camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111263387.6A CN114004880B (en) 2021-04-08 2021-04-08 Point cloud and strong reflection target real-time positioning method of binocular camera
CN202110375959.3A CN113052898B (en) 2021-04-08 2021-04-08 Point cloud and strong-reflection target real-time positioning method based on active binocular camera

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110375959.3A Division CN113052898B (en) 2021-04-08 2021-04-08 Point cloud and strong-reflection target real-time positioning method based on active binocular camera

Publications (2)

Publication Number Publication Date
CN114004880A true CN114004880A (en) 2022-02-01
CN114004880B CN114004880B (en) 2023-04-25

Family

ID=76519395

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110375959.3A Active CN113052898B (en) 2021-04-08 2021-04-08 Point cloud and strong-reflection target real-time positioning method based on active binocular camera
CN202111263387.6A Active CN114004880B (en) 2021-04-08 2021-04-08 Point cloud and strong reflection target real-time positioning method of binocular camera

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110375959.3A Active CN113052898B (en) 2021-04-08 2021-04-08 Point cloud and strong-reflection target real-time positioning method based on active binocular camera

Country Status (1)

Country Link
CN (2) CN113052898B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471874B (en) * 2022-10-28 2023-02-07 山东新众通信息科技有限公司 Construction site dangerous behavior identification method based on monitoring video
CN118115551A (en) * 2024-01-04 2024-05-31 山东卓业医疗科技有限公司 Medical image focus labeling extraction method and system

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130300637A1 (en) * 2010-10-04 2013-11-14 G Dirk Smits System and method for 3-d projection and enhancements for interactivity
CN104905765A (en) * 2015-06-08 2015-09-16 四川大学华西医院 FPGA implementation method based on Camshift algorithm in eye movement tracking
CN104905764A (en) * 2015-06-08 2015-09-16 四川大学华西医院 High-speed sight tracking method based on FPGA
CN108470373A * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 A kind of infrared-based 3D four-dimensional data acquisition method and device
CN108564041A * 2018-04-17 2018-09-21 广州云从信息科技有限公司 A kind of face detection and restoration method based on RGBD cameras
US20190164256A1 (en) * 2017-11-30 2019-05-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for image processing
CN110021035A (en) * 2019-04-12 2019-07-16 哈尔滨工业大学 The marker of Kinect depth camera and virtual tag object tracking based on the marker
CN110297491A (en) * 2019-07-02 2019-10-01 湖南海森格诺信息技术有限公司 Semantic navigation method and its system based on multiple structured light binocular IR cameras
CN110390719A * 2019-05-07 2019-10-29 香港光云科技有限公司 Point cloud reconstruction apparatus based on time of flight
CN110533708A (en) * 2019-08-28 2019-12-03 维沃移动通信有限公司 A kind of electronic equipment and depth information acquisition method
CN111012370A (en) * 2019-12-25 2020-04-17 四川大学华西医院 AI-based X-ray imaging analysis method and device and readable storage medium
CN111657947A (en) * 2020-05-21 2020-09-15 四川大学华西医院 Positioning method of nerve regulation target area
CN111750806A (en) * 2020-07-20 2020-10-09 西安交通大学 Multi-view three-dimensional measurement system and method
CN111950426A (en) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Target detection method and device and delivery vehicle

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088116A (en) * 1998-03-11 2000-07-11 Pfanstiehl; John Quality of finish measurement optical instrument
CN104111036A (en) * 2013-04-18 2014-10-22 中国科学院沈阳自动化研究所 Mirror object measuring device and method based on binocular vision
CN103868460B (en) * 2014-03-13 2016-10-05 桂林电子科技大学 Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN107123156A (en) * 2017-03-10 2017-09-01 西北工业大学 A kind of active light source projection three-dimensional reconstructing method being combined with binocular stereo vision
CN109978953A (en) * 2019-01-22 2019-07-05 四川大学 Method and system for target three-dimensional localization
CN110097024B (en) * 2019-05-13 2020-12-25 河北工业大学 Human body posture visual recognition method of transfer, transportation and nursing robot
CN110349251B (en) * 2019-06-28 2020-06-16 深圳数位传媒科技有限公司 Three-dimensional reconstruction method and device based on binocular camera
CN110349213B (en) * 2019-06-28 2023-12-12 Oppo广东移动通信有限公司 Pose determining method and device based on depth information, medium and electronic equipment
CN112465905A (en) * 2019-09-06 2021-03-09 四川大学华西医院 Characteristic brain region positioning method of magnetic resonance imaging data based on deep learning
CN111028295A (en) * 2019-10-23 2020-04-17 武汉纺织大学 3D imaging method based on coded structured light and dual purposes
CN111121722A (en) * 2019-12-13 2020-05-08 南京理工大学 Binocular three-dimensional imaging method combining laser dot matrix and polarization vision
CN111336947A (en) * 2020-03-02 2020-06-26 南昌航空大学 Mirror surface object line laser scanning method based on binocular point cloud fusion
CN111754573B (en) * 2020-05-19 2024-05-10 新拓三维技术(深圳)有限公司 Scanning method and system
CN111951376B (en) * 2020-07-28 2023-04-07 中国科学院深圳先进技术研究院 Three-dimensional object reconstruction method fusing structural light and photometry and terminal equipment
CN112053432B (en) * 2020-09-15 2024-03-26 成都贝施美医疗科技股份有限公司 Binocular vision three-dimensional reconstruction method based on structured light and polarization
CN112254670B (en) * 2020-10-15 2022-08-12 天目爱视(北京)科技有限公司 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN112308014B (en) * 2020-11-18 2024-05-14 成都集思鸣智科技有限公司 High-speed accurate searching and positioning method for pupil and cornea reflecting spot of two eyes
CN112595262B (en) * 2020-12-08 2022-12-16 广东省科学院智能制造研究所 Binocular structured light-based high-light-reflection surface workpiece depth image acquisition method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130300637A1 (en) * 2010-10-04 2013-11-14 G Dirk Smits System and method for 3-d projection and enhancements for interactivity
CN104905765A (en) * 2015-06-08 2015-09-16 四川大学华西医院 FPGA implementation method based on Camshift algorithm in eye movement tracking
CN104905764A (en) * 2015-06-08 2015-09-16 四川大学华西医院 High-speed sight tracking method based on FPGA
US20190164256A1 (en) * 2017-11-30 2019-05-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for image processing
CN108470373A * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 A kind of infrared-based 3D four-dimensional data acquisition method and device
CN108564041A * 2018-04-17 2018-09-21 广州云从信息科技有限公司 A kind of face detection and restoration method based on RGBD cameras
CN110021035A (en) * 2019-04-12 2019-07-16 哈尔滨工业大学 The marker of Kinect depth camera and virtual tag object tracking based on the marker
CN110390719A * 2019-05-07 2019-10-29 香港光云科技有限公司 Point cloud reconstruction apparatus based on time of flight
CN110297491A (en) * 2019-07-02 2019-10-01 湖南海森格诺信息技术有限公司 Semantic navigation method and its system based on multiple structured light binocular IR cameras
CN110533708A (en) * 2019-08-28 2019-12-03 维沃移动通信有限公司 A kind of electronic equipment and depth information acquisition method
CN111012370A (en) * 2019-12-25 2020-04-17 四川大学华西医院 AI-based X-ray imaging analysis method and device and readable storage medium
CN111657947A (en) * 2020-05-21 2020-09-15 四川大学华西医院 Positioning method of nerve regulation target area
CN111750806A (en) * 2020-07-20 2020-10-09 西安交通大学 Multi-view three-dimensional measurement system and method
CN111950426A (en) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Target detection method and device and delivery vehicle

Also Published As

Publication number Publication date
CN113052898B (en) 2022-07-12
CN113052898A (en) 2021-06-29
CN114004880B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
US11741624B2 (en) Method and system for determining spatial coordinates of a 3D reconstruction of at least part of a real object at absolute spatial scale
US12010431B2 (en) Systems and methods for multi-camera placement
US11867978B2 (en) Method and device for determining parameters for spectacle fitting
CN106643699B (en) Space positioning device and positioning method in virtual reality system
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
WO2017211066A1 (en) Iris and pupil-based gaze estimation method for head-mounted device
CN111160136B (en) Standardized 3D information acquisition and measurement method and system
EP2870428A1 (en) 3-d scanning and positioning system
WO2004044522A1 (en) Three-dimensional shape measuring method and its device
JP2001008235A (en) Image input method for reconfiguring three-dimensional data and multiple-lens data input device
WO2022078442A1 (en) Method for 3d information acquisition based on fusion of optical scanning and smart vision
CN114998499A (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN114004880B (en) Point cloud and strong reflection target real-time positioning method of binocular camera
KR20160121509A (en) Structured light matching of a set of curves from two cameras
CN113223135A (en) Three-dimensional reconstruction device and method based on special composite plane mirror virtual image imaging
WO2016142489A1 (en) Eye tracking using a depth sensor
CN116188558B (en) Stereo photogrammetry method based on binocular vision
US20220398781A1 (en) System and method for digital measurements of subjects
CN110909571B (en) High-precision face recognition space positioning method
WO2022111104A1 (en) Smart visual apparatus for 3d information acquisition from multiple roll angles
Harvent et al. Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system
CN114155349A (en) Three-dimensional mapping method, three-dimensional mapping device and robot
JP2024535978A (en) Method and system for detecting obstacle elements using a visual aid - Patents.com
Zhang et al. A simplified 3D gaze tracking technology with stereo vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant