CN114004880B - Point cloud and strong reflection target real-time positioning method of binocular camera - Google Patents

Point cloud and strong reflection target real-time positioning method of binocular camera

Info

Publication number
CN114004880B
CN114004880B · CN202111263387.6A
Authority
CN
China
Prior art keywords
binocular camera
depth information
point
image
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111263387.6A
Other languages
Chinese (zh)
Other versions
CN114004880A (en)
Inventor
龚启勇 (Gong Qiyong)
幸浩洋 (Xing Haoyang)
黄晓琦 (Huang Xiaoqi)
吕粟 (Lyu Su)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
West China Hospital of Sichuan University
Original Assignee
West China Hospital of Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by West China Hospital of Sichuan University
Priority to CN202111263387.6A
Publication of CN114004880A
Application granted
Publication of CN114004880B

Classifications

    • G06T 7/70 — Image analysis: determining position or orientation of objects or cameras
    • G06T 7/11 — Image analysis: region-based segmentation
    • G06T 7/30 — Image analysis: determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/50 — Image analysis: depth or shape recovery
    • G06T 7/66 — Image analysis: analysis of geometric attributes of image moments or centre of gravity
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/10048 — Image acquisition modality: infrared image
    • G06T 2207/20221 — Special algorithmic details: image fusion; image merging
    • G06T 2207/30242 — Subject of image: counting objects in image

(All within G — Physics; G06 — Computing, calculating or counting; G06T — Image data processing or generation, in general.)

Abstract

The invention discloses a real-time positioning method for the point cloud and strongly reflective targets of a binocular camera, belonging to the field of structured-light target positioning. After the binocular camera and an RGB camera are started, structured light and illumination light alternately project onto the target, and the binocular camera exposes on alternating frames: one frame collects an infrared structured image carrying the structured-light texture, and the next frame is used to locate highly reflective objects. The highly reflective objects are identified and counted in the RGB image acquired by the RGB camera. The binocular camera and the RGB camera share the same viewing angle and respectively acquire the infrared structured image and the RGB image, so the edge or region centroid of each highly reflective object can serve as a feature point to establish point pairs, from which the depth of the highly reflective object is computed. That depth is then fused with the depth of the other objects in the infrared structured image, and the three-dimensional coordinates of the highly reflective and non-reflective objects in the infrared structured image are registered in turn, completing synchronous real-time positioning of the target.

Description

Point cloud and strong reflection target real-time positioning method of binocular camera
Technical Field
The invention relates to structured-light target positioning methods, and in particular to a real-time positioning method for the point cloud and strongly reflective targets of a binocular camera.
Background
In medical surgery and other treatments, a surgical instrument or treatment device must be navigated by combining imaging data with optical three-dimensional imaging data, so that the spatial position of the instrument relative to an organ or lesion can be measured in real time and the physician can operate or treat accurately on that basis. At present, optical three-dimensional imaging data are acquired mainly with depth cameras, whose principal working modes include binocular non-structured-light depth cameras, binocular structured-light depth cameras, TOF cameras, and the like. A binocular structured-light depth camera (structured-light RGBD camera) obtains the depth of objects in the image by solving the epipolar equation from known parameters such as the lens focal length and the disparity. To address problems such as flat, featureless backgrounds providing no binocular matching feature points and sensitivity to ambient light, a known pattern projected by an infrared emitter is added on top of the binocular camera. However, existing infrared structured-light RGBD cameras have a shortcoming: the structured light on the surface of a highly reflective object does not image as a stable projected pattern, so no depth information can be obtained for such objects. Further research into and improvement of image target positioning for highly reflective objects with binocular structured-light depth cameras is therefore necessary.
Disclosure of Invention
The invention aims to overcome the above shortcomings by providing a real-time positioning method for the point cloud and strongly reflective targets of a binocular camera, thereby solving the prior-art problem that a binocular structured-light depth camera cannot form a stable projected pattern on the surface of a highly reflective object and therefore cannot obtain its depth information.
In order to solve the technical problems, the invention adopts the following technical scheme:
the invention provides a point cloud and strong reflection target real-time positioning method based on an active binocular camera, which comprises the following steps:
step A, after the binocular camera and the RGB camera are started, alternately projecting a target by using structural light and illumination light, alternately exposing the binocular camera at intervals of frames, wherein one frame is used for collecting an infrared structural image with structural light textures, and one frame is used for positioning a highly reflective object, so that the two frames are alternately used; the high-reflection object is positioned to be identified and counted by the RGB image acquired by the RGB camera;
step B, establishing a center point or an edge point of the high-reflection object at a corresponding position in the infrared structural image, establishing a point-to-point relationship through limit constraint, and calculating by using a triangle to obtain depth information of the high-reflection object;
and C, fusing the depth information of the high-reflection object with the depth information of other objects in the infrared structural image, and transmitting the fused depth information to a lower computer by combining the two-dimensional coordinates of the objects in the infrared structural image.
Preferably, in a further technical scheme, the two-dimensional coordinates of an object in the target image are the X-axis and Y-axis values determined from the pre-calibrated pixel size of the binocular camera's lenses and the physical distance between the lenses.
The further technical scheme is as follows: and C, acquiring the depth information of other objects in the step after the binocular camera in the step A is started, acquiring structural light texture corner points of other objects except for the high-reflectivity object in the infrared structural image, establishing a point-to-point relation of the texture corner points in the lens picture of the binocular camera, and calculating through the triangular ranging formula or solving the polar line equation.
In a further technical scheme, the triangulation formula is:

Z = B·f / d = B·f / (x_l − x_r)

where Z is the depth, d is the disparity, B is the physical distance between the lenses of the binocular camera, f is the focal length of the lenses, and x_l and x_r are the positions of the same (homologous) point on the left-lens and right-lens image sensors, respectively.
In a further technical scheme, fusing the depth of the highly reflective object with the depth of the other objects in the infrared structured image in step C forms the depth of all objects in the infrared structured image currently acquired by the binocular camera.
In a further technical scheme, the highly reflective object in step A is spherical.
In a further technical scheme, the structured light is projected by an infrared emitter, which is activated alternately with the illumination light source.
In a further technical scheme, the method is implemented on an FPGA platform.
Compared with the prior art, the invention offers the following beneficial effects. A binocular camera and an RGB camera sharing the same viewing angle acquire the infrared structured image and the RGB image respectively, which makes it easy to identify where the highly reflective objects in the infrared structured image fall in the target image. The edge or region centroid of each highly reflective object serves as a feature point to establish point pairs, from which its depth is obtained; this depth is fused with the depth of the other objects in the infrared structured image, and the three-dimensional coordinates of the highly reflective and non-reflective objects are registered in turn, completing synchronous real-time positioning of the target. Exposing the binocular camera on alternating frames prevents overly strong illumination light from disturbing the structured-light texture, and projecting the illumination light and the structured light alternately in a low-exposure mode keeps the heat output of the illumination source under control and reduces environmental interference.
Drawings
FIG. 1 is a flow chart illustrating a method of one embodiment of the present invention;
FIG. 2 is a block diagram of alternate exposure of an RGBD camera and a binocular camera;
FIG. 3 is a schematic diagram illustrating triangulation ranging in one embodiment of the present invention;
fig. 4 is a block diagram illustrating an application of an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1, one embodiment of the invention is a method that, on the basis of an existing structured-light binocular camera, uses RGB imaging to locate the point cloud and strongly reflective targets in real time: highly reflective objects (spheres or otherwise) are identified and counted from their brightness contrast against the background in the RGB image, while the depth of non-reflective objects is solved by the usual structured-light texture registration under the epipolar constraint. For a highly reflective object, its edge or region centroid is used as the feature for point-pair registration, its depth is computed independently, and the result is then fused with the depth information of the original image.
The system supporting the method uses one RGB camera and two near-infrared (850 nm) cameras (the binocular camera), each at a resolution of 1920 x 1080, plus two sets of light sources: one set of infrared structured light for active binocular texture supplementation and depth-map imaging, and one ring light source fitted around the left and right near-infrared cameras for positioning the reflective spheres. The two sets of light sources work alternately: one near-infrared frame serves depth-map solving and the next serves sphere positioning, so with the cameras running at 60 fps each task effectively runs at 30 fps. RGBD imaging and sphere positioning use frame-interleaved exposure, and sphere positioning uses the near-infrared ring light in a low-exposure mode, which both reduces environmental interference and keeps the system's heat output under control.
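As a rough illustration of this frame-interleaved scheme, the sketch below splits a 60 fps interleaved stream into the two 30 fps streams described above; the function and variable names are illustrative assumptions, not the patent's actual firmware interface:

```python
def demux_frames(frames):
    """Split a 60 fps frame-interleaved stream into two 30 fps streams:
    even frames carry the infrared structured-light texture (depth map),
    odd frames are lit by the low-exposure ring light (sphere positioning)."""
    depth_frames, marker_frames = [], []
    for i, frame in enumerate(frames):
        if i % 2 == 0:
            depth_frames.append(frame)   # structured-light exposure
        else:
            marker_frames.append(frame)  # ring-light, low-exposure
    return depth_frames, marker_frames
```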
On this basis, the method provided by the invention is described in this embodiment with reflective spheres as the example, and proceeds as follows:
step S1, after the binocular camera and the RGB camera are started, the angles shot by the binocular camera and the RGB camera are the same, and two images which are respectively an infrared structural image (RGBD imaging) and a conventional RBG image can be obtained at the same time. When the illumination light and the structured light alternately project the target, the binocular camera alternately performs frame-spaced exposure, one frame is used for collecting an infrared structured image with structured light textures, namely RGBD imaging, and one frame is used for positioning the reflective pellets, thus, the cyclic alternating frame-spaced exposure can be specifically seen as shown in fig. 2, wherein the connection line A in the figure refers to RGBD imaging, the connection line B refers to light pellet positioning, namely the initial point t is initial exposure. Because the identification of the reflective pellets is more obvious in the RGB image and the illumination light does not influence the collection of the texture of the structured light, in the step, the reflective pellets are positioned to identify and count the RGB image collected by the RGB camera; judging which pixels form the whole of the reflective small sphere according to connectivity, judging that a plurality of reflective small spheres and the mass centers of the reflective small spheres exist, and averaging the horizontal and vertical coordinates of the highlighted pixels of one reflective small sphere to obtain the mass centers;
in this step, similar to the existing similar binocular camera, the structured light is projected by an infrared emitter, so as to form an infrared structured image with structured light texture, and in the above-mentioned step, the infrared emitter and the illumination light source are alternately started;
Step S2: locate the centre point or edge points of each reflective sphere at the corresponding position in the infrared structured image. When establishing point pairs between the two lens images of the binocular camera, either the centre point (centroid) or the edge points of a sphere may be selected, and the sphere's depth is then computed via the epipolar constraint and the triangulation formula. A point pair here means the two image points of the same object (or point) in the left and right pictures of the binocular camera: the centroids of the two corresponding reflective spheres in the pictures acquired by the two lenses form a centroid point pair, and if edge points are used, edge point pairs are formed instead. This step is shown on the right side of fig. 1.
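For rectified stereo images the epipolar constraint reduces to matching along the same image row, so centroid point pairs can be formed as in the following sketch; the rectification assumption and the row tolerance are assumptions for illustration, not details the patent specifies:

```python
import numpy as np

def pair_centroids(left_pts: np.ndarray, right_pts: np.ndarray, row_tol: float = 2.0):
    """Pair each left-image centroid with the right-image centroid lying on
    (nearly) the same epipolar line, i.e. the same image row."""
    pairs = []
    if len(right_pts) == 0:
        return pairs
    for xl, yl in left_pts:
        rows = np.abs(right_pts[:, 1] - yl)   # row distance to each candidate
        j = int(np.argmin(rows))
        if rows[j] <= row_tol:
            pairs.append(((xl, yl), tuple(right_pts[j])))
    return pairs
```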
and S3, fusing the depth information of the reflective pellets with the depth information of other objects in the infrared structural image, and transmitting the fused depth information to a lower computer by combining the two-dimensional coordinates of the objects in the infrared structural image.
In this step, the two-dimensional coordinates of an object in the target image are the X-axis and Y-axis values determined from the pre-calibrated pixel size of the binocular camera's lenses and the physical distance between the lenses. The depth of the other objects in the infrared structured image is obtained by collecting, after the binocular camera is started in step S1, the structured-light texture corner points of the objects other than the reflective spheres, establishing point pairs of those texture corners between the two lens images of the binocular camera, and then computing the depth via the triangulation formula or by solving the epipolar equation, i.e. the step shown on the left side of fig. 1.
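The texture-corner collection could be sketched as below, using a standard corner detector as a stand-in; the detector choice and its parameters are assumptions, since the patent only calls for "structured-light texture corner points":

```python
import cv2
import numpy as np

def texture_corners(ir_gray: np.ndarray, max_corners: int = 500) -> np.ndarray:
    """Detect structured-light texture corners in one infrared lens image;
    the corners are then matched across the two lens images and
    triangulated in the same way as the sphere centroids."""
    corners = cv2.goodFeaturesToTrack(ir_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
```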
The epipolar constraint applied in this embodiment is a method commonly used in structured-light depth cameras to reduce the computation of feature point-pair matching. It describes the geometric constraint, under the projection model, among an image point, the camera optical centres, and the projections of the same 3D point onto two images taken from different viewpoints.
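Algebraically, the epipolar constraint states that corresponding homogeneous image points x and x' satisfy x'ᵀ F x = 0 for the fundamental matrix F relating the two views; a small numpy check, assuming F is known from calibration:

```python
import numpy as np

def epipolar_residual(F: np.ndarray, x_left, x_right) -> float:
    """Residual of the epipolar constraint x_r^T F x_l = 0; near zero
    for a correctly matched point pair."""
    xl = np.array([x_left[0], x_left[1], 1.0])    # homogeneous left point
    xr = np.array([x_right[0], x_right[1], 1.0])  # homogeneous right point
    return float(xr @ F @ xl)
```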
The triangulation method applied in this embodiment is likewise a common depth calculation method for structured-light depth cameras, as shown in fig. 3; its typical formula is:
Z = B·f / d = B·f / (x_l − x_r)

The principle is shown in fig. 3. In the formula, Z is the depth, d is the disparity, B is the physical distance between the lenses of the binocular camera, f is the focal length of the lenses, and x_l and x_r are the positions of the same (homologous) point on the left-lens and right-lens image sensors, respectively.
Specifically, depth fusion in this embodiment directly overlays the depth of the reflective spheres onto the depth of the other objects in the infrared structured image, forming the depth of all objects in the infrared structured image currently acquired by the binocular camera. That is, wherever there is texture, point pairs are built from the texture to obtain depth; wherever the structured-light texture cannot form, depth is solved from the centroid or edge instead; overlaying the two yields the three-dimensional coordinates of the textured regions together with the three-dimensional coordinates of the centroid or edge points. Because the binocular camera is calibrated in the application — that is, the distance between the two cameras, the sensor pixel size, and imaging parameters such as the lens are tied to physical space at a given distance — the three-dimensional coordinates can be obtained directly by solving the triangulation formula above.
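A hedged sketch of this overlay, assuming the structured-light depth map leaves the sphere regions invalid and that sphere depths are written back at their centroid pixels; the array layout and units are assumptions for illustration:

```python
import numpy as np

def fuse_depth(texture_depth: np.ndarray, sphere_points) -> np.ndarray:
    """Overlay independently triangulated sphere depths onto the
    structured-light depth map, covering the regions where the
    reflective spheres defeat the projected texture."""
    fused = texture_depth.copy()
    for (u, v), z in sphere_points:    # centroid pixel (u, v) and its depth z
        fused[int(v), int(u)] = z      # overwrite at the centroid pixel
    return fused
```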
Referring to fig. 4, in a preferred application example of the invention, the inventors note that mainstream depth cameras implement their depth calculation algorithms on FPGA platforms, so the method is carried on an FPGA SoC platform; the block diagram in fig. 4 shows the scheme, in which the logic units and the processing cores compute asynchronously. The computing platform is an embedded computer, which may be a PC or an ARM-based computer.
It should be noted that the above epipolar constraint and triangulation formula are well established in the art, so the principles of the constraint and the formula and the manner of solving them are not described in detail here.
In addition to the foregoing, it should be noted that references in the specification to "one embodiment," "another embodiment," "an embodiment," etc., mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described generally in the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is intended that such feature, structure, or characteristic be implemented within the scope of the invention.
Although the invention has been described herein with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the scope and spirit of the principles of this disclosure. More specifically, various variations and modifications may be made to the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, drawings and claims of this application. In addition to variations and modifications in the component parts and/or arrangements, other uses will be apparent to those skilled in the art.

Claims (7)

1. A method for real-time positioning of the point cloud and strongly reflective targets of a binocular camera, characterized by comprising the following steps:
Step A: after a binocular camera and an RGB camera are started, structured light and illumination light alternately project onto the target, and the binocular camera exposes on alternating frames, one frame collecting an infrared structured image carrying the structured-light texture and the other frame locating a highly reflective object; the highly reflective object is identified and counted in the RGB image acquired by the RGB camera;
Step B: locating the centre point or edge points of the highly reflective object at the corresponding position in the infrared structured image, establishing point pairs under the epipolar constraint, and computing the depth of the highly reflective object with the triangulation formula;
Step C: fusing the depth of the highly reflective object with the depth of the other objects in the infrared structured image, and transmitting the fused depth, together with the two-dimensional coordinates of the objects in the infrared structured image, to a lower computer; wherein, after the binocular camera in step A is started, the structured-light texture corner points of the objects other than the highly reflective object in the infrared structured image are collected, point pairs of those texture corners are established between the two lens images of the binocular camera, and the depth is computed via the triangulation formula or by solving the epipolar equation.
2. The method for real-time positioning of the point cloud and strongly reflective targets of a binocular camera according to claim 1, wherein the two-dimensional coordinates of an object in the target image are the X-axis and Y-axis values determined from the pre-calibrated pixel size of the binocular camera's lenses and the physical distance between the lenses.
3. The method for real-time positioning of the point cloud and strongly reflective targets of a binocular camera according to claim 1, wherein the triangulation formula is:

Z = B·f / d = B·f / (x_l − x_r)

wherein Z is the depth, d is the disparity, B is the physical distance between the lenses of the binocular camera, f is the focal length of the lenses, and x_l and x_r are the positions of the same (homologous) point on the left-lens and right-lens image sensors, respectively.
4. The method for real-time positioning of the point cloud and strongly reflective targets of a binocular camera according to claim 1 or 2, wherein fusing the depth of the highly reflective object with the depth of the other objects in the infrared structured image in step C forms the depth of all objects in the infrared structured image currently acquired by the binocular camera.
5. The method for real-time positioning of the point cloud and strongly reflective targets of a binocular camera according to claim 1 or 2, wherein the highly reflective object in step A is spherical.
6. The method for real-time positioning of the point cloud and strongly reflective targets of a binocular camera according to claim 1 or 2, wherein the structured light is projected by an infrared emitter, which is activated alternately with the illumination light source.
7. The method for real-time positioning of the point cloud and strongly reflective targets of a binocular camera according to claim 1 or 2, wherein the method is implemented on an FPGA platform.
CN202111263387.6A 2021-04-08 2021-04-08 Point cloud and strong reflection target real-time positioning method of binocular camera Active CN114004880B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111263387.6A CN114004880B (en) 2021-04-08 2021-04-08 Point cloud and strong reflection target real-time positioning method of binocular camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111263387.6A CN114004880B (en) 2021-04-08 2021-04-08 Point cloud and strong reflection target real-time positioning method of binocular camera
CN202110375959.3A CN113052898B (en) 2021-04-08 2021-04-08 Point cloud and strong-reflection target real-time positioning method based on active binocular camera

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110375959.3A Division CN113052898B (en) 2021-04-08 2021-04-08 Point cloud and strong-reflection target real-time positioning method based on active binocular camera

Publications (2)

Publication Number Publication Date
CN114004880A CN114004880A (en) 2022-02-01
CN114004880B (en) 2023-04-25

Family

ID=76519395

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110375959.3A Active CN113052898B (en) 2021-04-08 2021-04-08 Point cloud and strong-reflection target real-time positioning method based on active binocular camera
CN202111263387.6A Active CN114004880B (en) 2021-04-08 2021-04-08 Point cloud and strong reflection target real-time positioning method of binocular camera

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110375959.3A Active CN113052898B (en) 2021-04-08 2021-04-08 Point cloud and strong-reflection target real-time positioning method based on active binocular camera

Country Status (1)

Country Link
CN (2) CN113052898B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115471874B (en) * 2022-10-28 2023-02-07 山东新众通信息科技有限公司 Construction site dangerous behavior identification method based on monitoring video

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110297491A (en) * 2019-07-02 2019-10-01 湖南海森格诺信息技术有限公司 Semantic navigation method and its system based on multiple structured light binocular IR cameras
CN110533708A (en) * 2019-08-28 2019-12-03 维沃移动通信有限公司 A kind of electronic equipment and depth information acquisition method
CN111012370A (en) * 2019-12-25 2020-04-17 四川大学华西医院 AI-based X-ray imaging analysis method and device and readable storage medium
CN111657947A (en) * 2020-05-21 2020-09-15 四川大学华西医院 Positioning method of nerve regulation target area

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088116A (en) * 1998-03-11 2000-07-11 Pfanstiehl; John Quality of finish measurement optical instrument
EP2625845B1 (en) * 2010-10-04 2021-03-03 Gerard Dirk Smits System and method for 3-d projection and enhancements for interactivity
CN104111036A (en) * 2013-04-18 2014-10-22 中国科学院沈阳自动化研究所 Mirror object measuring device and method based on binocular vision
CN103868460B (en) * 2014-03-13 2016-10-05 桂林电子科技大学 Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN104905765B (en) * 2015-06-08 2017-01-18 四川大学华西医院 Field programmable gate array (FPGA) implement method based on camshift (CamShift) algorithm in eye movement tracking
CN104905764B (en) * 2015-06-08 2017-09-12 四川大学华西医院 A kind of high speed sight tracing based on FPGA
CN107123156A (en) * 2017-03-10 2017-09-01 西北工业大学 A kind of active light source projection three-dimensional reconstructing method being combined with binocular stereo vision
CN107948520A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image processing method and device
CN108470373B (en) * 2018-02-14 2019-06-04 天目爱视(北京)科技有限公司 It is a kind of based on infrared 3D 4 D data acquisition method and device
CN108564041B (en) * 2018-04-17 2020-07-24 云从科技集团股份有限公司 Face detection and restoration method based on RGBD camera
CN109978953A (en) * 2019-01-22 2019-07-05 四川大学 Method and system for target three-dimensional localization
CN110021035B (en) * 2019-04-12 2020-12-11 哈尔滨工业大学 Marker of Kinect depth camera and virtual marker tracking method based on marker
CN110390719B (en) * 2019-05-07 2023-02-24 香港光云科技有限公司 Reconstruction equipment based on flight time point cloud
CN110097024B (en) * 2019-05-13 2020-12-25 河北工业大学 Human body posture visual recognition method of transfer, transportation and nursing robot
CN110349213B (en) * 2019-06-28 2023-12-12 Oppo广东移动通信有限公司 Pose determining method and device based on depth information, medium and electronic equipment
CN110349251B (en) * 2019-06-28 2020-06-16 深圳数位传媒科技有限公司 Three-dimensional reconstruction method and device based on binocular camera
CN112465905A (en) * 2019-09-06 2021-03-09 四川大学华西医院 Characteristic brain region positioning method of magnetic resonance imaging data based on deep learning
CN111028295A (en) * 2019-10-23 2020-04-17 武汉纺织大学 3D imaging method based on coded structured light and dual purposes
CN111121722A (en) * 2019-12-13 2020-05-08 南京理工大学 Binocular three-dimensional imaging method combining laser dot matrix and polarization vision
CN111336947A (en) * 2020-03-02 2020-06-26 南昌航空大学 Mirror surface object line laser scanning method based on binocular point cloud fusion
CN111754573A (en) * 2020-05-19 2020-10-09 新拓三维技术(深圳)有限公司 Scanning method and system
CN111750806B (en) * 2020-07-20 2021-10-08 西安交通大学 Multi-view three-dimensional measurement system and method
CN111951376B (en) * 2020-07-28 2023-04-07 中国科学院深圳先进技术研究院 Three-dimensional object reconstruction method fusing structural light and photometry and terminal equipment
CN111950426A (en) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Target detection method and device and delivery vehicle
CN112053432B (en) * 2020-09-15 2024-03-26 成都贝施美医疗科技股份有限公司 Binocular vision three-dimensional reconstruction method based on structured light and polarization
CN112254670B (en) * 2020-10-15 2022-08-12 天目爱视(北京)科技有限公司 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN112308014A (en) * 2020-11-18 2021-02-02 成都集思鸣智科技有限公司 High-speed accurate searching and positioning method for reflective points of pupils and cornea of eyes
CN112595262B (en) * 2020-12-08 2022-12-16 广东省科学院智能制造研究所 Binocular structured light-based high-light-reflection surface workpiece depth image acquisition method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110297491A (en) * 2019-07-02 2019-10-01 湖南海森格诺信息技术有限公司 Semantic navigation method and its system based on multiple structured light binocular IR cameras
CN110533708A (en) * 2019-08-28 2019-12-03 维沃移动通信有限公司 A kind of electronic equipment and depth information acquisition method
CN111012370A (en) * 2019-12-25 2020-04-17 四川大学华西医院 AI-based X-ray imaging analysis method and device and readable storage medium
CN111657947A (en) * 2020-05-21 2020-09-15 四川大学华西医院 Positioning method of nerve regulation target area

Also Published As

Publication number Publication date
CN113052898A (en) 2021-06-29
CN113052898B (en) 2022-07-12
CN114004880A (en) 2022-02-01

Similar Documents

Publication Publication Date Title
CN106595528B (en) A kind of micro- binocular stereo vision measurement method of telecentricity based on digital speckle
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN110044300A (en) Amphibious 3D vision detection device and detection method based on laser
JP3624353B2 (en) Three-dimensional shape measuring method and apparatus
CA2875820C (en) 3-d scanning and positioning system
CN105004324B (en) A kind of monocular vision sensor with range of triangle function
CN108388341B (en) Man-machine interaction system and device based on infrared camera-visible light projector
CN110288656A (en) A kind of object localization method based on monocular cam
CN114998499A (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN107481288A (en) The inside and outside ginseng of binocular camera determines method and apparatus
CN111127540B (en) Automatic distance measurement method and system for three-dimensional virtual space
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
CN109242898A (en) A kind of three-dimensional modeling method and system based on image sequence
CN114004880B (en) Point cloud and strong reflection target real-time positioning method of binocular camera
CN116188558A (en) Stereo photogrammetry method based on binocular vision
CN110909571B (en) High-precision face recognition space positioning method
Yamauchi et al. Calibration of a structured light system by observing planar object from unknown viewpoints
CN206300653U (en) A kind of space positioning apparatus in virtual reality system
CN111862170A (en) Optical motion capture system and method
CN108090930A (en) Barrier vision detection system and method based on binocular solid camera
CN107063131B (en) A kind of time series correlation non-valid measurement point minimizing technology and system
WO2022111104A1 (en) Smart visual apparatus for 3d information acquisition from multiple roll angles
CN212256370U (en) Optical motion capture system
CN114155349A (en) Three-dimensional mapping method, three-dimensional mapping device and robot
TW202203645A (en) Method, processing device, and display system for information display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant