CN103971405A - Method for three-dimensional reconstruction of laser speckle structured light and depth information
- Publication number
- CN103971405A CN103971405A CN201410190263.3A CN201410190263A CN103971405A CN 103971405 A CN103971405 A CN 103971405A CN 201410190263 A CN201410190263 A CN 201410190263A CN 103971405 A CN103971405 A CN 103971405A
- Authority
- CN
- China
- Prior art keywords
- depth information
- structured light
- speckle
- laser speckle
- laser
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention provides a method for three-dimensional reconstruction from laser speckle structured light and depth information. Three-dimensional reconstruction, an important subject of machine vision research, refers to recovering the three-dimensional geometry of an object from its images. Reconstruction is generally performed either by triangulation using the binocular-parallax principle of a stereo camera pair, or by projecting structured light to obtain a spatial code and then recovering depth by triangulation. The present method obtains depth information from laser speckle structured light; a similar invention, the Kinect of the Microsoft Corporation, also obtains object depth this way (matching different depths through a cross-correlation function of the laser speckles). The difference lies in the algorithm that recovers depth from the speckles: here, multiple support vector machines sort the codes of each pixel block in parallel over a refined window, yielding the depth of each pixel window; the depth information is then used to invert the camera model and obtain the object's coordinates in the world coordinate system, so that depth information of higher accuracy can be obtained.
Description
Technical field
The invention belongs to the field of machine vision research and is a three-dimensional reconstruction method based on laser speckle structured light.
Background technology
Three-dimensional reconstruction is one of the important topics of machine vision research; it refers to recovering the three-dimensional geometry of an object from its images. Reconstruction is generally performed either by triangulation using the binocular-parallax principle of a stereo camera pair, or by projecting structured light to obtain a spatial code and then recovering depth by triangulation.
The invention obtains depth information with laser speckle structured light. A similar invention, the Kinect of Microsoft, also obtains object depth this way (matching different depths through the cross-correlation function of the laser speckles); the difference is the algorithm used to recover depth from the speckles. The present invention proposes to sort the codes of each pixel block in parallel with multiple support vector machines over a refined window to obtain the depth of each pixel window, and then to invert the camera model with this depth information to obtain the object's coordinates in the world coordinate system.
Three-dimensional reconstruction generally solves for an object point's world coordinates from a calibrated camera with known intrinsic and extrinsic calibration parameters. Because of the restrictions of the camera model and the number of equations, the solution can only be a ray equation: the three world coordinates of the object point cannot be obtained directly from its equations. The invention proposes a new way to obtain the depth coordinate of an object point, and then directly solves for the three coordinates of the object point in the world coordinate system from the equations of the linear camera model with radial distortion.
When laser light passes through a rough transparent surface (such as frosted glass) and is projected onto an object, light and dark spots with a random distribution, i.e. laser speckle, can be observed on the object surface. Speckle arises because, when the laser irradiates a rough surface, every point on the surface scatters light; a point in space receiving these coherent scattered waves forms the laser speckle. Speckle fields are divided into two kinds by their optical path: one propagates in free space (also called objective speckle), the other is formed by lens imaging (also called subjective speckle). The present invention uses the latter.
The regular speckle formed at each point in space contains the depth information of that point. The speckle is captured with a thermal (infrared) camera; through feature extraction and classifier (SVM) training, the depth information contained in the speckle formed at every point in space can finally be obtained.
With the obtained depth information, the three-dimensional coordinates of each point in space are then solved through the camera-model formula given below.
Summary of the invention
The camera calibration process usually adopts the classical pinhole imaging model, generally described by the following formula:

λ·p = K·[R T]·P_w,  with K = [[f_u, s, u_0], [0, f_v, v_0], [0, 0, 1]]
where the homogeneous coordinates of any point P in space in the world coordinate system are P_w = (x_w, y_w, z_w, 1)^T, and its homogeneous coordinates in the image coordinate system are p = (u, v, 1)^T. λ is an arbitrary scale factor; K is the camera intrinsic-parameter matrix, where s is the image distortion (skew) factor, f_u and f_v are the scale factors from image-point coordinates in the u and v directions to image pixel coordinates, i.e. the effective focal lengths, and (u_0, v_0) is the image coordinate of the intersection of the principal optical axis with the image plane. R is a 3×3 orthonormal rotation matrix; T is a translation vector; together, (R, T) give the pose of the camera coordinate system with respect to the world coordinate system.
From the above formula, with known intrinsic and extrinsic parameters, two equations can be obtained whose unknowns are the object-point coordinates in the world coordinate system. To solve them, one coordinate must first be determined; the present invention solves for the depth information, thereby reducing the number of unknowns in the equations so that the remaining two coordinates can be solved.
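As an illustration of this back-projection step (a minimal numpy sketch, not taken from the patent; the calibration matrices below are made-up example values), once the camera-frame depth of a pixel is known, the world coordinates follow from inverting the pinhole model:

```python
import numpy as np

def backproject(u, v, z_c, K, R, T):
    """Recover the world coordinates of pixel (u, v) whose camera-frame
    depth z_c is known, by inverting lambda*p = K*(R*P_w + T)."""
    p = np.array([u, v, 1.0])
    # Camera-frame point: the scale factor lambda equals the depth z_c.
    P_cam = z_c * np.linalg.inv(K) @ p
    # Undo the extrinsic transform: P_w = R^T (P_cam - T).
    return R.T @ (P_cam - T)

# Example with assumed calibration values (zero skew, square pixels).
K = np.array([[800.0,   0.0, 540.0],
              [  0.0, 800.0, 384.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
T = np.array([0.0, 0.0, 0.5])

P_w = np.array([0.1, -0.2, 2.0])           # ground-truth world point
P_cam = R @ P_w + T
u, v, _ = (K @ P_cam) / P_cam[2]           # forward projection
recovered = backproject(u, v, P_cam[2], K, R, T)
print(np.allclose(recovered, P_w))          # expect True
```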
Laser speckle is used to encode the space. The present invention uses subjective speckle, i.e. speckle formed by lens imaging, to encode a space of a certain angular range; the coded image is captured by an infrared camera.
Features extracted from coded images at different distances are used as feature-vector training sets for the SVMs. Note that each image forms the training set of the SVM for its own distance, so a group of SVMs is obtained. The precision of the finally solved depth is also determined here: within the encoded range, a smaller distance interval between images is naturally better, but the number of SVMs grows accordingly, which costs more time in the subsequent depth computation and has no small impact on real-time performance.
The depth distance is computed by using features extracted from a speckle test image as the SVM test set; the multiclass SVMs run in parallel to encode it, finally yielding a group of binary codes, which are multiplied by a coefficient to obtain the concrete depth information. When obtaining the depth of a target region, distances differ between regions and even between pixels; to obtain ideal depth information, the test set must be windowed before SVM matching, and for full accuracy each speckle or even each pixel can be taken as a window, with features extracted within the window as the SVM input.
After the depth information is acquired, it is substituted into the formula above to solve for the remaining coordinates, giving the object point's three-dimensional coordinates in the world coordinate system. By suitably choosing the world coordinate system, the rotation and translation matrices can be computed easily, so three-dimensional reconstruction of the object works well even while the camera is moving.
Brief description of the drawings
Fig. 1 Main flow chart of the present invention.
Fig. 2 Schematic diagram of the infrared SVM calibration process of the present invention.
Fig. 3 Schematic diagram of the test set being sent into the SVMs for classification. Fig. 4 Schematic diagram of feature extraction and grouping of the present invention.
Fig. 5 Training flow chart of one SVM of the present invention.
Fig. 6 Batch SVM training flow chart of the present invention.
Fig. 7 SVM classification flow chart of the present invention.
Embodiment
Basic hardware used in the present invention:
1. an infrared camera;
2. a laser transmitter;
3. three diffractive optical components.
Embodiments of the present invention are elaborated below in conjunction with the accompanying drawings:
Fig. 1 shows the three-dimensional reconstruction flow of the present invention; the embodiment is described as follows:
1. The algorithm flow of the present invention is as follows:
(1) Calibrate first. The initial calibration determines the accuracy throughout subsequent use. Because the field involved in the present invention is three-dimensional reconstruction, the required accuracy range is wide, and calibration methods of different accuracy can be chosen for different reconstruction scenes.
(2) Choose a suitable world coordinate system. During reconstruction of a three-dimensional scene, the camera's pose sometimes needs adjustment or movement. If the camera does not move, the rotation and translation matrices do not change and later computation of three-dimensional coordinates is unaffected. If it does move, a suitable choice of world coordinate system makes it possible to solve for the rotation and translation matrices by analyzing the camera's motion trajectory.
(3) Laser speckle structured-light projection. The laser produced by the laser generator is scattered by optics (such as diffractive optical components) to obtain the required speckle pattern. Laser-safety regulations limit the zero-order energy to at most 0.4 mW, so special optics are needed to reduce the scattered zero-order energy below this range; for example, the optical design in the patent filed in China by Prime Sense (application number CN200880119911) reduces the zero order.
(4) As in Fig. 2, the scattered laser speckle has a certain calibration range in space, obtainable from the scattering angle of the diffractive optical components. Within this range, a specific calibration target is used at regular intervals to mark the speckle. If, say, the spread is ±30 degrees (30 degrees up, down, left, and right) and the depth range is 0.5-3.5 m, a calibration target larger than the scattering range can be used to capture one speckle image every 1 mm; this spacing also determines the precision of the depth distances solved for later.
(5) In step 4, one calibration speckle image is captured every 1 mm within the 3 m range, giving 300 images. Features are extracted from every image: the PCA method extracts features from the windowed speckle image, and, to compensate for PCA occasionally failing to capture some features, features such as speckle brightness and diameter are also extracted, making 300 training sets. See Fig. 4.
(6) In step 5, the flow of the PCA algorithm is: for each of the 300 images, compute its normalized matrix X_i*; compute the covariance matrix C of X_i*; perform an eigenvalue decomposition of C and form the projection matrix from the eigenvectors corresponding to the p largest eigenvalues, the selection criterion being that the chosen eigenvalues together exceed 90% of the sum of all eigenvalues; finally project the original sample matrix to obtain its principal components S*. The concrete formulas used are as follows:

(C - λ_i·I)·p = 0
S* = S·p
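A minimal numpy sketch of this PCA step (illustrative only; the 90% energy threshold follows the text above, while the sample shape is an assumption):

```python
import numpy as np

def pca_features(X, energy=0.90):
    """Project samples X (n_samples x n_dims) onto the eigenvectors of the
    covariance matrix whose eigenvalues cover `energy` of the total sum."""
    Xc = X - X.mean(axis=0)                 # normalized matrix (zero-mean)
    C = np.cov(Xc, rowvar=False)            # covariance matrix C
    w, V = np.linalg.eigh(C)                # eigendecomposition (ascending)
    order = np.argsort(w)[::-1]             # sort eigenvalues descending
    w, V = w[order], V[:, order]
    # Smallest k whose cumulative eigenvalue sum exceeds the energy threshold.
    k = int(np.searchsorted(np.cumsum(w) / w.sum(), energy)) + 1
    P = V[:, :k]                            # projection matrix p
    return Xc @ P                           # principal components S* = S.p

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 36))              # e.g. 300 windows of 6x6 pixels
S = pca_features(X)
print(S.shape)
```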
(7) Because of PCA's own defects, under certain conditions it may fail to extract some features effectively, so other features are added to the principal-component feature set as attribute features to form a composite feature: speckle size and brightness, together with the principal-component features, jointly form a new feature used as the training set. The mixing rule is that the principal component extracted for the pixel block where a speckle lies is kept consistent with that speckle's features; for example, if a speckle's pixel block forms a 115×109 matrix and the principal-component feature corresponding to that matrix is selected, then the speckle attribute features mixed into that matrix are kept consistent.
(8) The training sets are sent into the SVMs for training. Here the SVMs form a group whose size equals the number of training sets; each SVM in the group is trained on one training set, and the SVMs can be organized in parallel or in series. See Fig. 6.
(9) In step 8, each SVM is trained identically. As in Fig. 5, the concrete practice is: first choose the SVM kernel function (here the RBF kernel) and set an initial value for its parameter σ; compute the feature space using the feature set extracted in step 7 as the RBF parameters; solve the Lagrange dual for the factors α, and compute the classification parameters w, b from α; evaluate the resulting classification model's average accuracy by cross-validation. If the average accuracy exceeds 90%, training ends; otherwise reset the kernel parameter σ and train again, looping until the kernel parameter σ with the highest average accuracy is found. Part of the formulas used are as follows:
E_i = r_i - y_i,  η = K(x_1, x_1) + K(x_2, x_2) - 2K(x_1, x_2)

where r is the geometric distance and y is the classification label.
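The cross-validated search over the RBF parameter σ can be sketched as follows (here via scikit-learn's SVC, whose gamma = 1/(2σ²) parameterization is an assumed mapping; the two-class synthetic data is made up for illustration and stands in for "speckle at one distance" versus "speckle elsewhere"):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated synthetic classes of 5-D feature vectors.
X = np.vstack([rng.normal(0.0, 0.3, (40, 5)),
               rng.normal(3.0, 0.3, (40, 5))])
y = np.array([0] * 40 + [1] * 40)

best_sigma, best_acc = None, 0.0
for sigma in (0.1, 0.5, 1.0, 2.0, 5.0):
    clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma**2))
    acc = cross_val_score(clf, X, y, cv=5).mean()  # cross-check accuracy
    if acc > best_acc:
        best_sigma, best_acc = sigma, acc

# Training "ends" once the average accuracy exceeds the 90% criterion.
print(best_sigma, best_acc)
```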
(10) Through the projection of step 3, a speckle image of a particular space can likewise be obtained; the classification process starts from here. See Fig. 7.
(11) Suppose the speckle image is 1080×768 pixels. The obtained speckle image must be windowed. To obtain depth as accurate as possible for every point, the window should be shrunk as far as timing allows; for example, with a chosen window of 6×6, the speckle image is divided into 180×128 blocks.
(12) Features are extracted from each window block, consistent with the features extracted in step 5. Supposing the extracted feature is a 5-dimensional row vector, then after this series of processing the 1080×768 image has been divided into a test set of 180×128×5 parts.
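The windowing of steps (11)-(12) can be sketched as follows (a numpy reshape; the 5-D per-window feature here is a stand-in statistical summary, not the patent's actual feature set):

```python
import numpy as np

def to_windows(img, win=6):
    """Split an (H, W) image into an (H//win, W//win, win, win) block grid."""
    H, W = img.shape
    return img.reshape(H // win, win, W // win, win).swapaxes(1, 2)

rng = np.random.default_rng(0)
img = rng.random((768, 1080))              # 1080x768 speckle image (rows x cols)
blocks = to_windows(img)                   # -> (128, 180, 6, 6)

# Illustrative 5-D feature per window: mean, std, min, max, median.
feats = np.stack([blocks.mean(axis=(2, 3)),
                  blocks.std(axis=(2, 3)),
                  blocks.min(axis=(2, 3)),
                  blocks.max(axis=(2, 3)),
                  np.median(blocks, axis=(2, 3))], axis=-1)
print(blocks.shape, feats.shape)           # (128, 180, 6, 6) (128, 180, 5)
```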
(13) The test set of step 12 is sent into the SVMs for classification; each test sample yields a group of 300 binary codes. See Fig. 3.
(14) The binary code obtained in step 13 is converted through the formula D = L_s + B·dist, where D is the depth distance, L_s is the initial depth distance from step 4, B is the SVM binary code converted to decimal, and dist is the spacing distance. The formula gives the depth distance of every window; the depth of each pixel between windows is then obtained by image interpolation.
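A minimal sketch of this decoding (the values of L_s and dist follow the 0.5 m starting distance and 1 mm spacing quoted earlier; the most-significant-bit-first code convention is an assumption):

```python
def depth_from_code(code_bits, L_s=0.5, dist=0.001):
    """Depth D = L_s + B*dist, with B the binary code of the SVM group
    read as an integer (most significant bit first)."""
    B = int("".join(str(b) for b in code_bits), 2)
    return L_s + B * dist

# A window whose code decodes to B = 1250 lies at 0.5 + 1250*0.001 = 1.75 m.
print(depth_from_code([1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0]))
```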
(15) The three-dimensional coordinates of every pixel are obtained through the camera-model formula above, and the scene is reconstructed.
Claims (10)
1. A three-dimensional reconstruction method that obtains depth information from laser speckle structured light by support vector machines, characterized by comprising the following key steps:
Step 1: camera calibration;
Step 2: choosing a suitable coordinate system;
Step 3: laser speckle structured-light projection;
Step 4: acquiring the calibration speckle images and the speckle test image;
Step 5: feature extraction;
Step 6: batch SVM training;
Step 7: windowing the test image obtained in step 4;
Step 8: feature extraction;
Step 9: SVM classification;
Step 10: depth computation according to the classification results.
2. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 1, the cameras calibrated are an infrared camera and an ordinary CCD camera, and the calibration method can use, but is not limited to, the following methods: the two-step method, the three-step method, least squares, etc.
3. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 2, when choosing the world coordinate system, the camera coordinate system of the infrared camera is generally chosen, so that once the camera undergoes a controlled movement the rotation and translation matrices can be computed quickly.
4. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 3, the laser speckle is projected through a specially designed group of diffractive optical elements; the laser emitted from the laser generator passes through at least three diffractive optical elements (DOEs). The first element scatters the laser to a certain angle while diffracting a speckle pattern of a given shape. The second DOE is positioned so that it essentially covers the pattern diffracted by the first element; its diffraction performs an optical convolution on the speckle to form a specific pattern, and these two DOEs are required to make the spot size captured after scattering back from the target region at least, or essentially always, larger than the size of one pixel. The third element has the same spatial-period dimensions as the second DOE, but its transfer function is designed so that the zero-order energy passing through it is less than 0.1 mW.
5. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 4, the calibration speckle images are sampled at intervals of dist, and every image must cover the whole calibration plane so that the sampled speckle fully characterizes the speckle features of its distance; and in step 5, for the collected pattern, the features extracted include, but are not limited to, the following: PCA principal components, spot size, brightness, gray level, and the autocorrelation coefficient and Hurst index between them.
6. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 6, the features of each image at distance dist are sent as a training set into the following SVM group: the parameters of every SVM are kept consistent, the RBF kernel function is used, and the kernel parameter σ and regularization parameter c are optimized by SVM cross-validation, the optimality criterion being a training accuracy above 90%.
7. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 7, the windowing method can be chosen arbitrarily, and adaptive windowing can also be realized by a window function; whichever windowing method is used, it is a trade-off between computation time and precision, and the post-processing speed must reach at least 60 ms/frame.
8. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 8, the feature extraction involved is consistent with that described in claim 6 and can use, but is not limited to, the following features: spot size, brightness, gray level, and the autocorrelation coefficient and Hurst index between them.
9. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 9, the test set is sent into the SVM group, and the classification time for each window sample in any one SVM does not exceed 0.015 ms.
10. The three-dimensional reconstruction method according to claim 1, obtaining depth information from laser speckle structured light by support vector machines, characterized in that: in step 10, from the binary data obtained from the SVM group, the formula D = L_s + B·dist computes the depth information represented by each window; interpolation is then carried out over the window pixels according to the chosen window to obtain the depth information of each pixel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410190263.3A CN103971405A (en) | 2014-05-06 | 2014-05-06 | Method for three-dimensional reconstruction of laser speckle structured light and depth information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103971405A true CN103971405A (en) | 2014-08-06 |
Family
ID=51240850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410190263.3A Pending CN103971405A (en) | 2014-05-06 | 2014-05-06 | Method for three-dimensional reconstruction of laser speckle structured light and depth information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103971405A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101984767A (en) * | 2008-01-21 | 2011-03-09 | Prime Sense Ltd. | Optical designs for zero order reduction |
US20120056982A1 (en) * | 2010-09-08 | 2012-03-08 | Microsoft Corporation | Depth camera based on structured light and stereo vision |
CN103279987A (en) * | 2013-06-18 | 2013-09-04 | 厦门理工学院 | Object fast three-dimensional modeling method based on Kinect |
Non-Patent Citations (2)
Title |
---|
Niu Lianding et al.: "Image depth extraction method based on support vector machines", Journal of Harbin University of Commerce (Natural Sciences Edition) * |
Fan Zhe: "Three-dimensional reconstruction based on Kinect", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104360633A (en) * | 2014-10-10 | 2015-02-18 | 南开大学 | Human-computer interaction system for service robot |
CN106576159A (en) * | 2015-06-23 | 2017-04-19 | 华为技术有限公司 | Photographing device and method for acquiring depth information |
US10560686B2 (en) | 2015-06-23 | 2020-02-11 | Huawei Technologies Co., Ltd. | Photographing device and method for obtaining depth information |
CN105468375B (en) * | 2015-11-30 | 2019-03-05 | 扬州大学 | A kind of construction method of the corresponding points searching structure towards area-structure light point cloud data |
CN105468375A (en) * | 2015-11-30 | 2016-04-06 | 扬州大学 | Surface structure light point cloud data oriented corresponding point search structure construction method |
CN105675549A (en) * | 2016-01-11 | 2016-06-15 | 武汉大学 | Portable crop parameter measurement and growth vigor intelligent analysis device and method |
CN105675549B (en) * | 2016-01-11 | 2019-03-19 | 武汉大学 | A kind of Portable rural crop parameter measurement and growing way intellectual analysis device and method |
US11126016B2 (en) | 2016-04-04 | 2021-09-21 | Carl Zeiss Vision International Gmbh | Method and device for determining parameters for spectacle fitting |
US11867978B2 (en) | 2016-04-04 | 2024-01-09 | Carl Zeiss Vision International Gmbh | Method and device for determining parameters for spectacle fitting |
CN106352809A (en) * | 2016-08-24 | 2017-01-25 | 中国科学院上海光学精密机械研究所 | Method for detecting fogging depth of neodymium-doped laser phosphate glass surface and method for fogging removal |
CN106352809B (en) * | 2016-08-24 | 2018-11-20 | 中国科学院上海光学精密机械研究所 | The detection method and hair mist minimizing technology of phosphate laser neodymium glass surface steaminess degree |
CN106643492A (en) * | 2016-11-18 | 2017-05-10 | 中国民航大学 | Aeroengine damaged blade three-dimensional digital speckle moulding method |
CN106643492B (en) * | 2016-11-18 | 2018-11-02 | 中国民航大学 | A kind of aero-engine damaged blade 3-dimensional digital speckle formative method |
CN107392874A (en) * | 2017-07-31 | 2017-11-24 | 广东欧珀移动通信有限公司 | U.S. face processing method, device and mobile device |
CN107491302A (en) * | 2017-07-31 | 2017-12-19 | 广东欧珀移动通信有限公司 | terminal control method and device |
CN107563304B (en) * | 2017-08-09 | 2020-10-16 | Oppo广东移动通信有限公司 | Terminal equipment unlocking method and device and terminal equipment |
CN107563304A (en) * | 2017-08-09 | 2018-01-09 | 广东欧珀移动通信有限公司 | Terminal device unlocking method and apparatus, and terminal device |
CN107833254A (en) * | 2017-10-11 | 2018-03-23 | 中国长光卫星技术有限公司 | Camera calibration device based on a diffractive optical element |
CN108050955A (en) * | 2017-12-14 | 2018-05-18 | 合肥工业大学 | High-temperature air disturbance filtering method based on structured light projection and digital image correlation |
CN108050955B (en) * | 2017-12-14 | 2019-10-18 | 合肥工业大学 | High-temperature air disturbance filtering method based on structured light projection and digital image correlation |
CN109978809A (en) * | 2017-12-26 | 2019-07-05 | 同方威视技术股份有限公司 | Image processing method, device and computer readable storage medium |
CN109100740A (en) * | 2018-04-24 | 2018-12-28 | 北京航空航天大学 | Three-dimensional image imaging device, imaging method and system |
CN108696682A (en) * | 2018-04-28 | 2018-10-23 | Oppo广东移动通信有限公司 | Data processing method, device, electronic equipment and computer readable storage medium |
US11050918B2 (en) | 2018-04-28 | 2021-06-29 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for performing image processing, and computer readable storage medium |
CN108645353A (en) * | 2018-05-14 | 2018-10-12 | 四川川大智胜软件股份有限公司 | Three-dimensional data acquisition system and method based on a multi-frame random binary coded light field |
CN108921027A (en) * | 2018-06-01 | 2018-11-30 | 杭州荣跃科技有限公司 | Running obstacle recognition method based on laser speckle three-dimensional reconstruction |
CN109087350B (en) * | 2018-08-07 | 2020-06-26 | 西安电子科技大学 | Fluid light intensity three-dimensional reconstruction method based on projective geometry |
CN109087350A (en) * | 2018-08-07 | 2018-12-25 | 西安电子科技大学 | Fluid light intensity three-dimensional reconstruction method based on projective geometry |
CN109102559A (en) * | 2018-08-16 | 2018-12-28 | Oppo广东移动通信有限公司 | Three-dimensional model processing method and apparatus |
CN109405765A (en) * | 2018-10-23 | 2019-03-01 | 北京的卢深视科技有限公司 | High-accuracy depth calculation method and system based on patterned light |
CN109167904A (en) * | 2018-10-31 | 2019-01-08 | Oppo广东移动通信有限公司 | Image acquiring method, image acquiring device, structure optical assembly and electronic device |
CN109167904B (en) * | 2018-10-31 | 2020-04-28 | Oppo广东移动通信有限公司 | Image acquisition method, image acquisition device, structured light assembly and electronic device |
CN109581327A (en) * | 2018-11-20 | 2019-04-05 | 天津大学 | Totally-enclosed laser emission base station and implementation method thereof |
CN109581327B (en) * | 2018-11-20 | 2023-07-18 | 天津大学 | Totally-enclosed laser emission base station and implementation method thereof |
CN109798838B (en) * | 2018-12-19 | 2020-10-27 | 西安交通大学 | ToF depth sensor based on laser speckle projection and ranging method thereof |
CN109798838A (en) * | 2018-12-19 | 2019-05-24 | 西安交通大学 | ToF depth sensor based on laser speckle projection and ranging method thereof |
CN109887022A (en) * | 2019-02-25 | 2019-06-14 | 北京超维度计算科技有限公司 | Feature point matching method for a binocular depth camera |
CN110009673A (en) * | 2019-04-01 | 2019-07-12 | 四川深瑞视科技有限公司 | Depth information detection method, device and electronic equipment |
CN110012206A (en) * | 2019-05-24 | 2019-07-12 | Oppo广东移动通信有限公司 | Image acquisition method, image acquisition device, electronic device and readable storage medium |
CN110337674A (en) * | 2019-05-28 | 2019-10-15 | 深圳市汇顶科技股份有限公司 | Three-dimensional reconstruction method, device, equipment and storage medium |
WO2020237492A1 (en) * | 2019-05-28 | 2020-12-03 | 深圳市汇顶科技股份有限公司 | Three-dimensional reconstruction method, device, apparatus, and storage medium |
CN110264573A (en) * | 2019-05-31 | 2019-09-20 | 中国科学院深圳先进技术研究院 | Three-dimensional reconstruction method and device based on structured light, terminal device and storage medium |
CN110264573B (en) * | 2019-05-31 | 2022-02-18 | 中国科学院深圳先进技术研究院 | Three-dimensional reconstruction method and device based on structured light, terminal equipment and storage medium |
CN110415226A (en) * | 2019-07-23 | 2019-11-05 | Oppo广东移动通信有限公司 | Stray light measurement method, device, electronic device and storage medium |
CN110969656B (en) * | 2019-12-10 | 2023-05-12 | 长春精仪光电技术有限公司 | Airborne equipment-based laser beam spot size detection method |
CN110969656A (en) * | 2019-12-10 | 2020-04-07 | 长春精仪光电技术有限公司 | Airborne equipment-based laser beam spot size detection method |
CN112669362A (en) * | 2021-01-12 | 2021-04-16 | 四川深瑞视科技有限公司 | Depth information acquisition method, device and system based on speckles |
CN112669362B (en) * | 2021-01-12 | 2024-03-29 | 四川深瑞视科技有限公司 | Depth information acquisition method, device and system based on speckles |
CN113379816A (en) * | 2021-06-29 | 2021-09-10 | 北京的卢深视科技有限公司 | Structure change detection method, electronic device, and storage medium |
CN113379816B (en) * | 2021-06-29 | 2022-03-25 | 北京的卢深视科技有限公司 | Structure change detection method, electronic device, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103971405A (en) | Method for three-dimensional reconstruction of laser speckle structured light and depth information | |
US8836756B2 (en) | Apparatus and method for acquiring 3D depth information | |
Maeno et al. | Light field distortion feature for transparent object recognition | |
US10674139B2 (en) | Methods and systems for human action recognition using 3D integral imaging | |
CN108256411A (en) | Method and system for vehicle localization from camera images | |
CN102982334B (en) | Sparse disparity acquisition method based on target edge features and gray-level similarity | |
WO2019176235A1 (en) | Image generation method, image generation device, and image generation system | |
CN111191667A (en) | Crowd counting method for generating confrontation network based on multiple scales | |
CN103593641B (en) | Object detecting method and device based on stereo camera | |
CN103198475B (en) | All-in-focus synthetic aperture see-through imaging method based on multilevel iterative visualization optimization | |
CN111028273B (en) | Light field depth estimation method based on multi-stream convolution neural network and implementation system thereof | |
US9323989B2 (en) | Tracking device | |
CN111027581A (en) | 3D target detection method and system based on learnable codes | |
CN114241422A (en) | Student classroom behavior detection method based on ESRGAN and improved YOLOv5s | |
Kirkland et al. | Imaging from temporal data via spiking convolutional neural networks | |
CN109523590A (en) | Sample-based visual comfort assessment method for 3D image depth information | |
CN116883588A (en) | Method and system for fast three-dimensional point cloud reconstruction of large scenes | |
CN103605968A (en) | Pupil locating method based on mixed projection | |
Ruvalcaba-Cardenas et al. | Object classification using deep learning on extremely low-resolution time-of-flight data | |
Schreiberhuber et al. | GigaDepth: Learning depth from structured light with branching neural networks | |
CN115953447A (en) | Point cloud consistency constraint monocular depth estimation method for 3D target detection | |
KR101391667B1 (en) | A model learning and recognition method for object category recognition robust to scale changes | |
CN115063428A (en) | Spatial dim small target detection method based on deep reinforcement learning | |
Shimizu et al. | Sensor-independent pedestrian detection for personal mobility vehicles in walking space using dataset generated by simulation | |
KR20160039447A (en) | Spatial analysis system using stereo camera. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20140806 |