CN110335311A - Dynamic vision displacement measurement method based on autocoder - Google Patents

Dynamic vision displacement measurement method based on autocoder

Info

Publication number
CN110335311A
Authority
CN
China
Prior art keywords
characteristic point
obtains
point
binocular
collection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910620869.9A
Other languages
Chinese (zh)
Inventor
陈志聪
苏忆艳
吴丽君
陈疏影
洪志宸
林培杰
程树英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201910620869.9A priority Critical patent/CN110335311A/en
Publication of CN110335311A publication Critical patent/CN110335311A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/002Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01B11/005Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates coordinate measuring machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The present invention relates to a dynamic vision displacement measurement method based on an autocoder, comprising the following steps. Step S1: construct the feature points to be affixed to the cylinder surface. Step S2: use a binocular acquisition system to acquire dynamic video data of the cylinder with the affixed feature points, obtaining left-view and right-view data sets. Step S3: extract the target feature points from the obtained left-view and right-view data sets, obtaining the two-dimensional image coordinates of the feature corners. Step S4: calibrate the binocular acquisition system, obtaining the intrinsic and extrinsic parameter matrices of the binocular cameras. Step S5: reconstruct the set of three-dimensional world coordinates of the feature points from the intrinsic and extrinsic parameter matrices and the two-dimensional image coordinates of the feature corners. Step S6: fit a cylinder to the set of three-dimensional world coordinates of the feature points according to the cylindrical structure, and compute the Euler angles of the rotation. By collecting the feature point information of the target object, the present invention can compute the displacement and rotation angle of the target in a dynamic video without direct contact with the target object.

Description

Dynamic vision displacement measurement method based on autocoder
Technical field
The present invention relates to a dynamic vision displacement measurement method based on an autocoder.
Background art
Traditional structural displacement measurement methods can be divided into contact-type and non-contact-type sensors. Contact-type sensors include linear variable differential transformers, accelerometers and angular-rate sensors; non-contact sensors include GPS, laser sensors and radar measurement systems. Because traditional approaches are constrained by the measurement technique, by cost effectiveness, and by the tendency of cables to alter the mechanical properties of the structure, they are often difficult to apply in practice. With the development of machine vision, vision-based measurement has found increasingly wide use in many fields in recent years. The mainstream vision-based displacement measurement pipeline is: obtain two-dimensional image feature information; solve the intrinsic and extrinsic parameters of the left and right cameras by binocular calibration; track the features in the images, so that the corresponding two-dimensional image feature information after displacement is obtained; and reconstruct three dimensions from two using the camera parameters and the feature information, thereby obtaining three-dimensional displacement information. Many details of this pipeline can still be improved, so researchers at home and abroad are devoted to finding efficient, accurate and stable non-contact displacement measurement methods based on binocular vision.
Among these steps, accurate feature point extraction improves measurement precision. Artificial targets are widely used in vision measurement because they provide clearly distinguishable feature points. However, vision-based six-degree-of-freedom displacement measurement methods are still not widespread, and one important reason is the lack of feature information that can characterize the six-degree-of-freedom motion of an object.
Summary of the invention
In view of this, the purpose of the present invention is to provide a dynamic vision displacement measurement method based on an autocoder: given only the required number of coding bits and the number of code groups, the corresponding code groups are generated automatically, and spatial displacement information is then obtained through target segmentation, feature point detection and three-dimensional reconstruction.
To achieve the above object, the present invention adopts the following technical scheme:
A dynamic vision displacement measurement method based on an autocoder, characterized by comprising the following steps:
Step S1: construct the feature points to be affixed to the cylinder surface;
Step S2: use a binocular acquisition system to acquire dynamic video data of the cylinder with the affixed feature points, obtaining left-view and right-view data sets;
Step S3: extract the target feature points from the obtained left-view and right-view data sets, obtaining the two-dimensional image coordinates of the feature corners;
Step S4: calibrate the binocular acquisition system, obtaining the intrinsic and extrinsic parameter matrices of the binocular cameras;
Step S5: reconstruct the set of three-dimensional world coordinates of the feature points from the intrinsic and extrinsic parameter matrices of the binocular cameras and the two-dimensional image coordinates of the feature corners;
Step S6: fit a cylinder to the set of three-dimensional world coordinates of the feature points according to the cylindrical structure, and compute the Euler angles of the rotation.
Further, the step S1 specifically comprises:
Step S11: according to the required number of coding bits and the number of code groups, initialize the first row of the binary coding as an alternating pattern of 0s and 1s, obtaining the first-row binary code;
Step S12: screen by numerical value according to the number of code groups, selecting a decimal number whose corresponding binary code has not yet appeared;
Step S13: judge whether the corner condition is satisfied between two successive codes, confirm the binary code of the second row, and repeat this N times to obtain a non-repeating binary code sequence;
Step S14: generate the corresponding checkerboard image from the non-repeating binary code sequence, where 0 corresponds to a white square and 1 to a black square; affix the image to the cylinder surface so that the coding represents the position on the cylinder surface, obtaining the feature points affixed to the cylinder surface.
Further, the step S3 specifically comprises:
Step S31: split the collected left-view and right-view data sets into frames, and perform KCF target tracking, detection and segmentation on each frame in units of two successive frames;
Step S32: extract feature corners from the segmented target, obtaining the two-dimensional image coordinates of the corresponding feature corners.
Further, the step S4 specifically comprises:
Step S41: acquire left-view and right-view data sets of a calibration board with the binocular acquisition system;
Step S42: perform binocular calibration with the calibration-board left-view and right-view data sets, obtaining the intrinsic and extrinsic parameter matrices of the binocular cameras.
Further, for the feature points, the number of 0s among the four coding bits surrounding each feature point must be odd, or there must be exactly two 0s located on a diagonal.
Compared with the prior art, the invention has the following beneficial effects:
By combining the cylinder-surface feature points with the autocoder, the present invention can compute displacement information such as dynamic displacement and rotation angle from the extracted feature point information of the target object. The code groups generated by the proposed autocoder are produced automatically according to the required number of coding bits and number of code groups, providing as many feature corners as possible; when the generated coding pattern is affixed to the object surface, the method remains applicable even if some of the feature points are occluded.
Brief description of the drawings
Fig. 1 is the overall flow chart of the present invention.
Fig. 2 is the flow chart of the feature point autocoder of the present invention.
Fig. 3 is the numerical screening flow chart of the present invention.
Fig. 4 shows coding configurations that do not satisfy the corner condition in the present invention.
Fig. 5 is the flow chart of the present invention for judging whether the corner condition is satisfied.
Fig. 6 is the coding pattern produced by the feature point autocoding in one embodiment of the present invention.
Fig. 7 shows the binocular acquisition of a target carrying feature points in one embodiment of the present invention.
Fig. 8 is a schematic flow chart of target detection and tracking in one embodiment of the present invention.
Fig. 9 shows the target tracking result of one embodiment of the present invention.
Fig. 10 shows the feature corner detection result of one embodiment of the present invention.
Fig. 11 shows the reconstructed three-dimensional coordinates of the feature corners in one embodiment of the present invention.
Fig. 12 shows the dynamic detection result of one embodiment of the present invention.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings and embodiments.
Referring to Fig. 1, the present invention provides a dynamic vision displacement measurement method based on an autocoder, characterized by comprising the following steps:
Step S1: construct the feature points to be affixed to the cylinder surface;
Step S2: use a binocular camera to acquire dynamic video data of the cylinder with the affixed feature points, obtaining left-view and right-view data sets;
Step S3: extract the target feature points from the obtained left-view and right-view data sets, obtaining the two-dimensional image coordinates of the feature corners;
Step S4: calibrate the binocular camera, obtaining the intrinsic and extrinsic parameter matrices of the binocular camera;
Step S5: reconstruct the set of three-dimensional world coordinates of the feature points from the intrinsic and extrinsic parameter matrices of the binocular camera and the two-dimensional image coordinates of the feature corners. Using the relationships among the four coordinate systems (camera coordinate system, image coordinate system, pixel coordinate system and world coordinate system), the conversion from two-dimensional image coordinates to three-dimensional coordinates is known, and the mapping calculation yields the three-dimensional world coordinate plot shown in Fig. 11.
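As an illustration of this 2D-to-3D mapping, the following is a minimal sketch (not the implementation used in the embodiment) that triangulates matched left/right corner coordinates with OpenCV once the projection matrices are assembled from the calibration of step S4; the function and variable names are illustrative assumptions.

```python
import cv2
import numpy as np

def reconstruct_points(K_left, K_right, R, T, pts_left, pts_right):
    """Triangulate matched 2D feature corners into 3D world coordinates.

    K_left, K_right : 3x3 intrinsic matrices of the left/right cameras
    R, T            : rotation (3x3) and translation (3x1 or 3,) of the
                      right camera relative to the left camera
    pts_left/right  : Nx2 arrays of matched corner pixel coordinates
    """
    # The left camera frame is taken as the world frame.
    P_left = K_left @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K_right @ np.hstack([R, np.asarray(T).reshape(3, 1)])

    # cv2.triangulatePoints expects 2xN arrays and returns 4xN homogeneous points.
    pts4d = cv2.triangulatePoints(P_left, P_right,
                                  np.asarray(pts_left, dtype=np.float64).T,
                                  np.asarray(pts_right, dtype=np.float64).T)
    return (pts4d[:3] / pts4d[3]).T        # Nx3 Euclidean coordinates
```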
Step S6: fit a cylinder to the set of three-dimensional world coordinates of the feature points according to the cylindrical structure, and compute the Euler angles of the rotation.
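The cylinder fitting and Euler-angle computation are not spelled out in the text; the following is a possible sketch under simplifying assumptions: the cylinder axis is approximated by the dominant direction of the reconstructed point set (a simplification of a full least-squares cylinder fit), the frame-to-frame rotation is obtained by a Kabsch alignment and converted to Euler angles with SciPy, and the three-direction displacement is taken as the shift of the point-set centroid. All names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def cylinder_axis(points):
    """Approximate the cylinder axis as the dominant direction of the
    reconstructed 3D feature points (points: Nx3 array)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]                           # unit vector of largest spread

def rotation_euler(points_ref, points_cur):
    """Euler angles (degrees, x-y-z order) of the rotation that best maps
    the reference point set onto the current one (Kabsch alignment)."""
    a = points_ref - points_ref.mean(axis=0)
    b = points_cur - points_cur.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return Rotation.from_matrix(R).as_euler("xyz", degrees=True)

def translation(points_ref, points_cur):
    """Three-direction displacement as the shift of the point-set centroid."""
    return points_cur.mean(axis=0) - points_ref.mean(axis=0)
```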
In the present embodiment, the step S1 specifically comprises:
Step S11: according to the required number of coding bits and the number of code groups, initialize the first row of the binary coding as an alternating pattern of 0s and 1s, obtaining the first-row binary code. As shown in Fig. 2, the first row is initialized as an alternating 0,1 code such as 010101 or 101010.
Step S12: screen by numerical value according to the number of code groups, selecting a decimal number whose corresponding binary code has not yet appeared. As shown in Fig. 2, numerical screening is performed using the decimal number corresponding to the first-row binary code, the number of coding bits and the number of code groups, giving the decimal number corresponding to the candidate binary code of the second row.
Step S13: judge whether the corner condition is satisfied between two successive codes, confirm the binary code of the second row, and repeat this N times to obtain a non-repeating binary code sequence.
Since the binary code has the required number of bits n, 2^n distinct codes can be formed. As shown in Fig. 3, if the current decimal number is not equal to any number that has already appeared, it is output as a candidate binary code ready for use; otherwise it enters the next cycle, until a candidate binary code that has not appeared before is generated and output. Between two adjacent code rows, a feature corner can be produced wherever the codes form an obvious corner. Since the coding must provide as many distinguishable feature points as possible, it is required that, among the four coding bits surrounding each feature point, the number of 0s is odd, or there are exactly two 0s located on a diagonal; this is the condition for forming a corner. Correspondingly, if the four surrounding bits are all 0, all 1, or contain exactly two 0s that are not distributed on a diagonal, the configuration is judged not to form a corner, as shown in Fig. 4. If the coding bits around every feature point satisfy the corner-forming condition, the candidate is output as a valid binary code of the code set; otherwise the loop exits and the candidate is judged to be an unusable code, as shown in Fig. 5. If the candidate is an unusable code, the procedure returns to step B1 and re-enters the loop, as shown in Fig. 3, until a binary code satisfying the corner condition is generated.
Proceeding in the same way for the required number of code groups, all the generated binary codes are combined into the required code set, in which no code repeats and every pair of adjacent rows produces obvious corners.
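For illustration only, a brute-force sketch of this code-set generation is given below, assuming the corner condition is evaluated on every 2x2 block of vertically adjacent bits as described above; the function names and the simple linear search replace the numerical-screening loop of Figs. 2-5 but yield a code set with the same properties (non-repeating rows in which adjacent rows satisfy the corner condition).

```python
import numpy as np

def corner_ok(upper, lower):
    """Corner condition between two adjacent code rows: every 2x2 block of
    bits must contain an odd number of 0s, or exactly two 0s on a diagonal."""
    for i in range(len(upper) - 1):
        block = [upper[i], upper[i + 1], lower[i], lower[i + 1]]
        zeros = block.count(0)
        if zeros % 2 == 1:
            continue                                   # odd number of 0s
        if zeros == 2 and upper[i] == lower[i + 1] and upper[i + 1] == lower[i]:
            continue                                   # two 0s on a diagonal
        return False
    return True

def generate_code_set(n_bits, n_groups):
    """Generate n_groups non-repeating rows of n_bits bits: the first row is
    the alternating 0,1,... pattern, and each new row must satisfy the corner
    condition with the previous row."""
    first = [j % 2 for j in range(n_bits)]
    rows, used = [first], {int("".join(map(str, first)), 2)}
    while len(rows) < n_groups:
        for candidate in range(2 ** n_bits):
            if candidate in used:
                continue
            bits = [int(b) for b in format(candidate, f"0{n_bits}b")]
            if corner_ok(rows[-1], bits):
                rows.append(bits)
                used.add(candidate)
                break
        else:
            raise ValueError("no unused code satisfies the corner condition")
    return np.array(rows)
```

For instance, generate_code_set(6, 10) produces ten non-repeating 6-bit rows whose adjacent rows all satisfy the corner condition, matching the bit and group counts used in the experiment described later.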
Step S14: generate the corresponding checkerboard image from the non-repeating binary code sequence, where 0 corresponds to a white square and 1 to a black square, as shown in Fig. 6; the image is affixed to the cylinder surface so that the coding represents the position on the cylinder surface, obtaining the feature points affixed to the cylinder surface.
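The code rows can then be rendered as a printable black-and-white pattern; a minimal sketch (illustrative, using OpenCV only to write the image file) that maps 0 to a white square and 1 to a black square:

```python
import cv2
import numpy as np

def render_pattern(code_rows, square_px=60, filename="pattern.png"):
    """Render the code set as a printable image: 0 -> white square, 1 -> black square."""
    rows = np.asarray(code_rows)
    img = np.where(rows == 0, 255, 0).astype(np.uint8)
    img = np.kron(img, np.ones((square_px, square_px), dtype=np.uint8))  # enlarge bits
    cv2.imwrite(filename, img)
    return img
```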
In the present embodiment, the step S3 specifically comprises:
Step S31: split the collected left-view and right-view data sets into frames, and perform KCF target tracking, detection and segmentation on each frame in units of two successive frames. The KCF (kernelized correlation filter) algorithm is used as the target tracking and detection algorithm; its flow chart is shown in Fig. 8. First, the target box is selected, and a classifier is trained on samples obtained by cyclic shifts of the image containing the known target. Then, the target-box region of the image is sampled and the response is computed with the classifier. Finally, the box with the maximum response is selected as the target region. The tracking and detection results are shown in Fig. 9.
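As a sketch of how this tracking step could be reproduced with OpenCV's built-in KCF implementation (the embodiment describes the algorithm itself; the library call and the assumption that the initial bounding box is supplied manually are illustrative):

```python
import cv2

def track_target(video_path, init_box):
    """Track the coded target through a video with OpenCV's KCF tracker.
    init_box: (x, y, w, h) bounding box of the target in the first frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("cannot read video: " + video_path)

    tracker = cv2.TrackerKCF_create()       # requires opencv-contrib-python
    tracker.init(frame, tuple(int(v) for v in init_box))

    boxes = [tuple(int(v) for v in init_box)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)  # box with the maximum filter response
        if found:
            boxes.append(tuple(int(v) for v in box))
    cap.release()
    return boxes
```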
Step S32: extract feature corners from the segmented target, obtaining the two-dimensional image coordinates of the corresponding feature corners.
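A minimal sketch of this corner extraction on the segmented target region, assuming a standard corner detector with sub-pixel refinement is acceptable for the black-and-white coding pattern; the parameter values are illustrative:

```python
import cv2
import numpy as np

def detect_corners(gray_roi, max_corners=100):
    """Detect feature corners inside the segmented target region and refine
    them to sub-pixel accuracy (gray_roi: single-channel image of the target)."""
    corners = cv2.goodFeaturesToTrack(gray_roi, maxCorners=max_corners,
                                      qualityLevel=0.05, minDistance=10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.001)
    corners = cv2.cornerSubPix(gray_roi, corners, (5, 5), (-1, -1), criteria)
    return corners.reshape(-1, 2)           # Nx2 pixel coordinates
```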
In the present embodiment, the step S4 specifically comprises:
Step S41: acquire left-view and right-view data sets of a calibration board with the binocular acquisition system;
Step S42: perform binocular calibration with the calibration-board left-view and right-view data sets, obtaining the intrinsic and extrinsic parameter matrices of the binocular acquisition system. Binocular calibration is performed on the calibration-board images acquired by the left and right cameras using the stereo calibration toolbox built into MATLAB, giving the intrinsic parameters of each camera and the extrinsic parameters (rotation and translation) between the two cameras.
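The embodiment uses the MATLAB toolbox; as an alternative sketch (an assumption, not the tool used in the patent), the same quantities can be obtained with OpenCV given the detected calibration-board corners of the left/right image pairs:

```python
import cv2

def stereo_calibrate(obj_points, img_points_l, img_points_r, image_size):
    """Binocular calibration from matched calibration-board corners.

    obj_points        : list of Nx3 float32 arrays (board corners in board units)
    img_points_l / _r : lists of Nx2 float32 arrays (detected corners, left/right)
    image_size        : (width, height) of the calibration images
    Returns the intrinsics/distortion of both cameras and R, T of the right
    camera with respect to the left camera.
    """
    # Calibrate each camera on its own first.
    _, K_l, D_l, _, _ = cv2.calibrateCamera(obj_points, img_points_l, image_size, None, None)
    _, K_r, D_r, _, _ = cv2.calibrateCamera(obj_points, img_points_r, image_size, None, None)

    # Then estimate the stereo extrinsics while keeping the intrinsics fixed.
    _, K_l, D_l, K_r, D_r, R, T, _, _ = cv2.stereoCalibrate(
        obj_points, img_points_l, img_points_r,
        K_l, D_l, K_r, D_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_l, D_l, K_r, D_r, R, T
```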
In the present embodiment, for the feature points, the number of 0s among the four coding bits surrounding each feature point must be odd, or there must be exactly two 0s located on a diagonal.
In the present embodiment, the autocoder proposed by the method of the present invention is verified with an assumed number of coding bits of 6 and number of code groups of 10. Table 1 gives the experimental results of the present invention; the corresponding distribution of black and white squares is shown in Fig. 6.
Table 1:
As can be seen from Table 1, the coding pattern generated by the method of the present invention is unique; meanwhile, Fig. 6 also shows that the proposed autocoder provides as many distinguishable feature points as possible. It follows that the feature point autocoder of the present invention can automatically generate the corresponding code set according to the required number of code groups, provide as many feature corners as possible, and be used to encode surface positions of artificial targets in vision-based displacement measurement.
In addition, to verify the effect of the method of the present invention, the binary coding pattern shown in Fig. 6 is affixed to the cylinder surface. The displacement measurements of the present invention are compared with the actual rotation angles and displacements. Table 2 gives this comparison against known actual rotation angles and displacements, and the dynamic detection result for continuous rotation is shown in Fig. 12.
Table 2:
As can be seen from Table 2 and Fig. 12, the method of the present invention can measure the displacement information composed of the displacements in three directions and the rotation angles. Thus, with the method proposed by the present invention in combination with the cylinder-surface feature point autocoder, only the feature point information of the target object in the two dynamic videos captured by the left and right cameras is needed to compute degree-of-freedom information such as dynamic displacement and rotation angle, and the method remains applicable even when part of the pattern is occluded during rotation.
The foregoing is merely a preferred embodiment of the present invention; all equivalent changes and modifications made within the scope of the patent claims of the present invention are covered by the present invention.

Claims (5)

1. A dynamic vision displacement measurement method based on an autocoder, characterized by comprising the following steps:
Step S1: constructing the feature points to be affixed to the cylinder surface;
Step S2: using a binocular acquisition system to acquire dynamic video data of the cylinder with the affixed feature points, obtaining left-view and right-view data sets;
Step S3: extracting the target feature points from the obtained left-view and right-view data sets, obtaining the two-dimensional image coordinates of the feature corners;
Step S4: calibrating the binocular acquisition system, obtaining the intrinsic and extrinsic parameter matrices of the binocular cameras;
Step S5: reconstructing the set of three-dimensional world coordinates of the feature points from the intrinsic and extrinsic parameter matrices of the binocular cameras and the two-dimensional image coordinates of the feature corners;
Step S6: fitting a cylinder to the set of three-dimensional world coordinates of the feature points according to the cylindrical structure, and computing the Euler angles of the rotation.
2. The dynamic vision displacement measurement method based on an autocoder according to claim 1, characterized in that the step S1 specifically comprises:
Step S11: according to the required number of coding bits and the number of code groups, initializing the first row of the binary coding as an alternating pattern of 0s and 1s, obtaining the first-row binary code;
Step S12: screening by numerical value according to the number of code groups, selecting a decimal number whose corresponding binary code has not yet appeared;
Step S13: judging whether the corner condition is satisfied between two successive codes, confirming the binary code of the second row, and repeating this N times to obtain a non-repeating binary code sequence;
Step S14: generating the corresponding checkerboard image from the non-repeating binary code sequence, where 0 corresponds to a white square and 1 to a black square, affixing the image to the cylinder surface so that the coding represents the position on the cylinder surface, and obtaining the feature points affixed to the cylinder surface.
3. The dynamic vision displacement measurement method based on an autocoder according to claim 1, characterized in that the step S3 specifically comprises:
Step S31: splitting the collected left-view and right-view data sets into frames, and performing KCF target tracking, detection and segmentation on each frame in units of two successive frames;
Step S32: extracting feature corners from the segmented target, obtaining the two-dimensional image coordinates of the corresponding feature corners.
4. The dynamic vision displacement measurement method based on an autocoder according to claim 1, characterized in that the step S4 specifically comprises:
Step S41: acquiring left-view and right-view data sets of a calibration board with the binocular acquisition system;
Step S42: performing binocular calibration with the calibration-board left-view and right-view data sets, obtaining the intrinsic and extrinsic parameter matrices of the binocular cameras.
5. The dynamic vision displacement measurement method based on an autocoder according to claim 1, characterized in that: for the feature points, the number of 0s among the four coding bits surrounding each feature point must be odd, or there must be exactly two 0s located on a diagonal.
CN201910620869.9A 2019-07-10 2019-07-10 Dynamic vision displacement measurement method based on autocoder Pending CN110335311A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910620869.9A CN110335311A (en) 2019-07-10 2019-07-10 Dynamic vision displacement measurement method based on autocoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910620869.9A CN110335311A (en) 2019-07-10 2019-07-10 Dynamic vision displacement measurement method based on autocoder

Publications (1)

Publication Number Publication Date
CN110335311A true CN110335311A (en) 2019-10-15

Family

ID=68146005

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910620869.9A Pending CN110335311A (en) 2019-07-10 2019-07-10 Dynamic vision displacement measurement method based on autocoder

Country Status (1)

Country Link
CN (1) CN110335311A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1332430A (en) * 2001-07-27 2002-01-23 南开大学 3D tracking and measurement method of moving objects by 2D code
US20170206669A1 (en) * 2016-01-14 2017-07-20 RetailNext, Inc. Detecting, tracking and counting objects in videos
US20170366814A1 (en) * 2016-06-17 2017-12-21 Gopro, Inc. Apparatus and methods for image encoding using spatially weighted encoding quality parameters
CN208223391U (en) * 2018-04-01 2018-12-11 深圳慎始科技有限公司 Non-slip-ring type rotary color three-dimensional modeling apparatus
CN108734744A (en) * 2018-04-28 2018-11-02 国网山西省电力公司电力科学研究院 A kind of remote big field-of-view binocular scaling method based on total powerstation
CN109373912A (en) * 2018-12-21 2019-02-22 福州大学 A kind of non-contact six-freedom displacement measurement method based on binocular vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAO, Yan: "Research on 3D Reconstruction Methods Based on Binocular Stereo Vision Information", China Master's Theses Full-text Database, Information Science and Technology, Series I *

Similar Documents

Publication Publication Date Title
CN109949899B (en) Image three-dimensional measurement method, electronic device, storage medium, and program product
Werner et al. Rendering real-world objects using view interpolation
CN102032878B (en) Accurate on-line measurement method based on binocular stereo vision measurement system
Park et al. A multiview 3D modeling system based on stereo vision techniques
CN106951669B (en) A kind of rolling bearing variable working condition method for diagnosing faults of view-based access control model cognition
CN108734744A (en) A kind of remote big field-of-view binocular scaling method based on total powerstation
CN110264573B (en) Three-dimensional reconstruction method and device based on structured light, terminal equipment and storage medium
CN103714530B (en) A kind of vanishing point detection and image correction method
Zou et al. A method of stereo vision matching based on OpenCV
CN109373912A (en) A kind of non-contact six-freedom displacement measurement method based on binocular vision
CN106155299B (en) A kind of pair of smart machine carries out the method and device of gesture control
CN104616348A (en) Method for reconstructing fabric appearance based on multi-view stereo vision
Hafeez et al. Image based 3D reconstruction of texture-less objects for VR contents
CN110096993A (en) The object detection apparatus and method of binocular stereo vision
CN105976431A (en) Rotating-light-field-based three-dimensional surface reconstruction method
CN107374638A (en) A kind of height measuring system and method based on binocular vision module
Hafeez et al. 3D surface reconstruction of smooth and textureless objects
Gonzalez-Aguilera et al. From point cloud to CAD models: Laser and optics geotechnology for the design of electrical substations
Xiong et al. Automatic three-dimensional reconstruction based on four-view stereo vision using checkerboard pattern
CN110335311A (en) Dynamic vision displacement measurement method based on autocoder
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method
Zhu et al. Plant Modeling Based on 3D Reconstruction and Its Application in Digital Museum.
Liu et al. New anti-blur and illumination-robust combined invariant for stereo vision in human belly reconstruction
Guo et al. Using facial landmarks to assist the stereo matching in fringe projection based 3D face profilometry
CN104616343B (en) A kind of texture gathers the method and system mapped online in real time

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20191015