CN107346551A - A light field light source positioning method - Google Patents

A light field light source positioning method

Info

Publication number
CN107346551A
CN107346551A, CN201710510442.4A
Authority
CN
China
Prior art keywords
light source
image
identification point
point
positioning method
Prior art date
Application number
CN201710510442.4A
Other languages
Chinese (zh)
Inventor
伊恩·罗伊·舒
李建亿
Original Assignee
太平洋未来有限公司
Priority date
Filing date
Publication date
Application filed by 太平洋未来有限公司
Priority to CN201710510442.4A
Publication of CN107346551A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image

Abstract

The present invention relates to a light field light source positioning method comprising nine steps: 1) target identification, 2) image preprocessing, 3) coordinate conversion, 4) identification point localization, 5) identification point coordinate confirmation, 6) location information transmission, 7) identification point position determination, 8) depth value measurement and 9) image positioning. The method uses identification point localization and depth value measurement at the same time, combining the virtual and the real: during the delay of the identification point localization, the depth value measurement can be completed, and the two results are combined in a comprehensive calculation, so that the light source is positioned more accurately.

Description

A light field light source positioning method

Technical field

The present invention relates to a light field light source positioning method.

Background technology

VR/AR is currently one of the more popular projection technologies. In existing VR/AR projection technology, a main research direction is the precise positioning of objects. The positioning methods of the prior art mostly use identification point segmentation and localization; although such methods can achieve positioning, in practice the positioning delay is large, which causes the position data to be inaccurate.

Content of the invention

The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and to provide a light field light source positioning method.

The technical solution adopted by the present invention to solve the technical problem is a light field light source positioning method comprising the following steps:

1) Target identification: image data is acquired by the front-end device, and the acquired image data is recognized;

2) Image preprocessing: including distortion correction of the image, segmentation of the identification points, and calculation of the size and center point of each identification point;

3) Coordinate conversion: the coordinate system of the front-end device is transformed into the world coordinate system;

4) Identification point localization: the information of multiple image frames is combined to locate all identification points in space;

5) Identification point coordinate confirmation: from the spatial position of the front-end device in the world coordinate system and the direction vectors corresponding to the identification point in the multiple frames, the intersection point or approximate intersection point of the direction vectors is calculated to determine the world coordinates of the identification point (a sketch of this calculation follows the list of steps);

6) Location information transmission: the location information of each identification point is transmitted to the computing unit;

7) Identification point position determination: the average distance between the image and the front-end device is obtained by coordinate calculation;

8) Depth value measurement: a light beam is emitted by a beam generator and received by a light sensor, and the phase difference is calculated to obtain the depth value between the device and the actual scene;

9) Image positioning: the computing unit averages the calculated average distance with the depth value, and the resulting distance is taken as the distance between the front-end device and the image; this distance is expressed in the world coordinate system, so that the coordinates of the image can be calculated.
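The approximate intersection in step 5) can, for example, be computed as a least-squares problem over the direction vectors collected from the individual frames. The sketch below is only illustrative: the use of Python and NumPy, the function name, and the assumption that each ray is given by a device position and a direction vector in world coordinates are not part of the original disclosure.

```python
import numpy as np

def approximate_intersection(origins, directions):
    """Least-squares point closest to a set of 3-D rays.

    origins:    (N, 3) front-end device positions in world coordinates, one per frame
    directions: (N, 3) direction vectors toward the identification point, one per frame
    Returns the world coordinates of the (approximate) intersection point.
    """
    origins = np.asarray(origins, dtype=float)
    directions = np.asarray(directions, dtype=float)
    directions = directions / np.linalg.norm(directions, axis=1, keepdims=True)

    # Each ray contributes the normal equations (I - d d^T) x = (I - d d^T) o,
    # i.e. the sought point x should have zero perpendicular offset from the ray.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        proj = np.eye(3) - np.outer(d, d)
        A += proj
        b += proj @ o
    return np.linalg.solve(A, b)
```

With exact measurements from at least two non-parallel rays, the solution coincides with the true intersection; with noisy measurements, it is the point minimizing the sum of squared distances to all rays.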

Preferably, in step 1), the front-end device is a pair of VR/AR glasses, and the VR/AR glasses capture the light source as a still image for data acquisition.

Preferably, in step 2), the calculation of the size and center point of the identification points is completed by a computer.

Preferably, in step 8), the beam generator is an infrared emitter and the light sensor is an infrared receiver.

The invention has the advantage that the light field light source positioning method uses identification point localization and depth value measurement at the same time, combining the virtual and the real: during the delay of the identification point localization, the depth value measurement can be completed, and the two results are combined in a comprehensive calculation, so that the light source is positioned more accurately.

Embodiment

Embodiment:

A light field light source positioning method comprises the following steps:

1) Target identification: image data is acquired by the front-end device, and the acquired image data is recognized;

2) Image preprocessing: including distortion correction of the image, segmentation of the identification points, and calculation of the size and center point of each identification point;

3) Coordinate conversion: the coordinate system of the front-end device is transformed into the world coordinate system;

4) Identification point localization: the information of multiple image frames is combined to locate all identification points in space;

5) Identification point coordinate confirmation: from the spatial position of the front-end device in the world coordinate system and the direction vectors corresponding to the identification point in the multiple frames, the intersection point or approximate intersection point of the direction vectors is calculated to determine the world coordinates of the identification point;

6) Location information transmission: the location information of each identification point is transmitted to the computing unit;

7) Identification point position determination: the average distance between the image and the front-end device is obtained by coordinate calculation;

8) Depth value measurement: a light beam is emitted by a beam generator and received by a light sensor, and the phase difference is calculated to obtain the depth value between the device and the actual scene;

9) Image positioning: the computing unit averages the calculated average distance with the depth value, and the resulting distance is taken as the distance between the front-end device and the image; this distance is expressed in the world coordinate system, so that the coordinates of the image can be calculated.
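As a non-limiting illustration of step 9), the sketch below averages the marker-based distance with the measured depth value and places the image in the world coordinate system. The viewing direction along which the image lies is assumed to be known from the front-end device's pose; that assumption, and the Python/NumPy formulation, are illustrative and not taken from the patent itself.

```python
import numpy as np

def fuse_distance(marker_distance, depth_value):
    """Mean of the marker-based average distance (step 7) and the depth value (step 8);
    the result is taken as the distance between the front-end device and the image."""
    return 0.5 * (marker_distance + depth_value)

def image_world_coordinates(device_position, view_direction, marker_distance, depth_value):
    """Express the fused distance in the world coordinate system: walk from the device
    position along the (assumed known) viewing direction by the fused distance."""
    device_position = np.asarray(device_position, dtype=float)
    view_direction = np.asarray(view_direction, dtype=float)
    view_direction = view_direction / np.linalg.norm(view_direction)
    return device_position + fuse_distance(marker_distance, depth_value) * view_direction
```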

Preferably, in step 1), the front-end device is a pair of VR/AR glasses, and the VR/AR glasses capture the light source as a still image for data acquisition.

Preferably, in step 2), the calculation of the size and center point of the identification points is completed by a computer.
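A minimal sketch of how such a size and center point calculation could look on the computer side, assuming bright light-source markers on a darker background in an 8-bit grayscale frame. The use of OpenCV, the fixed threshold of 200 and the minimum blob area are illustrative assumptions, not details from the patent.

```python
import cv2

def identification_point_sizes_and_centers(gray_image, min_area=20.0):
    """Segment bright identification points in a grayscale frame and return,
    for each point, its size (area in pixels) and its center point (centroid)."""
    _, binary = cv2.threshold(gray_image, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_area:          # discard small noise blobs
            continue
        m = cv2.moments(contour)
        results.append({"size": area, "center": (m["m10"] / m["m00"], m["m01"] / m["m00"])})
    return results
```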

Preferably, in step 8), the beam generator is an infrared emitter and the light sensor is an infrared receiver.
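Step 8) follows the continuous-wave time-of-flight principle: with an infrared emitter and receiver, the depth is proportional to the phase difference between the emitted and received signal. The relation below is the standard one and is not spelled out in the patent; the 20 MHz modulation frequency in the example is an arbitrary assumption.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_from_phase(phase_difference_rad, modulation_frequency_hz):
    """Depth between the device and the actual scene from the measured phase difference:
    depth = c * delta_phi / (4 * pi * f_mod)."""
    return SPEED_OF_LIGHT * phase_difference_rad / (4.0 * math.pi * modulation_frequency_hz)

# A phase difference of pi/2 at a 20 MHz modulation frequency gives roughly 1.87 m.
print(depth_from_phase(math.pi / 2, 20e6))
```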

In step 2), the distortion correction of the image, the segmentation of the identification points and the calculation of their size and center point depend in practice on the size of the positioning image and on how many identification points the computer's processing capability can support. To meet the real-time processing requirement, the operations involved in identification point processing therefore need to be allocated between the VR/AR glasses and the processing unit of the computer.
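One way to read this allocation is as a simple budget on the per-frame identification point work done on the glasses, with the remainder deferred to the computer's processing unit. The split below, including the budget value, is a purely hypothetical illustration; the patent does not prescribe any particular partitioning.

```python
def allocate_identification_points(num_points, glasses_budget=8):
    """Split the per-frame identification point processing between the VR/AR glasses
    and the computer so the glasses never exceed a fixed real-time budget.
    Returns how many points each side handles."""
    on_glasses = min(num_points, glasses_budget)
    return {"glasses": on_glasses, "computer": num_points - on_glasses}

# Example: with 20 detected identification points and a budget of 8,
# the glasses process 8 and the computer processes the remaining 12.
print(allocate_identification_points(20))
```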

Moreover, for the data transmission involved in the above steps, wired data transmission gives the better result.

Compared with the prior art, the light field light source positioning method uses identification point localization and depth value measurement at the same time, combining the virtual and the real: during the delay of the identification point localization, the depth value measurement can be completed, and the two results are combined in a comprehensive calculation, so that the light source is positioned more accurately.

Taking the above preferred embodiments of the present invention as enlightenment, and on the basis of the above description, the relevant workers can make various changes and modifications without departing from the scope of the technical idea of the present invention. The technical scope of the present invention is not limited to the content of the specification and must be determined according to the scope of the claims.

Claims (4)

  1. A light field light source positioning method, characterized in that it comprises the following steps:
    1) Target identification: image data is acquired by the front-end device, and the acquired image data is recognized;
    2) Image preprocessing: including distortion correction of the image, segmentation of the identification points, and calculation of the size and center point of each identification point;
    3) Coordinate conversion: the coordinate system of the front-end device is transformed into the world coordinate system;
    4) Identification point localization: the information of multiple image frames is combined to locate all identification points in space;
    5) Identification point coordinate confirmation: from the spatial position of the front-end device in the world coordinate system and the direction vectors corresponding to the identification point in the multiple frames, the intersection point or approximate intersection point of the direction vectors is calculated to determine the world coordinates of the identification point;
    6) Location information transmission: the location information of each identification point is transmitted to the computing unit;
    7) Identification point position determination: the average distance between the image and the front-end device is obtained by coordinate calculation;
    8) Depth value measurement: a light beam is emitted by a beam generator and received by a light sensor, and the phase difference is calculated to obtain the depth value between the device and the actual scene;
    9) Image positioning: the computing unit averages the calculated average distance with the depth value, and the resulting distance is taken as the distance between the front-end device and the image; this distance is expressed in the world coordinate system, so that the coordinates of the image can be calculated.
  2. The light field light source positioning method as claimed in claim 1, characterized in that in step 1) the front-end device is a pair of VR/AR glasses, and the VR/AR glasses capture the light source as a still image for data acquisition.
  3. The light field light source positioning method as claimed in claim 1, characterized in that in step 2) the calculation of the size and center point of the identification points is completed by a computer.
  4. The light field light source positioning method as claimed in claim 1, characterized in that in step 8) the beam generator is an infrared emitter and the light sensor is an infrared receiver.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710510442.4A CN107346551A (en) 2017-06-28 2017-06-28 A kind of light field light source orientation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710510442.4A CN107346551A (en) 2017-06-28 2017-06-28 A kind of light field light source orientation method

Publications (1)

Publication Number Publication Date
CN107346551A 2017-11-14

Family

ID=60257208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710510442.4A CN107346551A (en) 2017-06-28 2017-06-28 A kind of light field light source orientation method

Country Status (1)

Country Link
CN (1) CN107346551A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101033972A (en) * 2007-02-06 2007-09-12 华中科技大学 Method for obtaining three-dimensional information of space non-cooperative object
CN101458822A (en) * 2008-12-30 2009-06-17 暨南大学 Fast generating method for computation hologram of 3D model
CN102314708A (en) * 2011-05-23 2012-01-11 北京航空航天大学 Optical field sampling and simulating method by utilizing controllable light source
CN104899855A (en) * 2014-03-06 2015-09-09 株式会社日立制作所 Three-dimensional obstacle detection method and apparatus
CN104236540A (en) * 2014-06-24 2014-12-24 上海大学 Indoor passive navigation and positioning system and indoor passive navigation and positioning method
CN106780601A (en) * 2016-12-01 2017-05-31 北京未动科技有限公司 A kind of locus method for tracing, device and smart machine

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019104453A1 (en) * 2017-11-28 2019-06-06 深圳市大疆创新科技有限公司 Image processing method and apparatus

Similar Documents

Publication Publication Date Title
US9325969B2 (en) Image capture environment calibration method and information processing apparatus
EP2426642B1 (en) Method, device and system for motion detection
CN101976455B (en) Color image three-dimensional reconstruction method based on three-dimensional matching
Wu et al. Tracking a large number of objects from multiple views
CN101458434B (en) System for precision measuring and predicting table tennis track and system operation method
Faessler et al. A monocular pose estimation system based on infrared leds
CN103353758B (en) A kind of Indoor Robot navigation method
CN102773862A (en) Quick and accurate locating system used for indoor mobile robot and working method thereof
CN101876532A (en) Camera on-field calibration method in measuring system
CN104848851B (en) Intelligent Mobile Robot and its method based on Fusion composition
CN1847789A (en) Method and apparatus for measuring position and orientation
US20050256395A1 (en) Information processing method and device
CN101839692A (en) Method for measuring three-dimensional position and stance of object with single camera
CN100453966C (en) Spatial three-dimensional position attitude measurement method for video camera
CN101226640B (en) Method for capturing movement based on multiple binocular stereovision
CN100487724C (en) Quick target identification and positioning system and method
CN103955939A (en) Boundary feature point registering method for point cloud splicing in three-dimensional scanning system
US7190826B2 (en) Measuring the location of objects arranged on a surface, using multi-camera photogrammetry
CN101699237A (en) Three-dimensional model attitude angle video measuring system for wind tunnel model test
RU2013148372A (en) Automatic calibration of augmented reality report system
CN103337094A (en) Method for realizing three-dimensional reconstruction of movement by using binocular camera
CN103411553A (en) Fast calibration method of multiple line structured light visual sensor
CN104197928B (en) Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle
CN102376089B (en) Target correction method and system
CN103489214A (en) Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination