CN109685889A - A kind of scene Scan orientation method, storage medium and system - Google Patents
- Publication number
- CN109685889A (application CN201811574472.2A)
- Authority
- CN
- China
- Prior art keywords
- user-defined identification
- information
- scanning
- scanning device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
A scene scanning and positioning method, storage medium and system. The method comprises the following steps: pasting a user-defined marker on an object surface, the marker carrying azimuth information and dimension information; scanning the indoor physical environment and digitizing the scan result; and, once the marker is scanned, reading the azimuth and dimension information on it and calculating from them the spatial position of the current device. The method reduces design cost and improves the accuracy of three-dimensional scene modeling.
Description
Technical field
The present invention relates to scene-scanning and modeling technology in augmented reality, and in particular to a scene scanning and positioning method that improves modeling accuracy.
Background art
In AR scene recognition, the principal step is positioning within the indoor space. Spatial positioning technology is complex, requires dedicated equipment, and is therefore costly. A simplified method of three-dimensional modeling from two-dimensional pictures is needed.
Summary of the invention
Accordingly, it is necessary to provide a new scene scanning and positioning method that reduces design cost and improves the accuracy of three-dimensional scene modeling.
To achieve the above object, the inventors provide a scene scanning and positioning method comprising the following steps: pasting a user-defined marker on an object surface, the marker carrying azimuth information and dimension information; scanning the indoor physical environment and digitizing the scan result; and, once the marker is scanned, reading the azimuth and dimension information on it and calculating from them the spatial position of the current device.
Specifically, the scanning device's orientation is calculated from the azimuth information, and the distance between the scanning device and the marker is calculated from the dimension information together with the device's imaged view of the marker.
Further, the method includes the step of placing the marker where the gradient of the object surface is discontinuous.
A scene scanning and positioning storage medium stores a computer program which, when run, executes the following steps: scanning the indoor physical environment and digitizing the scan result; and, once a user-defined marker on an object surface is scanned, reading the azimuth and dimension information on it and calculating from them the spatial position of the current device.
Specifically, the computer program executes the steps of calculating the scanning device's orientation from the azimuth information, and calculating the distance between the scanning device and the marker from the dimension information together with the device's imaged view of the marker.
Further, the marker is placed where the gradient of the object surface is discontinuous.
A scene scanning and positioning system comprises a user-defined marker and a scanning device. The marker is attached to an object surface and carries azimuth information and dimension information. The scanning device scans the indoor physical environment, digitizes the scan result and, once the marker is scanned, reads the azimuth and dimension information on it and calculates from them the spatial position of the current device.
Specifically, the scanning device also calculates its own orientation from the azimuth information, and calculates its distance to the marker from the dimension information together with its imaged view of the marker.
Further, the marker is placed where the gradient of the object surface is discontinuous.
Unlike the prior art, the above technical solution uses markers pasted in advance to fix the camera's position during indoor scene positioning, so that scans of other objects relative to it become more accurate, solving the modeling-accuracy problem in enclosed or near-enclosed scenes.
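The claimed sequence, paste a marker, scan, decode its azimuth and dimension information, then locate the device, can be condensed into a minimal sketch. All names here (`MarkerInfo`, `device_pose`), the payload layout and the fronto-parallel distance formula are illustrative assumptions by the editor, not part of the patent.

```python
from dataclasses import dataclass


@dataclass
class MarkerInfo:
    azimuth_deg: float  # orientation encoded on the pasted marker
    size_m: float       # real printed size of the marker


@dataclass
class Detection:
    info: MarkerInfo
    apparent_size_px: float  # size at which the marker appears in the frame


def device_pose(d: Detection, focal_px: float) -> dict:
    """From one marker detection, recover the device's facing direction and
    its distance to the marker (fronto-parallel pinhole approximation:
    distance = f * S_real / s_px)."""
    distance_m = focal_px * d.info.size_m / d.apparent_size_px
    # The camera looks back at the wall the marker faces.
    facing_deg = (d.info.azimuth_deg + 180.0) % 360.0
    return {"distance_m": distance_m, "facing_deg": facing_deg}


# A 0.20 m marker on the azimuth-0 wall, imaged 100 px wide, focal length 800 px:
pose = device_pose(Detection(MarkerInfo(0.0, 0.20), 100.0), focal_px=800.0)
```

The fronto-parallel case ignores the tilt handled by the construction of Figs. 2 to 4; it is only the simplest instance of the size-to-distance conversion the solution relies on.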
Brief description of the drawings
Fig. 1 is a flow diagram of the scene scanning and positioning method according to a specific embodiment of the invention;
Fig. 2 is a schematic diagram of an AR device capturing a spatial scene according to a specific embodiment of the invention;
Fig. 3 is a schematic diagram of the AR device's viewing angle according to a specific embodiment of the invention;
Fig. 4 is an auxiliary calculation diagram according to a specific embodiment of the invention.
Detailed description of the embodiments
To explain the technical content, structural features, objects and effects of the technical solution in detail, specific embodiments are described below in conjunction with the accompanying drawings.
Referring to Fig. 1, a scene-space positioning method of the invention is shown. The method can be used for scene-space positioning in enclosed or semi-enclosed scenes, such as indoor scenes and similar enclosed-space settings. As shown in the figure, the method comprises the following steps. In S100, a user-defined marker is pasted on the surface of a real object; the marker carries azimuth information and dimension information. The marker may be a picture, and pictures of different colors and sizes may be used. Azimuth information refers to the orientation of the pasted marker in space, for instance the compass direction it indicates within the indoor scene, or the number of the real object it is attached to, such as wall 1, 2 or 3: since walls are fixed in space, their numbers implicitly encode a correspondence between number and position. Azimuth information can also be indicated by color, for example by pasting markers of different colors on objects facing different directions. Other encodings are equally possible; for example, the azimuth and dimension information of a marker can be recorded in a two-dimensional code printed on the picture. After pasting is complete, step S102 is performed: the indoor physical environment is scanned with an AR device, the spatial structure captured by the scan is digitized by existing means, and a three-dimensional indoor scene model is generated. In S104, once a marker is scanned, the azimuth and dimension information on it is read, and the spatial position of the current device is calculated from that information. Taking a marker bearing a two-dimensional code as an example: after the azimuth and dimension information recorded on the scanned marker is decoded, the marker's specific orientation and physical size are known, and the orientation of the AR device and its distance to the marker are then determined from the size at which the marker appears in the AR device's image.
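As a concrete illustration of recording azimuth and dimension information in a two-dimensional code, one could serialize a small payload into the code's content. The JSON field names and units below are purely hypothetical; the patent does not specify a payload format.

```python
import json


def encode_marker_payload(azimuth_deg: float, width_m: float, height_m: float) -> str:
    """Serialize the marker's pose metadata into a string a 2D code could carry."""
    return json.dumps({"az": azimuth_deg, "w": width_m, "h": height_m})


def decode_marker_payload(payload: str) -> dict:
    """Recover azimuth and physical size once the scanner reads the 2D code."""
    data = json.loads(payload)
    return {"azimuth_deg": data["az"], "width_m": data["w"], "height_m": data["h"]}


# A 0.20 m square marker pasted on the east-facing (azimuth 90) wall:
payload = encode_marker_payload(90.0, 0.20, 0.20)
info = decode_marker_payload(payload)
```

Any fiducial scheme that lets the scanner read back both a direction and a true physical size would serve the same role; the wall-number or color encodings described above are alternatives to this textual payload.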
Because dimension information is available, the device can convert the size at which the marker appears in its image, together with the marker's actual size, into a result by elementary mathematics. An example is attempted here. In the embodiment shown in Fig. 2, the camera of the AR device captures, within the spatial scene, the information on a marker on the wall directly ahead; the camera's view at that moment is shown in Fig. 3. Because the camera is tilted left and right relative to the two-dimensional picture on the wall, the figure shows that the line segments CC' and BB' have different lengths. Converting the scene of Figs. 2 and 3 into the two-dimensional geometric plane of Fig. 4: from camera parameters such as focal length and sensor size, the camera's field-of-view angle ∠EAD is known. From the proportion BC occupies in the picture, ∠CAB is obtained; the auxiliary line CF forms the isosceles triangle △ACF, with AB/AF = CC'/BB'. By calculation, AB = AC × CC'/BB'. From cos∠CAG = AG/AC it follows that BG = AC × CC'/BB' − cos∠CAG × AC, and CG = AC × tan∠CAG. Through triangle △CBD, the length AC can then be calculated, and from BC × AH = AB × CG the actual distance AH, that is, the distance from the camera to the wall, is obtained. When there is also a height offset between the camera and the two-dimensional picture marker, the camera's relative position in space can likewise be calculated by similar mathematics. By the above method, the pre-pasted markers fix the camera's position during indoor scene positioning, so that scans of other objects relative to it become more accurate, solving the modeling-accuracy problem in enclosed or near-enclosed scenes.
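The construction of Figs. 2 to 4 recovers tilt and distance from the different apparent lengths of the marker's two vertical edges (CC' and BB'). A simplified per-edge pinhole sketch of the same idea, not the patent's exact triangle construction, is given below; the focal length in pixels and all function names are assumptions.

```python
import math


def marker_distance_and_yaw(edge_near_px: float, edge_far_px: float,
                            marker_height_m: float, marker_width_m: float,
                            focal_px: float) -> tuple:
    """Estimate camera-to-marker distance and marker yaw from the apparent
    pixel heights of the marker's two vertical edges.

    Pinhole model applied per edge: depth = f * H_real / h_pixels. The depth
    difference across the marker's known width gives the tilt angle.
    """
    z_near = focal_px * marker_height_m / edge_near_px  # nearer edge looks taller
    z_far = focal_px * marker_height_m / edge_far_px    # farther edge looks shorter
    distance = (z_near + z_far) / 2.0                   # depth of marker centre
    dz = z_far - z_near
    # A marker of width W rotated by yaw has edge-depth difference W * sin(yaw).
    yaw_rad = math.asin(max(-1.0, min(1.0, dz / marker_width_m)))
    return distance, yaw_rad


# A 0.20 m square marker, 800 px focal length; the near edge images at
# 100 px tall, the far edge at 95 px.
d, yaw = marker_distance_and_yaw(100.0, 95.0, 0.20, 0.20, 800.0)
```

A library such as OpenCV would normally solve the full perspective problem from all four marker corners; this two-edge version only shows why the CC'/BB' ratio carries the tilt.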
In a further embodiment, to better recognize the scene environment during modeling, an additional step may be performed: placing markers where the gradient of the object surface is discontinuous. The reason for this step is that when an AR device scans the environment, it cannot clearly resolve the boundaries between objects of the same color, such as the corner between two walls or the concave-convex relief between the inner and outer faces of a cabinet; to an optical sensor, uniformly colored surfaces appear as one continuous patch and are hard to separate. With this step, two-dimensional picture markers are placed at surface-gradient discontinuities such as the junctions between walls, with part of the marker's edge lying on or overlapping the junction. During scanning, the AR device can then effectively distinguish the discontinuous surfaces of objects through its optical elements alone, achieving better recognition of objects' three-dimensional surfaces and more accurate modeling.
Other specific embodiments of the invention further include a scene scanning and positioning storage medium. The medium stores a computer program which, when run, executes the following steps: scanning the indoor physical environment and digitizing the scan result; and, once a marker on an object surface is scanned, reading the azimuth and dimension information on it and calculating from them the spatial position of the current device. The storage medium is designed so that, from the specific orientation and dimension information of the scanned marker and from the size at which the marker appears in the AR device's image, the orientation of the AR device and its distance to the marker are determined.
Specifically, the computer program executes the steps of calculating the scanning device's orientation from the azimuth information, and calculating the distance between the scanning device and the marker from the dimension information together with the device's imaged view of the marker.
Further, the marker is placed where the gradient of the object surface is discontinuous, so that during scanning the AR device can effectively distinguish the discontinuous surfaces of objects through its optical elements alone, achieving better recognition of objects' three-dimensional surfaces and more accurate modeling.
Some other specific embodiments of the invention also disclose a scene scanning and positioning system comprising a user-defined marker and a scanning device; the storage medium of the above embodiment can run on the system's electronic equipment. The marker is attached to an object surface and carries azimuth information and dimension information. The scanning device scans the indoor physical environment, digitizes the scan result and, once the marker is scanned, reads the azimuth and dimension information on it and calculates from them the spatial position of the current device. The system is designed so that, from the specific orientation and dimension information of the scanned marker and from the size at which the marker appears in the AR device's image, the orientation of the AR device and its distance to the marker are determined.
Specifically, the scanning device also calculates its own orientation from the azimuth information, and calculates its distance to the marker from the dimension information together with its imaged view of the marker.
Further, the marker is placed where the gradient of the object surface is discontinuous, so that during scanning the AR device can effectively distinguish the discontinuous surfaces of objects through its optical elements alone, achieving better recognition of objects' three-dimensional surfaces and more accurate modeling.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include" and "comprise" and their variants are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or terminal device. In the absence of further limitation, an element qualified by "including a ..." does not exclude the presence of additional identical elements in the process, method, article or terminal device that includes it. In addition, in this document, "greater than", "less than", "exceeding" and the like are understood to exclude the stated number, while "above", "below", "within" and the like are understood to include it.
Those skilled in the art will appreciate that the above embodiments may be provided as a method, an apparatus or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. All or part of the steps of the methods in the above embodiments may be carried out by hardware instructed by a program, which may be stored in a storage medium readable by a computer device and used to execute all or part of the steps of those methods. The computer device includes, but is not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, smart home devices, wearable smart devices and vehicle-mounted smart devices. The storage medium includes, but is not limited to: RAM, ROM, magnetic disk, magnetic tape, optical disk, flash memory, USB drive, removable hard disk, memory card, memory stick, web-server storage and network cloud storage.
The above embodiments are described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a computer device to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be stored in a computer-device-readable memory capable of directing a computer device to operate in a particular manner, such that the instructions stored in that memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be loaded onto a computer device, causing a series of operational steps to be performed on the device to produce computer-implemented processing, so that the instructions executed on the device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the above embodiments have been described, once the basic inventive concept is known, those skilled in the art can make further changes and modifications to these embodiments. The above description therefore covers only embodiments of the invention and is not intended to limit its scope of patent protection; any equivalent structure or equivalent process transformation made using the contents of this description and the accompanying drawings, applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the invention.
Claims (9)
1. A scene scanning and positioning method, characterized by comprising the following steps: pasting a user-defined marker on an object surface, the marker carrying azimuth information and dimension information; scanning the indoor physical environment and digitizing the scan result; and, once the marker is scanned, reading the azimuth and dimension information on it and calculating from them the spatial position of the current device.
2. The scene scanning and positioning method according to claim 1, characterized in that the scanning device's orientation is calculated from the azimuth information, and the distance between the scanning device and the marker is calculated from the dimension information together with the device's imaged view of the marker.
3. The scene scanning and positioning method according to claim 1, characterized by further comprising the step of placing the marker where the gradient of the object surface is discontinuous.
4. A scene scanning and positioning storage medium, characterized by storing a computer program which, when run, executes the following steps: scanning the indoor physical environment and digitizing the scan result; and, once a marker on an object surface is scanned, reading the azimuth and dimension information on it and calculating from them the spatial position of the current device.
5. The scene scanning and positioning storage medium according to claim 4, characterized in that the computer program executes the steps of calculating the scanning device's orientation from the azimuth information, and calculating the distance between the scanning device and the marker from the dimension information together with the device's imaged view of the marker.
6. The scene scanning and positioning storage medium according to claim 4, characterized in that the marker is placed where the gradient of the object surface is discontinuous.
7. A scene scanning and positioning system, characterized by comprising a user-defined marker and a scanning device, the marker being attached to an object surface and carrying azimuth information and dimension information; the scanning device scans the indoor physical environment, digitizes the scan result and, once the marker is scanned, reads the azimuth and dimension information on it and calculates from them the spatial position of the current device.
8. The scene scanning and positioning system according to claim 7, characterized in that the scanning device also calculates its own orientation from the azimuth information, and calculates its distance to the marker from the dimension information together with its imaged view of the marker.
9. The scene scanning and positioning system according to claim 7, characterized in that the marker is placed where the gradient of the object surface is discontinuous.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810391934 | 2018-04-27 | ||
CN2018103919340 | 2018-04-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109685889A true CN109685889A (en) | 2019-04-26 |
Family
ID=66188901
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811574472.2A Pending CN109685889A (en) | 2018-04-27 | 2018-12-21 | A kind of scene Scan orientation method, storage medium and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109685889A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110288650A (en) * | 2019-05-27 | 2019-09-27 | 盎锐(上海)信息科技有限公司 | Data processing method and end of scan for VSLAM |
CN113744585A (en) * | 2020-05-28 | 2021-12-03 | 中国石油化工股份有限公司 | Fire accident emergency treatment drilling system and method |
WO2022036475A1 (en) * | 2020-08-17 | 2022-02-24 | 南京翱翔智能制造科技有限公司 | Augmented reality-based indoor positioning system for multi-source data fusion |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090323121A1 (en) * | 2005-09-09 | 2009-12-31 | Robert Jan Valkenburg | A 3D Scene Scanner and a Position and Orientation System |
CN105787534A (en) * | 2016-02-29 | 2016-07-20 | 上海导伦达信息科技有限公司 | Realization method of content identification and learning and augmented reality, fused with two-dimensional code and AR code |
CN106130981A (en) * | 2016-06-28 | 2016-11-16 | 努比亚技术有限公司 | The self-defined device and method of digital label of augmented reality equipment |
CN106816077A (en) * | 2015-12-08 | 2017-06-09 | 张涛 | Interactive sandbox methods of exhibiting based on Quick Response Code and augmented reality |
CN107390875A (en) * | 2017-07-28 | 2017-11-24 | 腾讯科技(上海)有限公司 | Information processing method, device, terminal device and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190426