CN104835199A - 3D reconstruction method based on augmented reality - Google Patents

3D reconstruction method based on augmented reality

Info

Publication number
CN104835199A
Authority
CN
China
Prior art keywords
window
augmented reality
pixel
image
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510262874.9A
Other languages
Chinese (zh)
Inventor
罗勇
胡强仁
谢然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU LVYE ORIGIN TECHNOLOGY Co Ltd
Original Assignee
CHENGDU LVYE ORIGIN TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU LVYE ORIGIN TECHNOLOGY Co Ltd filed Critical CHENGDU LVYE ORIGIN TECHNOLOGY Co Ltd
Priority to CN201510262874.9A priority Critical patent/CN104835199A/en
Publication of CN104835199A publication Critical patent/CN104835199A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a 3D reconstruction method based on augmented reality. The method comprises the steps of: obtaining a real-time video stream from a device; applying median filtering to the obtained video stream; obtaining position information of a preset target region; and applying a rectification transformation to the target region when it lies within the visual range. Image processing is then carried out on the rectified target region and the result is presented to the user. The method overcomes the problem of a fixed form of presentation and effectively eliminates noise points.

Description

3D reconstruction method based on augmented reality
Technical field
The present invention relates to the field of augmented reality, and in particular to a 3D reconstruction method based on augmented reality.
Background technology
Augmented reality has begun to spread abroad, while at home it is only just starting. The fields in which augmented reality can be used are in fact quite broad, for example real estate, education, medical treatment, the military and games. Most augmented-reality applications are based on 2D recognition images, a mode of presentation that offers no editability. Yet allowing richer content directly determines the practicality and interactivity of an application, so how to make the content editable has become a very important problem facing us. Furthermore, noise hinders the human senses in understanding the information received from a source; common types of noise include impulse noise and Gaussian noise. Where requirements are placed on the size and shape of an object, noise makes the edges of the image too blurred; an image may also contain white or black spots of unknown origin, and may be distorted or deformed. All of these problems can substantially affect the results of the subsequent 3D reconstruction process.
Therefore, the prior art still needs improvement and development, and an improved scheme is urgently needed.
Summary of the invention
The present invention proposes a 3D reconstruction method based on augmented reality, which can fully solve one or more of the problems caused by the limitations and defects of the prior art.
Additional advantages, objects and features of the invention will be set forth in part in the description that follows, and in part will become apparent to those of ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and advantages of the invention may be realized and attained by the structure particularly pointed out in the written description, the claims and the accompanying drawings.
The invention provides a 3D reconstruction method based on augmented reality, the method comprising the following steps:
Step 1: obtain a real-time video stream from the device and pre-process the sequence of frame images of the obtained video stream, the pre-processing comprising median filtering.
In the median filtering, a sliding window containing an odd number of pixels is moved over the image position by position; at each position the pixels inside the window are sorted in ascending order, and the gray value of the middle pixel is taken as the output value of the pixel at the window centre.
Step 2: using the preset augmented-reality target region, analyse the video stream to obtain the position information of the target region in the camera perspective view, and save this position information in a database.
Step 3: judge whether the target region is within the visual range; if so, apply a rectification transformation to the target region.
Step 4: carry out image processing on the rectified target region, the image processing comprising digitization, geometric transformation, normalization, smoothing, restoration and enhancement.
Step 5: send the video stream data processed in step 4 to the virtual rendering module of the augmented reality system so as to present the video to the user.
Preferably, the median filter is expressed as g(i, j) = Med { f_A(i, j) }, where A is the filter window, f_A(i, j) is the sequence of gray values of the pixels inside the window, and g(i, j) is the image obtained by median filtering.
Preferably, the size of the window of the median filter does not exceed the size of the smallest object in the image.
Preferably, the shape of the window of the median filter is circular, square or cross-shaped.
Preferably, the filter window is a 3x3 square window, and the concrete filtering steps are: slide the window over the image in row or column order until all pixels have been median filtered, first making the window centre coincide with a given pixel of the image; read the gray values of the pixels covered by the window and sort them in ascending order; find the middle value and assign it to the pixel at the window-centre position.
Brief description of the drawings
Fig. 1 is a flow chart of the 3D reconstruction method based on augmented reality according to an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described more fully below with reference to the accompanying drawings, in which exemplary embodiments of the invention are described.
As shown in Fig. 1, the 3D reconstruction method based on augmented reality according to an embodiment of the present invention comprises the following steps:
Step 1: obtain a real-time video stream from the device and pre-process the sequence of frame images of the obtained video stream, the pre-processing comprising median filtering.
In the median filtering, a sliding window containing an odd number of pixels is moved over the image position by position; at each position the pixels inside the window are sorted in ascending order, and the gray value of the middle pixel is taken as the output value of the pixel at the window centre.
The median filter can be expressed as:
g(i, j) = Med { f_A(i, j) }
where A is the filter window, f_A(i, j) is the sequence of gray values of the pixels inside the window, and g(i, j) is the image obtained by median filtering.
The size and shape of the median-filter window have a considerable influence on the filtering effect. The best window size does not exceed the size of the smallest object in the image. As for the window shape, circular or square windows are suitable for images whose objects have long outlines, while a cross-shaped window is suitable for images containing objects with sharp corners.
Taking the 3x3 square window as an example, the concrete filtering steps are as follows:
(1) slide the window over the image in row or column order until all pixels have been median filtered, first making the window centre coincide with a given pixel of the image;
(2) read the gray values of the pixels covered by the window and sort them in ascending order;
(3) find the middle value and assign it to the pixel at the window-centre position.
Through median filtering, a pixel whose gray value differs greatly from those of its neighbours is replaced with a value close to the surrounding pixel values, thereby eliminating isolated noise points.
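As an illustration of the 3x3 procedure above, the following is a minimal sketch in Python (assuming a grayscale image stored as a NumPy array; the function name median_filter_3x3 and the border handling are illustrative choices, not details taken from the patent):

```python
import numpy as np

def median_filter_3x3(image):
    """3x3 median filter following steps (1)-(3) above.

    Border pixels are left unchanged for simplicity; the patent text does
    not specify how the image borders are handled.
    """
    img = np.asarray(image, dtype=np.uint8)
    out = img.copy()
    rows, cols = img.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # (1) centre the window on pixel (i, j)
            window = img[i - 1:i + 2, j - 1:j + 2]
            # (2) sort the nine gray values in ascending order
            values = np.sort(window.ravel())
            # (3) assign the middle value to the window-centre pixel
            out[i, j] = values[4]
    return out

# An isolated bright noise point is replaced by a value close to its
# neighbours, as described above.
frame = np.full((5, 5), 100, dtype=np.uint8)
frame[2, 2] = 255                      # isolated noise point
print(median_filter_3x3(frame)[2, 2])  # -> 100
```

In practice a library routine such as OpenCV's cv2.medianBlur(frame, 3) performs the equivalent 3x3 median filtering in optimised code.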
Step 2: using the preset augmented-reality target region, analyse the video stream to obtain the position information of the target region in the camera perspective view, and save this position information in a database.
Step 3: judge whether the target region is within the visual range; if so, apply a rectification transformation to the target region.
The rectification transformation specifically comprises: obtaining the transformation data of several preset coordinate points in the target region, computing a rectification transformation matrix from these data, and applying the rectification transformation to the target region according to the computed rectification transformation matrix.
The image rectification transformation (Affine Transformation) is a transformation of the rectangular coordinate system, mapping one two-dimensional coordinate to another. Image rectification is a linear transformation that preserves the "straightness" and "parallelism" of the image: straight lines and parallel lines in the original image remain straight lines and parallel lines after the transformation. Commonly used special cases of this mapping are translation (Translation), scaling (Scale), flipping (Flip), rotation (Rotation) and shearing (Shear).
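As a small numerical check of the "straightness and parallelism" property stated above (an illustrative sketch, not part of the patent; the matrix values are arbitrary), an affine map applied to two parallel segments leaves their directions parallel:

```python
import numpy as np

# An arbitrary affine map: rotation/scaling part M and translation B.
M = np.array([[1.2, 0.3],
              [-0.4, 0.9]])
B = np.array([5.0, -2.0])

def affine(p):
    return M @ p + B

# Two segments with the same direction vector before the transform.
p0, p1 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
q0, q1 = np.array([3.0, 4.0]), np.array([5.0, 5.0])

d1 = affine(p1) - affine(p0)   # transformed direction of the first segment
d2 = affine(q1) - affine(q0)   # transformed direction of the second segment

# The 2D cross product of the transformed directions vanishes, i.e. the
# segments remain parallel after the affine transformation.
print(np.cross(d1, d2))  # -> ~0.0 (up to floating-point rounding)
```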
The perspective transformation (Perspective Transformation) uses the condition that the centre of perspective, the image point and the target point are collinear: according to the law of perspective rotation, the image-bearing surface (perspective plane) is rotated by a certain angle about the trace line (axis of homology), destroying the original bundle of projecting rays while keeping the geometric projection on the image-bearing surface unchanged.
For the rectification transformation, the rectification transformation matrix is:
A = [ M  B ] = [ m11  m12  b1 ]
               [ m21  m22  b2 ]

M = [ m11  m12 ]      B = [ b1 ]
    [ m21  m22 ]          [ b2 ]
where A is the rectification transformation matrix, a 2x3 matrix comprising a rotation-scaling matrix M and a translation matrix B. The rotation-scaling matrix M is a 2x2 matrix representing the rotation and scaling of the coordinate axes and contains the rotation-scaling coefficients m11, m12, m21, m22. The translation matrix B is a 2x1 matrix representing the translation of the coordinate axes and contains the translation coefficients b1, b2.
By obtaining the transformation data of several preset coordinate points in the target region, for example the rectification transformation between three points, the transformation results of the three known points can be used to compute the rotation-scaling coefficients m11, m12, m21, m22 and the translation coefficients b1, b2 of the rectification transformation matrix. The rectification transformation matrix obtained in this way is then used to apply the rectification transformation to the target region.
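A minimal sketch of this estimation step (assuming three known source points and their rectified positions are available; the function name estimate_rectification_matrix is an illustrative choice, not an API from the patent):

```python
import numpy as np

def estimate_rectification_matrix(src_pts, dst_pts):
    """Solve for m11, m12, m21, m22, b1, b2 from three point correspondences.

    Each correspondence (X, Y) -> (X', Y') gives two linear equations:
        X' = m11*X + m12*Y + b1
        Y' = m21*X + m22*Y + b2
    Three non-collinear points give the six equations needed for the six
    unknowns.
    """
    coeffs, rhs = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        coeffs.append([x, y, 1, 0, 0, 0])
        rhs.append(xp)
        coeffs.append([0, 0, 0, x, y, 1])
        rhs.append(yp)
    m11, m12, b1, m21, m22, b2 = np.linalg.solve(np.array(coeffs, float),
                                                 np.array(rhs, float))
    # Assemble the 2x3 rectification matrix A = [M B] defined above.
    return np.array([[m11, m12, b1],
                     [m21, m22, b2]])

src = [(0, 0), (1, 0), (0, 1)]
dst = [(10, 20), (12, 21), (9, 23)]
print(estimate_rectification_matrix(src, dst))
# -> [[ 2. -1. 10.]
#     [ 1.  3. 20.]]
```

OpenCV's cv2.getAffineTransform computes the same 2x3 matrix from three point pairs.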
When performing the rectification transformation, the rectification transformation matrix is used to transform the coordinates according to the following formula:
[ X' ]       [ X ]
[ Y' ]  = M  [ Y ]  + B
where (X, Y) are the original image coordinates and (X', Y') are the image coordinates after the transformation.
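Applying this coordinate transform to the target region can be sketched as follows (an illustrative sketch, not code from the patent; the 2x3 matrix A_rect is assumed to come from the estimation step above, and OpenCV's warpAffine is used as one common way of realising the warp):

```python
import numpy as np
import cv2

def rectify_region(region_img, A_rect, out_size):
    """Warp the target region with the 2x3 matrix A = [M B].

    warpAffine maps each source coordinate (X, Y) to
    (X', Y') = M * (X, Y)^T + B and resamples the image on the rectified grid.
    """
    return cv2.warpAffine(region_img, A_rect.astype(np.float32), out_size)

# Mapping a single coordinate by hand, following the formula above.
A_rect = np.array([[2.0, -1.0, 10.0],
                   [1.0,  3.0, 20.0]])
M, B = A_rect[:, :2], A_rect[:, 2]
X, Y = 4.0, 5.0
Xp, Yp = M @ np.array([X, Y]) + B
print(Xp, Yp)  # -> 13.0 39.0
```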
Step 4: carry out image processing on the rectified target region, the image processing comprising digitization, geometric transformation, normalization, smoothing, restoration and enhancement.
Step 5: send the video stream data processed in step 4 to the virtual rendering module of the augmented reality system.
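To make the five steps concrete, the following is a highly schematic pipeline sketch. detect_target_region and is_in_view are trivial stand-ins invented for illustration, and the save_position and present callables are hypothetical placeholders, not interfaces disclosed by the patent; OpenCV is assumed only for frame capture, median filtering and warping:

```python
import cv2
import numpy as np

# --- Hypothetical placeholders (not interfaces disclosed by the patent) ---
def detect_target_region(frame):
    """Stand-in for the preset AR target-region detector of step 2."""
    return {"position": (0, 0),
            "rect_matrix": np.float32([[1, 0, 0], [0, 1, 0]])}

def is_in_view(region):
    """Stand-in for the visibility test of step 3."""
    return True
# --------------------------------------------------------------------------

def process_stream(camera_index, save_position, present):
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Step 1: pre-process each frame with a median filter.
        frame = cv2.medianBlur(frame, 3)
        # Step 2: locate the preset AR target region and store its position.
        region = detect_target_region(frame)
        save_position(region["position"])
        # Step 3: rectify the region only when it lies inside the visual range.
        if is_in_view(region):
            rectified = cv2.warpAffine(frame, region["rect_matrix"],
                                       (frame.shape[1], frame.shape[0]))
            # Step 4: further image processing (smoothing as one example).
            processed = cv2.GaussianBlur(rectified, (3, 3), 0)
            # Step 5: hand the processed frame to the virtual rendering module.
            present(processed)
    cap.release()
```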
The present invention proposes using graphics processing technology to track and analyse the real-time video stream frame by frame and to modify the form of presentation in reverse, finally realising 3D reconstruction based on augmented reality, overcoming the problem of a fixed form of presentation and effectively eliminating noise points.
The above is only a preferred embodiment of the present invention. For those of ordinary skill in the art, changes may be made in the specific implementation and the scope of application according to the idea of the present invention, and this description should not be construed as limiting the invention.

Claims (5)

1. A 3D reconstruction method based on augmented reality, the method comprising the following steps:
Step 1: obtain a real-time video stream from the device and pre-process the sequence of frame images of the obtained video stream, the pre-processing comprising median filtering;
the median filtering is performed by a median filter: a sliding window containing an odd number of pixels is moved over the image position by position; at each position the pixels inside the window are sorted in ascending order, and the gray value of the middle pixel is taken as the output value of the pixel at the window centre;
Step 2: using the preset augmented-reality target region, analyse the video stream to obtain the position information of the target region in the camera perspective view, and save this position information in a database;
Step 3: judge whether the target region is within the visual range; if so, apply a rectification transformation to the target region;
Step 4: carry out image processing on the rectified target region, the image processing comprising digitization, geometric transformation, normalization, smoothing, restoration and enhancement;
Step 5: send the video stream data processed in step 4 to the virtual rendering module of the augmented reality system so as to present the video to the user.
2. The 3D reconstruction method based on augmented reality according to claim 1, characterized in that the median filter is expressed as g(i, j) = Med { f_A(i, j) }, where A is the filter window, f_A(i, j) is the sequence of gray values of the pixels inside the window, and g(i, j) is the image obtained by median filtering.
3. The 3D reconstruction method based on augmented reality according to claim 2, characterized in that the size of the window of the median filter does not exceed the size of the smallest object in the image.
4. The 3D reconstruction method based on augmented reality according to claim 3, characterized in that the shape of the window of the median filter is circular, square or cross-shaped.
5. The 3D reconstruction method based on augmented reality according to claim 2, characterized in that the filter window is a 3x3 square window and the concrete filtering steps are:
slide the window over the image in row or column order until all pixels have been median filtered, first making the window centre coincide with a given pixel of the image; read the gray values of the pixels covered by the window and sort them in ascending order; find the middle value and assign it to the pixel at the window-centre position.
CN201510262874.9A 2015-05-21 2015-05-21 3D reconstruction method based on augmented reality Pending CN104835199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510262874.9A CN104835199A (en) 2015-05-21 2015-05-21 3D reconstruction method based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510262874.9A CN104835199A (en) 2015-05-21 2015-05-21 3D reconstruction method based on augmented reality

Publications (1)

Publication Number Publication Date
CN104835199A true CN104835199A (en) 2015-08-12

Family

ID=53813063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510262874.9A Pending CN104835199A (en) 2015-05-21 2015-05-21 3D reconstruction method based on augmented reality

Country Status (1)

Country Link
CN (1) CN104835199A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6219444B1 (en) * 1997-02-03 2001-04-17 Yissum Research Development Corporation Of The Hebrew University Of Jerusalem Synthesizing virtual two dimensional images of three dimensional space from a collection of real two dimensional images
CN102667811A (en) * 2010-03-08 2012-09-12 英派尔科技开发有限公司 Alignment of objects in augmented reality
CN102737405A (en) * 2011-03-31 2012-10-17 索尼公司 Image processing apparatus, image processing method, and program
CN102968809A (en) * 2012-12-07 2013-03-13 成都理想境界科技有限公司 Method for realizing virtual information marking and drawing marking line in enhanced practical field

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU LIMEI et al.: "Research on the Development of Median Filtering Technology (中值滤波技术发展研究)", Journal of Yunnan Normal University (云南师范大学学报) *
LI HONGLIN et al.: "Application of Median Filtering Technology in Image Processing (中值滤波技术在图像处理中的应用)", Information Technology (信息技术) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113678786A (en) * 2021-08-19 2021-11-23 陆荣清 Ecological breeding method for improving disease resistance of live pigs

Similar Documents

Publication Publication Date Title
US11537894B2 (en) Fully convolutional interest point detection and description via homographic adaptation
US11341722B2 (en) Computer vision method and system
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
US9742994B2 (en) Content-aware wide-angle images
US6671400B1 (en) Panoramic image navigation system using neural network for correction of image distortion
CN103632366A (en) Parameter identification method for elliptical target
US20230186590A1 (en) Method for omnidirectional dense regression for machine perception tasks via distortion-free cnn and spherical self-attention
CN112396640A (en) Image registration method and device, electronic equipment and storage medium
CN115564969A (en) Panorama saliency prediction method, device and storage medium
Mei et al. Fast central catadioptric line extraction, estimation, tracking and structure from motion
KR102372298B1 (en) Method for acquiring distance to at least one object located in omni-direction of vehicle and vision device using the same
CN104851129B 3D reconstruction method based on multiple views
CN103970432B Method and apparatus for simulating a realistic page-turning effect
Kang et al. Detecting maritime obstacles using camera images
Wang et al. SS-INR: Spatial-spectral implicit neural representation network for hyperspectral and multispectral image fusion
Bergmann et al. Gravity alignment for single panorama depth inference
CN113327295A (en) Robot rapid grabbing method based on cascade full convolution neural network
CN104835199A (en) 3D reconstruction method based on augmented reality
EP4187483A1 (en) Apparatus and method with image processing
CN108537810B (en) Improved Zernike moment sub-pixel edge detection method
CN104835200A (en) Recognition image 3D reconstruction method based on technology of augmented reality
CN112734628B (en) Projection position calculation method and system for tracking point after three-dimensional conversion
CN105046679A (en) Method and apparatus for multi-band registration of remote sensing satellite image
CN111027389A (en) Training data generation method based on deformable Gaussian kernel in crowd counting system
CN111340695A (en) Super-resolution reconstruction method of dome screen video

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150812