CN109544694A - Deep-learning-based virtual-real hybrid modeling method for augmented reality systems - Google Patents

Deep-learning-based virtual-real hybrid modeling method for augmented reality systems. Download PDF

Info

Publication number
CN109544694A
Authority
CN
China
Prior art keywords
background
model
foreground
virtual-real
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811366602.3A
Other languages
Chinese (zh)
Inventor
罗志勇
夏文彬
王月
耿琦琦
杨美美
蔡婷
韩冷
郑焕平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201811366602.3A
Publication of CN109544694A
Legal status: Pending (current)

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20224 Image subtraction

Abstract

The present invention claims a deep-learning-based virtual-real hybrid modeling method for augmented reality systems. To address the virtual-real hybrid modeling problem, the method first extracts all regions in which the virtual model view and the real-object image of consecutive frames differ. The input images are first processed by the PBAS algorithm to segment the foreground targets; the candidate target regions obtained by the segmentation are then fed into a VGGNet-16 model for a secondary classification judgment, and the coordinates of the confirmed foreground images are output. Combining the model texture maps with the initial images yields the virtual-real hybrid model. The proposed virtual-real hybrid modeling scheme greatly reduces the overall computational load of the algorithm and thus its hardware requirements, while fully exploiting the high image-classification accuracy of the deep convolutional neural network VGGNet-16 to guarantee target-detection quality and effectively improve modeling accuracy.

Description

Deep-learning-based virtual-real hybrid modeling method for augmented reality systems
Technical field
The invention belongs to the field of augmented reality, and in particular relates to a deep-learning-based virtual-real hybrid modeling method for augmented reality systems.
Background technique
Augmented reality (AR) is an emerging technology that superimposes computer-generated two- or three-dimensional virtual objects onto a real scene in real time, and uses interaction techniques to enable interaction between the real scene and the virtual objects, giving users an audio-visual experience beyond reality; the added virtual digital information enhances the user's interactive experience of the real environment. The basic AR pipeline is as follows: first locate the camera pose in the real scene, then use computer graphics rendering to register virtual objects into the real scene and generate the fused virtual-real view. However, because a single-camera perspective cannot identify the depth relationships of the captured objects and optimize the composited image accordingly, the synthesized virtual-real model often looks unrealistic and the blending is coarse.
For the virtual-real hybrid modeling problem of augmented reality systems, existing depth registration methods cannot align moving targets over a sufficiently long time span, and long image sequences lead to large inter-frame background changes; methods such as frame differencing and Gaussian mixture models adapt poorly to large background changes, and the ViBe method uses a constant background-update threshold, making it ill-suited to virtual-real hybrid modeling in augmented reality systems. PBAS is an effective moving-target detection algorithm proposed in recent years; it builds on background modeling, adapts its background-update threshold and foreground-segmentation threshold to the background complexity, and has a certain robustness to illumination changes. A secondary judgment by a deep-learning classifier can further improve modeling accuracy. The present invention combines the advantages of these schemes and proposes a deep-learning-based virtual-real hybrid modeling method for augmented reality systems.
Summary of the invention
The present invention aims to solve the above problems of the prior art by proposing a deep-learning-based virtual-real hybrid modeling method for augmented reality systems that greatly reduces the overall computational load of the algorithm, thereby lowering its hardware requirements, while fully exploiting the high image-classification accuracy of the deep convolutional neural network VGGNet-16 to guarantee target-detection quality and effectively improve modeling accuracy. The technical scheme of the invention is as follows:
A deep-learning-based virtual-real hybrid modeling method for augmented reality systems, comprising the following steps:
1) Input the virtual model views and real-object images; based on prior knowledge of the targets, perform a preliminary screening of the virtual model views and real-object images of consecutive frames, discarding regions that are obviously false targets;
2) Detect the screened virtual model views and real-object images with the PBAS algorithm to segment the foreground targets and obtain candidate target regions; the PBAS algorithm fuses the background-modeling part of the SACON algorithm with the foreground-detection part of the ViBe algorithm;
3) Feed the candidate target regions obtained by the segmentation into a VGGNet-16 model for a secondary judgment, and output the coordinates of the confirmed foreground images;
4) Combine the model texture maps with the initial images to obtain the virtual-real hybrid model.
Further, step 1) performs a preliminary screening of the results based on prior knowledge of the targets, discarding obviously false targets.
Further, step 2), detecting with the PBAS algorithm to segment the foreground targets and obtain candidate target regions, specifically includes:
A1. Using a background-modeling method similar to the SACON algorithm, collect the pixels of the first N frames to build the background model;
A2. Under the background model of step A1, decide whether the current pixel belongs to the foreground or the background by comparing the current frame I(x_i) with the background model B(x_i): compute the color-space Euclidean distances between the sample values and the current pixel value; if the number of samples whose distance is below the distance threshold R(x_i) is smaller than S_dmin, the current pixel is judged a foreground point, otherwise a background point;
A3. Background-model update and background-complexity calculation;
A4. Adaptive adjustment of the foreground-segmentation threshold and the update strategy;
A5. Hole filling and non-target-region elimination.
Further, step A1 specifically includes: for each pixel, the background model is expressed as:
B(x_i) = {B_1(x_i), …, B_k(x_i), …, B_N(x_i)}
where x_i denotes a pixel of the i-th frame image, B(x_i) denotes the background model at the i-th frame, and B_k(x_i) denotes one sample pixel value in the background model B(x_i). For a color image, B_k(x_i) = (r_i, g_i, b_i), its value in RGB space; for a grayscale image it is the gray value.
Further, the foreground-detection result of step A2 is:
F(x_i) = 1, if #{k : dist(I(x_i), B_k(x_i)) < R(x_i)} < S_dmin; F(x_i) = 0 otherwise
where F(x_i) is the foreground label of pixel x_i: if the number of samples whose color-space Euclidean distance to the current pixel value is below the distance threshold R(x_i) is smaller than S_dmin, the pixel is a foreground point (value 1), otherwise a background point (value 0); dist denotes the Euclidean distance in color space between a pixel and the corresponding point of the background model.
Further, the model update and background-complexity calculation of step A3 specifically include:
During the background-model update, the sample to be replaced is selected at random, and the sample set of the pixel's neighborhood is also updated at random. Specifically, foreground regions are not updated; in background regions, with the current background-update probability 1/T(x_i), a sample pixel value B_k(x_i) of the background model is selected at random and replaced by the current pixel value I(x_i), so each background sample is replaced with probability 1/(N·T(x_i)). At the same time, within the neighborhood of the randomly selected x_i, a pixel y_i is selected at random and, in the same manner, its background sample B_k(y_i) is replaced by the current pixel value V(y_i);
When a sample is updated, the average of the minimum distances in the sample set is used as the measure of background complexity. The calculation is as follows: while building the background model B(x_i), a minimum-distance model D(x_i) is also built:
D(x_i) = {D_1(x_i), …, D_N(x_i)}
The current minimum distance is d_min(x_i) = min_k dist(I(x_i), B_k(x_i)). The minimum-distance model is built by the steps above, with the correspondence d_min(x_i) → D_k(x_i). The background complexity is then given by the mean of the minimum distances: d̄_min(x_i) = (1/N) Σ_k D_k(x_i), where N is the number of minimum-distance samples.
Further, the adaptive adjustment of the foreground-segmentation threshold and the update strategy of step A4 specifically include:
The segmentation threshold R(x_i) adapts to the background complexity:
R(x_i) = R(x_i)·(1 − R_inc/dec), if R(x_i) > d̄_min(x_i)·R_scale; R(x_i) = R(x_i)·(1 + R_inc/dec) otherwise
where R_inc/dec and R_scale are fixed constants.
The background-model update rate is also adapted: when the current pixel x_i is a background point, its background model is updated; if a neighborhood point y_i of x_i is a foreground pixel, its background model may likewise be updated. A parameter T(x_i) is introduced to dynamically control the speed of this process, so that the update rate rises when a pixel is judged background and falls when it is judged foreground. When the scene changes violently, the background complexity is high and foreground segmentation is more error-prone, so the rise or fall of the update rate should slow down; conversely, when the scene is stable, it should speed up. The update strategy is:
T(x_i) = T(x_i) + T_inc/d̄_min(x_i), if F(x_i) = 1; T(x_i) = T(x_i) − T_dec/d̄_min(x_i), if F(x_i) = 0
where F(x_i) is the foreground-detection result, and T_inc and T_dec are the increase and decrease amplitudes of the update rate.
Further, the hole filling and non-target-region elimination of step A5 specifically include:
First, holes are eliminated with a morphological opening operation;
The areas of the connected regions in the foreground image are computed, and regions smaller than 100 pixels are discarded;
The aspect ratio of the bounding rectangle of each remaining region is computed, and regions with an aspect ratio greater than 4:3 are discarded.
Further, step 3) sets the number of output-layer classes of the VGGNet-16 model to 2, leaving the rest of the network structure unchanged, i.e. it solves the two-class problem of distinguishing real pictures from model pictures. During fine-tuning, the whole adjusted convolutional neural network is initialized with the parameters of the original VGGNet-16 model trained on the ImageNet dataset, and the parameters are then fine-tuned with samples collected by the augmented reality system, yielding a new convolutional neural network for the secondary judgment. If the accuracy of the output foreground-image coordinates is below standard, the step is repeated; otherwise the coordinates of the confirmed foreground images are output, and the model texture maps are combined with the initial images to obtain the virtual-real hybrid model.
The advantages and beneficial effects of the present invention are as follows:
The purpose of the present invention is to provide a deep-learning-based virtual-real hybrid modeling method for augmented reality systems. To address the virtual-real hybrid modeling problem, the method first extracts all regions in which the virtual model view and the real-object image of consecutive frames differ; the input images are first processed by the PBAS algorithm to segment the foreground targets; the candidate target regions obtained by the segmentation are then fed into a VGGNet-16 model for a secondary judgment; the coordinates of the confirmed foreground images are output; and the model texture maps are combined with the initial images to obtain the virtual-real hybrid model. The proposed scheme greatly reduces the overall computational load of the algorithm, thereby lowering its hardware requirements, while fully exploiting the high image-classification accuracy of the deep convolutional neural network VGGNet-16 to guarantee target-detection quality and effectively improve modeling accuracy.
Detailed description of the invention
Fig. 1 is a flow diagram of the deep-learning-based virtual-real hybrid modeling method for augmented reality systems provided by a preferred embodiment of the present invention.
Fig. 2 is a schematic diagram of the preliminary detection based on the PBAS algorithm provided by the invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the drawings. The described embodiments are only some of the embodiments of the present invention.
The technical solution of the present invention for the above technical problem is:
A deep-learning-based virtual-real hybrid modeling method for augmented reality systems, mainly comprising the following steps:
1. Image input: a preliminary screening is performed on the results based on prior knowledge of the targets, discarding obviously false targets.
2. Preliminary detection based on the PBAS algorithm. Target detection uses the PBAS algorithm for its good overall performance: the background-modeling part of PBAS draws on the SACON algorithm and the foreground-detection part draws on the ViBe algorithm, so that the algorithm can adaptively change the update rate of the background model and the judgment threshold of the foreground segmentation according to the background complexity, adapting to scene changes.
1) PBAS uses a background-modeling method similar to the SACON algorithm: the pixels of the first N frames are collected for background modeling, and for each pixel the background model can be expressed as:
B(x_i) = {B_1(x_i), …, B_k(x_i), …, B_N(x_i)}
where x_i denotes a pixel of the i-th frame image, B(x_i) denotes the background model at the i-th frame, and B_k(x_i) denotes one sample pixel value in the background model B(x_i). For a color image, B_k(x_i) = (r_i, g_i, b_i), its value in RGB space; for a grayscale image it is the gray value.
2) model that previous step is established is a kind of background model based on sampling statistics, under such background model, Current pixel belongs to prospect or background can be by comparing present frame I (xi) and background model B (xi) determine.By comparing Pixel value in sample set and current frame pixel value color space Euclidean distance, if distance is less than distance threshold R (xi) Number of samples ratio SdminIt is few, then determine that current pixel point is otherwise background dot for foreground point.Foreground detection result:
3) Model update and background-complexity calculation.
a) During the background-model update, the sample to be replaced is selected at random, and the sample set of the pixel's neighborhood is also updated at random. Specifically, foreground regions are not updated; in background regions, with the current background-update probability 1/T(x_i), a sample pixel value B_k(x_i) of the background model is selected at random and replaced by the current pixel value I(x_i), so each background sample is replaced with probability 1/(N·T(x_i)). At the same time, within the neighborhood of the randomly selected x_i, a pixel y_i is selected at random and, in the same manner, its background sample B_k(y_i) is replaced by the current pixel value V(y_i).
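A minimal Python sketch of this random replacement scheme for one background-labelled pixel and its neighbour models (all names and the value of T(x_i) are illustrative assumptions; for simplicity the neighbour sample is overwritten with the same current value rather than the neighbour's own value):

```python
import random

def update_background(model, neighbor_models, pixel, t_xi, rng):
    """Conservative PBAS-style update: with probability 1/T(x_i),
    overwrite a random sample of the pixel's own model, and likewise
    a random sample of a randomly chosen neighbour model."""
    if rng.random() < 1.0 / t_xi:
        model[rng.randrange(len(model))] = pixel
        nb = rng.choice(neighbor_models)
        nb[rng.randrange(len(nb))] = pixel

rng = random.Random(0)                     # fixed seed for reproducibility
model = [100] * 8
neighbors = [[100] * 8 for _ in range(4)]
for _ in range(200):                       # keep observing a brighter value
    update_background(model, neighbors, 130, t_xi=2.0, rng=rng)
print(130 in model)                        # True: the model absorbed it
```

Because foreground pixels are never fed through this update, true targets are not absorbed into the background, while gradual background changes are.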
b) When a sample is updated, the average of the minimum distances in the sample set is used as the measure of background complexity. The calculation is as follows: while building the background model B(x_i), a minimum-distance model D(x_i) is also built:
D(x_i) = {D_1(x_i), …, D_N(x_i)}
The current minimum distance is d_min(x_i) = min_k dist(I(x_i), B_k(x_i)). The minimum-distance model is built by the steps above, with the correspondence d_min(x_i) → D_k(x_i). The background complexity is then given by the mean of the minimum distances: d̄_min(x_i) = (1/N) Σ_k D_k(x_i).
4) Adaptive adjustment of the foreground-segmentation threshold and the update strategy.
a) During the adjustment of the foreground-segmentation threshold: the more violently the scene changes, the higher the background complexity and the more easily background pixels are misjudged as foreground, so the segmentation threshold should increase to keep background pixels from being mistaken for foreground; conversely, the more stable the scene, the lower the background complexity and the smaller the threshold should be, to guarantee complete foreground segmentation. The specific adjustment strategy is:
R(x_i) = R(x_i)·(1 − R_inc/dec), if R(x_i) > d̄_min(x_i)·R_scale; R(x_i) = R(x_i)·(1 + R_inc/dec) otherwise
where R_inc/dec and R_scale are fixed constants.
b) Adaptive adjustment of the background-model update rate: when the current pixel x_i is a background point, its background model is updated; if a neighborhood point y_i of x_i is a foreground pixel, its background model may likewise be updated, which means the edges of long-static foreground regions are gradually absorbed into the background. The algorithm introduces a parameter T(x_i) to dynamically control the speed of this process, so that the update rate rises when a pixel is judged background and falls when it is judged foreground. When the scene changes violently, the background complexity is high and foreground segmentation is more error-prone, so the rise or fall of the update rate should slow down; conversely, when the scene is stable, it should speed up. The update strategy is:
T(x_i) = T(x_i) + T_inc/d̄_min(x_i), if F(x_i) = 1; T(x_i) = T(x_i) − T_dec/d̄_min(x_i), if F(x_i) = 0
where F(x_i) is the foreground-detection result, and T_inc and T_dec are the increase and decrease amplitudes of the update rate.
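The two adaptation rules can be sketched together in Python (the constants R_scale, R_inc/dec, T_inc, T_dec and the clamping bounds on T are illustrative assumptions, not values fixed by the patent):

```python
def update_r(r, d_min_avg, r_scale=5.0, r_inc_dec=0.05):
    """Adapt the segmentation threshold R(x_i): shrink it when it exceeds
    R_scale times the mean minimum decision distance, grow it otherwise."""
    if r > d_min_avg * r_scale:
        return r * (1.0 - r_inc_dec)
    return r * (1.0 + r_inc_dec)

def update_t(t, d_min_avg, is_foreground, t_inc=1.0, t_dec=0.05,
             t_lower=2.0, t_upper=200.0):
    """Adapt the update-rate parameter T(x_i): raise it (slower updates)
    on foreground, lower it (faster updates) on background; both changes
    are damped when the background is complex (large mean distance)."""
    t = t + t_inc / d_min_avg if is_foreground else t - t_dec / d_min_avg
    return min(max(t, t_lower), t_upper)   # keep T in a sane range

print(round(update_r(30.0, d_min_avg=4.0), 2))             # 28.5: 30 > 4*5, so shrink
print(update_t(20.0, d_min_avg=1.0, is_foreground=True))   # 21.0
print(update_t(20.0, d_min_avg=1.0, is_foreground=False))  # 19.95
```

Since the update probability is 1/T(x_i), a larger T means slower background absorption, which matches the behavior described above for foreground pixels.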
5) Hole filling and non-target-region elimination
After foreground segmentation there may be holes in the foreground regions, and the original detection results may themselves be incomplete, which affects detection accuracy. The number of regions fed into the convolutional neural network for the secondary judgment also needs to be reduced, to lower the overall computation. The segmented foreground regions are therefore processed as follows:
a) First, holes are eliminated with a morphological opening operation; the algorithm uses dilation and erosion with a width of 3 pixels;
b) The areas of the connected regions in the foreground image are computed, and regions smaller than 100 pixels are discarded;
c) The aspect ratio of the bounding rectangle of each remaining region is computed, and regions with an aspect ratio greater than 4:3 are discarded. The 3-pixel width, the 100-pixel area threshold, and the 4:3 aspect ratio in the above steps were obtained by repeated experiments.
3. Secondary classification judgment based on the deep-learning algorithm
The foreground-image coordinates screened by the above methods still contain a large amount of false data, and a further classification judgment by a convolutional neural network with higher classification accuracy is needed.
For transfer learning with the convolutional neural network, the present invention mainly fine-tunes all the parameters of the network, or the parameters of certain layers, modifies the number of output classes of the last layer, and fine-tunes the VGGNet-16 model with samples of the target scene.
The number of output-layer classes of the VGGNet-16 model is set to 2 and the rest of the network structure is left unchanged, i.e. the two-class problem of distinguishing real pictures from model pictures is solved. During fine-tuning, the whole adjusted convolutional neural network is initialized with the parameters of the original VGGNet-16 model trained on the ImageNet dataset, and the parameters are then fine-tuned with samples collected by the augmented reality system, yielding a new convolutional neural network for the secondary judgment.
4. If the accuracy of the output foreground-image coordinates is below standard, return to step 3; otherwise output the coordinates of the confirmed foreground images and combine the model texture maps with the initial images to obtain the virtual-real hybrid model.
Specifically, as shown in Fig. 1, the deep-learning-based virtual-real hybrid modeling method for augmented reality systems runs as follows:
Step 1, image input: a preliminary screening is performed based on prior knowledge of the targets, discarding obviously false targets.
Step 2, preliminary detection based on the PBAS algorithm, as shown in Fig. 2. Step 3, secondary classification judgment based on the deep-learning algorithm.
Step 4, if the accuracy of the output foreground-image coordinates is below standard, return to step 3; otherwise output the coordinates of the confirmed foreground images and combine the model texture maps with the initial images to obtain the virtual-real hybrid model.
1. The present invention addresses the virtual-real hybrid modeling problem of augmented reality systems. The method first extracts all regions in which the virtual model view and the real-object image of consecutive frames differ; the input images are first processed by the PBAS algorithm to segment the foreground targets; the candidate target regions obtained by the segmentation are then fed into a VGGNet-16 model for a secondary judgment; the coordinates of the confirmed foreground images are output; and the model texture maps are combined with the initial images to obtain the virtual-real hybrid model. The proposed scheme greatly reduces the overall computational load of the algorithm, thereby lowering its hardware requirements, while fully exploiting the high image-classification accuracy of the deep convolutional neural network VGGNet-16 to guarantee target-detection quality and effectively improve modeling accuracy.
2. Target detection uses the PBAS algorithm for its good overall performance: the background-modeling part of PBAS draws on the SACON algorithm and the foreground-detection part draws on the ViBe algorithm, so that the algorithm can adaptively change the update rate of the background model and the judgment threshold of the foreground segmentation according to the background complexity, adapting to scene changes. In particular, PBAS uses a background-modeling method similar to the SACON algorithm: the pixels of the first N frames are collected for background modeling, and for each pixel the background model can be expressed as:
B(x_i) = {B_1(x_i), …, B_k(x_i), …, B_N(x_i)}
3. The color-space Euclidean distances between the sample values and the current pixel value are compared; if the number of samples whose distance is below the distance threshold R(x_i) is smaller than S_dmin, the current pixel is judged a foreground point, otherwise a background point. The foreground-detection result is:
F(x_i) = 1, if #{k : dist(I(x_i), B_k(x_i)) < R(x_i)} < S_dmin; F(x_i) = 0 otherwise
4. Foreground regions are not updated; in background regions, with the current background-update probability 1/T(x_i), a sample pixel value B_k(x_i) of the background model is selected at random and replaced by the current pixel value I(x_i), so each background sample is replaced with probability 1/(N·T(x_i)). At the same time, within the neighborhood of the randomly selected x_i, a pixel y_i is selected at random and, in the same manner, its background sample B_k(y_i) is replaced by the current pixel value V(y_i).
5. While building the background model B(x_i), a minimum-distance model D(x_i) is also built:
D(x_i) = {D_1(x_i), …, D_N(x_i)}
6. The adaptive adjustment strategy of the foreground-segmentation threshold is:
R(x_i) = R(x_i)·(1 − R_inc/dec), if R(x_i) > d̄_min(x_i)·R_scale; R(x_i) = R(x_i)·(1 + R_inc/dec) otherwise
7. The update strategy is:
T(x_i) = T(x_i) + T_inc/d̄_min(x_i), if F(x_i) = 1; T(x_i) = T(x_i) − T_dec/d̄_min(x_i), if F(x_i) = 0
8. For hole filling and non-target-region elimination, the segmented foreground regions are processed as follows:
a) First, holes are eliminated with a morphological opening operation; the algorithm uses dilation and erosion with a width of 3 pixels;
b) The areas of the connected regions in the foreground image are computed, and regions smaller than 100 pixels are discarded;
c) The aspect ratio of the bounding rectangle of each remaining region is computed, and regions with an aspect ratio greater than 4:3 are discarded. The 3-pixel width, the 100-pixel area threshold, and the 4:3 aspect ratio in the above steps were obtained by repeated experiments.
9. The number of output-layer classes of the VGGNet-16 model is set to 2 and the rest of the network structure is left unchanged, solving the two-class problem of distinguishing real pictures from model pictures. During fine-tuning, the whole adjusted convolutional neural network is initialized with the parameters of the original VGGNet-16 model trained on the ImageNet dataset, and the parameters are then fine-tuned with samples collected by the augmented reality system, yielding a new convolutional neural network for the secondary judgment.
10. If the accuracy of the output foreground-image coordinates is below standard, return to the previous step; otherwise output the coordinates of the confirmed foreground images and combine the model texture maps with the initial images to obtain the virtual-real hybrid model. A good feedback-regulation effect is thus achieved.
The above embodiments should be understood as merely illustrating the present invention rather than limiting its scope. After reading the contents of the present invention, those skilled in the art can make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (9)

1. A deep-learning-based virtual-real hybrid modeling method for an augmented reality system, characterized by comprising the following steps:
1) Input the virtual model views and real-object images; based on prior knowledge of the targets, perform a preliminary screening of the virtual model views and real-object images of consecutive frames, discarding regions that are obviously false targets;
2) Detect the screened virtual model views and real-object images with the PBAS algorithm to segment the foreground targets and obtain candidate target regions; the PBAS algorithm fuses the background-modeling part of the SACON algorithm with the foreground-detection part of the ViBe algorithm;
3) Feed the candidate target regions obtained by the segmentation into a VGGNet-16 model for a secondary judgment, and output the coordinates of the confirmed foreground images;
4) Combine the model texture maps with the initial images to obtain the virtual-real hybrid model.
2. The deep-learning-based virtual-real hybrid modeling method for an augmented reality system according to claim 1, characterized in that step 1) performs a preliminary screening of the results based on prior knowledge of the targets, discarding obviously false targets.
3. The deep-learning-based virtual-real hybrid modeling method for an augmented reality system according to claim 1, characterized in that step 2), detecting with the PBAS algorithm to segment the foreground targets and obtain candidate target regions, specifically includes:
A1. Using a background-modeling method similar to the SACON algorithm, collect the pixels of the first N frames to build the background model;
A2. Under the background model of step A1, decide whether the current pixel belongs to the foreground or the background by comparing the current frame I(x_i) with the background model B(x_i): compute the color-space Euclidean distances between the sample values and the current pixel value; if the number of samples whose distance is below the distance threshold R(x_i) is smaller than S_dmin, the current pixel is judged a foreground point, otherwise a background point;
A3. Background-model update and background-complexity calculation;
A4. Adaptive adjustment of the foreground-segmentation threshold and the update strategy;
A5. Hole filling and non-target-region elimination.
4. The deep-learning-based virtual-real hybrid modeling method for an augmented reality system according to claim 3, characterized in that step A1 specifically comprises: for each pixel, the background model is expressed as:
B(xi) = {B1(xi), …, Bk(xi), …, BN(xi)}
where xi denotes a pixel of the i-th frame image, B(xi) denotes the background model at the i-th frame, and Bk(xi) denotes the k-th sample pixel value in the background model B(xi); for a color image, Bk(xi) = (ri, gi, bi), the corresponding value in RGB space; for a grayscale image, it is the gray value.
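A minimal NumPy sketch of this sample-based background model, assuming grayscale frames and an illustrative N = 5 (the function name is hypothetical):

```python
import numpy as np

def init_background_model(frames):
    # Stack the first N frames so that B[:, y, x] holds the N sample
    # values B_1(x_i)..B_N(x_i) for pixel x_i; works for grayscale
    # (H, W) frames and color (H, W, 3) frames alike.
    return np.stack(frames, axis=0)

# Example: N = 5 grayscale frames of a nearly static 8x8 scene.
rng = np.random.default_rng(0)
frames = [rng.integers(90, 110, (8, 8), dtype=np.uint8) for _ in range(5)]
B = init_background_model(frames)
```

For a color sequence the same call yields a (N, H, W, 3) array of RGB samples.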
5. The deep-learning-based virtual-real hybrid modeling method for an augmented reality system according to claim 4, characterized in that the foreground detection result of step A2 is:
F(xi) = 1 if #{k : dist(I(xi), Bk(xi)) < R(xi)} < Sdmin, and F(xi) = 0 otherwise
where F(xi) is the foreground label of pixel xi: if the number of samples whose distance to the current frame pixel value in color space is less than the distance threshold R(xi) is smaller than the minimum sample count Sdmin, the pixel is a foreground point and the value is 1; otherwise it is a background point and the value is 0; dist denotes the Euclidean distance in color space between a pixel and its corresponding point in the background model.
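The counting rule above can be sketched for a single grayscale pixel as follows; Sdmin = 2 and the scalar absolute-difference distance are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def classify_pixel(I_xi, B_xi, R_xi, s_dmin=2):
    # dist: Euclidean distance in color space; for a grayscale pixel this
    # reduces to the absolute difference to each background sample.
    dists = np.abs(B_xi.astype(float) - float(I_xi))
    close = np.count_nonzero(dists < R_xi)
    # Fewer than s_dmin close samples -> foreground (F = 1),
    # otherwise background (F = 0).
    return 1 if close < s_dmin else 0
```

For RGB pixels, `dists` would instead be the per-sample Euclidean norm over the three channels.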
6. The deep-learning-based virtual-real hybrid modeling method for an augmented reality system according to claim 5, characterized in that the model update and the background complexity calculation of step A3 specifically comprise:
In the background model update process, the sample to be replaced is selected at random, and the sample set of a randomly chosen pixel in the neighborhood is also updated. Specifically, foreground regions are not updated; in background regions, with the current background update probability 1/T(xi), a sample pixel value Bk(xi) is randomly selected from the background model and replaced with the current pixel value I(xi), so that each individual background sample is replaced with probability 1/(N·T(xi)). At the same time, within the randomly selected neighborhood of xi, a pixel yi is randomly chosen, and in the same manner a background sample Bk(yi) is replaced with the current pixel value V(yi);
The average of the minimum distances observed at sample-update time is used as the measure of the background complexity. The background complexity is calculated as follows: while building the background model B(xi), a minimum distance model D(xi) is also constructed:
D(xi) = {D1(xi), …, DN(xi)}
The current minimum distance value is dmin(xi) = mink dist(I(xi), Bk(xi)); following the steps above, the minimum distance model is filled through the correspondence dmin(xi) → Dk(xi). The background complexity is then given by the mean of the minimum distances: d̄min(xi) = (1/N) Σk Dk(xi), where N is the number of minimum distance samples.
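A sketch of the random sample replacement and minimum-distance bookkeeping described above, assuming an update probability of 1/T(xi) per background pixel and an 8-neighbourhood; the helper name and array layout are illustrative:

```python
import numpy as np

def update_background(B, D, frame, fg_mask, T, rng):
    # B: (N, H, W) background samples; D: (N, H, W) minimum-distance
    # samples; T: (H, W) per-pixel update-rate parameter.
    N, H, W = B.shape
    for y in range(H):
        for x in range(W):
            if fg_mask[y, x]:
                continue  # foreground pixels are not updated
            if rng.random() < 1.0 / T[y, x]:
                k = int(rng.integers(N))
                # Record the current minimum distance before replacing.
                D[k, y, x] = np.min(
                    np.abs(B[:, y, x].astype(float) - float(frame[y, x])))
                B[k, y, x] = frame[y, x]
                # Also refresh one random sample of a random neighbour.
                ny = min(max(y + int(rng.integers(-1, 2)), 0), H - 1)
                nx = min(max(x + int(rng.integers(-1, 2)), 0), W - 1)
                B[int(rng.integers(N)), ny, nx] = frame[ny, nx]
    # Background complexity: mean of the stored minimum distances.
    return D.mean(axis=0)
```

The returned (H, W) map is the complexity estimate d̄min used by the adaptive thresholding of the next claim.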
7. The deep-learning-based virtual-real hybrid modeling method for an augmented reality system according to claim 6, characterized in that the adaptive adjustment of the foreground segmentation threshold and of the update strategy in step A4 specifically comprises:
R(xi) = R(xi)·(1 − Rinc/dec) if R(xi) > d̄min(xi)·Rscale, otherwise R(xi)·(1 + Rinc/dec), where R(xi) is the distance threshold used by the foreground detection, and Rinc/dec and Rscale are fixed constants;
Adaptive adjustment of the background model update rate: when the current pixel xi is a background point, its corresponding background model is updated; if a neighborhood point yi of xi is a foreground pixel, an update of the background model can likewise occur. A parameter T(xi) is introduced to dynamically control the speed of this process, so that when a pixel is judged as background the update rate increases, and when it is judged as foreground the update rate decreases. When the scene changes violently, the background complexity is relatively high and the foreground segmentation is more prone to misjudgment; in that case the raising or lowering of the update rate should be suitably slowed. Conversely, when the scene is stable, the raising or lowering of the update rate should be suitably accelerated. The update strategy is as follows:
T(xi) = T(xi) + Tinc/d̄min(xi) if F(xi) = 1, and T(xi) = T(xi) − Tdec/d̄min(xi) if F(xi) = 0, where F(xi) is the foreground detection result, and Tinc and Tdec denote the amplitudes by which the update rate parameter is raised or lowered.
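The claim names the constants Rinc/dec, Rscale, Tinc and Tdec; the sketch below follows the widely published PBAS update rules, which match the qualitative behaviour described above. The default constant values and the clamping range for T are illustrative assumptions:

```python
def update_R(R_xi, dmin_mean, R_inc_dec=0.05, R_scale=5.0):
    # PBAS-style threshold adaptation: shrink R when it exceeds
    # R_scale times the background complexity estimate, grow it otherwise.
    if R_xi > dmin_mean * R_scale:
        return R_xi * (1.0 - R_inc_dec)
    return R_xi * (1.0 + R_inc_dec)

def update_T(T_xi, F_xi, dmin_mean, T_inc=1.0, T_dec=0.05,
             T_lower=2.0, T_upper=200.0):
    # Update-rate adaptation: raise T (slower model updates) on
    # foreground, lower T (faster updates) on background, with the
    # step size damped by the background complexity.
    if F_xi == 1:
        T_xi += T_inc / max(dmin_mean, 1e-6)
    else:
        T_xi -= T_dec / max(dmin_mean, 1e-6)
    return min(max(T_xi, T_lower), T_upper)
```

Note how a high d̄min shrinks both step sizes, realising the "slow down in complex scenes" behaviour the claim describes.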
8. The deep-learning-based virtual-real hybrid modeling method for an augmented reality system according to claim 7, characterized in that the hole filling and non-target region removal of step A5 specifically comprise:
First, holes are eliminated using a morphological opening operation;
the areas of the connected regions in the foreground image are extracted, and regions with a pixel area of less than 100 are discarded;
the aspect ratio of the bounding rectangle of each remaining region is computed, and regions with an aspect ratio greater than 4:3 are discarded.
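The three-stage cleanup can be sketched with scipy.ndimage; the 3x3 structuring element and the max/min convention for the aspect ratio are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def clean_foreground(mask, min_area=100, max_aspect=4.0 / 3.0):
    # Stage 1: morphological opening removes speckle noise and holes.
    opened = ndimage.binary_opening(mask, structure=np.ones((3, 3), bool))
    # Stage 2 and 3: label connected regions, then filter by pixel area
    # and by bounding-rectangle aspect ratio.
    labels, n = ndimage.label(opened)
    out = np.zeros_like(opened)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        region = labels[sl] == i
        area = int(region.sum())
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        aspect = max(h, w) / min(h, w)
        # Keep only regions that are large enough and not too elongated.
        if area >= min_area and aspect <= max_aspect:
            out[sl][region] = True
    return out
```

An OpenCV version with `cv2.morphologyEx` and `cv2.connectedComponentsWithStats` would be equivalent.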
9. The deep-learning-based virtual-real hybrid modeling method for an augmented reality system according to claim 6, characterized in that step 3) sets the number of output-layer classification categories of the VGGNet-16 model to 2 and keeps the rest of the network structure unchanged, i.e., it solves the two-class classification problem of real pictures versus model pictures. In the fine-tuning process, the entire convolutional neural network to be adjusted is initialized with the parameters of the original VGGNet-16 network model trained on the ImageNet dataset, and the parameters are then fine-tuned with the samples collected by the augmented reality system, yielding a new convolutional neural network for the secondary judgement. If the precision of the output foreground image coordinates is below standard, return; otherwise, output the judged foreground image coordinates, and combine the model texture with the initial image to obtain the virtual-real mixed model result.
CN201811366602.3A 2018-11-16 2018-11-16 A kind of augmented reality system actual situation hybrid modeling method based on deep learning Pending CN109544694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811366602.3A CN109544694A (en) 2018-11-16 2018-11-16 A kind of augmented reality system actual situation hybrid modeling method based on deep learning

Publications (1)

Publication Number Publication Date
CN109544694A true CN109544694A (en) 2019-03-29

Family

ID=65848028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811366602.3A Pending CN109544694A (en) 2018-11-16 2018-11-16 A kind of augmented reality system actual situation hybrid modeling method based on deep learning

Country Status (1)

Country Link
CN (1) CN109544694A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503664A (en) * 2019-08-07 2019-11-26 江苏大学 One kind being based on improved local auto-adaptive sensitivity background modeling method
CN110888535A (en) * 2019-12-05 2020-03-17 上海工程技术大学 AR system capable of improving on-site reality
CN111178291A (en) * 2019-12-31 2020-05-19 北京筑梦园科技有限公司 Parking payment system and parking payment method
CN112003999A (en) * 2020-09-15 2020-11-27 东北大学 Three-dimensional virtual reality synthesis algorithm based on Unity 3D
CN112101232A (en) * 2020-09-16 2020-12-18 国网上海市电力公司 Flame detection method based on multiple classifiers
CN114327341A (en) * 2021-12-31 2022-04-12 江苏龙冠影视文化科技有限公司 Remote interactive virtual display system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020084974A1 (en) * 1997-09-01 2002-07-04 Toshikazu Ohshima Apparatus for presenting mixed reality shared among operators
GB0818561D0 (en) * 2008-10-09 2008-11-19 Isis Innovation Visual tracking of objects in images, and segmentation of images
WO2015144209A1 (en) * 2014-03-25 2015-10-01 Metaio Gmbh Method and system for representing a virtual object in a view of a real environment
US20170361216A1 (en) * 2015-03-26 2017-12-21 Bei Jing Xiao Xiao Niu Creative Technologies Ltd. Method and system incorporating real environment for virtuality and reality combined interaction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wan Jian et al., "Background modeling with adaptive neighborhood correlation", Journal of Image and Graphics *
Hou Chang et al., "Moving object detection algorithm based on a deep encoder-decoder network", Computer Systems & Applications *
Yan Chunjiang et al., "Deep-learning-based intrusion detection of engineering vehicles on transmission lines", Information Technology *

Similar Documents

Publication Publication Date Title
CN109544694A (en) A kind of augmented reality system actual situation hybrid modeling method based on deep learning
CN109460754B (en) A kind of water surface foreign matter detecting method, device, equipment and storage medium
CN104392468B (en) Based on the moving target detecting method for improving visual background extraction
CN103577793B (en) Gesture identification method and device
CN110119728A (en) Remote sensing images cloud detection method of optic based on Multiscale Fusion semantic segmentation network
CN105894484B (en) A kind of HDR algorithm for reconstructing normalized based on histogram with super-pixel segmentation
CN110276264B (en) Crowd density estimation method based on foreground segmentation graph
CN111539273A (en) Traffic video background modeling method and system
CN111553837B (en) Artistic text image generation method based on neural style migration
CN108364272A (en) A kind of high-performance Infrared-Visible fusion detection method
CN110826389B (en) Gait recognition method based on attention 3D frequency convolution neural network
CN101371273A (en) Video sequence partition
CN111161313B (en) Multi-target tracking method and device in video stream
CN111626993A (en) Image automatic detection counting method and system based on embedded FEFnet network
CN107730526A (en) A kind of statistical method of the number of fish school
CN111462027B (en) Multi-focus image fusion method based on multi-scale gradient and matting
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN108564120A (en) Feature Points Extraction based on deep neural network
CN112686276A (en) Flame detection method based on improved RetinaNet network
CN107578039A (en) Writing profile comparison method based on digital image processing techniques
Liu et al. Image edge recognition of virtual reality scene based on multi-operator dynamic weight detection
CN114742758A (en) Cell nucleus classification method in full-field digital slice histopathology picture
CN115393734A (en) SAR image ship contour extraction method based on fast R-CNN and CV model combined method
CN102074017A (en) Method and device for detecting and tracking barbell central point
CN110633727A (en) Deep neural network ship target fine-grained identification method based on selective search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination