CN109978987A - Control method, apparatus and system for constructing a panorama based on multiple depth cameras
- Publication number
- CN109978987A (application CN201711466220.3A / CN201711466220A)
- Authority
- CN
- China
- Prior art keywords
- camera
- camera data
- data
- depth
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Abstract
The present invention provides a control method and device for constructing a panorama based on multiple depth cameras, for realizing real-time panoramic scanning, comprising: a. obtaining all camera data S within the unit time t of a single detection, based on N depth cameras; b. determining the camera data sets S_k at multiple moments t_k, based on the whole camera data S and the unit time t of the single detection; c. determining the final camera data of the single detection based on the camera data sets S_k and a matching algorithm; d. determining the panoramic modeling image of the single detection based on the final camera data; e. repeating steps a to d until the final panoramic modeling image is determined. Using 3 or more depth cameras, the invention scans the surrounding environment in real time, obtains three-dimensional information about the surroundings and outputs it to a CPU for 3D modeling; the carrier may be a handheld device, a robot, etc. The invention is simple in structure, easy to use and practical, and has high commercial value.
Description
Technical field
The invention belongs to the field of computer vision, and in particular relates to a control method, apparatus and system for constructing a panorama based on multiple depth cameras.
Background technique
With the continuous progress of the times and the development of society, mankind's interest in exploring nature keeps growing, as does the demand for mineral mining. To avoid accidents, and to better understand the internal structure of a mountain, people need to use depth cameras to build a three-dimensional model of the mountain interior, so that the situation inside the whole mountain can be displayed in three-dimensional space. This gives people a more intuitive understanding of the information inside the mountain and helps avoid unexpected accidents.
At present, depth cameras are applied over a fairly wide range. The main driver of their development is the currently popular somatosensory gaming, which gives users a better experience. As the technology progresses, depth cameras also find novel applications in virtual reality: some boutiques apply this virtual technology to fitting rooms, so that customers can see the true effect of clothes without trying them on. In engineering, the applications mainly include the reconstruction of small-scale scenes and the 3D scanning and printing of objects. However, in the field of cave or tunnel detection there is so far no control method and device for constructing a panorama based on multiple depth cameras.
Summary of the invention
In view of the technical deficiencies of the prior art, the object of the present invention is to provide a control method and device for constructing a panorama based on multiple depth cameras. According to one aspect of the invention, a control method for constructing a panorama based on multiple depth cameras, for realizing real-time panoramic scanning, comprises the following steps:
a. obtaining all camera data S within the unit time t of a single detection, based on N depth cameras;
b. determining the camera data sets S_k at multiple moments t_k, based on the whole camera data S and the unit time t of the single detection;
c. determining the final camera data of the single detection based on the camera data sets S_k and a matching algorithm;
d. determining the panoramic modeling image of the single detection based on the final camera data of the single detection;
e. repeating steps a to d until the final panoramic modeling image is determined.
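The detect-group-fuse-repeat structure of steps a to e can be sketched as follows. This is an illustrative outline only, not the patent's implementation; all function bodies are stand-in stubs, and the data items are synthetic placeholders rather than real depth frames.

```python
# Sketch of the loop in steps a-e: each pass collects frames from the N
# cameras over the unit time t, groups them by sub-moment t_k, matches and
# fuses them, then repeats until the panorama stops gaining new data.

def collect_all_data(num_cameras, unit_time):
    """Step a: gather every frame captured by the N cameras during t (stub)."""
    return [{"camera": c, "t": s}
            for c in range(num_cameras) for s in range(unit_time)]

def group_by_moment(all_data, unit_time):
    """Step b: split the whole capture S into per-moment sets S_k."""
    return {k: [f for f in all_data if f["t"] == k] for k in range(unit_time)}

def match_and_fuse(moment_sets):
    """Step c: matching + fusion, reduced here to a set of (camera, t) pairs."""
    return {(f["camera"], f["t"])
            for frames in moment_sets.values() for f in frames}

def scan_until_stable(num_cameras=3, unit_time=4, max_passes=10):
    """Steps d-e: repeat single detections until no new data appears."""
    model = set()
    for _ in range(max_passes):
        data = collect_all_data(num_cameras, unit_time)
        fused = match_and_fuse(group_by_moment(data, unit_time))
        if fused <= model:          # no new detection data -> model is final
            return model
        model |= fused              # step d: merge this pass into the model
    return model

final_model = scan_until_stable()
```

With 3 cameras and 4 time slices the loop converges on the second pass, once a detection contributes nothing new.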
Preferably, before step a, the method includes step i: determining the timing between the N depth cameras based on the position information of the N depth cameras and the number N of depth cameras.
Preferably, before step i, the method further includes step ii: creating an initial coordinate-system base map.
Preferably, step a includes the following steps:
a1. obtaining the camera data S_N of each depth camera;
a2. performing a selection operation on the pixel coordinates and phase of the camera data S_N of each depth camera, to obtain the processed, corrected camera data S_N′ of each depth camera;
a3. determining the whole camera data S based on the corrected camera data S_N′ of each depth camera.
Preferably, step c includes the following steps:
c1: judging, based on the camera data sets S_k, whether there are missing pixels in the continuous whole image data; if so, executing step c2, and if not, executing step c3;
c2: drawing up virtual pixels based on the missing pixels, and filling the virtual pixels into the camera data sets S_k;
c3: matching the camera data sets S_k into the continuous whole image data;
c4: performing image fusion and edge smoothing on the matched image data, to obtain the final camera data.
Preferably, step d includes the following step d1: determining the panoramic modeling image of the single detection based on the final camera data of the single detection and a three-dimensional modeling module.
Preferably, the number N of depth cameras is any one of the following:
3;
4; or
5.
According to another aspect of the present invention, a control device for constructing a panorama based on multiple depth cameras, for realizing real-time panoramic scanning, comprises:
a first acquisition module 1, for obtaining the whole camera data S within the unit time t;
a first determining module 2, for determining the camera data sets S_k at multiple moments t_k;
a second determining module 3, for determining the final camera data;
a third determining module 4, for determining the panoramic modeling image.
Preferably, the device further includes:
a fourth determining module 5, for determining the timing between the N depth cameras.
Preferably, the first acquisition module includes:
a second acquisition module 11, for obtaining the camera data S_N of each depth camera;
a third acquisition module 12, for obtaining the processed, corrected camera data S_N′ of each depth camera;
a fourth determining module 13, for determining the whole camera data S.
Preferably, the second determining module further includes:
a first judgment module 31, for judging, based on the camera data sets S_k, whether there are missing pixels in the continuous whole image data;
a first processing module 32, for drawing up virtual pixels based on the missing pixels and filling the virtual pixels into the camera data sets S_k;
a second processing module 33, for matching the camera data sets S_k into the continuous whole image data;
a third processing module 34, for performing image fusion and edge smoothing on the matched image data, to obtain the final camera data.
According to yet another aspect of the present invention, a control system for constructing a panorama based on multiple depth cameras, for realizing real-time panoramic scanning, comprises:
N depth cameras, for obtaining depth camera data;
a control centre, for receiving the depth camera data, processing the data and transmitting the outbound data;
a modeling module, for receiving the data processed by the control centre and performing modeling based on the processed data.
The present invention provides a control method and device for constructing a panorama based on multiple depth cameras. By using 3 or more depth cameras, the invention scans the surrounding environment in real time to obtain the three-dimensional information of the environment and outputs it to a CPU for 3D modeling. It can be applied in any field that requires 3D modeling to obtain a true three-dimensional view of the external environment, and the carrier may be a handheld device, a robot, etc. The invention is simple in structure, easy to use and practical, and has high commercial value.
Description of the drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the detailed description of the non-limiting embodiments with reference to the following drawings:
Fig. 1 shows a specific embodiment of the invention: a flow chart of a control method for constructing a panorama based on multiple depth cameras;
Fig. 2 shows the first embodiment of the invention: a flow chart of creating a base map and filling it by scanning with multiple depth cameras, to construct the final panoramic modeling image;
Fig. 3 shows the second embodiment of the invention: a flow chart of obtaining the whole camera data S within the unit time t of a single detection, based on N depth cameras;
Fig. 4 shows the third embodiment of the invention: a flow chart of determining the final camera data of a single detection based on the camera data sets S_k and a matching algorithm;
Fig. 5 shows another embodiment of the invention: a module connection diagram of a control device for constructing a panorama based on multiple depth cameras; and
Fig. 6 shows the fourth embodiment of the invention: a process topology diagram for constructing a panorama based on multiple depth cameras.
Specific embodiments
In order to present the technical solution of the present invention more clearly, the invention is further explained below with reference to the accompanying drawings.
Fig. 1 shows a specific embodiment of the invention: a flow chart of a control method for constructing a panorama based on multiple depth cameras, which specifically includes the following steps:
First, in step S101, the whole camera data S within the unit time t of a single detection are obtained based on N depth cameras, where N is the number of depth cameras, including but not limited to 3, 4 or 5. When the number of depth cameras is 3, the cameras should be oriented toward the left, front and right, or toward the front, back and top; when the number is 4, toward the left, right, front and back, or toward the front, top, left and right; and when the number is 5, toward the left, right, front, back and top. Here t represents the total time unit of a single detection, and S denotes all camera data collected by the N cameras within the time unit t.
Then, in step S102, the camera data sets S_k at multiple moments t_k are determined based on the whole camera data S and the unit time t of the single detection. After the whole camera data S and the total duration t of the single detection are obtained, the detection time unit t can be divided equally into multiple moments t_k, and the camera data set S_k within each moment t_k collected. The division can also be made by detection distance, taking the detection time within each distance segment as a t_k and then collecting the camera data set S_k within that moment; likewise, the multiple moments t_k can be determined by the complexity of the detected environment.
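The two division rules named in step S102 can be sketched as follows. This is an assumed illustration, not taken from the patent; the constant-speed assumption in the distance-based variant is mine.

```python
# Minimal sketch of choosing the moments t_k: equal division of the unit
# time t, or one t_k per detection-distance segment (assuming the scanner
# moves at constant speed, so each segment maps to a fixed time span).
from math import ceil

def moments_by_time(t, k):
    """Divide the detection time t into k equal sub-intervals [start, end)."""
    step = t / k
    return [(i * step, (i + 1) * step) for i in range(k)]

def moments_by_distance(total_distance, segment_length, speed):
    """Assign one t_k per distance segment; the last segment may be partial."""
    n = ceil(total_distance / segment_length)
    seg_time = segment_length / speed
    return [(i * seg_time, (i + 1) * seg_time) for i in range(n)]
```

The environment-complexity variant mentioned in the text would simply replace the uniform `step` with per-interval lengths chosen by some complexity measure.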
Next, in step S103, the final camera data of the single detection are determined based on the camera data sets S_k and a matching algorithm. The depth camera data sets within the multiple moments t_k are turned into the final camera data of the single detection by a matching algorithm that identifies corresponding points between two or more images. For example, in two-dimensional image matching, the correlation coefficients of equal-size windows in the target area and the search area are compared, and the window centre with the largest correlation coefficient in the search area is taken as the corresponding point.
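The window-correlation matching described above can be sketched directly: slide a fixed-size window over the search area, compute the normalized correlation coefficient at each offset, and keep the offset with the largest coefficient. This is a generic illustration of the technique named in the text, not the patent's specific algorithm.

```python
# Corresponding-point search by comparing correlation coefficients of
# equal-size windows, taking the maximum-correlation position as the match.
from math import sqrt

def ncc(a, b):
    """Normalized cross-correlation of two equal-size windows (lists of rows)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    da = [v - ma for v in fa]
    db = [v - mb for v in fb]
    denom = sqrt(sum(v * v for v in da) * sum(v * v for v in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0

def best_match(window, image):
    """Slide the window over the search area; return the (row, col) offset
    with the highest correlation coefficient, i.e. the matched point."""
    th, tw = len(window), len(window[0])
    sh, sw = len(image), len(image[0])
    best, pos = -2.0, (0, 0)
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            sub = [row[x:x + tw] for row in image[y:y + th]]
            c = ncc(window, sub)
            if c > best:
                best, pos = c, (y, x)
    return pos, best
```

In practice a library routine such as OpenCV's normalized template matching plays this role; the exhaustive double loop here is only for clarity.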
Subsequently, in step S104, the panoramic modeling image of the single detection is determined based on the final camera data of the single detection. The final camera data determined from the camera data sets S_k and the matching algorithm are transmitted to the control centre; after the control centre has collected the camera data sets S_k of all moments t_k, all collected data are used by the modeling module to perform panoramic modeling.
Finally, in step S105, steps S101 to S104 are repeated until the final panoramic modeling image is determined. When detecting indoors or in a cave, interference from light, obstacles or dust may occur, so a single detection cannot be guaranteed to capture complete information. After the first detection, the model shows a preliminary result of the indoor environment; based on the second, third or further repeated detections, new data are continuously collected to improve the modeled image, until the data of the last detection fit the modeling data perfectly. When no new detection data appear, the final panoramic modeling image can be determined.
Fig. 2 shows the first embodiment of the present invention: a flow chart of creating a base map and filling it by scanning with multiple depth cameras, to construct the final panoramic modeling image, which specifically includes the following steps:
First, in step S201, an initial coordinate-system base map is created. Before the indoor or cave detection is carried out, a blank base map is pre-created in the modeling system; the data returned by the multiple depth cameras are modeled on the base map, and the panoramic modeling image on the base map is continuously improved by the results of multiple detections.
Then, in step S202, the timing between the N depth cameras is determined based on the position information of the N depth cameras and the number N of depth cameras. In order that adjacent depth cameras do not interfere with each other during detection shooting, the shooting timing of each camera needs to be set according to the number and placement of the depth cameras. For example, if there are 3 depth cameras facing left, front and right, the shooting timing of the left and right depth cameras can be synchronized, but should be staggered against the front camera. As another example, if there are 5 depth cameras facing front, back, left, right and up, the shooting timing of the front- and back-facing cameras should be staggered against the left- and right-facing ones, while the upward-facing camera does not interfere with the other depth cameras and its shooting timing can be set according to the shooting demand.
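The staggering rule can be expressed as a slot assignment: cameras whose fields of view can interfere go in different time slots, cameras that cannot interfere share a slot. The grouping below mirrors the two examples in the text (3-camera and 5-camera rigs); the concrete slot numbers and the choice to fire the independent upward camera with the first group are illustrative assumptions.

```python
# Assumed sketch of the S202 timing rule: left/right share a slot,
# front/back share a different slot, and an upward-facing camera (which
# interferes with nothing) is placed arbitrarily in slot 0.

def shooting_schedule(orientations):
    """Return {orientation: slot} under the interference groupings above."""
    slots = {}
    for o in orientations:
        if o in ("left", "right"):
            slots[o] = 0           # side pair: may fire simultaneously
        elif o in ("front", "back"):
            slots[o] = 1           # staggered against the side pair
        else:
            slots[o] = 0           # e.g. 'up': no mutual interference
    return slots
```

A real rig would derive the groups from camera positions and fields of view rather than from orientation names.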
It will be appreciated by those skilled in the art that steps S203 to S206 in this embodiment can refer to steps S101 to S104 in Fig. 1, and step S208 can refer to step S105 in Fig. 1; they are not described again here.
Finally, in step S207, the panoramic modeling image of the single detection is determined based on the final camera data of the single detection and a three-dimensional modeling module. The final data of each depth camera are converted to obtain the modeled image of each camera's shooting direction; the data of the cameras of all directions are then merged by the three-dimensional modeling module, so that the panoramic modeling image of the complete single detection is obtained.
Fig. 3 shows the second embodiment of the present invention: a flow chart of obtaining the whole camera data S within the unit time t of a single detection, based on N depth cameras. This figure details the sub-steps of step S101 in Fig. 1, as follows:
First, in step S1011, the camera data S_N of each depth camera are obtained. According to the number of depth cameras and the shooting direction of each camera, the shooting data of each depth camera are extracted separately and transmitted to the control centre for classified processing.
Then, in step S1012, a selection operation is performed on the pixel coordinates and phase of the camera data S_N of each depth camera, to obtain the processed, corrected camera data S_N′ of each camera. Since the detection shooting is constantly moving, the images shot by each depth camera also change continuously with the movement of position. From the continuous shooting of a depth camera, the pixel coordinates of two or more adjacent pictures are obtained and the similarity of their shot content is found, so that the shot object of each coordinate point is identified. A phase-based selection operation is then applied to every picture, yielding the more accurate corrected camera data S_N′.
Finally, in step S1013, the whole camera data S are determined based on the corrected camera data S_N′ of each depth camera. The corrected camera data S_N′ obtained in step S1012 are arranged and merged; for example, when the data acquired by two adjacent cameras contain overlapping images, the image with the higher pixel count should be taken as the data acquisition target.
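The overlap rule at the end of S1013 — keep the higher-resolution capture when two adjacent cameras saw the same region — can be sketched as a simple reduction. The frame representation (dicts with `region`, `pixels`, `camera` keys) is a hypothetical stand-in for real image metadata.

```python
# Assumed sketch of the S1013 merge rule: one entry per region, preferring
# the capture with the higher pixel count when regions overlap.

def merge_corrected(frames):
    """frames: list of dicts {'region': id, 'pixels': count, 'camera': n}.
    Returns {region: winning frame}."""
    merged = {}
    for f in frames:
        r = f["region"]
        if r not in merged or f["pixels"] > merged[r]["pixels"]:
            merged[r] = f          # higher-resolution capture wins
    return merged
```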
Fig. 4 shows the third embodiment of the present invention: a flow chart of determining the final camera data of a single detection based on the camera data sets S_k and a matching algorithm. This figure details the sub-steps of step S103 in Fig. 1, as follows:
First, in step S1031, it is judged, based on the camera data sets S_k, whether there are missing pixels in the continuous whole image data; if so, step S1032 is executed, and if not, step S1033. This judgment step determines whether the depth cameras missed any pixels during continuous shooting: under the interference of dust, bats, spiders or other obstacles, the shot images may all have missing pixels.
Then, in step S1032, virtual pixels are drawn up based on the missing pixels and filled into the camera data sets S_k. For example, when missing pixels exist in a detection shot, they can be filled by drawing up virtual pixels; alternatively, one or more adjacent pictures can be examined to find the true image of the missing pixels, which are then filled in.
In a preferred embodiment, a bat suddenly flies past during detection. Because of the bat's passage, several of the shot pictures have missing pixels; but since the bat keeps moving, the missing pixels can be restored from adjacent pictures, or the affected pictures can restore each other's missing pixels.
In another preferred embodiment, a large amount of dust covering a huge area is suddenly encountered during detection. When several pictures all have missing pixels that cannot be restored from adjacent pictures or from each other, virtual pixels can be drawn up to fill them, and the missing pixels are then restored in the second or subsequent detections.
Subsequently, in step S1033, the camera data sets S_k are matched into the continuous whole image data. Each camera data set S_k is the set of camera data within an acquisition moment t_k; the camera data sets S_k of all acquisition moments t_k within the detection time t are merged and matched into the complete image data.
Finally, in step S1034, image fusion and edge smoothing are performed on the matched image data to obtain the final camera data. After the system receives the image data, it first pre-processes the images according to a specific algorithm, then matches them according to the amplitude and phase information of each pixel; after matching, operations such as image fusion and edge smoothing are carried out to obtain the final camera data, which serve as the basis for modeling.
Fig. 5 shows another embodiment of the present invention: a module connection diagram of a control device for constructing a panorama based on multiple depth cameras. It will be appreciated by those skilled in the art that the control device includes the following modules:
a first acquisition module 1, for obtaining the whole camera data S within the unit time t, i.e. all camera data within a single detection; these data are taken from the second acquisition module 11, the third acquisition module 12 and the fourth determining module 13.
Further, the control device includes a first determining module 2, for determining the camera data sets S_k at multiple moments t_k: the detection time unit t can be divided equally into multiple moments t_k and the camera data set S_k within each moment collected; the division can also be made by detection distance, taking the detection time within each distance segment as a t_k and then collecting the camera data set S_k within that moment; likewise, the multiple moments t_k can be determined by the complexity of the detected environment.
Further, the control device includes a second determining module 3, for determining the final camera data: it determines the points where the pictures shot by the multiple depth cameras have no missing pixels, performs image fusion and edge smoothing on the matched image data, and determines the final camera data.
Further, the control device includes a third determining module 4, for determining the panoramic modeling image: the final shooting data determined by the multiple depth cameras are integrated and, according to the shooting direction of each depth camera, three-dimensional modeling is performed on the base map to construct the panoramic modeling image.
Further, the control device includes a fourth determining module 5, for determining the timing between the N depth cameras: to prevent the depth cameras from interfering with each other during shooting, the shooting timing of each camera is set according to the number and placement of the depth cameras.
Further, the control device includes a second acquisition module 11, for obtaining the camera data S_N of each depth camera, i.e. the camera data sets acquired within each moment t_k.
Further, the control device includes a third acquisition module 12, for obtaining the processed, corrected camera data S_N′ of each depth camera: the camera data of the sets S_N acquired within each moment t_k are corrected to obtain the corrected camera data S_N′ of each depth camera.
Further, the control device includes a fourth determining module 13, for determining the whole camera data S: all corrected camera data S_N′ acquired within all moments t_k are collected and merged.
Further, the control device includes a first judgment module 31, for judging, based on the camera data sets S_k, whether there are missing pixels in the continuous whole image data; if missing pixels exist, the first processing module 32 is entrusted with drawing up virtual pixels.
Further, the control device includes a first processing module 32, for drawing up virtual pixels based on the missing pixels and filling the virtual pixels into the camera data sets S_k: virtual pixels are deduced from earlier pictures without missing pixels and filled in, or adjacent images are searched to see whether the missing pixels can be restored from them.
Further, the control device includes a second processing module 33, for matching the camera data sets S_k into the continuous whole image data: the camera data sets S_k of all acquisition moments t_k within the detection time t are merged and matched into the complete image data.
Further, the control device includes a third processing module 34, for performing image fusion and edge smoothing on the matched image data to obtain the final camera data: after the image data are received, the images are first pre-processed according to a specific algorithm and then matched according to the amplitude and phase information of each pixel; after matching, operations such as image fusion and edge smoothing are carried out to obtain the final camera data, which serve as the basis for modeling.
Fig. 6 shows the fourth embodiment of the present invention: a process topology diagram for constructing a panorama based on multiple depth cameras. It will be appreciated by those skilled in the art that, as shown in Fig. 6, the process topology includes N depth cameras for obtaining depth camera data. The number N of depth cameras can be 3, 4, 5 or even more, and the camera data obtained by the depth cameras can be transmitted to the control centre in various ways, such as WIFI or Bluetooth.
Further, the process topology includes a control centre, for receiving the depth camera data, processing the data and transmitting the outbound data. It receives the shooting data of the multiple depth cameras and corrects and processes all pictures; that is, after the system receives the image data, the images are first pre-processed according to a specific algorithm and then matched according to the amplitude and phase information of each pixel, followed by image fusion, edge smoothing and similar operations. The control centre can also control the synchronization and shooting timing of each depth camera.
Further, the process topology includes a modeling module, for receiving the data processed by the control centre and performing modeling based on the processed data: the three-dimensional modeling module models the images according to the image pixel information and finally forms a complete three-dimensional model.
Specific embodiments of the present invention have been described above. It is to be appreciated that the invention is not limited to the above particular implementations; those skilled in the art can make various deformations or amendments within the scope of the claims without affecting the substantive content of the invention.
Claims (12)
1. A control method for constructing a panorama based on multiple depth cameras, for realizing real-time panoramic scanning, characterized in that it comprises the following steps:
a. obtaining, based on N depth cameras, all camera data S within the unit time t of a single detection;
b. determining, based on all the camera data S and the unit time t of the single detection, the camera data set S_k at each of multiple moments t_k;
c. determining the final camera data of the single detection based on the camera data sets S_k and a matching algorithm;
d. determining the panoramic modelling image of the single detection based on the final camera data of the single detection;
e. repeating steps a to d to obtain the panoramic modelling images of multiple single detections, until the final panoramic modelling image is determined.
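The flow of steps a to e can be sketched as a small control loop. All the callables here (capture, match, model, merge) are hypothetical placeholders standing in for the patent's unspecified algorithms, and the toy instantiation uses numbers instead of images.

```python
def single_detection(capture, k_moments, match, model):
    # Steps a+b: gather the per-moment camera data sets S_k for one detection.
    S_k = [capture(k) for k in range(k_moments)]
    # Step c: the matching algorithm collapses the sets into final camera data.
    final = match(S_k)
    # Step d: the modelling step turns final data into a panoramic modelling image.
    return model(final)

def build_panorama(capture, k_moments, match, model, merge, passes):
    # Step e: repeat single detections and merge them into the final panorama.
    images = [single_detection(capture, k_moments, match, model)
              for _ in range(passes)]
    return merge(images)

# Toy instantiation: "frames" are numbers, matching sums them,
# modelling negates, and merging takes the maximum.
pano = build_panorama(capture=lambda k: k + 1,
                      k_moments=3,
                      match=sum,            # 1 + 2 + 3 = 6
                      model=lambda x: -x,   # -6
                      merge=max,
                      passes=2)
```

The structure, not the arithmetic, is the point: each pass is independent, so passes could run concurrently while merge accumulates results.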
2. The control method according to claim 1, characterized in that, before step a, it comprises step i: determining the timing between the N depth cameras based on the location information of the N depth cameras and the number N of depth cameras.
3. The control method according to claim 2, characterized in that, before step i, it further comprises step ii: creating an initial coordinate-system base map.
4. The control method according to claim 1 or 2, characterized in that step a comprises the following steps:
a1. obtaining the camera data S_N of each depth camera;
a2. performing a selection operation based on the pixel coordinates and phase of the camera data S_N of each depth camera, to obtain the processed corrected camera data S_N′ of each depth camera;
a3. determining all the camera data S based on the corrected camera data S_N′ of each depth camera.
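The selection operation of step a2 can be sketched as a per-pixel validity filter; the range limit and the use of NaN as the invalid marker are illustrative assumptions, since the patent does not define the selection criteria.

```python
import numpy as np

def correct_camera_data(depth, phase, max_range=10.0):
    # Keep only pixels with a plausible depth and a finite phase reading;
    # mark everything else invalid with NaN. max_range is an assumed
    # sensor limit, not a value from the patent.
    valid = (depth > 0) & (depth <= max_range) & np.isfinite(phase)
    return np.where(valid, depth, np.nan)

# Hypothetical 2x2 raw data: one good pixel, one negative depth,
# one out-of-range depth, one non-finite phase.
depth = np.array([[1.0, -0.5], [12.0, 3.0]])
phase = np.array([[0.1, 0.2], [0.3, np.inf]])
S_N_prime = correct_camera_data(depth, phase)
```

Only the first pixel survives; the others become NaN, which step a3 (and the missing-pixel handling of step c) can then treat uniformly as gaps.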
5. The control method according to claim 3, characterized in that step c comprises the following steps:
c1. judging, based on the camera data sets S_k, whether there are missing pixels in the continuous whole image data; if so, executing step c2, and if not, executing step c3;
c2. drawing up virtual pixels based on the missing pixels, and filling the virtual pixels into the camera data sets S_k;
c3. matching the camera data sets S_k into the continuous whole image data;
c4. performing image fusion and edge-smoothing processing on the matched image data to obtain the final camera data.
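Steps c1 and c2 (detecting missing pixels and filling them with "virtual" pixels) can be sketched with a simple neighbour-mean fill. This is a stand-in for whatever interpolation the patent intends; NaN marks a missing pixel, as in the earlier correction step.

```python
import numpy as np

def fill_missing(image):
    # c1: a pixel is missing if it is NaN.
    # c2: fill each missing pixel with the mean of its valid 4-neighbours,
    #     a simple stand-in for "drawing up" a virtual pixel.
    out = image.copy()
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            if np.isnan(image[i, j]):
                nbrs = [image[x, y]
                        for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < h and 0 <= y < w and not np.isnan(image[x, y])]
                if nbrs:
                    out[i, j] = sum(nbrs) / len(nbrs)
    return out

# Hypothetical 2x2 image with one missing pixel.
img = np.array([[1.0, np.nan], [3.0, 5.0]])
filled = fill_missing(img)
```

The missing pixel at (0, 1) is filled with the mean of its valid neighbours 1.0 and 5.0, after which matching (c3) and fusion/edge smoothing (c4) can proceed on gap-free data.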
6. The control method according to claim 4, characterized in that step d comprises step d1: determining the panoramic modelling image of the single detection based on the final camera data of the single detection and a three-dimensional modelling module.
7. The control method according to any one of claims 1 to 5, characterized in that the number N of depth cameras is any one of the following:
3;
4; or
5.
8. A control device for constructing a panorama based on multiple depth cameras, for realizing real-time panoramic scanning, characterized in that it comprises:
a first obtaining module (1), used to obtain all camera data S within a unit time t;
a first determining module (2), used to determine the camera data sets S_k at multiple moments t_k;
a second determining module (3), used to determine the final camera data;
a third determining module (4), used to determine the panoramic modelling image.
9. The control device according to claim 8, characterized in that it further comprises:
a fourth determining module (5), used to determine the timing between the N depth cameras.
10. The control device according to claim 8, characterized in that the first obtaining module comprises:
a second obtaining module (11), used to obtain the camera data S_N of each depth camera;
a third obtaining module (12), used to obtain the processed corrected camera data S_N′ of each depth camera;
a fourth determining module (13), used to determine all the camera data S.
11. The control device according to claim 8, characterized in that the second determining module further comprises:
a first judging module (31), used to judge, based on the camera data sets S_k, whether there are missing pixels in the continuous whole image data;
a first processing module (32), used to draw up virtual pixels based on the missing pixels and to fill the virtual pixels into the camera data sets S_k;
a second processing module (33), used to match the camera data sets S_k into the continuous whole image data;
a third processing module (34), used to perform image fusion and edge-smoothing processing on the matched image data to obtain the final camera data.
12. A control system for constructing a panorama based on multiple depth cameras, for realizing real-time panoramic scanning, characterized in that it comprises:
N depth cameras, used to obtain depth camera data;
a control centre, used to receive the depth camera data, perform data processing, and transmit data outward; and
a modelling module, used to receive the data processed by the control centre and to perform modelling based on the processed data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711466220.3A CN109978987A (en) | 2017-12-28 | 2017-12-28 | A kind of control method, apparatus and system constructing panorama based on multiple depth cameras |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109978987A true CN109978987A (en) | 2019-07-05 |
Family
ID=67075327
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711466220.3A Withdrawn CN109978987A (en) | 2017-12-28 | 2017-12-28 | A kind of control method, apparatus and system constructing panorama based on multiple depth cameras |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109978987A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103106604A (en) * | 2013-01-23 | 2013-05-15 | 东华大学 | Three dimensional (3D) virtual fitting method based on somatosensory technology |
CN103824318A (en) * | 2014-02-13 | 2014-05-28 | 西安交通大学 | Multi-camera-array depth perception method |
CN105160663A (en) * | 2015-08-24 | 2015-12-16 | 深圳奥比中光科技有限公司 | Method and system for acquiring depth image |
CN105516654A (en) * | 2015-11-25 | 2016-04-20 | 华中师范大学 | Scene-structure-analysis-based urban monitoring video fusion method |
CN105763917A (en) * | 2016-02-22 | 2016-07-13 | 青岛海信电器股份有限公司 | Terminal booting control method and terminal booting control system |
CN106485781A (en) * | 2016-09-30 | 2017-03-08 | 广州博进信息技术有限公司 | Three-dimensional scene construction method based on live video stream and its system |
CN106878687A (en) * | 2017-04-12 | 2017-06-20 | 吉林大学 | A kind of vehicle environment identifying system and omni-directional visual module based on multisensor |
CN206611512U (en) * | 2016-11-11 | 2017-11-03 | 广西师范大学 | Three-dimensional panoramic video supervising device |
CN107317998A (en) * | 2016-04-27 | 2017-11-03 | 成都理想境界科技有限公司 | Full-view video image fusion method and device |
CN206741555U (en) * | 2016-12-17 | 2017-12-12 | 深圳市彬讯科技有限公司 | The house type 3D modeling system of indoor 3D scannings based on more splicing cameras |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112135088A (en) * | 2019-06-25 | 2020-12-25 | 北京京东尚科信息技术有限公司 | Method for displaying trial assembly effect, trial assembly terminal and storage medium |
CN112135088B (en) * | 2019-06-25 | 2024-04-16 | 北京京东尚科信息技术有限公司 | Method for displaying trial assembly effect, trial assembly terminal and storage medium |
CN116628800A (en) * | 2023-05-09 | 2023-08-22 | 海南华筑国际工程设计咨询管理有限公司 | Building design system based on BIM |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 20220124. Address after: 2 Zhongdian Lighting Building, Gaoxin South 12th Road, Nanshan District, Shenzhen, Guangdong. Applicant after: Shenzhen Point Cloud Intelligent Technology Co., Ltd. Address before: 518023, No. 3039 Baoan North Road, Luohu District, Shenzhen City, Guangdong Province. Applicant before: Zhou Qinna |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20190705 |