CN103500013A - Real-time three-dimensional mapping system and method based on Kinect and streaming media technology - Google Patents
- Publication number
- CN103500013A (application CN201310490129.0A)
- Authority
- CN
- China
- Prior art keywords
- stream media
- real-time
- kinect
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
The invention discloses a real-time three-dimensional mapping system and method based on Kinect and streaming media technology. The system comprises a 3D somatosensory camera, a streaming media terminal, a server and a portable power source. The streaming media terminal is fixed on the 3D somatosensory camera, the portable power source powers the 3D somatosensory camera, the 3D somatosensory camera is connected to the server, and the server is connected to the streaming media terminal through a wireless transmission module. The system uses the 3D somatosensory camera as the scanning device for indoor three-dimensional scenes, making three-dimensional mapping more convenient, more flexible and lower in cost. It is especially suitable for real-time three-dimensional mapping of indoor areas that are crowded, rich in detail and unsuitable for carrying large equipment.
Description
Technical field
The invention belongs to the field of three-dimensional mapping, and in particular relates to a real-time three-dimensional mapping system and method based on Kinect and streaming media technology.
Background technology
As China's public geographic information service platforms continue to develop, three-dimensional geographic information systems are moving from the macroscopic to the fine-grained, and from outdoor to indoor scenes. Under these circumstances conventional techniques are no longer applicable, so developing a fast, lightweight, fully automatic indoor three-dimensional mapping (3D Mapping) technology has important practical significance and application prospects.
Current indoor three-dimensional mapping technology is mainly based on LiDAR (Light Detection And Ranging). Its basic principle is to measure the laser return time with a laser sensor, accurately recording the distance between the sensor and each return point in the scene, from which the three-dimensional coordinates of the ground and of every object point can be measured directly. For indoor three-dimensional modeling, LiDAR is currently used mainly by setting up fixed stations at multiple points for 360-degree scans, while arranging targets at mutually visible locations so that the scans from the fixed stations can be registered and merged. (Reference: V. Verma, 3D building detection and modeling from aerial lidar data, Computer Vision and Pattern Recognition, 2006)
Compared with LiDAR, Kinect differs considerably in form and principle. Kinect is a commercial somatosensory device developed by Microsoft; it consists mainly of a color camera (RGB Camera), an infrared camera (IR Camera) and an infrared projector (IR Projector). Through the cooperation of the infrared projector and the infrared camera, Kinect analyzes an infrared stereo pair to obtain the depth (distance) information of the scanned object, from which a three-dimensional model of the object can be built. Because Kinect acquires the depth and color information of the target simultaneously, the model texture is produced automatically. Unlike heavy and expensive LiDAR devices, Kinect is cheap and light: it costs about 150 dollars and weighs only 450 grams. (References: http://www.ros.org/wiki/kinect_calibration/technical; Remondino, F., Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sensing, 2011.)
As the developer of Kinect, Microsoft and its partners provide Kinect Fusion, a Kinect-based three-dimensional modeling algorithm that can reconstruct a three-dimensional model of an object in real time by moving a single Kinect around the object. Unlike simple stitching of three-dimensional point clouds, an attractive characteristic of this modeling technology is that if an object is scanned continuously, the reconstruction accuracy improves gradually from coarse to fine, which makes it especially suitable for augmented reality applications. The technology supports both CPU and GPU-accelerated computation, and its core ideas are frame-to-frame surface tracking, real-time modeling and a predictive mode. (Reference: Richard A. Newcombe, Shahram Izadi, et al., KinectFusion: Real-Time Dense Surface Mapping and Tracking, 2011)
Real-time mapping is a novel mapping concept. In contrast to off-line mapping, it allows human participation in the mapping process and is therefore highly flexible. Traditional off-line mapping first completes the on-site scan and then models the data off line. Real-time mapping is mobile mapping with real-time scanning and real-time modeling, feeding the current modeling result back to the user as it is produced. Based on the current result, the user can dynamically adjust the scanning process and exercise subjective initiative. Traditional off-line mapping follows a pre-arranged, pre-planned workflow: when the scan data has problems such as occlusion, missing data, ghosting or parasitic error, the user does not know in real time, and once off-line modeling reveals them, supplementary surveying is difficult to carry out. Real-time mapping avoids this drawback. (References: D. L. Tulloch, Many, many maps: Empowerment and online participatory mapping, 2007; Kurt Konolige and Motilal Agrawal, Frame-Frame Matching for Realtime Consistent Visual Mapping, IEEE, 2007)
Summary of the invention
To address the deficiencies of off-line mapping in the prior art, the invention provides a lightweight, flexible and low-cost real-time three-dimensional mapping system and method based on Kinect and streaming media technology.
To solve the above problems, the invention adopts the following technical scheme:
1. A real-time three-dimensional mapping system based on Kinect and streaming media technology, comprising a 3D somatosensory camera, a streaming media terminal, a server and a portable power source, wherein the streaming media terminal is fixed on the 3D somatosensory camera, the portable power source supplies power to the 3D somatosensory camera, the 3D somatosensory camera is connected to the server, and the server is connected to the streaming media terminal through a wireless transmission module.
The above 3D somatosensory camera is a Microsoft Kinect scanning device.
The above streaming media terminal is a smartphone, preferably an Android phone.
The above server is a portable computer.
2. A real-time three-dimensional mapping method based on Kinect and streaming media technology, comprising:
collecting scan data of an indoor scene with the 3D somatosensory camera and passing the scan data to the server, the scan data comprising depth information and color information;
the server building a current-frame surface model from the scan data, updating the global voxel model in real time from the current-frame surface model, rendering a feedback frame, and sending the feedback frame to the streaming media terminal for display;
the user adjusting the scanning process according to the feedback frames displayed on the streaming media terminal.
The above real-time three-dimensional mapping method further comprises: starting, pausing or stopping the scanning program of the 3D somatosensory camera through the streaming media terminal.
The above server and streaming media terminal exchange information over a WLAN.
The above server building the current-frame surface model from the scan data, updating the global voxel model in real time, rendering the feedback frame and sending it to the streaming media terminal for display specifically comprises:
denoising the depth information in the scan data and converting it into a three-dimensional point cloud, i.e. the current-frame surface model;
matching the spatial position of the current-frame surface model against the global voxel model, and inversely solving the spatial position parameters of the 3D somatosensory camera;
fusing the current-frame surface model with the global voxel model to obtain the updated global voxel model, rendering the updated global voxel model with lighting according to the spatial position parameters of the 3D somatosensory camera, and taking a screenshot of the lit rendering as the feedback frame.
The above user adjusting the scanning process according to the feedback frames displayed on the streaming media terminal specifically comprises:
if a feedback frame displayed by the streaming media terminal shows a hole caused by occlusion, moving the 3D somatosensory camera to adjust the scanning angle and avoid the occlusion; or, guided by the feedback frames, scanning a target repeatedly or continuing to scan at closer range, to strengthen the reconstruction accuracy of the current-frame surface model.
At present, indoor three-dimensional mapping usually relies on expensive and heavy LiDAR laser scanners. Scanning stations and targets must be arranged in advance, the preparation time is long, and stations are hard to place in crowded environments. Once placed, the stations cannot move, so in indoor environments with complex shapes and rich detail, data loss caused by occlusion occurs easily. Because the scan data is processed off line, the scanning result cannot be checked in real time, so the data is prone to defects such as parasitic error and ghosting. If a handheld LiDAR is used for supplementary surveying, collection efficiency tends to be low because of the device's small coverage and short range, the mapping operation is relatively complex, and later stitching is also difficult.
Compared with the traditional mapping technology above, the invention is more flexible, and problems in the scan data can be discovered in real time so that supplementary surveying can be carried out promptly.
The beneficial effects of the invention are as follows:
1. Lightweight, flexible and low-cost.
The invention uses a 3D somatosensory camera as the scanning device for indoor three-dimensional scenes. The Microsoft Kinect scanning device is a 3D somatosensory camera priced at about 150 dollars and weighing only 450 grams; with the other components included, the price is below 300 dollars, the whole system is light, and a single person can carry it easily. Mapping can thus be carried out even in crowded, detail-rich indoor areas where carrying large equipment is difficult (such as crime scenes), with little influence from the spatial environment. At the same time, because the system is cheap, the general public can afford it; for indoor environments that change frequently, the public can update the data spontaneously, keeping the data current.
2. Real-time mapping.
In use, the user holds the 3D somatosensory camera with the streaming media terminal fixed on it, and carries the server and the portable power source in a backpack. The user can actively choose the scanning route and angle as needed, avoiding scene data loss caused by occlusion. The scan data is sent to the server in real time; the server processes it and feeds the updated current-frame surface model back to the streaming media terminal for display. The user adjusts the scanning process according to the feedback on the streaming media terminal, avoiding defects such as parasitic error and ghosting in the scan data.
3. High automation, high mapping accuracy and high mapping efficiency.
From the scan data received in real time, the server automatically matches the current-frame surface model against the global scene model. Compared with traditional matching methods such as arranging targets, the level of automation is higher, and mapping accuracy and efficiency are also greatly improved.
4. Little preparation time and effective scanning.
The invention requires no stations or targets to be arranged in the scene; a single person carrying the system can enter and scan, with no requirements on the scene environment and no preliminary preparation. The scanning process can be adjusted flexibly according to actual needs: regions of interest can be strengthened by repeated or close-range scanning, while walls and floors can be scanned rapidly.
Description of the drawings
Fig. 1 is a system architecture diagram of the invention;
Fig. 2 is a schematic diagram of an embodiment of the system;
Fig. 3 is a flow chart of building the current-frame surface model;
Fig. 4 is a flow chart of the server sending feedback frames to the client in streaming media form;
Fig. 5 is a flow chart of the streaming media terminal processing interactive instructions.
In the figures: 1 - backpack, 2 - power lead, 3 - data line, 4 - portable power source, 5 - server, 6 - streaming media terminal, 7 - 3D somatosensory camera, 8 - cable.
Embodiment
See Figs. 1-2. The system of the invention comprises a 3D somatosensory camera (7), a streaming media terminal (6), a server (5) and a portable power source (4). The streaming media terminal (6) is fixed on the 3D somatosensory camera (7), the portable power source (4) supplies power to the 3D somatosensory camera (7), the 3D somatosensory camera (7) is connected to the server (5), and the server (5) is connected to the streaming media terminal (6) through a wireless transmission module.
The 3D somatosensory camera (7) with the streaming media terminal (6) fixed on it forms the handheld device, while the server (5) and the portable power source (4) are placed in the backpack (1). The user carries the backpack (1) and scans the indoor scene with the handheld 3D somatosensory camera (7). The streaming media terminal (6) exchanges information with the server (5) over the wireless network. The 3D somatosensory camera (7) is connected to the server (5) inside the backpack and to the portable power source (4) by a cable (8). The cable (8) combines a data line (3) and a power lead (2): one end connects to the 3D somatosensory camera (7), while the other end splits into a power connector, connected to the portable power source (4), and a data connector, connected to a USB port of the server (5).
For portability, the portable power source (4) adopted in this embodiment measures 20 cm x 8 cm x 3 cm and weighs only about 1.2 kg; the server (5) is a portable computer; the 3D somatosensory camera (7) is a Microsoft Kinect scanning device; the streaming media terminal (6) is a smartphone, preferably an Android phone.
The 3D somatosensory camera (7) collects indoor scene information to obtain scan data and transfers it to the server (5) through the data connector of the cable (8). The server (5) models the received scan data in real time to obtain the current-frame surface model and feeds the modeling result back to the streaming media terminal (6) over the wireless network; the user adjusts and controls the scanning process according to the current-frame surface model displayed on the streaming media terminal (6). In the invention, besides displaying the current-frame surface model in real time, the streaming media terminal can also be used to control the scanning process of the 3D somatosensory camera.
Based on the above real-time three-dimensional mapping system, the real-time three-dimensional mapping method of the invention comprises the following steps:
Step 2: the handheld 3D somatosensory camera collects the scan data of the indoor scene, comprising depth information and color information, and passes the scan data to the server; in this embodiment the scan data is passed to the server at 30 frames per second.
The specific implementation of the invention is described in detail below.
In the following, the streaming media terminal is exemplified by an Android phone, the server by a portable computer, and the 3D somatosensory camera by a Kinect scanning device.
(1) Setting up the server and client environments.
In this embodiment, the client refers to the streaming media terminal. First, the WLAN environment is set up; the specific setup method can be decided according to the actual on-site situation.
The IP address of the portable computer is retrieved and recorded. Two server processes are created in the Kinect scanning program on the portable computer: a streaming media server process and an interactive server process. The streaming media server sends each updated frame from the portable computer to the client in real time; it is given its own port number (e.g. 5556) and uses the "PUB" (Publish) mode, a one-way data distribution mode: the streaming media server pushes updated frames to the client without requiring any response. The interactive server responds to the various instructions sent by the client; it is given its own port number (e.g. 5555) and uses the "REP" (Reply) mode. The "REP" mode works together with the "REQ" mode below as a two-way messaging pattern: each message must be acknowledged by the other side before the next can be sent, which guarantees reliable delivery.
Likewise, two client threads are created in the Kinect scanning program on the Android phone: a streaming media client thread and an interactive client thread. The streaming media client binds to the streaming media server port (e.g. 5556) at the portable computer's IP, uses the "SUB" (Subscribe) mode, and receives the updated frames sent by the portable computer. The interactive client binds to the interactive server port (e.g. 5555) of the portable computer, uses the "REQ" (Request) mode, and sends the various instructions to the portable computer and receives its responses.
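The "PUB"/"SUB" and "REQ"/"REP" mode names match the socket types of a messaging library such as ZeroMQ, although the patent does not name one. The difference between the two patterns, one-way fire-and-forget distribution versus acknowledged request-reply, can be illustrated with a small stdlib sketch (the class and method names are illustrative, not from the patent):

```python
import queue

class PubSocket:
    """One-way 'PUB' channel: push frames to subscribers, never wait for a reply."""
    def __init__(self):
        self.subscribers = []
    def connect(self, sub):
        self.subscribers.append(sub)
    def send(self, frame):
        for sub in self.subscribers:
            sub.put(frame)          # fire-and-forget: no acknowledgment expected

class RepServer:
    """Two-way 'REP' endpoint: every request receives exactly one reply."""
    def __init__(self, handler):
        self.handler = handler
    def request(self, msg):         # the 'REQ' side blocks here until replied
        return self.handler(msg)

# PUB/SUB: the server pushes an updated frame; the client just receives it.
pub = PubSocket()
client_inbox = queue.Queue()
pub.connect(client_inbox)
pub.send(b"frame-0001")
print(client_inbox.get())           # b'frame-0001'

# REQ/REP: the client must get a reply before it may send the next request.
rep = RepServer(lambda m: b"OK:" + m)
print(rep.request(b"pause"))        # b'OK:pause'
```

In a real deployment the two channels would run on separate ports (e.g. 5556 for frames, 5555 for instructions) exactly so that the high-rate one-way frame stream never blocks behind an unanswered instruction.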
(2) Scanning and modeling with the Kinect scanning device.
Each depth image scanned by the Kinect scanning device is interpreted as one frame surface model of the scene. In this embodiment, the portable computer uses a volumetric reconstruction technique to build the frame surface model from the scan data, specifically:
(a) A three-dimensional space is predefined and subdivided continuously into regular voxels, giving the initial global voxel model; initially no voxel contains any data. The current depth image is denoised and converted into a three-dimensional point cloud, i.e. the current-frame surface model. A multi-scale ICP (Iterative Closest Point) registration algorithm matches the spatial position of the current-frame surface model against the global voxel model, bringing each current-frame surface model into the same coordinate system, from which the position parameters of the Kinect scanning device are inversely solved.
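The depth-to-point-cloud conversion in step (a) follows the pinhole camera model. A minimal numpy sketch; the focal lengths and principal point below are typical published calibration values for the Kinect depth camera, used here only as assumptions:

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx=585.0, fy=585.0, cx=319.5, cy=239.5):
    """Back-project a depth image (in meters) into camera-space 3D points.

    A pixel (u, v) with depth z maps to
        x = (u - cx) * z / fx,   y = (v - cy) * z / fy.
    Zero-depth pixels (no infrared return) are dropped.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)  # (N, 3) cloud

# A flat wall 2 m away filling the whole 640x480 depth frame.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)        # (307200, 3)
print(cloud[:, 2].min())  # 2.0
```

The resulting (N, 3) array is the "current-frame surface model" that ICP then registers against the global voxel model.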
(b) Each current-frame surface model is fused with the global voxel model through a TSDF (Truncated Signed Distance Function) to obtain the updated global voxel model. Specifically, the TSDF value is computed at arbitrary points of the three-dimensional space, generating a signed distance field; the zero-crossing of the field is used to reconstruct the object surface. This method extends the surface trend well and repairs holes in the model surface. The same scene surface is covered by several depth images from different directions, so during fusion the depth values are combined as a weighted average, which also gradually improves the reconstruction accuracy of that surface.
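Per voxel, the weighted-average fusion in step (b) is a running weighted mean of truncated signed distances. A minimal numpy sketch; the truncation threshold and per-frame weight are illustrative assumptions:

```python
import numpy as np

def tsdf_update(tsdf, weight, new_sdf, new_weight=1.0, trunc=0.1):
    """Fuse one frame's signed distances into the global voxel model.

    Per voxel: D <- (W*D + w*d) / (W + w), then W <- W + w, with the new
    distance d clamped to +/- trunc. Repeated observations of the same
    surface average out sensor noise, so accuracy improves the longer
    a surface is scanned.
    """
    d = np.clip(new_sdf, -trunc, trunc)
    tsdf = (weight * tsdf + new_weight * d) / (weight + new_weight)
    weight = weight + new_weight
    return tsdf, weight

# Two noisy observations of a surface 0.02 m in front of one voxel.
tsdf, w = np.zeros(1), np.zeros(1)
tsdf, w = tsdf_update(tsdf, w, np.array([0.03]))
tsdf, w = tsdf_update(tsdf, w, np.array([0.01]))
print(tsdf)  # [0.02] -- the weighted average of both observations
print(w)     # [2.]
```

The clamp to the truncation band is what makes the field "truncated": far-from-surface voxels carry no misleading distance information.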
(c) Meanwhile, according to the Kinect position parameters, ray casting (Ray Casting) is applied to the updated global voxel model after fusion, rendering with shading the part of the updated model that falls within the current view; the rendered model is output to the streaming media terminal as the feedback frame so that the user can check the scanning result. After the scan ends, the final global voxel model is the indoor three-dimensional scene model obtained by scanning.
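Ray casting in step (c) marches each viewing ray through the TSDF until the signed distance changes sign, then interpolates the zero crossing to find the surface depth. A single-ray sketch under an assumed analytic distance field (a wall half a meter in front of the camera):

```python
import numpy as np

def raycast_depth(sdf_along_ray, step=0.01):
    """Find the surface depth on one ray from TSDF samples taken every `step` m.

    The surface lies where the signed distance crosses from positive
    (in front of the surface) to non-positive (behind it); linear
    interpolation between the bracketing samples refines the crossing.
    """
    for i in range(len(sdf_along_ray) - 1):
        a, b = sdf_along_ray[i], sdf_along_ray[i + 1]
        if a > 0 >= b:
            t = a / (a - b)          # fractional step past sample i
            return (i + t) * step
    return None                       # ray missed every surface

# Signed distance to a wall 0.5 m away, sampled along the ray.
ts = np.arange(0.0, 1.0, 0.01)
sdf = 0.5 - ts
print(raycast_depth(sdf))  # close to 0.5
```

Doing this for every pixel of the virtual camera yields the shaded view of the model that becomes the feedback frame.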
(3) The server sends the updated frames to the client in streaming media form, and the client displays them.
See Fig. 4.Monitor Kinect scanning sequence host process by " streaming media server ", whenever upgrading frame, generate and just will upgrade the frame core position and transmit into.Upgrading frame data stream is 30Hz triple channel 640 x480 images, and it is up to the 27M per second, and network is difficult to burden, so this concrete enforcement adopts the Jpeg compact model to be compressed upgrading frame data, after compression, the per second data stream is down to 150K.Next " streaming media server " encoded packed data (encode) and sent, the specific coding mode is: according to packed data length to compressed data packet, each group basis protocols having adds gauge outfit information, and is issued at the streaming media server port.
Correspondingly, the "streaming media client" of the streaming media terminal monitors the streaming media server port. When new data is published on the port, it receives and decodes the data using an asynchronous message passing model. Specifically, the data is received in packets: the packets are reassembled according to the header information into the complete data, i.e. the compressed image; the memory location of the compressed image is then passed to the main thread, which decodes the compressed file into a bitmap (Bitmap) image; finally, the image is displayed in the designated area of the streaming media terminal.
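The packetize-with-header and reassemble steps can be sketched with the stdlib. Here zlib stands in for the JPEG compression, and the 4-byte (index, count) header format is an illustrative assumption, not the patent's actual protocol:

```python
import struct
import zlib

CHUNK = 4096  # illustrative packet payload size

def packetize(data: bytes):
    """Split compressed frame data into packets, each prefixed with a header
    carrying (packet index, packet count) so the receiver can reassemble."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    total = len(chunks)
    return [struct.pack("!HH", i, total) + c for i, c in enumerate(chunks)]

def reassemble(packets):
    """Order packets by their header index and rebuild the compressed frame."""
    parts = {}
    for p in packets:
        idx, total = struct.unpack("!HH", p[:4])
        parts[idx] = p[4:]
    return b"".join(parts[i] for i in range(total))

frame = bytes(range(256)) * 100            # stand-in for one video frame
compressed = zlib.compress(frame)          # the patent uses JPEG instead
packets = packetize(compressed)
restored = zlib.decompress(reassemble(reversed(packets)))
print(restored == frame)  # True -- the frame survives out-of-order delivery
```

The header is what lets the asynchronous receiver rebuild the image even when packets arrive out of order, as the reversed delivery above demonstrates.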
(4) The streaming media terminal receives and processes interactive instructions.
See Fig. 5.Stream media terminal is accepted user's various instructions, as beginning, end, time-out etc.Mutual client numbers at the interactive server port instruction accordingly with " request " (Request) form transmission, interactive server is mated with the operational order storehouse after receiving the numbering that mutual client sends, if without corresponding operational order " responses " (Reply) its instruction ignore of client alternately; If any the corresponding operating instruction carry out (as, open or close corresponding thread), and respond mutual client and " be disposed ".
Claims (10)
1. A real-time three-dimensional mapping system based on Kinect and streaming media technology, characterized by comprising:
a 3D somatosensory camera, a streaming media terminal, a server and a portable power source, wherein the streaming media terminal is fixed on the 3D somatosensory camera, the portable power source supplies power to the 3D somatosensory camera, the 3D somatosensory camera is connected to the server, and the server is connected to the streaming media terminal through a wireless transmission module.
2. The real-time three-dimensional mapping system based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
said 3D somatosensory camera is a Microsoft Kinect scanning device.
3. The real-time three-dimensional mapping system based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
said streaming media terminal is a smartphone.
4. The real-time three-dimensional mapping system based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
said server is a portable computer.
5. A real-time three-dimensional mapping method based on Kinect and streaming media technology, characterized by comprising:
collecting scan data of an indoor scene with a 3D somatosensory camera and passing the scan data to a server, said scan data comprising depth information and color information;
the server building a current-frame surface model from the scan data, updating a global voxel model in real time from the current-frame surface model, rendering a feedback frame, and sending the feedback frame to a streaming media terminal for display;
the user adjusting the scanning process according to the feedback frames displayed on the streaming media terminal.
6. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 5, characterized in that:
the scanning program of the 3D somatosensory camera is started, paused or stopped through the streaming media terminal.
7. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 5, characterized in that:
said server and streaming media terminal exchange information over a WLAN.
8. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 5, characterized in that:
said server building the current-frame surface model from the scan data, updating the global voxel model in real time, rendering the feedback frame and sending it to the streaming media terminal for display specifically comprises:
denoising the depth information in the scan data and converting it into a three-dimensional point cloud, i.e. the current-frame surface model;
matching the spatial position of the current-frame surface model against the global voxel model, and inversely solving the spatial position parameters of the 3D somatosensory camera;
fusing the current-frame surface model with the global voxel model to obtain the updated global voxel model, rendering the updated global voxel model with lighting according to the spatial position parameters of the 3D somatosensory camera, and taking a screenshot of the lit rendering as the feedback frame.
9. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 5, characterized in that:
said user adjusting the scanning process according to the feedback frames displayed on the streaming media terminal specifically comprises:
if a feedback frame displayed by the streaming media terminal shows a hole caused by occlusion, moving the 3D somatosensory camera to adjust the scanning angle and avoid the occlusion.
10. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 5, characterized in that:
said user adjusting the scanning process according to the feedback frames displayed on the streaming media terminal specifically comprises:
guided by the feedback frames displayed on the streaming media terminal, scanning a target repeatedly or continuing to scan at closer range.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310490129.0A CN103500013B (en) | 2013-10-18 | 2013-10-18 | Real-time three-dimensional plotting method based on Kinect and stream media technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103500013A true CN103500013A (en) | 2014-01-08 |
CN103500013B CN103500013B (en) | 2016-05-11 |
Family
ID=49865232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310490129.0A Expired - Fee Related CN103500013B (en) | 2013-10-18 | 2013-10-18 | Real-time three-dimensional plotting method based on Kinect and stream media technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103500013B (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123751A (en) * | 2014-07-24 | 2014-10-29 | 福州大学 | Combined type measurement and three-dimensional reconstruction method combing Kinect and articulated arm |
CN104363673A (en) * | 2014-09-17 | 2015-02-18 | 张灏 | Demand-oriented illumination energy conservation control system based on body recognition |
2013-10-18: Application CN201310490129.0A filed in China; granted as CN103500013B; current status: not active (expired due to non-payment of annual fee).
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102938142A (en) * | 2012-09-20 | 2013-02-20 | 武汉大学 | Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect |
CN102824176A (en) * | 2012-09-24 | 2012-12-19 | 南通大学 | Upper limb joint movement degree measuring method based on Kinect sensor |
CN103220543A (en) * | 2013-04-25 | 2013-07-24 | 同济大学 | Real-time three-dimensional (3D) video communication system based on Kinect and implementation method thereof |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103888738B (en) * | 2014-04-03 | 2016-09-28 | 华中师范大学 | Multi-source multi-area-array unmanned aerial vehicle GIS data acquisition platform |
CN104123751A (en) * | 2014-07-24 | 2014-10-29 | 福州大学 | Combined measurement and three-dimensional reconstruction method combining a Kinect and an articulated arm |
CN104363673A (en) * | 2014-09-17 | 2015-02-18 | 张灏 | Demand-oriented illumination energy conservation control system based on body recognition |
CN104660995B (en) * | 2015-02-11 | 2018-07-31 | 尼森科技(湖北)有限公司 | Disaster relief and rescue visualization system |
CN104639912A (en) * | 2015-02-11 | 2015-05-20 | 尼森科技(湖北)有限公司 | Individual soldier fire protection and disaster rescue equipment and system based on infrared three-dimensional imaging |
CN104660995A (en) * | 2015-02-11 | 2015-05-27 | 尼森科技(湖北)有限公司 | Disaster relief visualization system |
CN106548466B (en) * | 2015-09-16 | 2019-03-29 | 富士通株式会社 | Method and apparatus for three-dimensional object reconstruction |
CN106548466A (en) * | 2015-09-16 | 2017-03-29 | 富士通株式会社 | Method and apparatus for three-dimensional object reconstruction |
CN105654492B (en) * | 2015-12-30 | 2018-09-07 | 哈尔滨工业大学 | Robust real-time three-dimensional reconstruction method based on a consumer-grade camera |
CN105654492A (en) * | 2015-12-30 | 2016-06-08 | 哈尔滨工业大学 | Robust real-time three-dimensional (3D) reconstruction method based on consumer camera |
CN107798704B (en) * | 2016-08-30 | 2021-04-30 | 成都理想境界科技有限公司 | Real-time image superposition method and device for augmented reality |
CN107798704A (en) * | 2016-08-30 | 2018-03-13 | 成都理想境界科技有限公司 | Real-time image superposition method and device for augmented reality |
CN106412559A (en) * | 2016-09-21 | 2017-02-15 | 北京物语科技有限公司 | Full-vision photographing technology |
CN107749998B (en) * | 2017-10-23 | 2020-03-31 | 西华大学 | Streaming media visualization method of portable 3D scanner |
CN107749998A (en) * | 2017-10-23 | 2018-03-02 | 西华大学 | Streaming media visualization method of portable 3D scanner |
CN108332660A (en) * | 2017-11-10 | 2018-07-27 | 广东康云多维视觉智能科技有限公司 | Robot three-dimensional scanning system and scanning method |
CN108286945A (en) * | 2017-11-10 | 2018-07-17 | 广东康云多维视觉智能科技有限公司 | Three-dimensional scanning system and method based on visual feedback |
CN108362223A (en) * | 2017-11-24 | 2018-08-03 | 广东康云多维视觉智能科技有限公司 | Portable 3D scanner, scanning system and scanning method |
CN108322742A (en) * | 2018-02-11 | 2018-07-24 | 北京大学深圳研究生院 | Point cloud attribute compression method based on intra prediction |
CN109509215A (en) * | 2018-10-30 | 2019-03-22 | 浙江大学宁波理工学院 | KinFu point cloud auxiliary registration device and method thereof |
CN109509215B (en) * | 2018-10-30 | 2022-04-01 | 浙江大学宁波理工学院 | KinFu point cloud auxiliary registration device and method thereof |
CN114327334A (en) * | 2021-12-27 | 2022-04-12 | 苏州金羲智慧科技有限公司 | Environment information transmission system based on light ray analysis and transmission method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN103500013B (en) | 2016-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103500013B (en) | Real-time three-dimensional mapping method based on Kinect and streaming media technology | |
JP6171079B1 (en) | Inconsistency detection system, mixed reality system, program, and inconsistency detection method | |
CN102509348B (en) | Method for multi-directionally displaying a real object in a shared augmented-reality scene | |
CN107330978B (en) | Augmented reality modeling experience system and method based on position mapping | |
CN111415416A (en) | Method and system for fusing monitoring real-time video and scene three-dimensional model | |
WO2022160790A1 (en) | Three-dimensional map construction method and apparatus | |
CN104165600A (en) | Wireless hand-held 3D laser scanning system | |
Mossel et al. | Streaming and exploration of dynamically changing dense 3d reconstructions in immersive virtual reality | |
CN111241615A (en) | Highly realistic multi-source fusion three-dimensional modeling method for transformer substation | |
CN109685893B (en) | Space integrated modeling method and device | |
CN103650001A (en) | Moving image distribution server, moving image playback device, control method, program, and recording medium | |
JP2018106661A (en) | Inconsistency detection system, mixed reality system, program, and inconsistency detection method | |
CN113487723B (en) | House online display method and system based on measurable panoramic three-dimensional model | |
CN109035392A (en) | Modeling method for three-dimensional substation models | |
JP2023546739A (en) | Methods, apparatus, and systems for generating three-dimensional models of scenes | |
CN105300310A (en) | Handheld laser 3D scanner requiring no attached target markers, and method of use thereof | |
CN109035399A (en) | Method for rapidly acquiring three-dimensional substation information using a three-dimensional laser scanner | |
CN113379901A (en) | Method and system for building live-action three-dimensional house models using public self-captured panoramic data | |
CN205102796U (en) | Handheld laser 3D scanner requiring no pasted target markers | |
CN114531700A (en) | Non-artificial base station antenna work parameter acquisition system and method | |
CN113532424A (en) | Integrated equipment for acquiring multidimensional information and cooperative measurement method | |
CN105183142A (en) | Digital information reproduction method by means of spatial position pinning | |
CN117692441A (en) | Fusion method and device of video stream and three-dimensional GIS scene | |
CN117041512A (en) | Real-time transmission and visual communication system for road surface three-dimensional information detection data | |
CN105208372A (en) | Realistic 3D landscape generation system and method with interactive measurement function |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2016-05-11 | Termination date: 2021-10-18 |