CN103500013B - Real-time three-dimensional plotting method based on Kinect and stream media technology - Google Patents

Real-time three-dimensional plotting method based on Kinect and stream media technology

Info

Publication number
CN103500013B
CN103500013B · CN201310490129.0A
Authority
CN
China
Prior art keywords
stream media
real
kinect
body sense
video camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310490129.0A
Other languages
Chinese (zh)
Other versions
CN103500013A (en)
Inventor
呙维
朱欣焰
刘异
陈呈辉
胡涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201310490129.0A
Publication of CN103500013A
Application granted
Publication of CN103500013B
Legal status: Expired - Fee Related
Anticipated expiration


Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a real-time three-dimensional mapping system and method based on Kinect and streaming media technology. The system comprises a 3D motion-sensing camera, a streaming media terminal, a server, and a portable power source. The streaming media terminal is fixed on the 3D motion-sensing camera, the portable power source supplies power to the camera, the camera is connected to the server, and the server communicates with the streaming media terminal through a wireless transmission module. By using a 3D motion-sensing camera as the scanning device for indoor three-dimensional scenes, the invention makes three-dimensional mapping lighter, more flexible, and low-cost, and is particularly suitable for real-time three-dimensional mapping of indoor areas that are crowded, difficult to access with heavy equipment, or rich in detail.

Description

Real-time three-dimensional plotting method based on Kinect and stream media technology
Technical field
The invention belongs to the field of three-dimensional mapping, and in particular relates to a real-time three-dimensional mapping system and method based on Kinect and streaming media technology.
Background technology
With the continued development of China's geographic information public service platform, three-dimensional geographic information systems are moving from the macroscopic toward the fine-grained, and from outdoor to indoor. Under these circumstances conventional techniques are no longer applicable, so developing a fast, lightweight, fully automatic indoor three-dimensional mapping (3D Mapping) technology has important practical significance and application prospects.
Current indoor three-dimensional mapping technology is mainly based on LiDAR (Light Detection and Ranging). Its basic principle is to use a laser sensor to time the laser return, accurately recording the distance between the sensor and the return point, from which the three-dimensional coordinates of each point on the ground and on ground objects can be measured directly. For indoor three-dimensional modeling, LiDAR is currently used mainly by setting up fixed stations at multiple points for 360-degree scans, while placing targets at intervisible locations so that the scenes from the individual stations can be registered and merged. (Reference: V. Verma, 3D building detection and modeling from aerial LiDAR data, Computer Vision and Pattern Recognition, 2006)
Compared with LiDAR, Kinect differs considerably in form and principle. Kinect is a commercial motion-sensing device developed by Microsoft, consisting mainly of a color camera (RGB camera), an infrared camera (IR camera), and an infrared projector (IR projector). Through the cooperation of the infrared projector and the infrared camera, Kinect analyzes the projected infrared pattern to obtain depth (distance) information of the scanned object and thereby build its three-dimensional model. Because Kinect acquires depth and color information of the target simultaneously, texture mapping of the model can be completed automatically. Unlike heavy and expensive LiDAR, Kinect is cheap and light: it costs about 150 dollars and weighs only 450 grams. (References: http://www.ros.org/wiki/kinect_calibration/technical; Remondino, F., Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. Remote Sensing, 2011.)
As the developer of Kinect, Microsoft and its partners provide KinectFusion, a Kinect-based three-dimensional modeling algorithm that can reconstruct a three-dimensional model of an object in real time using a single Kinect moved around the object. Unlike simple three-dimensional point-cloud stitching, an attractive characteristic of this modeling technique is that if the object is scanned continuously, the reconstruction accuracy gradually improves from coarse to fine, which makes it especially suitable for augmented-reality applications. The technique supports both CPU- and GPU-level model computation, and its core ideas are frame-to-frame surface tracking, real-time modeling, and frame-based prediction. (Reference: Richard A. Newcombe, Shahram Izadi, et al., KinectFusion: Real-Time Dense Surface Mapping and Tracking, 2011)
Real-time mapping is a novel surveying concept whose philosophy stands in contrast to offline mapping: it allows human participation in the mapping process and is therefore highly flexible. Traditional offline mapping first completes the on-site scan and then models the data offline. Real-time mapping means mobile mapping with real-time scanning and real-time modeling, feeding the current modeling result back to the user as it is produced. Based on the current result, the user can dynamically adjust the scanning process, exercising subjective initiative. Traditional offline mapping follows a pre-arranged, pre-planned workflow; when problems such as occlusion, missing data, ghosting, or parasitic errors appear in the scan data, the user does not know in real time, and by the time feedback is obtained from offline modeling, supplementary surveying is difficult to carry out. Real-time mapping avoids this drawback. (References: D. L. Tulloch, Many, many maps: Empowerment and online participatory mapping, 2007; Kurt Konolige and Motilal Agrawal, Frame-Frame Matching for Realtime Consistent Visual Mapping, IEEE, 2007)
Summary of the invention
To address the shortcomings of offline mapping in the prior art, the invention provides a real-time three-dimensional mapping system and method based on Kinect and streaming media technology that is lightweight, flexible, and low-cost.
To solve the above problems, the invention adopts the following technical scheme:
First, a real-time three-dimensional mapping system based on Kinect and streaming media technology, comprising a 3D motion-sensing camera, a streaming media terminal, a server, and a portable power source. The streaming media terminal is fixed on the 3D motion-sensing camera; the portable power source supplies power to the camera; the camera is connected to the server; and the server is connected to the streaming media terminal through a wireless transmission module.
The 3D motion-sensing camera is a Microsoft Kinect scanning device.
The streaming media terminal is a smartphone, preferably an Android phone.
The server is a portable computer.
Second, a real-time three-dimensional mapping method based on Kinect and streaming media technology, comprising:
collecting scan data of an indoor scene with the 3D motion-sensing camera and passing the scan data to the server, the scan data comprising depth information and color information;
the server building a current-frame surface model from the scan data, updating the global voxel model in real time based on the current-frame surface model, rendering a feedback frame, and sending the feedback frame to the streaming media terminal for display;
the user adjusting the scanning process according to the feedback frame shown on the streaming media terminal.
The above real-time three-dimensional mapping method further comprises: starting, pausing, or stopping the scanning program of the 3D motion-sensing camera through the streaming media terminal.
The server and the streaming media terminal exchange information over a wireless local area network (WLAN).
The step in which the server builds the current-frame surface model, updates the global voxel model, renders the feedback frame, and sends it to the streaming media terminal for display is specifically:
denoising the depth information in the scan data and converting it into a three-dimensional point cloud, i.e. the current-frame surface model;
matching the spatial position of the current-frame surface model against the global voxel model, and inversely solving for the spatial position parameters of the 3D motion-sensing camera;
fusing the current-frame surface model with the global voxel model to obtain the updated global voxel model, and rendering the updated global voxel model with lighting according to the camera's spatial position parameters, the screenshot of the lit rendering being the feedback frame.
The step in which the user adjusts the scanning process according to the feedback frame shown on the streaming media terminal is specifically:
if the feedback frame shown on the streaming media terminal contains a hole caused by occlusion, moving the 3D motion-sensing camera and adjusting the scanning angle to avoid the occlusion; or, taking the displayed feedback frame as the target, scanning repeatedly or moving in closer to continue scanning, so as to increase the reconstruction accuracy of the current-frame surface model.
At present, indoor three-dimensional mapping usually relies on expensive and heavy LiDAR laser scanners. Survey stations and targets must be arranged in advance, the preparatory work before scanning is lengthy, and it is difficult to place stations in crowded environments. Because the stations cannot move, data gaps caused by occlusion easily arise in indoor environments with complex shapes and rich detail. Since the scan data is processed offline, the scanning result cannot be checked in real time, so the data is prone to defects such as parasitic errors and ghosting. If a handheld LiDAR is used for supplementary surveying, its small coverage and short range often make collection inefficient; the operation is also relatively complex, and later registration of the pieces is difficult.
Compared with the above traditional mapping technology, the invention is more flexible and can detect problems in the scan data in real time, so supplementary scanning can be performed promptly.
The beneficial effects of the invention are as follows:
First, it is lightweight, flexible, and low-cost.
The invention uses a 3D motion-sensing camera as the scanning device for indoor three-dimensional scenes. The Microsoft Kinect is such a camera: it costs about 150 dollars and weighs only 450 grams; including the other components the price does not exceed 300 dollars, and the whole system is light enough for one person to carry easily. Even crowded indoor areas that are difficult to reach with heavy equipment yet rich in detail (such as a crime scene) can be mapped, with little influence from the spatial environment. Because the system is inexpensive, the general public can also afford it; for frequently changing indoor environments, the public can update the data spontaneously, keeping it current.
Second, it realizes real-time mapping.
In use, the user holds the 3D motion-sensing camera with the streaming media terminal fixed on it, while the server and the portable power source are carried in a backpack. The user can actively choose the scanning route and angle according to actual needs, avoiding data gaps caused by occlusion. The scan data is sent to the server in real time; the server processes it and feeds the updated current-frame surface model back to the streaming media terminal for display. The user adjusts the scanning process according to the feedback shown on the terminal, avoiding defects such as parasitic errors and ghosting in the scan data.
Third, it offers a high degree of automation, high mapping accuracy, and high mapping efficiency.
From the scan data received in real time, the server automatically matches the current-frame surface model against the global scene model. Compared with traditional matching approaches such as placing targets, the degree of automation is higher, and both mapping accuracy and mapping efficiency are greatly improved.
Fourth, the preparatory work takes little time and the scanning effect is good.
The invention requires no stations or targets to be arranged in the scene; a single person carrying the system can walk in and scan, with no requirements on the scene environment and no preparatory work. The scanning process can be adapted flexibly to actual needs: regions of interest can be scanned repeatedly or up close to enhance the mapping, while walls and floors can be scanned quickly.
Brief description of the drawings
Fig. 1 is the system architecture diagram of the invention;
Fig. 2 is a schematic diagram of an embodiment of the system;
Fig. 3 is the flow chart for building the current-frame surface model;
Fig. 4 is the flow chart of the server sending feedback frames to the client in streaming media form;
Fig. 5 is the flow chart of the streaming media terminal processing interactive instructions.
In the figures: 1 - backpack, 2 - power line, 3 - data line, 4 - portable power source, 5 - server, 6 - streaming media terminal, 7 - 3D motion-sensing camera, 8 - cable.
Detailed description of the invention
Referring to Figs. 1-2, the system of the invention comprises a 3D motion-sensing camera (7), a streaming media terminal (6), a server (5), and a portable power source (4). The streaming media terminal (6) is fixed on the 3D motion-sensing camera (7), the portable power source (4) supplies power to the camera (7), the camera (7) is connected to the server (5), and the server (5) is connected to the streaming media terminal (6) through a wireless transmission module.
The 3D motion-sensing camera (7) with the streaming media terminal (6) fixed on it is handheld, while the server (5) and the portable power source (4) are placed in a backpack (1). The user wears the backpack (1) and scans the indoor scene with the handheld camera (7). The streaming media terminal (6) exchanges information with the server (5) over the wireless network. The camera (7) is connected by a cable (8) to the server (5) and the portable power source (4) inside the backpack. The cable (8) combines a data line (3) and a power line (2); one end connects to the camera (7), and the other end splits into a power connector, which plugs into the portable power source (4), and a data connector, which plugs into a USB port of the server (5).
For portability, the portable power source (4) used in this embodiment measures 20 cm x 8 cm x 3 cm and weighs only about 1.2 kg; the server (5) is a portable computer; the 3D motion-sensing camera (7) is a Microsoft Kinect scanning device; and the streaming media terminal (6) is a smartphone, preferably an Android phone.
The 3D motion-sensing camera (7) collects indoor scene information to obtain scan data and transfers it to the server (5) through the data connector of the cable (8). The server (5) models the received scan data in real time to obtain the current-frame surface model and feeds the modeling result back to the streaming media terminal (6) over the wireless network. The user adjusts and controls the scanning process based on the current-frame surface model shown on the streaming media terminal (6). In addition to displaying the current-frame surface model in real time, the streaming media terminal also lets the operator control the scanning program of the 3D motion-sensing camera.
Based on the above real-time three-dimensional mapping system, the real-time three-dimensional mapping method of the invention comprises the following steps:
Step 1: start the 3D motion-sensing camera through the streaming media terminal, putting the camera into scanning mode.
Step 2: move the 3D motion-sensing camera to collect scan data of the indoor scene, comprising depth information and color information, and pass the scan data to the server; in this embodiment, scan data is passed to the server at 30 frames per second.
Step 3: the server processes the scan data in real time, builds the current-frame surface model from it, updates the global voxel model in real time based on the current-frame surface model, and renders a feedback frame.
Step 4: publish the feedback frame on the wireless network; the streaming media terminal receives and displays it in real time.
Step 5: the user adjusts the scanning process according to the feedback frame shown on the streaming media terminal. For example, if the feedback frame contains a hole caused by occlusion, move the Kinect scanning device and adjust the scanning angle to avoid the occlusion; if a detailed object is of interest, scan it repeatedly to gradually increase the reconstruction accuracy of the model, or move in closer to improve the scanning accuracy of the target.
Step 6: move the Kinect scanning device and repeat steps 2 to 5 until the whole indoor scene has been scanned, then stop the scanning mode of the Kinect scanning device through the streaming media terminal.
The specific implementation of the invention is described in detail below.
In the following, the streaming media terminal is exemplified by an Android phone, the server by a portable computer, and the 3D motion-sensing camera by a Kinect scanning device.
(1) Setting up the server and client environment.
In this embodiment, the client refers to the streaming media terminal. First, a WLAN environment is set up; the concrete method can be chosen according to the actual on-site situation.
The IP address of the portable computer is retrieved and recorded. Two server processes are created in the Kinect scanning program on the portable computer: a streaming media server process and an interactive server process. The streaming media server sends each updated frame on the portable computer to the client in real time; it is given its own port number (e.g. 5556) and uses the "PUB" pattern, which stands for one-way data distribution (Publish): the streaming media server pushes updated frames to the client without requiring any response. The interactive server responds to the various instructions sent by the client; it is given its own port number (e.g. 5555) and uses the "REP" (Reply) pattern. "REP" works together with the "REQ" pattern below as a two-way messaging pattern in which a party may continue sending only after receiving the other party's response, thereby ensuring reliable delivery.
Likewise, the Kinect scanning program on the Android phone creates two client threads: a streaming media client thread and an interactive client thread. The streaming media client connects to the streaming media server port (e.g. 5556) at the portable computer's IP and uses the "SUB" (Subscribe) pattern; it receives the updated frames sent by the portable computer. The interactive client connects to the interactive server port (e.g. 5555) of the portable computer and uses the "REQ" (Request) pattern; it is used to send instructions to the portable computer and receive its responses.
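The two channel types described above — one-way "PUB/SUB" frame distribution and two-way "REQ/REP" command exchange — match the socket patterns of messaging libraries such as ZeroMQ, although the patent does not name a specific library. The following is a minimal in-memory Python model of the two semantics only; all class and method names are hypothetical illustrations, not the patent's implementation:

```python
from collections import deque

class PubChannel:
    """One-way distribution (PUB/SUB): the publisher pushes frames;
    subscribers never reply."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = deque()
        self.subscribers.append(q)
        return q

    def publish(self, frame):
        # Fire-and-forget: no acknowledgement is expected from any subscriber.
        for q in self.subscribers:
            q.append(frame)

class RepChannel:
    """Two-way exchange (REQ/REP): every request must be answered before
    the requester may send the next one."""
    def __init__(self, handler):
        self.handler = handler  # server-side function mapping a request to a reply

    def request(self, msg):
        # The caller blocks on the reply, guaranteeing strict alternation.
        return self.handler(msg)

# Usage sketch
pub = PubChannel()
sub_queue = pub.subscribe()
pub.publish(b"frame-001")
print(sub_queue.popleft())  # b'frame-001'

rep = RepChannel(lambda cmd: "done" if cmd == "start" else "ignored")
print(rep.request("start"))  # done
```

The key design distinction the patent draws is that frame pushing needs throughput, not acknowledgement, while control commands need a guaranteed round trip.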
(2) Scanning and modeling with the Kinect scanning device.
Each depth image scanned by the Kinect is interpreted as one frame surface model of the scene. In this embodiment, the portable computer uses a volumetric reconstruction technique to build the frame surface model from the scan data, specifically:
(a) A three-dimensional space is predefined and continuously subdivided into regular voxels, giving the initial global voxel model; initially no voxel contains any data. The current depth image is denoised and converted into a three-dimensional point cloud, i.e. the current-frame surface model. Using a multi-scale ICP (Iterative Closest Point) registration algorithm, the current-frame surface model is matched against the spatial position of the global voxel model, bringing each current-frame surface model into the same coordinate system and inversely solving for the position parameters of the Kinect scanning device.
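The conversion of a depth image into a camera-space point cloud in step (a) is a standard pinhole back-projection. A minimal sketch — the intrinsic parameters fx, fy, cx, cy below are illustrative values for a Kinect-class depth camera, not figures from the patent:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres, as a list of rows) into 3D points.

    For pixel (u, v) with depth d:  X = (u - cx) * d / fx,
                                    Y = (v - cy) * d / fy,  Z = d.
    Pixels with depth 0 (no return) are skipped; this is also where
    denoising would normally discard invalid measurements.
    """
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:
                continue
            points.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return points

# A tiny 2x2 depth map at ~1 m, with illustrative intrinsics.
pts = depth_to_points([[1.0, 1.0], [0.0, 1.0]], fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(len(pts))  # 3 valid points; the zero-depth pixel is dropped
```

The resulting point set is what the multi-scale ICP step then registers against the global voxel model.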
(b) Each current-frame surface model is fused with the global voxel model through a TSDF (Truncated Signed Distance Function) to obtain the updated global voxel model. Specifically, the TSDF value of any point in the three-dimensional space is computed, generating a signed distance field; the zero crossing of the field, where its sign changes, is used to reconstruct the object surface. This method extends surface trends well and repairs holes in the model surface. The same scene surface is observed in several depth images from different directions, so during fusion these depth observations are combined by weighted averaging, which also gradually improves the reconstruction accuracy of that surface.
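The weighted averaging in step (b) has a simple per-voxel form: each voxel keeps a running TSDF value and a weight, and each new truncated distance observation is folded in as a weighted mean. A sketch under the usual KinectFusion-style update rule — the truncation distance and weight cap below are illustrative assumptions, not figures from the patent:

```python
TRUNC = 0.03   # truncation distance in metres (illustrative)
W_MAX = 64.0   # cap on the accumulated weight (illustrative)

def truncate(sdf):
    """Clamp a signed distance into [-TRUNC, TRUNC] - the 'T' in TSDF."""
    return max(-TRUNC, min(TRUNC, sdf))

def fuse(tsdf, weight, sdf_obs, w_obs=1.0):
    """Fold one new signed-distance observation into a voxel.

    Running weighted mean: new_tsdf = (w*tsdf + w_obs*obs) / (w + w_obs).
    Averaging several observations of the same surface seen from different
    directions is what gradually improves the reconstruction accuracy.
    """
    d = truncate(sdf_obs)
    w_new = min(weight + w_obs, W_MAX)
    tsdf_new = (weight * tsdf + w_obs * d) / (weight + w_obs)
    return tsdf_new, w_new

# A voxel starts empty (weight 0); three noisy observations of a surface ~1 cm away.
t, w = 0.0, 0.0
for obs in (0.012, 0.008, 0.010):
    t, w = fuse(t, w, obs)
print(round(t, 4), w)  # 0.01 3.0
```

The noise in the three observations averages out, which is the mechanism behind the "scan longer, get finer" behaviour described above.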
(c) Meanwhile, according to the Kinect position parameters, ray casting (Ray Casting) is performed on the fused, updated global voxel model, rendering with shading the part of the updated global voxel model that lies within the current image capture range. The rendered image serves as the feedback frame and is output to the streaming media terminal for display, so that the user can check the scanning effect. After scanning ends, the final global voxel model is the indoor three-dimensional scene model obtained by scanning.
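Ray casting in step (c) marches each viewing ray through the voxel grid, sampling the TSDF until the value changes from positive (in front of the surface) to negative (behind it); the zero crossing, refined by linear interpolation, is the rendered surface point. A one-dimensional sketch of this search (the step size and synthetic field are illustrative, not from the patent):

```python
def cast_ray(tsdf_at, t_start, t_end, step):
    """March along a ray, returning the interpolated depth of the first
    positive-to-negative zero crossing of the TSDF, or None if the ray
    exits the volume without hitting a surface."""
    t = t_start
    prev_t, prev_val = t, tsdf_at(t)
    while t < t_end:
        t += step
        val = tsdf_at(t)
        if prev_val > 0.0 and val <= 0.0:
            # Linearly interpolate between the last two samples.
            frac = prev_val / (prev_val - val)
            return prev_t + frac * (t - prev_t)
        prev_t, prev_val = t, val
    return None

# A synthetic TSDF whose zero crossing (the surface) sits at depth 1.5 m.
surface = cast_ray(lambda t: 1.5 - t, 0.0, 3.0, 0.1)
print(round(surface, 3))  # 1.5
```

Repeating this per pixel from the estimated camera pose yields the shaded prediction image that the patent uses as the feedback frame.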
(3) The server sends the updated frame to the client in streaming media form, and the client displays it.
See Fig. 4. The "streaming media server" monitors the main process of the Kinect scanning program and, whenever an updated frame is generated, transmits it from its memory location. The updated-frame data stream consists of three-channel 640x480 images at 30 Hz, about 27 MB per second, which the network can hardly carry, so this embodiment compresses the updated-frame data with JPEG compression, bringing the stream down to about 150 KB per second. The "streaming media server" then encodes and sends the compressed data: the compressed data is split into groups according to its length, a header is added to each group according to the established protocol, and the groups are published on the streaming media server port.
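The bandwidth figure follows directly: 640 x 480 pixels x 3 channels x 30 frames/s = 27,648,000 bytes/s, roughly 27 MB per second uncompressed. The grouping-with-header scheme can be sketched as follows; the patent does not specify the header layout, so the 12-byte header below (frame id, chunk index, chunk count) is an assumption:

```python
import struct

RAW_BYTES_PER_SEC = 640 * 480 * 3 * 30  # = 27,648,000 B/s uncompressed

HEADER = struct.Struct(">III")  # frame id, chunk index, total chunks (assumed layout)

def encode(frame_id, data, chunk_size=1024):
    """Split one compressed (e.g. JPEG) frame into header-prefixed groups."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)] or [b""]
    return [HEADER.pack(frame_id, i, len(chunks)) + c for i, c in enumerate(chunks)]

def decode(packets):
    """Reassemble the groups of one frame into the complete compressed image."""
    parts = {}
    for p in packets:
        frame_id, idx, total = HEADER.unpack(p[:HEADER.size])
        parts[idx] = p[HEADER.size:]
    return b"".join(parts[i] for i in range(total))

jpeg = bytes(range(256)) * 10           # stand-in for JPEG-compressed frame data
packets = encode(7, jpeg, chunk_size=1000)
assert decode(packets) == jpeg          # round trip recovers the frame
print(RAW_BYTES_PER_SEC, len(packets))  # 27648000 3
```

The index/count header is what lets the client reassemble groups that arrive as separate messages, matching the grouped receive described next.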
Correspondingly, the "streaming media client" of the streaming media terminal monitors the streaming media server port and, whenever new data is published there, receives and decodes it; the receiving mode is an asynchronous message-passing model. Concretely, the data is received in groups and reassembled into the complete compressed image according to the header information; the memory location of the compressed image is then passed to the main thread, which decodes the compressed file into a bitmap (Bitmap) image; finally, the image is displayed in the designated region of the streaming media terminal.
(4) The streaming media terminal receives and processes various interactive instructions.
See Fig. 5. The streaming media terminal accepts the user's various instructions, such as start, stop, and pause. The interactive client sends the number of the corresponding instruction to the interactive server port in "Request" form; after receiving the number, the interactive server matches it against its instruction library. If there is no corresponding instruction, it replies to the interactive client that the instruction is ignored; if there is, it executes the instruction (e.g. starting or stopping the corresponding thread) and replies to the interactive client that the operation is complete.
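The instruction matching performed by the interactive server reduces to a lookup table keyed by instruction number. A minimal sketch — the numbers and handler names are hypothetical; the patent only specifies the match-then-execute-or-ignore behaviour:

```python
def make_server(handlers):
    """Return a reply function that matches an instruction number against
    the instruction library and either executes it or ignores it."""
    def reply(number):
        handler = handlers.get(number)
        if handler is None:
            return "ignored"        # no corresponding instruction in the library
        handler()                   # e.g. start or stop the scanning thread
        return "done"
    return reply

log = []
INSTRUCTIONS = {                    # hypothetical numbering
    1: lambda: log.append("start scan"),
    2: lambda: log.append("pause scan"),
    3: lambda: log.append("stop scan"),
}
server = make_server(INSTRUCTIONS)
print(server(1), server(99))  # done ignored
print(log)                    # ['start scan']
```

Because the channel is REQ/REP, every numbered request gets exactly one of the two replies, which is what lets the client distinguish an executed command from an unknown one.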

Claims (7)

1. A real-time three-dimensional mapping method based on Kinect and streaming media technology, characterized by comprising:
collecting scan data of an indoor scene with a 3D motion-sensing camera and passing the scan data to a server, the scan data comprising depth information and color information;
the server building a current-frame surface model from the scan data, updating a global voxel model in real time based on the current-frame surface model, rendering a feedback frame, and sending the feedback frame to a streaming media terminal for display; this step being specifically:
denoising the depth information in the scan data and converting it into a three-dimensional point cloud, i.e. the current-frame surface model;
matching the spatial position of the current-frame surface model against the global voxel model, and inversely solving for the spatial position parameters of the 3D motion-sensing camera;
fusing the current-frame surface model with the global voxel model to obtain the updated global voxel model, and rendering the updated global voxel model with lighting according to the spatial position parameters of the 3D motion-sensing camera, the screenshot of the lit rendering being the feedback frame;
the user adjusting the scanning process according to the feedback frame shown on the streaming media terminal;
wherein the streaming media terminal is fixed on the 3D motion-sensing camera, a portable power source supplies power to the 3D motion-sensing camera, the 3D motion-sensing camera is connected to the server, and the server is connected to the streaming media terminal through a wireless transmission module.
2. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
the scanning program of the 3D motion-sensing camera is started, paused, or stopped through the streaming media terminal.
3. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
the step in which the user adjusts the scanning process according to the feedback frame shown on the streaming media terminal is specifically:
if the feedback frame shown on the streaming media terminal contains a hole caused by occlusion, moving the 3D motion-sensing camera and adjusting the scanning angle to avoid the occlusion.
4. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
the step in which the user adjusts the scanning process according to the feedback frame shown on the streaming media terminal is specifically:
taking the feedback frame shown on the streaming media terminal as the target, scanning repeatedly or moving in closer to continue scanning.
5. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
the 3D motion-sensing camera is a Microsoft Kinect scanning device.
6. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
the streaming media terminal is a smartphone.
7. The real-time three-dimensional mapping method based on Kinect and streaming media technology as claimed in claim 1, characterized in that:
the server is a portable computer.
CN201310490129.0A 2013-10-18 2013-10-18 Real-time three-dimensional plotting method based on Kinect and stream media technology Expired - Fee Related CN103500013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310490129.0A CN103500013B (en) 2013-10-18 2013-10-18 Real-time three-dimensional plotting method based on Kinect and stream media technology


Publications (2)

Publication Number Publication Date
CN103500013A (en) 2014-01-08
CN103500013B (en) 2016-05-11

Family

ID=49865232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310490129.0A Expired - Fee Related CN103500013B (en) 2013-10-18 2013-10-18 Real-time three-dimensional plotting method based on Kinect and stream media technology

Country Status (1)

Country Link
CN (1) CN103500013B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019091116A1 (en) * 2017-11-10 2019-05-16 Guangdong Kang Yun Technologies Limited Systems and methods for 3d scanning of objects by providing real-time visual feedback

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103888738B (en) * 2014-04-03 2016-09-28 华中师范大学 A kind of multi-source multiaspect battle array unmanned vehicle GIS data acquisition platform
CN104123751A (en) * 2014-07-24 2014-10-29 福州大学 Combined type measurement and three-dimensional reconstruction method combing Kinect and articulated arm
CN104363673A (en) * 2014-09-17 2015-02-18 张灏 Demand-oriented illumination energy conservation control system based on body recognition
CN104639912A (en) * 2015-02-11 2015-05-20 尼森科技(湖北)有限公司 Individual soldier fire protection and disaster rescue equipment and system based on infrared three-dimensional imaging
CN104660995B (en) * 2015-02-11 2018-07-31 尼森科技(湖北)有限公司 A kind of disaster relief rescue visible system
CN106548466B (en) * 2015-09-16 2019-03-29 富士通株式会社 The method and apparatus of three-dimensional reconstruction object
CN105654492B (en) * 2015-12-30 2018-09-07 哈尔滨工业大学 Robust real-time three-dimensional method for reconstructing based on consumer level camera
CN107798704B (en) * 2016-08-30 2021-04-30 成都理想境界科技有限公司 Real-time image superposition method and device for augmented reality
CN106412559B (en) * 2016-09-21 2018-08-07 北京物语科技有限公司 Full vision photographic device
CN107749998B (en) * 2017-10-23 2020-03-31 西华大学 Streaming media visualization method of portable 3D scanner
CN108332660B (en) * 2017-11-10 2020-05-05 广东康云多维视觉智能科技有限公司 Robot three-dimensional scanning system and scanning method
CN108362223B (en) * 2017-11-24 2020-10-27 广东康云多维视觉智能科技有限公司 Portable 3D scanner, scanning system and scanning method
CN108322742B (en) * 2018-02-11 2019-08-16 北京大学深圳研究生院 A kind of point cloud genera compression method based on intra prediction
CN109509215B (en) * 2018-10-30 2022-04-01 浙江大学宁波理工学院 KinFu point cloud auxiliary registration device and method thereof
CN114327334A (en) * 2021-12-27 2022-04-12 苏州金羲智慧科技有限公司 Environment information transmission system based on light ray analysis and transmission method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102824176A (en) * 2012-09-24 2012-12-19 南通大学 Upper limb joint movement degree measuring method based on Kinect sensor
CN102938142A (en) * 2012-09-20 2013-02-20 武汉大学 Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN103220543A (en) * 2013-04-25 2013-07-24 同济大学 Real time three dimensional (3D) video communication system and implement method thereof based on Kinect



Also Published As

Publication number Publication date
CN103500013A (en) 2014-01-08

Similar Documents

Publication Publication Date Title
CN103500013B (en) Real-time three-dimensional plotting method based on Kinect and stream media technology
JP6171079B1 (en) Inconsistency detection system, mixed reality system, program, and inconsistency detection method
US9648346B2 (en) Multi-view video compression and streaming based on viewpoints of remote viewer
Chen et al. Real-time 3D unstructured environment reconstruction utilizing VR and Kinect-based immersive teleoperation for agricultural field robots
CN110060332B (en) High-precision three-dimensional mapping and modeling system based on airborne acquisition equipment
CN111415416A (en) Method and system for fusing monitoring real-time video and scene three-dimensional model
US11212507B2 (en) Method and apparatus for processing three-dimensional images
CN112270736B (en) Augmented reality processing method and device, storage medium and electronic equipment
CN109461210B (en) Panoramic roaming method for online home decoration
CN104715479A (en) Scene reproduction detection method based on augmented virtuality
CN111241615A (en) Highly realistic multi-source fusion three-dimensional modeling method for transformer substation
CN104376596A (en) Method for modeling and registering three-dimensional scene structures on basis of single image
CN113487723B (en) House online display method and system based on measurable panoramic three-dimensional model
CN103650001A (en) Moving image distribution server, moving image playback device, control method, program, and recording medium
CN109102566A (en) A kind of indoor outdoor scene method for reconstructing and its device of substation
CN108986221A (en) A kind of three-dimensional face grid texture method lack of standardization approached based on template face
CN110660125B (en) Three-dimensional modeling device for power distribution network system
JP2018106661A (en) Inconsistency detection system, mixed reality system, program, and inconsistency detection method
CN109035399A (en) Utilize the method for three-dimensional laser scanner quick obtaining substation three-dimensional information
CN113379901A (en) Method and system for establishing house live-action three-dimension by utilizing public self-photographing panoramic data
Du et al. Design and evaluation of a teleoperated robotic 3-D mapping system using an RGB-D sensor
CN105468154B (en) The interactive panorama display systems of electric system operation
CN116863083A (en) Method and device for processing three-dimensional point cloud data of transformer substation
Rajan et al. A realistic video avatar system for networked virtual environments
CN114531700A (en) Non-artificial base station antenna work parameter acquisition system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2016-05-11

Termination date: 2021-10-18