CN108287549A - Method and system for improving space scanning time performance - Google Patents

Method and system for improving space scanning time performance

Info

Publication number
CN108287549A
CN108287549A (application CN201810083465.6A)
Authority
CN
China
Prior art keywords
data
scanning
spacescan
scanning device
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810083465.6A
Other languages
Chinese (zh)
Inventor
李新福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Kang Yun Multidimensional Vision Intelligent Technology Co Ltd
Original Assignee
Guangdong Kang Yun Multidimensional Vision Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Kang Yun Multidimensional Vision Intelligent Technology Co Ltd filed Critical Guangdong Kang Yun Multidimensional Vision Intelligent Technology Co Ltd
Publication of CN108287549A publication Critical patent/CN108287549A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C15/002Active optical surveying means
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0272Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/028Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal
    • G05D1/0282Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal generated in a local control room

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Processing Or Creating Images (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method and system for improving space scanning time performance. The system includes multiple scanning devices and a server connected to a display, and each of the multiple scanning devices includes a data capture module. The server performs the following operations: acquiring ambient environment data from the multiple data capture modules; configuring the scanning devices with pre-existing two-dimensional navigation information; integrating the pre-existing two-dimensional navigation information with the ranging data generated as the scanning devices move in real time; stitching the data of each of the multiple scanning devices in real time; and generating a three-dimensional scanned object, three-dimensional polygon, or three-dimensional mesh of the space. By receiving two-dimensional navigation information that contains moving-region information to configure the scanning devices for navigated motion scanning, the invention is efficient; by integrating the scanned data with the preset two-dimensional navigation information to generate a three-dimensional view of the space environment, it achieves good real-time performance. The invention can be widely applied in the field of environmental scanning.

Description

Method and system for improving space scanning time performance
Technical field
The present invention relates to the field of environmental scanning, and in particular to a method and system for improving space scanning time performance.
Background technology
In recent years, laser and computer technologies have developed rapidly, and environmental scanning technology has been applied ever more widely to environment sensing, navigation, positioning, and the like. Taking Google Earth and Google Street View as examples, they provide high-precision 360-degree panoramic photographs based on GPS positioning information, which greatly facilitate user operations such as navigation and path planning; such applications have expanded to all aspects related to spatial distribution, for example natural environment monitoring and analysis, resource survey and development, and communication navigation. However, current environmental scanning technology is mostly aimed at outdoor environments, and scanning schemes for indoor environments remain rare. Driven by the enormous application demands of digital cities, emergency response simulation and training, digital cultural heritage, exhibitions, and the like — especially in typical scenarios such as counter-terrorism, fire fighting, exhibitions, and smart buildings — the indoor environment information obtained by indoor scanning technology is indispensable basic data. Unlike outdoor environments, indoor environments are characterized by short distances, many corners, frequent occlusion, complex illumination, and a lack of absolute positioning, so acquiring indoor environment information efficiently and accurately is a challenging task. The commonly used solution is to scan the indoor environment manually with a scanning device, but this manual scanning mode has low efficiency (especially for indoor scanning at large scale, such as scanning a 10,000-square-metre museum) and makes scanning accuracy hard to guarantee. Moreover, current manual scanning methods fail to combine two-dimensional navigation information with the acquired environment information to generate a three-dimensional view of the space to be scanned, so their real-time performance is poor.
Summary of the invention
In order to solve the above technical problems, an object of the present invention is to provide an efficient method and system, with good real-time performance, for improving space scanning time performance.
The first technical solution adopted by the present invention is:
A method for improving space scanning time performance, comprising the following steps:
receiving two-dimensional navigation information, the two-dimensional navigation information including a moving region;
configuring a scanning device to navigate and move through the moving region;
scanning the ambient environment with at least one data capture module in the scanning device;
the scanning device generating a three-dimensional view of the ambient environment in real time according to the scanning results.
Further, the method includes the step of sending the generated three-dimensional view to a user device.
Further, the three-dimensional view is generated by merging the pre-existing two-dimensional navigation information with the ranging data produced by the navigated movement.
Further, the at least one data capture module includes at least one of a flash LiDAR, a radar, a stereoscopic vision device, a video camera, and an odometer sensor.
Further, the data capture module is used in conjunction with any software application.
Further, the software application includes at least one of a mobile application, a laptop application, a desktop application, and a web browser.
Further, the scanning device is an autonomous scanning device, a semi-autonomous scanning device, or a handheld scanner.
The second technical solution adopted by the present invention is:
A system for improving space scanning time performance, comprising multiple scanning devices and a server connected to a display, each of the multiple scanning devices comprising:
a data capture module, for acquiring ambient environment information;
the server being configured to perform the following operations:
acquiring ambient environment data from the multiple data capture modules;
configuring the scanning devices with pre-existing two-dimensional navigation information;
integrating the pre-existing two-dimensional navigation information with the ranging data generated as the scanning devices move in real time;
stitching the data of each of the multiple scanning devices in real time according to the integration results;
generating a three-dimensional scanned object, three-dimensional polygon, or three-dimensional mesh of the space according to the real-time stitching results.
Further, the data capture module includes at least one of a flash LiDAR, a radar, a stereoscopic vision device, a video camera, and an odometer sensor.
The beneficial effects of the invention are as follows: the method and system for improving space scanning time performance configure the scanning devices for navigated motion scanning by receiving two-dimensional navigation information containing moving-region information, so the space can be scanned automatically without manual participation, which is efficient; and they integrate the data obtained by the navigated motion scanning with the preset two-dimensional navigation information to generate a three-dimensional view of the space environment, with good real-time performance.
Description of the drawings
Fig. 1 is a structural block diagram of an embodiment of the system for improving space scanning time performance according to the present invention;
Fig. 2 is a structural schematic diagram of a preferred embodiment of the robot of the present invention;
Fig. 3 is an internal structural block diagram of the server of the present invention;
Fig. 4 is a flow chart of the method of space scanning using the system of Fig. 1.
Detailed description of the embodiments
The present invention is further explained and illustrated below with specific embodiments and with reference to the accompanying drawings.
Referring to Fig. 1, the system for space scanning in this embodiment is a robot, which can be controlled manually, autonomously, or by a combination of both. The robot can also be controlled by any software application, such as a mobile application, a laptop application, a desktop application, or a browser. The application can either be pre-installed locally or downloaded from a website or URL. As shown in Fig. 1, the robot system includes a first data capture module 102, a processor 202, a display 204, a network 206, and a server 208. The robot can be connected to the server 208 through the network 206, and the network 206 can also connect the robot to the user display 204.
As a further preferred embodiment, the network 206 can be a LAN, MAN, WAN, mobile network, Wi-Fi, satellite network, or the like.
As a further preferred embodiment, the display 204 can belong to any device such as a smartphone, laptop, tablet computer, desktop computer, or PDA.
As a further preferred embodiment, the robot may also include a second data capture module 104 and a third data capture module 106. Preferably, the number of first data capture modules 102, second data capture modules 104, and third data capture modules 106 can each be one or more.
As a further preferred embodiment, the first data capture module 102 can be a stereoscopic vision device or a laser radar (LiDAR). A radar-type device determines the range, angle, or speed of the object to be scanned using radio waves, while a LiDAR is better suited to identifying and acquiring information about objects closer to the robot. The first data capture module 102 obtains the corresponding environment information by receiving the signal reflected back after a device such as the stereoscopic vision device or LiDAR emits radio waves, laser light, or similar signals. The first data capture module 102 can be placed on top of the robot so that the robot can scan the entire surrounding environment.
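As a rough illustration of how an emitted-and-reflected pulse yields range, the round-trip time of a laser or radio pulse maps to distance through the propagation speed. The timing value below is hypothetical, chosen only to make the arithmetic visible:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds):
    """Distance to the reflecting object: the pulse travels out
    and back, so the one-way range is half the round-trip path."""
    return C * t_seconds / 2.0

# A pulse returning after 100 ns corresponds to roughly 15 m.
d = range_from_round_trip(100e-9)
```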
As a further preferred embodiment, the second data capture module 104 can be an odometer sensor that captures the robot's movement data (an odometer sensor measures the rotation of the robot's wheels). The odometer sensor can identify how far the robot has moved and the distance it has travelled. In addition, odometer sensors are accurate enough to identify the precise movement of the robot.
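A minimal sketch of what such an odometer computes from wheel rotation; the wheel radius and encoder ticks-per-revolution values are purely illustrative assumptions:

```python
import math

def travelled_distance(ticks, ticks_per_rev, wheel_radius_m):
    """Distance covered by a wheel: revolutions times circumference."""
    revolutions = ticks / ticks_per_rev
    return revolutions * 2.0 * math.pi * wheel_radius_m

# 3600 ticks on a 360-tick encoder with a 5 cm wheel = 10 revolutions.
d = travelled_distance(3600, 360, 0.05)
```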
As a further preferred embodiment, the third data capture module 106 consists of multiple video cameras. For example, the third data capture module has at least two video cameras to capture image information around the robot. The cameras may include fisheye lenses that capture data as a spherical view. Preferably, the third data capture module 106 may include four cameras facing different directions, with the four cameras capturing 360-degree image information around the robot at the same time.
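The way four fixed cameras jointly cover 360 degrees can be sketched by mapping a viewing direction to the camera whose sector contains it. The sector layout (cameras facing 0, 90, 180, and 270 degrees, each covering 90 degrees) is an assumption made for illustration, not a configuration stated in the patent:

```python
def camera_for_azimuth(azimuth_deg):
    """Return the index (0-3) of the camera whose 90-degree sector,
    centred on its facing direction, contains the given azimuth."""
    return int(((azimuth_deg + 45.0) % 360.0) // 90.0)

# Every direction around the robot falls into exactly one sector.
cams = [camera_for_azimuth(a) for a in (0, 89, 91, 180, 300)]
```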
As a further preferred embodiment, the processor 202 can be an independent processor with various internal hardware processing components, or the internal components of the processor 202 can be implemented in software. The processor 202 can also be a set of multiple processors that together perform the same functions as the independent processor described above. The processor 202 navigates the robot through the moving region in the space. The software application receives the two-dimensional navigation information of the space in digital form; this two-dimensional navigation information causes the robot to move within the moving region defined for the space. The space can be an indoor exhibition hall or an outdoor space. The robot can be connected to the server 208, and the server 208 is in turn connected to the display 204 and the robot through the network 206. The server 208 can receive information from each robot connected to the network 206, process it, and output the three-dimensional view of the space to the display 204 for display.
Referring to Fig. 2, the robot of this embodiment includes a main frame 110 and multiple support legs 112. As shown in Fig. 2, the main frame 110 may include at least one third data capture module 106. The main frame 110 can be made of any one of, or any combination of, wood, metal, alloy, plastic, rubber, and fibre. The support legs 112 give the robot an (adjustable) height from which it can scan, and assist the robot's movement in the space. In this embodiment, the third data capture module 106 may include fisheye lenses in order to capture the spherical view of the corresponding direction. The top of the main frame 110 can also be fitted with a first data capture module 102. In addition, each of the robot's multiple support legs 112 includes at least one second data capture module 104. The arrangement of the second data capture modules 104 is very important, since it allows the robot's motion behaviour to be tracked closely.
As a further preferred embodiment, the multiple support legs 112 include at least one mobile device. The mobile device can be a wheel that rolls freely in any direction, so that the entire robot can move automatically (corresponding to the autonomous control mode) or under control (corresponding to the manual control mode) to the target position for real-time moving scanning, solving the problem that the existing manual scanning mode cannot perform moving scanning in real time.
As a further preferred embodiment, the main frame 110 can have any shape and size. The shape of the main frame 110 shown in Fig. 2 is only for convenience of explanation and should not be considered a limitation of the present invention in any way.
Referring to Fig. 3, the server 208 of this embodiment includes a data acquisition module 2082, a navigation information module 2084, a scanning module 2086, a data integrator 2088, and a 3D scanned object/3D polygon/3D mesh generator 2090.
The data acquisition module 2082 receives data from any of the first data capture module 102, the second data capture module 104, and the third data capture module 106. The data acquisition module 2082 is the data acquisition interface of the server 208. It can convert the format of the acquired data to meet the robot's data format requirements (for example, converting it to machine language).
The pre-existing two-dimensional navigation information represents the path along which the robot moves in the space and is used to minimize the error of the robot's motion scanning. The navigation information module 2084 receives the formatted data from the data acquisition module 2082. The pre-existing two-dimensional navigation information is used to analyse and identify the moving region from all the information. The moving region defines the path and manner in which the robot can move in the space. The navigation information module is also used to configure the robot so that it starts moving in the identified moving region.
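A toy sketch of identifying a moving region from two-dimensional navigation information: the map format (a grid where 0 is free and 1 is blocked) and the breadth-first flood fill are illustrative assumptions standing in for whatever analysis the navigation information module actually performs:

```python
from collections import deque

def moving_region(grid, start):
    """Cells of the 2D map reachable from `start`, moving in the
    four axis directions through free (0) cells only."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

nav_map = [
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
]
region = moving_region(nav_map, (0, 0))
```

The resulting set of cells is the region the robot may be configured to traverse.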
The scanning module 2086 starts all, or at least one, of the first data capture module 102, the second data capture module 104, and the third data capture module 106 to begin capturing ambient environment data. The first data capture module 102, the second data capture module 104, and the third data capture module 106 can capture all the information of the ambient environment, including information about the objects present in the space.
The data integrator 2088 integrates the pre-existing two-dimensional navigation information with the information captured by the first data capture module 102, the second data capture module 104, and the third data capture module 106.
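One plausible way such an integrator can combine the navigation path with captured sensor data is to tag each capture with the nearest-in-time navigation pose. The timestamps, poses, and measurement labels below are hypothetical; the patent does not specify the alignment rule:

```python
import bisect

def integrate_by_time(nav, captures):
    """nav: time-sorted (timestamp, pose) pairs from the 2D navigation info.
    captures: (timestamp, measurement) pairs from a data capture module.
    Returns each measurement tagged with the nearest navigation pose."""
    times = [t for t, _ in nav]
    out = []
    for t, m in captures:
        i = bisect.bisect_left(times, t)
        # Pick whichever neighbouring nav sample is closer in time.
        cands = [k for k in (i - 1, i) if 0 <= k < len(nav)]
        k = min(cands, key=lambda k: abs(times[k] - t))
        out.append((nav[k][1], m))
    return out

nav = [(0.0, (0, 0)), (1.0, (1, 0)), (2.0, (2, 0))]
caps = [(0.1, "scanA"), (1.6, "scanB")]
pairs = integrate_by_time(nav, caps)
```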
The 3D scanned object/3D polygon/3D mesh generator 2090 processes the integrated data obtained from the data integrator 2088 to generate the 3D scanned object, 3D polygon, or 3D mesh of the space. The processing performed by the generator 2090 includes real-time stitching and hole filling (used to improve quality and reduce flaws). The generated 3D scanned object, 3D polygon, or 3D mesh can then be forwarded to the display 204 of the user device (not shown in Fig. 3) for display.
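Hole filling can be sketched on a depth grid: cells with no measurement (None) are filled from the average of their measured neighbours. This single neighbour-average pass is an illustrative stand-in only; the patent does not specify the filling algorithm:

```python
def fill_holes(depth):
    """One pass: replace each missing cell (None) with the mean of its
    measured 4-neighbours, leaving fully isolated holes untouched."""
    rows, cols = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for r in range(rows):
        for c in range(cols):
            if depth[r][c] is None:
                nbrs = [depth[nr][nc]
                        for nr, nc in ((r - 1, c), (r + 1, c),
                                       (r, c - 1), (r, c + 1))
                        if 0 <= nr < rows and 0 <= nc < cols
                        and depth[nr][nc] is not None]
                if nbrs:
                    out[r][c] = sum(nbrs) / len(nbrs)
    return out

grid = [
    [1.0, None, 3.0],
    [1.0, 2.0, 3.0],
]
filled = fill_holes(grid)
```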
Referring to Fig. 4, the method of scanning a space based on the system shown in Fig. 1 in this embodiment includes, but is not limited to, the following steps:
Step 402: Initialization: the robot obtains data through any of the first data capture module 102, the second data capture module 104, and the third data capture module 106, and receives navigation information. The navigation information received by the robot pre-exists in two-dimensional digital form; it represents the path along which the robot moves in the space and is used to minimize the error of the robot's motion scanning.
Step 404: Convert the data and navigation information obtained in step 402 into machine-format data so that the robot can navigate and move using the formatted data.
Step 406: Configure the moving region through which the robot is to navigate in the space, so that the robot can navigate and move in the space using the formatted navigation path.
Step 408: The robot navigates and moves in the space according to the configured moving region, performs the three-dimensional scanning of the space under automatic or manual control, and generates ranging data through real-time movement.
Step 410: Integrate the data obtained by the three-dimensional scanning process with the generated ranging data (determined by the two-dimensional navigation information).
Step 412: Generate the three-dimensional view of the space from the integrated data. The generated three-dimensional view can be further processed by stitching, hole filling, and the like to obtain a 3D scanned object, 3D polygon, or 3D mesh.
Step 414: Send the 3D scanned object, 3D polygon, or 3D mesh to the user device for display.
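Steps 402-414 can be summarised as one driver routine. The sketch below compresses the flow (receive navigation info, format it, move along the configured region, integrate, generate, send); every body here is a placeholder standing in for the corresponding module of Fig. 3, and the sensor-reading table is invented for the example:

```python
def run_scan(nav_info_2d, sensor_readings):
    # Steps 402/404: receive navigation info and convert its format.
    path = [tuple(p) for p in nav_info_2d]
    # Steps 406/408: traverse the configured region, collecting one
    # ranging sample per waypoint (placeholder for real sensing).
    ranging = [sensor_readings[p] for p in path]
    # Step 410: integrate scan data with the navigation-derived ranging.
    integrated = list(zip(path, ranging))
    # Step 412: lift the integrated 2D samples into a simple 3D view.
    view3d = [(x, y, z) for (x, y), z in integrated]
    # Step 414: "send" to the user device (here, simply return it).
    return view3d

readings = {(0, 0): 1.5, (1, 0): 1.7}
view = run_scan([[0, 0], [1, 0]], readings)
```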
The above describes the preferred implementations of the present invention, but the present invention is not limited to the above embodiments. Those skilled in the art can also make various equivalent variations or replacements without departing from the spirit of the invention, and such equivalent variations or replacements are all included within the scope defined by the claims of this application.

Claims (10)

1. A method for improving space scanning time performance, characterised by comprising the following steps:
receiving two-dimensional navigation information, the two-dimensional navigation information including a moving region;
configuring a scanning device to navigate and move through the moving region;
scanning the ambient environment with at least one data capture module in the scanning device;
the scanning device generating a three-dimensional view of the ambient environment in real time according to the scanning results.
2. The method for improving space scanning time performance according to claim 1, characterised by further comprising the step of sending the generated three-dimensional view to a user device.
3. The method for improving space scanning time performance according to claim 1, characterised in that the three-dimensional view is generated by merging the pre-existing two-dimensional navigation information with the ranging data produced by the navigated movement.
4. The method for improving space scanning time performance according to claim 1, characterised in that the at least one data capture module includes at least one of a flash LiDAR, a radar, a stereoscopic vision device, a video camera, and an odometer sensor.
5. The method for improving space scanning time performance according to claim 1, characterised in that the data capture module is used in conjunction with any software application.
6. The method for improving space scanning time performance according to claim 5, characterised in that the software application includes at least one of a mobile application, a laptop application, a desktop application, and a web browser.
7. The method for improving space scanning time performance according to claim 1, characterised in that the scanning device is an autonomous scanning device, a semi-autonomous scanning device, or a handheld scanner.
8. A system for improving space scanning time performance, characterised by comprising multiple scanning devices and a server connected to a display, each of the multiple scanning devices comprising:
a data capture module, for acquiring ambient environment information;
the server being configured to perform the following operations:
acquiring ambient environment data from the multiple data capture modules;
configuring the scanning devices with pre-existing two-dimensional navigation information;
integrating the pre-existing two-dimensional navigation information with the ranging data generated as the scanning devices move in real time;
stitching the data of each of the multiple scanning devices in real time according to the integration results;
generating a three-dimensional scanned object, three-dimensional polygon, or three-dimensional mesh of the space according to the real-time stitching results.
9. The system for improving space scanning time performance according to claim 8, characterised in that the data capture module includes at least one of a flash LiDAR, a radar, a stereoscopic vision device, a video camera, and an odometer sensor.
10. The system for improving space scanning time performance according to claim 8, characterised in that the scanning device is an autonomous scanning device, a semi-autonomous scanning device, or a handheld scanner.
CN201810083465.6A 2017-11-24 2018-01-29 Method and system for improving space scanning time performance Pending CN108287549A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762590371P 2017-11-24 2017-11-24
US62/590,371 2017-11-24

Publications (1)

Publication Number Publication Date
CN108287549A 2018-07-17

Family

ID=62836066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810083465.6A Pending CN108287549A (en) Method and system for improving space scanning time performance

Country Status (2)

Country Link
CN (1) CN108287549A (en)
WO (1) WO2019100699A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022003215A1 (en) * 2020-07-02 2022-01-06 Bimertek, S.L. Method and device for obtaining representation models of structural elements

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103926927A (en) * 2014-05-05 2014-07-16 重庆大学 Binocular vision positioning and three-dimensional mapping method for indoor mobile robot
CN105203094A (en) * 2015-09-10 2015-12-30 联想(北京)有限公司 Map building method and equipment
CN205905026U (en) * 2016-08-26 2017-01-25 沈阳工学院 Robot system based on two mesh stereovisions

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103913174B (en) * 2012-12-31 2016-10-19 深圳先进技术研究院 The generation method and system of a kind of navigation information and mobile client and server end
CN105136064A (en) * 2015-09-13 2015-12-09 维希艾信息科技(无锡)有限公司 Moving object three-dimensional size detection system and method
CN105628034B (en) * 2016-02-04 2019-04-23 合肥杰发科技有限公司 Navigation map update method and equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱翔鹏 (Zhu Xiangpeng): "Monocular vision 3D map creation in indoor environments", China Masters' Theses Full-text Database, Information Science and Technology series *

Also Published As

Publication number Publication date
WO2019100699A1 (en) 2019-05-31

Similar Documents

Publication Publication Date Title
CN113570721B (en) Method and device for reconstructing three-dimensional space model and storage medium
US10297074B2 (en) Three-dimensional modeling from optical capture
CA3120725C (en) Surveying and mapping system, surveying and mapping method and device, apparatus and medium
US20190026400A1 (en) Three-dimensional modeling from point cloud data migration
WO2022036980A1 (en) Pose determination method and apparatus, electronic device, storage medium, and program
CN109387186B (en) Surveying and mapping information acquisition method and device, electronic equipment and storage medium
RU2741443C1 (en) Method and device for sampling points selection for surveying and mapping, control terminal and data storage medium
CN108789421B (en) Cloud robot interaction method based on cloud platform, cloud robot and cloud platform
CN112469967B (en) Mapping system, mapping method, mapping device, mapping apparatus, and recording medium
CN106292656B (en) Environmental modeling method and device
CN108287345A (en) Space scanning method and system based on point cloud data
JP2023546739A (en) Methods, apparatus, and systems for generating three-dimensional models of scenes
CN110428372A (en) Depth data and 2D laser data fusion method and device, storage medium
CN108340405A (en) Robot three-dimensional scanning system and method
CN113378605A (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN111527375B (en) Planning method and device for surveying and mapping sampling point, control terminal and storage medium
CN108364340A (en) Method and system for synchronous space scanning
CN108287549A (en) Method and system for improving space scanning time performance
CN114089836B (en) Labeling method, terminal, server and storage medium
CN112672134B (en) Three-dimensional information acquisition control equipment and method based on mobile terminal
CN116129064A (en) Electronic map generation method, device, equipment and storage medium
CN111292288B (en) Target detection and positioning method and device
CN116266402A (en) Automatic object labeling method and device, electronic equipment and storage medium
CN108282615A (en) Surrounding environment scanning method and system
CN113001985A (en) 3D model, device, electronic equipment and storage medium based on oblique photography construction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180717