CN116758157B - Unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium - Google Patents


Info

Publication number
CN116758157B
CN116758157B (application CN202310707778.5A)
Authority
CN
China
Prior art keywords
data
pose data
point
dimensional
pose
Prior art date
Legal status
Active
Application number
CN202310707778.5A
Other languages
Chinese (zh)
Other versions
CN116758157A (en)
Inventor
林必毅 (Lin Biyi)
贺振中 (He Zhenzhong)
王志敏 (Wang Zhimin)
余风云 (Yu Fengyun)
Current Assignee
Shenzhen Huasairuifei Intelligent Technology Co ltd
Original Assignee
Shenzhen Huasairuifei Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huasairuifei Intelligent Technology Co., Ltd.
Priority to CN202310707778.5A
Publication of CN116758157A
Application granted
Publication of CN116758157B
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00 Surveying instruments or accessories not provided for in groups G01C1/00-G01C13/00
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01C11/04 Interpretation of pictures
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; image sequence
    • G06T2207/10032 Satellite or aerial image; remote sensing
    • Y02T10/40 Engine management systems (climate change mitigation technologies related to transportation)

Abstract

Video data received from an unmanned aerial vehicle is processed to obtain a plurality of video frames arranged at a preset time interval; received unmanned aerial vehicle attitude information is parsed to obtain first pose data; second pose data is obtained according to two consecutive video frames; and, when the second pose data matches the first pose data, a three-dimensional modeling map is obtained according to the two consecutive video frames and the second pose data. Mismatches during video-frame stitching are thereby avoided, and the obtained three-dimensional modeling map is more accurate.

Description

Unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium
Technical Field
The invention relates to the technical field of unmanned aerial vehicle mapping, in particular to an unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium.
Background
With continuing advances in science and technology and expanding fields of application, unmanned aerial vehicles (UAVs) are now used in many industries, including aerial cinematography, agricultural plant protection, power-line inspection, surveying and mapping, police and rescue operations, and logistics distribution. UAVs can fly over complex and dangerous environments, perform tasks efficiently and accurately, save labor and material costs, and improve work quality and safety. They have brought revolution and innovation to social development and promoted progress in industrial transformation and upgrading, intelligent urban management, and occupational innovation.
However, enabling a UAV to autonomously perform mapping tasks in indoor environments without global navigation satellite system signals (GPS of the United States, GLONASS of Russia, Galileo of Europe, BeiDou of China, etc.) remains an important research subject. Conventional indoor positioning methods, such as visual or laser-based markers, require specific marks to be arranged in advance, which increases cost and complexity. A SLAM-based method, by contrast, can construct a three-dimensional map in real time from the UAV's own motion information and environmental features and estimate the UAV's pose without relying on external markers or signals; it offers greater flexibility and adaptability and can be applied to various indoor scenes such as factories, warehouses, and underground spaces. Because indoor environments are relatively complex, however, with many obstacles and occluders, SLAM modeling based on visual data acquired in real time by the UAV contains errors, and these errors make the constructed three-dimensional modeling map inaccurate.
Disclosure of Invention
The method provided by the invention avoids mismatches during video-frame stitching, so that the constructed three-dimensional modeling map is more accurate.
In a first aspect, an embodiment provides an unmanned aerial vehicle indoor three-dimensional space mapping method, including: receiving video data and attitude information acquired by an unmanned aerial vehicle; processing the video data to obtain a plurality of video frames arranged at a preset time interval; parsing the attitude information to obtain first pose data; obtaining second pose data according to two consecutive video frames; judging whether the second pose data and the first pose data meet a preset condition; if so, obtaining a three-dimensional modeling map according to the two consecutive video frames and the second pose data; and if not, obtaining the second pose data again according to two consecutive video frames.
In some embodiments, the obtaining second pose data according to two consecutive video frames includes: extracting SIFT feature points of a first video frame to obtain a first feature point set, and extracting SIFT feature points of a second video frame to obtain a second feature point set, wherein the first video frame and the second video frame are two consecutive video frames; obtaining at least one feature matching point according to the first feature point set and the second feature point set; and obtaining the second pose data according to the two-dimensional point data and the three-dimensional point data of the pixel points corresponding to the feature matching points in the second video frame, wherein the two-dimensional point data is the two-dimensional coordinate data of the pixel points and the three-dimensional point data is the three-dimensional coordinate data of the pixel points.
In some embodiments, the second pose data may be calculated from the projection relation p_i = C · R · P_i, wherein R is the second pose data (the pose), C is a calibration matrix parameter of the unmanned aerial vehicle, p_i is the two-dimensional point data of the i-th feature matching point in homogeneous coordinates, and P_i is the three-dimensional point data of that feature matching point; R is solved from this relation over all feature matching points.
In some embodiments, the preset condition is that the second pose data matches the first pose data.
In some embodiments, the obtaining a three-dimensional modeling map according to two consecutive video frames and the second pose data includes: obtaining a first coordinate value of each pixel point in the second video frame according to the second pose data and a second coordinate value of the pixel point, wherein the second coordinate value is the coordinate value of the pixel point in the second video frame and the first coordinate value is the coordinate value of the pixel point in the first video frame; and filling each pixel point into the first video frame at its first coordinate value to obtain the three-dimensional modeling map.
In some embodiments, the second pose data includes height data and angle data, and the first coordinate value may be calculated by:
x_1 = x_2 cos θ - y_2 sin θ
y_1 = x_2 sin θ + y_2 cos θ
wherein (x_1, y_1) is the first coordinate value, (x_2, y_2) is the second coordinate value, and θ characterizes the angle data of the pixel point.
In a second aspect, an embodiment provides an unmanned aerial vehicle indoor three-dimensional space mapping system, including an unmanned aerial vehicle, a base station module and a terminal display module. The base station module is used for receiving video data and attitude information acquired by the unmanned aerial vehicle; processing the video data to obtain a plurality of video frames arranged at a preset time interval; parsing the attitude information to obtain first pose data; obtaining second pose data according to two consecutive video frames; judging whether the second pose data and the first pose data meet a preset condition; and, if so, obtaining a three-dimensional modeling map according to the two consecutive video frames and the second pose data. The unmanned aerial vehicle is used for sending the video data and attitude information acquired in real time to the base station module for processing. The terminal display module is used for receiving and displaying the modeling process and the three-dimensional modeling map in real time, and for sending control data to the base station module.
In some embodiments, the base station module is further configured to obtain the second pose data again according to two consecutive video frames if the second pose data and the first pose data do not meet the preset condition.
In a third aspect, an embodiment provides an electronic device, including: a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements the method described above when executing the program.
In a fourth aspect, an embodiment provides a computer storage medium having a program stored thereon, the program being executable by a processor to implement the method described above.
According to the method of the above embodiment, the video data collected by the unmanned aerial vehicle is processed to obtain a plurality of video frames arranged at a preset time interval; the attitude information is parsed to obtain first pose data; second pose data is obtained according to two consecutive video frames; and, when the second pose data matches the first pose data, a three-dimensional modeling map is obtained according to the two consecutive video frames and the second pose data. Mismatches during video-frame stitching are thereby avoided, and the obtained three-dimensional modeling map is more accurate.
Drawings
Fig. 1 is a flow chart of an indoor three-dimensional space mapping method of an unmanned aerial vehicle;
FIG. 2 is a flow chart of obtaining second pose data from two consecutive video frames according to an embodiment;
FIG. 3 is a flow chart of one embodiment for obtaining a three-dimensional modeling map from two consecutive video frames and second pose data;
fig. 4 is a block diagram of an indoor three-dimensional space mapping system of an unmanned aerial vehicle provided by the invention;
FIG. 5 is a schematic diagram of a system communication mode according to an embodiment;
FIG. 6 is a block diagram of an electronic device provided by the present invention;
fig. 7 is a block diagram of a computer storage medium according to the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings by means of specific embodiments, in which like elements in different embodiments use like reference numerals. In the following embodiments, numerous specific details are set forth in order to provide a better understanding of the present application. However, one skilled in the art will readily recognize that some of these features may be omitted, or replaced by other elements, materials, or methods, in different situations. In some instances, operations associated with the present application are not shown or described in the specification so as not to obscure its core aspects; detailed description of such operations is also unnecessary, since a person skilled in the art can understand them from the description herein and from general knowledge in the field.
Furthermore, the features, operations, or characteristics described herein may be combined in any suitable manner in various embodiments. Also, the steps or actions in the described methods may be reordered or modified in a manner apparent to those of ordinary skill in the art. Thus, the orders given in the description and drawings are only for the clear description of certain embodiments and do not imply a required order unless otherwise indicated.
The numbering of the components itself, e.g. "first", "second", etc., is used herein merely to distinguish between the described objects and does not have any sequential or technical meaning.
Referring to fig. 1, in an embodiment of the present invention, an indoor three-dimensional space mapping method for an unmanned aerial vehicle is provided, including:
S10: receiving video data and attitude information acquired by the unmanned aerial vehicle.
S20: processing the video data to obtain a plurality of video frames arranged at a preset time interval.
In some embodiments, the received video data is processed by parsing it into H264-encoded video stream data and decoding that stream into a plurality of video frames arranged at a preset time interval.
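By way of illustration, this step may be sketched as follows in Python, assuming OpenCV as the decoder and a hypothetical file "stream.h264" standing in for the parsed video stream; only one frame per preset time interval is kept:

```python
import cv2

def extract_frames(path: str, interval_s: float = 0.5):
    """Decode a video stream and keep one frame per preset time interval."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back when FPS is not reported
    step = max(1, int(round(fps * interval_s)))  # frames between kept samples
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

frames = extract_frames("stream.h264", interval_s=0.5)
```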
S30: parsing the attitude information to obtain first pose data.
In some embodiments, the attitude information is acquired by inertial sensors.
S40: obtaining second pose data according to two consecutive video frames.
In some embodiments, as shown in Fig. 2, the obtaining second pose data according to two consecutive video frames includes:
S41: extracting SIFT feature points of a first video frame to obtain a first feature point set, and extracting SIFT feature points of a second video frame to obtain a second feature point set, wherein the first video frame and the second video frame are two consecutive video frames.
S42: obtaining at least one feature matching point according to the first feature point set and the second feature point set.
S43: obtaining the second pose data according to the two-dimensional point data and the three-dimensional point data of the pixel points corresponding to the feature matching points in the second video frame, wherein the two-dimensional point data is the two-dimensional coordinate data of the pixel points and the three-dimensional point data is the three-dimensional coordinate data of the pixel points.
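Steps S41 and S42 may be sketched as follows, assuming OpenCV's SIFT implementation (the embodiment does not mandate a particular library) and using Lowe's ratio test as one common way to select the feature matching points; frame1 and frame2 stand in for the two consecutive video frames:

```python
import cv2

def match_features(frame1, frame2, ratio=0.75):
    """Return matched 2D locations between two consecutive video frames."""
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)   # first feature point set
    kp2, des2 = sift.detectAndCompute(gray2, None)   # second feature point set
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive matches (the feature matching points)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2
```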
In some embodiments, the two-dimensional point data and the three-dimensional point data of a pixel point may be converted by:
z = d / s
x = (u - c_x) · z / f_x
y = (v - c_y) · z / f_y
wherein (u, v) is the two-dimensional point data of the pixel point, d is its raw depth reading, (x, y, z) is the three-dimensional point data of the pixel point, and s, f_x, f_y, c_x and c_y are parameters of the default coordinate system of the Kinect depth camera used in video data acquisition, with f_x = f_y = 525, c_x = 320, c_y = 240 and s = 5000.
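A minimal sketch of this conversion, using the default Kinect parameters quoted above; u, v and the raw depth d are assumed to come from a depth map aligned with the video frame:

```python
FX = FY = 525.0          # default Kinect focal lengths (pixels)
CX, CY = 320.0, 240.0    # default principal point
S = 5000.0               # raw depth units per metre

def pixel_to_3d(u, v, d):
    """Back-project 2D point data (u, v) with raw depth d into 3D point data."""
    z = d / S
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return x, y, z

def point_to_pixel(x, y, z):
    """Project 3D point data back to 2D point data."""
    u = x * FX / z + CX
    v = y * FY / z + CY
    return u, v
```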
In some embodiments, the second pose data may be calculated from the projection relation between the matched points:
p_i = C · R · P_i
wherein R is the second pose data (the pose), C is the calibration matrix parameter of the unmanned aerial vehicle, p_i is the two-dimensional point data of the i-th feature matching point in homogeneous coordinates, and P_i is the three-dimensional point data of that feature matching point; R is solved from this relation over all feature matching points.
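One concrete way to solve this relation is a perspective-n-point (PnP) solver; the sketch below assumes OpenCV's solver, which the embodiment does not mandate:

```python
import cv2
import numpy as np

def solve_pose(points_3d, points_2d, calib):
    """Recover the pose R from matched 2D/3D point data.

    points_3d: Nx3 three-dimensional point data of the feature matching points
    points_2d: Nx2 two-dimensional point data of the feature matching points
    calib:     3x3 calibration matrix parameter C of the UAV camera
    """
    obj = np.asarray(points_3d, dtype=np.float64)
    img = np.asarray(points_2d, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, calib, None)
    if not ok:
        raise RuntimeError("pose not recovered; re-extract features (step S70)")
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix: the second pose data
    return R, tvec
```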
S50: judging whether the second pose data and the first pose data meet a preset condition.
In some embodiments, the preset condition is that the second pose data matches the first pose data. The first pose data and the second pose data both comprise height data and angle data; the corresponding height data and angle data in the two sets of pose data are compared numerically, and the two are considered matched if each difference is less than or equal to a preset threshold.
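A minimal sketch of this matching test, assuming both pose records reduce to height and angle values and using illustrative threshold values (the embodiment leaves the preset threshold unspecified):

```python
HEIGHT_TOL = 0.05   # assumed preset threshold for height data (metres)
ANGLE_TOL = 2.0     # assumed preset threshold for angle data (degrees)

def poses_match(first_pose, second_pose):
    """Compare the height and angle data of the two pose records numerically."""
    dh = abs(first_pose["height"] - second_pose["height"])
    da = abs(first_pose["angle"] - second_pose["angle"])
    return dh <= HEIGHT_TOL and da <= ANGLE_TOL
```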
S60: if so, obtaining a three-dimensional modeling map according to the two consecutive video frames and the second pose data.
In the prior art, images are stitched directly using the attitude information obtained from a gyroscope, so noise errors easily make the three-dimensional modeling map inaccurate. Here, the three-dimensional modeling map is constructed only when the first pose data matches the second pose data, which removes inaccurately matched features; the resulting stable, noise-free video-frame matching makes the three-dimensional modeling map more accurate.
In some embodiments, as shown in Fig. 3, the obtaining a three-dimensional modeling map according to two consecutive video frames and the second pose data includes:
S61: obtaining a first coordinate value of each pixel point in the second video frame according to the second pose data and a second coordinate value of the pixel point, wherein the second coordinate value is the coordinate value of the pixel point in the second video frame and the first coordinate value is the coordinate value of the pixel point in the first video frame.
S62: filling each pixel point into the first video frame at its first coordinate value to obtain the three-dimensional modeling map.
In some embodiments, the second pose data includes height data and angle data, and the first coordinate value may be calculated by:
x_1 = x_2 cos θ - y_2 sin θ
y_1 = x_2 sin θ + y_2 cos θ
wherein (x_1, y_1) is the first coordinate value, (x_2, y_2) is the second coordinate value, and θ characterizes the angle data of the pixel point; the reference of θ is the rotation angle between the two video frames, which can be regarded as the angle between the camera axes of the two frames.
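Steps S61 and S62 may be sketched as follows; theta is the angle data in radians, and the frames are assumed to be NumPy image arrays:

```python
import numpy as np

def fill_into_first(frame1, frame2, theta):
    """Rotate frame2 pixel coordinates by theta and fill them into frame1."""
    out = frame1.copy()
    h, w = frame2.shape[:2]
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    for y2 in range(h):
        for x2 in range(w):
            x1 = int(round(x2 * cos_t - y2 * sin_t))   # x_1 = x_2 cos θ - y_2 sin θ
            y1 = int(round(x2 * sin_t + y2 * cos_t))   # y_1 = x_2 sin θ + y_2 cos θ
            if 0 <= x1 < out.shape[1] and 0 <= y1 < out.shape[0]:
                out[y1, x1] = frame2[y2, x2]
    return out
```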
S70: if not, obtaining the second pose data again according to the two consecutive video frames.
In some embodiments, when the second pose data does not meet the preset condition, the SIFT feature points may be changed by changing the scale-space parameters, changing the threshold parameters, using multiple scale spaces, and the like, so that new SIFT feature points are extracted and at least one new feature matching point is obtained; the second pose data is then obtained again according to the two-dimensional point data and three-dimensional point data of the pixel points corresponding to the new feature matching points, and, if the newly obtained second pose data and the first pose data meet the preset condition, the three-dimensional modeling map is obtained according to the two consecutive video frames and the second pose data.
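A minimal sketch of this re-extraction, assuming OpenCV's SIFT parameters (number of octave layers as the scale-space parameter, contrast threshold as the threshold parameter) stand in for the parameters the embodiment varies; the values are illustrative assumptions:

```python
import cv2

# Progressively changed SIFT parameters; the exact values the embodiment
# would use are not specified, so these are illustrative assumptions.
RETRY_PARAMS = [
    dict(nOctaveLayers=3, contrastThreshold=0.04),  # OpenCV defaults
    dict(nOctaveLayers=4, contrastThreshold=0.03),  # more scales, laxer threshold
    dict(nOctaveLayers=5, contrastThreshold=0.02),
]

def retry_features(gray1, gray2):
    """Yield new SIFT feature point sets under changed parameters."""
    for params in RETRY_PARAMS:
        sift = cv2.SIFT_create(**params)
        kp1, des1 = sift.detectAndCompute(gray1, None)
        kp2, des2 = sift.detectAndCompute(gray2, None)
        yield (kp1, des1), (kp2, des2)
```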
Referring to Fig. 4, another embodiment of the present invention provides an unmanned aerial vehicle indoor three-dimensional space mapping system, including an unmanned aerial vehicle 100, a base station module 200 and a terminal display module 300. The base station module 200 is used for receiving video data and attitude information acquired by the unmanned aerial vehicle; processing the video data to obtain a plurality of video frames arranged at a preset time interval; parsing the attitude information to obtain first pose data; obtaining second pose data according to two consecutive video frames; judging whether the second pose data and the first pose data meet a preset condition; and, if so, obtaining a three-dimensional modeling map according to the two consecutive video frames and the second pose data. The unmanned aerial vehicle 100 is used for sending the video data and attitude information acquired in real time to the base station module for processing. The terminal display module 300 is used for receiving and displaying the modeling process and the three-dimensional modeling map in real time, and for sending control data to the base station module.
In some embodiments, the video data is collected by a camera mounted on the unmanned aerial vehicle 100, and the attitude information is collected by an inertial sensor mounted on the unmanned aerial vehicle 100. Communication among the unmanned aerial vehicle 100, the base station module 200 and the terminal display module 300 is implemented over UDP and RTP. As shown in Fig. 5, in practical applications the system communicates in a client/server split: the unmanned aerial vehicle 100 and the terminal display module 300 act as clients 10, and the base station module 200 acts as the server 20.
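A minimal sketch of this client/server split over plain UDP sockets; the loopback address is a placeholder for the base station's address, and the real system additionally wraps the video in RTP:

```python
import socket

SERVER_ADDR = ("127.0.0.1", 5000)   # placeholder base-station address and port

# Base station module: acts as the server, binds IP and port, waits for data
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(SERVER_ADDR)

# UAV / terminal display module: acts as a client and sends datagrams
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"attitude:height=1.20,angle=35.0", SERVER_ADDR)

data, addr = server.recvfrom(65535)  # base station receives, then can forward
print(addr, data)
```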
In some embodiments, the unmanned aerial vehicle 100 includes a communication module 110, a data processing module 120 and a device control module 130. The communication module 110 is used for initializing the network, connecting to the server, and starting a sending thread and a receiving thread; the sending thread sends video data and attitude data to the base station, and the receiving thread receives control messages and calls the control module to control the aircraft. The data processing module 120 is used for initializing the camera, obtaining the video stream encoded as H264, packaging it into RTP packets, and calling the sending thread to send it to the base station module 200; it also collects the data of all sensors, converts the data into the corresponding format according to the sensor type, and calls the sending thread to send the sensor data, including the attitude information, to the base station module 200. The device control module 130 is used for initializing the device, parsing received control messages into control commands, and controlling the aircraft according to the commands.
In some embodiments, the base station module 200 is further used for initializing the device, obtaining the device IP and setting the port; acting as the server, it binds the IP and port and starts the sending and receiving threads to wait for connections; it receives the messages of the unmanned aerial vehicle 100 and the terminal display module 300, and forwards the control signals sent by the terminal display module 300 to the unmanned aerial vehicle 100 for real-time control of the unmanned aerial vehicle 100.
In some embodiments, the base station module 200 further includes a video data processing module 210, an attitude information processing module 220 and a three-dimensional modeling module 230. The video data processing module 210 is used for receiving RTP data, parsing out the H264-encoded video stream data, and decoding it into video frames arranged at a preset time interval. The attitude information processing module 220 is used for receiving attitude data and parsing it according to the sensor type to generate the first pose data. The three-dimensional modeling module 230 is used for inputting the video frames and the first pose data into the SLAM module of ROS for mapping to obtain the three-dimensional modeling map, which is then H264-encoded, encapsulated into RTP packets, and sent by the sending thread to the terminal display module 300 for display.
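Handing the video frames and first pose data to a ROS-based SLAM module may be sketched as follows, assuming ROS 1 (rospy) and hypothetical topic names, neither of which is specified by the embodiment; the angle encoding in the pose message is a simplification:

```python
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped
from cv_bridge import CvBridge

rospy.init_node("mapping_feeder")
img_pub = rospy.Publisher("/camera/image_raw", Image, queue_size=10)   # assumed topic
pose_pub = rospy.Publisher("/uav/pose", PoseStamped, queue_size=10)    # assumed topic
bridge = CvBridge()

def publish(frame, height, angle):
    """Publish one video frame and its first pose data to the SLAM module."""
    img_pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.pose.position.z = height          # height data
    msg.pose.orientation.z = angle        # simplified angle encoding (assumption)
    pose_pub.publish(msg)
```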
In some embodiments, the terminal display module 300 includes a display module 310 and a control module 320. The display module is used for initializing the device, connecting to the server (the base station module 200), starting the receiving and sending threads, receiving RTP data, decoding it into video frames, and displaying the modeling process in real time. The control module is used for initializing the device, monitoring the control stick and buttons, converting control signals into data packets of a fixed format, and calling the sending thread to send the control data packets to the base station module 200.
In some embodiments, the base station module is further configured to obtain the second pose data again according to two consecutive video frames when the second pose data and the first pose data do not meet the preset condition: the SIFT feature points are changed by changing the scale-space parameters, changing the threshold parameters, using multiple scale spaces, and the like, so that new SIFT feature points are extracted and at least one new feature matching point is obtained; the second pose data is then obtained again according to the two-dimensional point data and three-dimensional point data of the pixel points corresponding to the new feature matching points, and, if the newly obtained second pose data and the first pose data meet the preset condition, the three-dimensional modeling map is obtained according to the two consecutive video frames and the second pose data.
In the prior art, the modeling process is difficult to display in real time; even where real-time display is achieved, it is difficult to decouple from the base station, and control is not flexible enough. In the present system, the terminal display module receives the modeling process over the network and displays and controls it in real time, separate from the base station.
Another embodiment of the present invention provides an electronic device, as shown in fig. 6, including: the memory 400, the processor 500, and a program stored on the memory 400 and executable on the processor 500, the processor 500 implementing the method as described above when executing the program.
Another embodiment of the present invention provides a computer storage medium, as shown in Fig. 7, on which a program 600 is stored; the program is executable by a processor 500 to implement the method described above.
The foregoing description of the invention is presented for purposes of illustration and description and is not intended to be limiting. Several simple deductions, modifications or substitutions may also be made by a person skilled in the art based on the idea of the invention.

Claims (7)

1. An unmanned aerial vehicle indoor three-dimensional space mapping method is characterized by comprising the following steps:
receiving video data and attitude information acquired by an unmanned aerial vehicle;
processing the video data to obtain a plurality of video frames arranged at a preset time interval;
parsing the attitude information to obtain first pose data;
obtaining second pose data according to two consecutive video frames;
judging whether the second pose data and the first pose data meet a preset condition, wherein the preset condition is that the second pose data matches the first pose data;
if so, obtaining a three-dimensional modeling map according to the two consecutive video frames and the second pose data;
if not, changing SIFT feature points by changing scale-space parameters, changing threshold parameters and using multiple scale spaces, so that new SIFT feature points are extracted and at least one new feature matching point is obtained, obtaining the second pose data again according to the two-dimensional point data and three-dimensional point data of the pixel points corresponding to the new feature matching points, and, if the newly obtained second pose data and the first pose data meet the preset condition, obtaining the three-dimensional modeling map according to the two consecutive video frames and the second pose data;
wherein the obtaining a three-dimensional modeling map according to two consecutive video frames and the second pose data comprises:
obtaining a first coordinate value of each pixel point in a second video frame according to the second pose data and a second coordinate value of the pixel point, wherein the second coordinate value is the coordinate value of the pixel point in the second video frame, the first coordinate value is the coordinate value of the pixel point in a first video frame, and the first video frame and the second video frame are two consecutive video frames;
and filling each pixel point into the first video frame at its first coordinate value to obtain the three-dimensional modeling map.
2. The method of claim 1, wherein the obtaining second pose data according to two consecutive video frames comprises:
extracting SIFT feature points of the first video frame to obtain a first feature point set, and extracting SIFT feature points of the second video frame to obtain a second feature point set;
obtaining at least one feature matching point according to the first feature point set and the second feature point set;
and obtaining the second pose data according to the two-dimensional point data and the three-dimensional point data of the pixel points corresponding to the feature matching points in the second video frame, wherein the two-dimensional point data is the two-dimensional coordinate data of the pixel points and the three-dimensional point data is the three-dimensional coordinate data of the pixel points.
3. The method of claim 2, wherein the second pose data is calculated from the projection relation
p_i = C · R · P_i
wherein R is the second pose data (the pose), C is a calibration matrix parameter of the unmanned aerial vehicle, p_i is the two-dimensional point data of the i-th feature matching point in homogeneous coordinates, and P_i is the three-dimensional point data of that feature matching point, R being solved from this relation over the feature matching points.
4. The method of claim 3, wherein the second pose data comprises height data and angle data, and the first coordinate value is calculated by:
x_1 = x_2 cos θ - y_2 sin θ
y_1 = x_2 sin θ + y_2 cos θ
wherein (x_1, y_1) is the first coordinate value, (x_2, y_2) is the second coordinate value, and θ characterizes the angle data of the pixel point.
5. An unmanned aerial vehicle indoor three-dimensional space mapping system, comprising an unmanned aerial vehicle, a base station module and a terminal display module;
the base station module is used for receiving video data and attitude information acquired by the unmanned aerial vehicle; processing the video data to obtain a plurality of video frames arranged at a preset time interval; parsing the attitude information to obtain first pose data; obtaining second pose data according to two consecutive video frames; judging whether the second pose data and the first pose data meet a preset condition, wherein the preset condition is that the second pose data matches the first pose data; if so, obtaining a three-dimensional modeling map according to the two consecutive video frames and the second pose data; if not, changing SIFT feature points by changing scale-space parameters, changing threshold parameters and using multiple scale spaces, so that new SIFT feature points are extracted and at least one new feature matching point is obtained, obtaining the second pose data again according to the two-dimensional point data and three-dimensional point data of the pixel points corresponding to the new feature matching points, and, if the newly obtained second pose data and the first pose data meet the preset condition, obtaining the three-dimensional modeling map according to the two consecutive video frames and the second pose data;
the unmanned aerial vehicle is used for sending the video data and attitude information acquired in real time to the base station module for processing;
the terminal display module is used for receiving and displaying the modeling process and the three-dimensional modeling map in real time, and for sending control data to the base station module;
wherein the obtaining a three-dimensional modeling map according to two consecutive video frames and the second pose data comprises:
obtaining a first coordinate value of each pixel point in a second video frame according to the second pose data and a second coordinate value of the pixel point, wherein the second coordinate value is the coordinate value of the pixel point in the second video frame, the first coordinate value is the coordinate value of the pixel point in a first video frame, and the first video frame and the second video frame are two consecutive video frames;
and filling each pixel point into the first video frame at its first coordinate value to obtain the three-dimensional modeling map.
6. An electronic device, comprising: a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 4 when executing the program.
7. A computer storage medium having a program stored thereon, the program being executable by a processor to implement the method of any one of claims 1 to 4.
CN202310707778.5A 2023-06-14 2023-06-14 Unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium Active CN116758157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310707778.5A CN116758157B (en) 2023-06-14 2023-06-14 Unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310707778.5A CN116758157B (en) 2023-06-14 2023-06-14 Unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium

Publications (2)

Publication Number Publication Date
CN116758157A (en) 2023-09-15
CN116758157B (en) 2024-01-30

Family

ID=87960238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310707778.5A Active CN116758157B (en) 2023-06-14 2023-06-14 Unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium

Country Status (1)

Country Link
CN (1) CN116758157B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018129794A1 (en) * 2017-01-12 2018-07-19 上海云拟科技有限公司 Method and system for real-time three-dimensional scan modeling for large-scale scene
CN110047108A (en) * 2019-03-07 2019-07-23 中国科学院深圳先进技术研究院 UAV position and orientation determines method, apparatus, computer equipment and storage medium
CN110631554A (en) * 2018-06-22 2019-12-31 北京京东尚科信息技术有限公司 Robot posture determining method and device, robot and readable storage medium
CN111583387A (en) * 2020-04-21 2020-08-25 北京鼎路科技有限公司 Method and system for three-dimensional reconstruction of outdoor scene of unmanned aerial vehicle
CN111586360A (en) * 2020-05-14 2020-08-25 佳都新太科技股份有限公司 Unmanned aerial vehicle projection method, device, equipment and storage medium
CN111640181A (en) * 2020-05-14 2020-09-08 佳都新太科技股份有限公司 Interactive video projection method, device, equipment and storage medium
CN112085849A (en) * 2020-07-28 2020-12-15 航天图景(北京)科技有限公司 Real-time iterative three-dimensional modeling method and system based on aerial video stream and readable medium
CN112106113A (en) * 2019-09-16 2020-12-18 深圳市大疆创新科技有限公司 Method and device for determining pose information of image in three-dimensional reconstruction
CN112907652A (en) * 2021-01-25 2021-06-04 脸萌有限公司 Camera pose acquisition method, video processing method, display device and storage medium
CN112950715A (en) * 2021-03-04 2021-06-11 杭州迅蚁网络科技有限公司 Visual positioning method and device for unmanned aerial vehicle, computer equipment and storage medium
CN113415433A (en) * 2021-07-30 2021-09-21 成都纵横大鹏无人机科技有限公司 Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle
WO2021189784A1 (en) * 2020-03-23 2021-09-30 南京科沃斯机器人技术有限公司 Scenario reconstruction method, system and apparatus, and sweeping robot
CN114529607A (en) * 2022-02-23 2022-05-24 三一重机有限公司 Pose determination method and device and working machine
WO2022127572A1 (en) * 2020-12-14 2022-06-23 北京石头创新科技有限公司 Method for displaying posture of robot in three-dimensional map, apparatus, device, and storage medium
WO2022147976A1 (en) * 2021-01-11 2022-07-14 浙江商汤科技开发有限公司 Three-dimensional reconstruction method, related interaction and measurement method, related apparatuses, and device
CN114877872A (en) * 2022-07-01 2022-08-09 北京今日蓝天科技有限公司 Unmanned aerial vehicle, operating system thereof, method, medium and equipment for generating map
CN115222808A (en) * 2021-06-30 2022-10-21 达闼机器人股份有限公司 Positioning method and device based on unmanned aerial vehicle, storage medium and electronic equipment
WO2022237048A1 (en) * 2021-05-11 2022-11-17 浙江商汤科技开发有限公司 Pose acquisition method and apparatus, and electronic device, storage medium and program

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
KR102647351B1 (en) * 2017-01-26 2024-03-13 Samsung Electronics Co., Ltd. Modeling method and modeling apparatus using 3D point cloud
CN111768454B (en) * 2020-08-05 2023-12-22 Tencent Technology (Shenzhen) Co., Ltd. Pose determination method, apparatus, device and storage medium

Patent Citations (19)

Publication number Priority date Publication date Assignee Title
WO2018129794A1 (en) * 2017-01-12 2018-07-19 上海云拟科技有限公司 Method and system for real-time three-dimensional scan modeling for large-scale scene
CN110631554A (en) * 2018-06-22 2019-12-31 北京京东尚科信息技术有限公司 Robot posture determining method and device, robot and readable storage medium
CN110047108A (en) * 2019-03-07 2019-07-23 中国科学院深圳先进技术研究院 UAV position and orientation determines method, apparatus, computer equipment and storage medium
CN112106113A (en) * 2019-09-16 2020-12-18 深圳市大疆创新科技有限公司 Method and device for determining pose information of image in three-dimensional reconstruction
WO2021189784A1 (en) * 2020-03-23 2021-09-30 南京科沃斯机器人技术有限公司 Scenario reconstruction method, system and apparatus, and sweeping robot
CN111583387A (en) * 2020-04-21 2020-08-25 北京鼎路科技有限公司 Method and system for three-dimensional reconstruction of outdoor scene of unmanned aerial vehicle
WO2021227359A1 (en) * 2020-05-14 2021-11-18 佳都新太科技股份有限公司 Unmanned aerial vehicle-based projection method and apparatus, device, and storage medium
CN111640181A (en) * 2020-05-14 2020-09-08 佳都新太科技股份有限公司 Interactive video projection method, device, equipment and storage medium
CN111586360A (en) * 2020-05-14 2020-08-25 佳都新太科技股份有限公司 Unmanned aerial vehicle projection method, device, equipment and storage medium
CN112085849A (en) * 2020-07-28 2020-12-15 航天图景(北京)科技有限公司 Real-time iterative three-dimensional modeling method and system based on aerial video stream and readable medium
WO2022127572A1 (en) * 2020-12-14 2022-06-23 北京石头创新科技有限公司 Method for displaying posture of robot in three-dimensional map, apparatus, device, and storage medium
WO2022147976A1 (en) * 2021-01-11 2022-07-14 浙江商汤科技开发有限公司 Three-dimensional reconstruction method, related interaction and measurement method, related apparatuses, and device
CN112907652A (en) * 2021-01-25 2021-06-04 脸萌有限公司 Camera pose acquisition method, video processing method, display device and storage medium
CN112950715A (en) * 2021-03-04 2021-06-11 杭州迅蚁网络科技有限公司 Visual positioning method and device for unmanned aerial vehicle, computer equipment and storage medium
WO2022237048A1 (en) * 2021-05-11 2022-11-17 浙江商汤科技开发有限公司 Pose acquisition method and apparatus, and electronic device, storage medium and program
CN115222808A (en) * 2021-06-30 2022-10-21 达闼机器人股份有限公司 Positioning method and device based on unmanned aerial vehicle, storage medium and electronic equipment
CN113415433A (en) * 2021-07-30 2021-09-21 成都纵横大鹏无人机科技有限公司 Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle
CN114529607A (en) * 2022-02-23 2022-05-24 三一重机有限公司 Pose determination method and device and working machine
CN114877872A (en) * 2022-07-01 2022-08-09 北京今日蓝天科技有限公司 Unmanned aerial vehicle, operating system thereof, method, medium and equipment for generating map

Non-Patent Citations (3)

Title
RAIL: Robust Acoustic Indoor Localization for Drones; Alireza Famili et al.; arXiv:2111.03764v1 [eess.SP]; pp. 1-6 *
Artificial Intelligence and Intelligent Buildings; Lin Biyi et al.; Intelligent Building & Smart City; pp. 110-112 *
Research on 3D Simultaneous Localization and Mapping Based on RGB-D Images; Hu Lingyan, Cao Lu, Xiong Pengwen, Xin Yong, Xie Zekun; Journal of System Simulation, No. 11; pp. 243-249 *

Also Published As

Publication number Publication date
CN116758157A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
US11698449B2 (en) User interface for displaying point clouds generated by a LiDAR device on a UAV
US10922881B2 (en) Three dimensional/360 degree (3D/360°) real-time full information smart management integrated mapping system (SMIMS) and process of generating the same
US9964955B2 (en) System and methods of determining a geometric pose of a camera based on spatial and visual mapping
US8818081B1 (en) 3D model updates using crowdsourced video
CN111081199B (en) Selecting a temporally distributed panoramic image for display
AU2018450490B2 (en) Surveying and mapping system, surveying and mapping method and device, and apparatus
CN113345028B (en) Method and equipment for determining target coordinate transformation information
CN108810473B (en) Method and system for realizing GPS mapping camera picture coordinate on mobile platform
WO2021035756A1 (en) Aircraft-based patrol inspection method and device, and storage medium
CN101971214A (en) Platform for the production of seamless orthographic imagery
AU2018449839B2 (en) Surveying and mapping method and device
George et al. Towards drone-sourced live video analytics for the construction industry
CN111327860A (en) Synchronous transmission method for figures and electronic equipment
US20210404840A1 (en) Techniques for mapping using a compact payload in a movable object environment
US20230177707A1 (en) Post-processing of mapping data for improved accuracy and noise-reduction
CN116758157B (en) Unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium
AU2010364001B2 (en) System and method for camera control in a surveillance system
US20230314171A1 (en) Mapping apparatus, tracker, mapping method, and program
US20220113421A1 (en) Online point cloud processing of lidar and camera data
US20220415193A1 (en) Image processing device, image processing method, and program
US20230021556A1 (en) Environmental map management apparatus, environmental map management method, and program
Clift et al. Determining positions and distances using collaborative robots
Jurevičius et al. A data set of aerial imagery from robotics simulator for map-based localization systems benchmark
KR102204564B1 (en) Method for controlling unmanned air vehicle acquiring location information using survey antenna and method for generating location-matched image based on location information acquired from unmanned air vehicle
CN115439635B (en) Method and equipment for presenting marking information of target object

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant